2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain, as explained at
4 http://creativecommons.org/licenses/publicdomain. Send questions,
5 comments, complaints, performance data, etc to dl@cs.oswego.edu
7 * Version 2.8.3 Thu Sep 22 11:16:15 2005 Doug Lea (dl at gee)
9 Note: There may be an updated version of this malloc obtainable at
10 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11 Check before installing!
15 This library is all in one file to simplify the most common usage:
16 ftp it, compile it (-O3), and link it into another program. All of
17 the compile-time options default to reasonable values for use on
18 most platforms. You might later want to step through various
19 compile-time and dynamic tuning options.
21 For convenience, an include file for code using this malloc is at:
22 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
23 You don't really need this .h file unless you call functions not
24 defined in your system include files. The .h file contains only the
25 excerpts from this file needed for using this malloc on ANSI C/C++
26 systems, so long as you haven't changed compile-time options about
27 naming and tuning parameters. If you do, then you can create your
28 own malloc.h that does include all settings by cutting at the point
29 indicated below. Note that you may already by default be using a C
30 library containing a malloc that is based on some version of this
31 malloc (for example in linux). You might still want to use the one
32 in this file to customize settings or to avoid overheads associated
33 with library versions.
37 Supported pointer/size_t representation: 4 or 8 bytes
38 size_t MUST be an unsigned type of the same width as
39 pointers. (If you are using an ancient system that declares
40 size_t as a signed type, or need it to be a different width
41 than pointers, you can use a previous release of this malloc
42 (e.g. 2.7.2) supporting these.)
44 Alignment: 8 bytes (default)
45 This suffices for nearly all current machines and C compilers.
46 However, you can define MALLOC_ALIGNMENT to be wider than this
47 if necessary (up to 128 bytes), at the expense of using more space.
49 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
50 8 or 16 bytes (if 8byte sizes)
51 Each malloced chunk has a hidden word of overhead holding size
52 and status information, and additional cross-check word
53 if FOOTERS is defined.
55 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
56 8-byte ptrs: 32 bytes (including overhead)
58 Even a request for zero bytes (i.e., malloc(0)) returns a
59 pointer to something of the minimum allocatable size.
60 The maximum overhead wastage (i.e., the number of extra bytes
61 allocated beyond those requested in malloc) is less than or equal
62 to the minimum size, except for requests >= mmap_threshold that
63 are serviced via mmap(), where the worst case wastage is about
64 32 bytes plus the remainder from a system page (the minimal
65 mmap unit); typically 4096 or 8192 bytes.
67 Security: static-safe; optionally more or less
68 The "security" of malloc refers to the ability of malicious
69 code to accentuate the effects of errors (for example, freeing
70 space that is not currently malloc'ed or overwriting past the
71 ends of chunks) in code that calls malloc. This malloc
72 guarantees not to modify any memory locations below the base of
73 heap, i.e., static variables, even in the presence of usage
74 errors. The routines additionally detect most improper frees
75 and reallocs. All this holds as long as the static bookkeeping
76 for malloc itself is not corrupted by some other means. This
77 is only one aspect of security -- these checks do not, and
78 cannot, detect all possible programming errors.
80 If FOOTERS is defined nonzero, then each allocated chunk
81 carries an additional check word to verify that it was malloced
82 from its space. These check words are the same within each
83 execution of a program using malloc, but differ across
84 executions, so externally crafted fake chunks cannot be
85 freed. This improves security by rejecting frees/reallocs that
86 could corrupt heap memory, in addition to the checks preventing
87 writes to statics that are always on. This may further improve
88 security at the expense of time and space overhead. (Note that
89 FOOTERS may also be worth using with MSPACES.)
91 By default detected errors cause the program to abort (calling
92 "abort()"). You can override this to instead proceed past
93 errors by defining PROCEED_ON_ERROR. In this case, a bad free
94 has no effect, and a malloc that encounters a bad address
95 caused by user overwrites will ignore the bad address by
96 dropping pointers and indices to all known memory. This may
97 be appropriate for programs that should continue if at all
98 possible in the face of programming errors, although they may
99 run out of memory because dropped memory is never reclaimed.
101 If you don't like either of these options, you can define
102 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
103 else. And if you are sure that your program using malloc has
104 no errors or vulnerabilities, you can define INSECURE to 1,
105 which might (or might not) provide a small performance improvement.
107 Thread-safety: NOT thread-safe unless USE_LOCKS defined
108 When USE_LOCKS is defined, each public call to malloc, free,
109 etc is surrounded with either a pthread mutex or a win32
110 spinlock (depending on WIN32). This is not especially fast, and
111 can be a major bottleneck. It is designed only to provide
112 minimal protection in concurrent environments, and to provide a
113 basis for extensions. If you are using malloc in a concurrent
114 program, consider instead using ptmalloc, which is derived from
115 a version of this malloc. (See http://www.malloc.de).
117 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
118 This malloc can use unix sbrk or any emulation (invoked using
119 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
120 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
121 memory. On most unix systems, it tends to work best if both
122 MORECORE and MMAP are enabled. On Win32, it uses emulations
123 based on VirtualAlloc. It also uses common C library functions like memset.
126 Compliance: I believe it is compliant with the Single Unix Specification
127 (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably others as well.
130 * Overview of algorithms
132 This is not the fastest, most space-conserving, most portable, or
133 most tunable malloc ever written. However it is among the fastest
134 while also being among the most space-conserving, portable and
135 tunable. Consistent balance across these factors results in a good
136 general-purpose allocator for malloc-intensive programs.
138 In most ways, this malloc is a best-fit allocator. Generally, it
139 chooses the best-fitting existing chunk for a request, with ties
140 broken in approximately least-recently-used order. (This strategy
141 normally maintains low fragmentation.) However, for requests less
142 than 256 bytes, it deviates from best-fit when there is not an
143 exactly fitting available chunk by preferring to use space adjacent
144 to that used for the previous small request, as well as by breaking
145 ties in approximately most-recently-used order. (These enhance
146 locality of series of small allocations.) And for very large requests
147 (>= 256Kb by default), it relies on system memory mapping
148 facilities, if supported. (This helps avoid carrying around and
149 possibly fragmenting memory used only for large chunks.)
151 All operations (except malloc_stats and mallinfo) have execution
152 times that are bounded by a constant factor of the number of bits in
153 a size_t, not counting any clearing in calloc or copying in realloc,
154 or actions surrounding MORECORE and MMAP that have times
155 proportional to the number of non-contiguous regions returned by
156 system allocation routines, which is often just 1.
158 The implementation is not very modular and seriously overuses
159 macros. Perhaps someday all C compilers will do as good a job
160 inlining modular code as can now be done by brute-force expansion,
161 but for now, enough of them seem not to.
163 Some compilers issue a lot of warnings about code that is
164 dead/unreachable only on some platforms, and also about intentional
165 uses of negation on unsigned types. All known cases of each can be ignored.
168 For a longer but out of date high-level description, see
169 http://gee.cs.oswego.edu/dl/html/malloc.html
172 If MSPACES is defined, then in addition to malloc, free, etc.,
173 this file also defines mspace_malloc, mspace_free, etc. These
174 are versions of malloc routines that take an "mspace" argument
175 obtained using create_mspace, to control all internal bookkeeping.
176 If ONLY_MSPACES is defined, only these versions are compiled.
177 So if you would like to use this allocator for only some allocations,
178 and your system malloc for others, you can compile with
179 ONLY_MSPACES and then do something like...
180 static mspace mymspace = create_mspace(0,0); // for example
181 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
183 (Note: If you only need one instance of an mspace, you can instead
184 use "USE_DL_PREFIX" to relabel the global malloc.)
186 You can similarly create thread-local allocators by storing
187 mspaces as thread-locals. For example:
188 static __thread mspace tlms = 0;
189 void* tlmalloc(size_t bytes) {
190 if (tlms == 0) tlms = create_mspace(0, 0);
191 return mspace_malloc(tlms, bytes);
192 }
193 void tlfree(void* mem) { mspace_free(tlms, mem); }
195 Unless FOOTERS is defined, each mspace is completely independent.
196 You cannot allocate from one and free to another (although
197 conformance is only weakly checked, so usage errors are not always
198 caught). If FOOTERS is defined, then each chunk carries around a tag
199 indicating its originating mspace, and frees are directed to their originating spaces.
202 ------------------------- Compile-time options ---------------------------
204 Be careful in setting #define values for numerical constants of type
205 size_t. On some systems, literal values are not automatically extended
206 to size_t precision unless they are explicitly cast.
208 WIN32 default: defined if _WIN32 defined
209 Defining WIN32 sets up defaults for MS environment and compilers.
210 Otherwise defaults are for unix.
212 MALLOC_ALIGNMENT default: (size_t)8
213 Controls the minimum alignment for malloc'ed chunks. It must be a
214 power of two and at least 8, even on machines for which smaller
215 alignments would suffice. It may be defined as larger than this
216 though. Note however that code and data structures are optimized for
217 the case of 8-byte alignment.
219 MSPACES default: 0 (false)
220 If true, compile in support for independent allocation spaces.
221 This is only supported if HAVE_MMAP is true.
223 ONLY_MSPACES default: 0 (false)
224 If true, only compile in mspace versions, not regular versions.
226 USE_LOCKS default: 0 (false)
227 Causes each call to each public routine to be surrounded with
228 pthread or WIN32 mutex lock/unlock. (If set true, this can be
229 overridden on a per-mspace basis for mspace versions.)
231 FOOTERS default: 0
232 If true, provide extra checking and dispatching by placing
233 information in the footers of allocated chunks. This adds
234 space and time overhead.
236 INSECURE default: 0
237 If true, omit checks for usage errors and heap space overwrites.
239 USE_DL_PREFIX default: NOT defined
240 Causes compiler to prefix all public routines with the string 'dl'.
241 This can be useful when you only want to use this malloc in one part
242 of a program, using your regular system malloc elsewhere.
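As an illustrative sketch (not part of the original text), a program that
wants both allocators side by side might build this file with
USE_DL_PREFIX defined and then call the prefixed names directly:

    void* p = dlmalloc(100);   // served by this allocator
    void* q = malloc(100);     // served by the system allocator
    dlfree(p);
    free(q);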
244 ABORT default: defined as abort()
245 Defines how to abort on failed checks. On most systems, a failed
246 check cannot die with an "assert" or even print an informative
247 message, because the underlying print routines in turn call malloc,
248 which will fail again. Generally, the best policy is to simply call
249 abort(). It's not very useful to do more than this because many
250 errors due to overwriting will show up as address faults (null, odd
251 addresses etc) rather than malloc-triggered checks, so will also
252 abort. Also, most compilers know that abort() does not return, so
253 can better optimize code conditionally calling it.
255 PROCEED_ON_ERROR default: defined as 0 (false)
256 Controls whether detected bad addresses cause them to be bypassed
257 rather than aborting. If set, detected bad arguments to free and
258 realloc are ignored. And all bookkeeping information is zeroed out
259 upon a detected overwrite of freed heap space, thus losing the
260 ability to ever return it from malloc again, but enabling the
261 application to proceed. If PROCEED_ON_ERROR is defined, the
262 static variable malloc_corruption_error_count is compiled in
263 and can be examined to see if errors have occurred. This option
264 generates slower code than the default abort policy.
266 DEBUG default: NOT defined
267 The DEBUG setting is mainly intended for people trying to modify
268 this code or diagnose problems when porting to new platforms.
269 However, it may also be able to better isolate user errors than just
270 using runtime checks. The assertions in the check routines spell
271 out in more detail the assumptions and invariants underlying the
272 algorithms. The checking is fairly extensive, and will slow down
273 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
274 set will attempt to check every non-mmapped allocated and free chunk
275 in the course of computing the summaries.
277 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
278 Debugging assertion failures can be nearly impossible if your
279 version of the assert macro causes malloc to be called, which will
280 lead to a cascade of further failures, blowing the runtime stack.
281 ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
282 which will usually make debugging easier.
284 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
285 The action to take before "return 0" when malloc fails because
286 no more memory is available.
288 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
289 True if this system supports sbrk or an emulation of it.
291 MORECORE default: sbrk
292 The name of the sbrk-style system routine to call to obtain more
293 memory. See below for guidance on writing custom MORECORE
294 functions. The type of the argument to sbrk/MORECORE varies across
295 systems. It cannot be size_t, because it supports negative
296 arguments, so it is normally the signed type of the same width as
297 size_t (sometimes declared as "intptr_t"). It doesn't much matter
298 though. Internally, we only call it with arguments less than half
299 the max value of a size_t, which should work across all reasonable
300 possibilities, although sometimes generating compiler warnings. See
301 near the end of this file for guidelines for creating a custom version of MORECORE.
304 MORECORE_CONTIGUOUS default: 1 (true)
305 If true, take advantage of fact that consecutive calls to MORECORE
306 with positive arguments always return contiguous increasing
307 addresses. This is true of unix sbrk. It does not hurt too much to
308 set it true anyway, since malloc copes with non-contiguities.
309 Setting it false when MORECORE is definitely non-contiguous saves the time
310 (and possibly wasted space) it would otherwise take to discover this.
312 MORECORE_CANNOT_TRIM default: NOT defined
313 True if MORECORE cannot release space back to the system when given
314 negative arguments. This is generally necessary only if you are
315 using a hand-crafted MORECORE function that cannot handle negative arguments.
318 HAVE_MMAP default: 1 (true)
319 True if this system supports mmap or an emulation of it. If so, and
320 HAVE_MORECORE is not true, MMAP is used for all system
321 allocation. If set and HAVE_MORECORE is true as well, MMAP is
322 primarily used to directly allocate very large blocks. It is also
323 used as a backup strategy in cases where MORECORE fails to provide
324 space from system. Note: A single call to MUNMAP is assumed to be
325 able to unmap memory that may have been allocated using multiple calls
326 to MMAP, so long as they are adjacent.
328 HAVE_MREMAP default: 1 on linux, else 0
329 If true realloc() uses mremap() to re-allocate large blocks and
330 extend or shrink allocation spaces.
332 MMAP_CLEARS default: 1 on unix
333 True if mmap clears memory so calloc doesn't need to. This is true
334 for standard unix mmap using /dev/zero.
336 USE_BUILTIN_FFS default: 0 (i.e., not used)
337 Causes malloc to use the builtin ffs() function to compute indices.
338 Some compilers may recognize and intrinsify ffs to be faster than the
339 supplied C version. Also, the case of x86 using gcc is special-cased
340 to an asm instruction, so is already as fast as it can be, and so
341 this setting has no effect. (On most x86s, the asm version is only
342 slightly faster than the C version.)
344 malloc_getpagesize default: derive from system includes, or 4096.
345 The system page size. To the extent possible, this malloc manages
346 memory from the system in page-size units. This may be (and
347 usually is) a function rather than a constant. This is ignored
348 if WIN32, where page size is determined using GetSystemInfo during initialization.
351 USE_DEV_RANDOM default: 0 (i.e., not used)
352 Causes malloc to use /dev/random to initialize secure magic seed for
353 stamping footers. Otherwise, the current time is used.
355 NO_MALLINFO default: 0
356 If defined, don't compile "mallinfo". This can be a simple way
357 of dealing with mismatches between system declarations and your own.
360 MALLINFO_FIELD_TYPE default: size_t
361 The type of the fields in the mallinfo struct. This was originally
362 defined as "int" in SVID etc, but is more usefully defined as
363 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set
365 REALLOC_ZERO_BYTES_FREES default: not defined
366 This should be set if a call to realloc with zero bytes should
367 be the same as a call to free. Some people think it should. Otherwise,
368 since this malloc returns a unique pointer for malloc(0), so does realloc(p, 0).
371 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
372 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
373 LACKS_STDLIB_H default: NOT defined unless on WIN32
374 Define these if your system does not have these header files.
375 You might need to manually insert some of the declarations they provide.
377 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
378 system_info.dwAllocationGranularity in WIN32, otherwise 64K.
380 Also settable using mallopt(M_GRANULARITY, x)
381 The unit for allocating and deallocating memory from the system. On
382 most systems with contiguous MORECORE, there is no reason to
383 make this more than a page. However, systems with MMAP tend to
384 either require or encourage larger granularities. You can increase
385 this value to prevent system allocation functions from being called so
386 often, especially if they are slow. The value must be at least one
387 page and must be a power of two. Setting to 0 causes initialization
388 to either page size or win32 region size. (Note: In previous
389 versions of malloc, the equivalent of this option was called
392 DEFAULT_TRIM_THRESHOLD default: 2MB
393 Also settable using mallopt(M_TRIM_THRESHOLD, x)
394 The maximum amount of unused top-most memory to keep before
395 releasing via malloc_trim in free(). Automatic trimming is mainly
396 useful in long-lived programs using contiguous MORECORE. Because
397 trimming via sbrk can be slow on some systems, and can sometimes be
398 wasteful (in cases where programs immediately afterward allocate
399 more large chunks) the value should be high enough so that your
400 overall system performance would improve by releasing this much
401 memory. As a rough guide, you might set to a value close to the
402 average size of a process (program) running on your system.
403 Releasing this much memory would allow such a process to run in
404 memory. Generally, it is worth tuning trim thresholds when a
405 program undergoes phases where several large chunks are allocated
406 and released in ways that can reuse each other's storage, perhaps
407 mixed with phases where there are no such chunks at all. The trim
408 value must be greater than page size to have any useful effect. To
409 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
410 some people use of mallocing a huge space and then freeing it at
411 program startup, in an attempt to reserve system memory, doesn't
412 have the intended effect under automatic trimming, since that memory
413 will immediately be returned to the system.
415 DEFAULT_MMAP_THRESHOLD default: 256K
416 Also settable using mallopt(M_MMAP_THRESHOLD, x)
417 The request size threshold for using MMAP to directly service a
418 request. Requests of at least this size that cannot be allocated
419 using already-existing space will be serviced via mmap. (If enough
420 normal freed space already exists it is used instead.) Using mmap
421 segregates relatively large chunks of memory so that they can be
422 individually obtained and released from the host system. A request
423 serviced through mmap is never reused by any other request (at least
424 not directly; the system may just so happen to remap successive
425 requests to the same locations). Segregating space in this way has
426 the benefits that: Mmapped space can always be individually released
427 back to the system, which helps keep the system level memory demands
428 of a long-lived program low. Also, mapped memory doesn't become
429 `locked' between other chunks, as can happen with normally allocated
430 chunks, which means that even trimming via malloc_trim would not
431 release them. However, it has the disadvantage that the space
432 cannot be reclaimed, consolidated, and then used to service later
433 requests, as happens with normal chunks. The advantages of mmap
434 nearly always outweigh disadvantages for "large" chunks, but the
435 value of "large" may vary across systems. The default is an
436 empirically derived value that works well in most systems. You can
437 disable mmap by setting to MAX_SIZE_T.
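As an illustrative sketch (the values are arbitrary examples), a program
wanting a larger allocation unit and a higher mmap threshold could define,
before the defaults below are filled in:

    #define DEFAULT_GRANULARITY    ((size_t)128U * (size_t)1024U)
    #define DEFAULT_MMAP_THRESHOLD ((size_t)1024U * (size_t)1024U)

Note the explicit size_t casts, per the warning at the start of this
options section.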
441 // Wii / bootmii environment settings
443 #define malloc_getpagesize 8
444 #define ABORT {printf("malloc fail! file=%s line=%d\n", __FILE__, __LINE__); while (1);}
445 #define MALLOC_FAILURE_ACTION /**/
448 #include "bootmii_ppc.h"
451 extern unsigned int _sbrk_start, _sbrk_end;
453 void *sbrk(int incr) {
454 static unsigned int limit = 0;
455 // printf ("sbrk(%d) limit=%x\n", incr, limit);
456 if (limit == 0) limit = (unsigned int) &_sbrk_start;
457 if (incr < 0) return 0;
458 if ((limit + incr) > (unsigned int) &_sbrk_end) return 0;
459 void *retval = (void*)limit;
460 limit += incr;
461 // printf("Returning %p\n", retval);
462 return retval;
463 }
471 #define WIN32_LEAN_AND_MEAN
474 #define HAVE_MORECORE 0
475 #define LACKS_UNISTD_H
476 #define LACKS_SYS_PARAM_H
477 #define LACKS_SYS_MMAN_H
478 #define LACKS_STRING_H
479 #define LACKS_STRINGS_H
480 #define LACKS_SYS_TYPES_H
481 #define LACKS_ERRNO_H
482 #define MALLOC_FAILURE_ACTION
483 #define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
486 #if defined(DARWIN) || defined(_DARWIN)
487 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
488 #ifndef HAVE_MORECORE
489 #define HAVE_MORECORE 0
491 #endif /* HAVE_MORECORE */
494 #ifndef LACKS_SYS_TYPES_H
495 #include <sys/types.h> /* For size_t */
496 #endif /* LACKS_SYS_TYPES_H */
498 /* The maximum possible size_t value has all bits set */
499 #define MAX_SIZE_T (~(size_t)0)
502 #define ONLY_MSPACES 0
503 #endif /* ONLY_MSPACES */
507 #else /* ONLY_MSPACES */
509 #endif /* ONLY_MSPACES */
511 #ifndef MALLOC_ALIGNMENT
512 #define MALLOC_ALIGNMENT ((size_t)8U)
513 #endif /* MALLOC_ALIGNMENT */
518 #define ABORT abort()
520 #ifndef ABORT_ON_ASSERT_FAILURE
521 #define ABORT_ON_ASSERT_FAILURE 1
522 #endif /* ABORT_ON_ASSERT_FAILURE */
523 #ifndef PROCEED_ON_ERROR
524 #define PROCEED_ON_ERROR 0
525 #endif /* PROCEED_ON_ERROR */
528 #endif /* USE_LOCKS */
531 #endif /* INSECURE */
534 #endif /* HAVE_MMAP */
536 #define MMAP_CLEARS 1
537 #endif /* MMAP_CLEARS */
540 #define HAVE_MREMAP 1
542 #define HAVE_MREMAP 0
544 #endif /* HAVE_MREMAP */
545 #ifndef MALLOC_FAILURE_ACTION
546 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
547 #endif /* MALLOC_FAILURE_ACTION */
548 #ifndef HAVE_MORECORE
550 #define HAVE_MORECORE 0
551 #else /* ONLY_MSPACES */
552 #define HAVE_MORECORE 1
553 #endif /* ONLY_MSPACES */
554 #endif /* HAVE_MORECORE */
556 #define MORECORE_CONTIGUOUS 0
557 #else /* !HAVE_MORECORE */
559 #define MORECORE sbrk
560 #endif /* MORECORE */
561 #ifndef MORECORE_CONTIGUOUS
562 #define MORECORE_CONTIGUOUS 1
563 #endif /* MORECORE_CONTIGUOUS */
564 #endif /* HAVE_MORECORE */
565 #ifndef DEFAULT_GRANULARITY
566 #if MORECORE_CONTIGUOUS
567 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
568 #else /* MORECORE_CONTIGUOUS */
569 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
570 #endif /* MORECORE_CONTIGUOUS */
571 #endif /* DEFAULT_GRANULARITY */
572 #ifndef DEFAULT_TRIM_THRESHOLD
573 #ifndef MORECORE_CANNOT_TRIM
574 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
575 #else /* MORECORE_CANNOT_TRIM */
576 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
577 #endif /* MORECORE_CANNOT_TRIM */
578 #endif /* DEFAULT_TRIM_THRESHOLD */
579 #ifndef DEFAULT_MMAP_THRESHOLD
581 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
582 #else /* HAVE_MMAP */
583 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
584 #endif /* HAVE_MMAP */
585 #endif /* DEFAULT_MMAP_THRESHOLD */
586 #ifndef USE_BUILTIN_FFS
587 #define USE_BUILTIN_FFS 0
588 #endif /* USE_BUILTIN_FFS */
589 #ifndef USE_DEV_RANDOM
590 #define USE_DEV_RANDOM 0
591 #endif /* USE_DEV_RANDOM */
593 #define NO_MALLINFO 0
594 #endif /* NO_MALLINFO */
595 #ifndef MALLINFO_FIELD_TYPE
596 #define MALLINFO_FIELD_TYPE size_t
597 #endif /* MALLINFO_FIELD_TYPE */
600 mallopt tuning options. SVID/XPG defines four standard parameter
601 numbers for mallopt, normally defined in malloc.h. None of these
602 are used in this malloc, so setting them has no effect. But this
603 malloc does support the following options.
606 #define M_TRIM_THRESHOLD (-1)
607 #define M_GRANULARITY (-2)
608 #define M_MMAP_THRESHOLD (-3)
610 /* ------------------------ Mallinfo declarations ------------------------ */
614 This version of malloc supports the standard SVID/XPG mallinfo
615 routine that returns a struct containing usage properties and
616 statistics. It should work on any system that has a
617 /usr/include/malloc.h defining struct mallinfo. The main
618 declaration needed is the mallinfo struct that is returned (by-copy)
619 by mallinfo(). The mallinfo struct contains a bunch of fields that
620 are not even meaningful in this version of malloc. These fields
621 are instead filled by mallinfo() with other numbers that might be of interest.
624 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
625 /usr/include/malloc.h file that includes a declaration of struct
626 mallinfo. If so, it is included; else a compliant version is
627 declared below. These must be precisely the same for mallinfo() to
628 work. The original SVID version of this struct, defined on most
629 systems with mallinfo, declares all fields as ints. But some others
630 define as unsigned long. If your system defines the fields using a
631 type of different width than listed here, you MUST #include your
632 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
635 /* #define HAVE_USR_INCLUDE_MALLOC_H */
637 #ifdef HAVE_USR_INCLUDE_MALLOC_H
638 #include "/usr/include/malloc.h"
639 #else /* HAVE_USR_INCLUDE_MALLOC_H */
642 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
643 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
644 MALLINFO_FIELD_TYPE smblks; /* always 0 */
645 MALLINFO_FIELD_TYPE hblks; /* always 0 */
646 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
647 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
648 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
649 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
650 MALLINFO_FIELD_TYPE fordblks; /* total free space */
651 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
654 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
655 #endif /* NO_MALLINFO */
659 #endif /* __cplusplus */
663 /* ------------------- Declarations of public routines ------------------- */
665 #ifndef USE_DL_PREFIX
666 #define dlcalloc calloc
668 #define dlmalloc malloc
669 #define dlmemalign memalign
670 #define dlrealloc realloc
671 #define dlvalloc valloc
672 #define dlpvalloc pvalloc
673 #define dlmallinfo mallinfo
674 #define dlmallopt mallopt
675 #define dlmalloc_trim malloc_trim
676 #define dlmalloc_stats malloc_stats
677 #define dlmalloc_usable_size malloc_usable_size
678 #define dlmalloc_footprint malloc_footprint
679 #define dlmalloc_max_footprint malloc_max_footprint
680 #define dlindependent_calloc independent_calloc
681 #define dlindependent_comalloc independent_comalloc
682 #endif /* USE_DL_PREFIX */
686 malloc(size_t n)
687 Returns a pointer to a newly allocated chunk of at least n bytes, or
688 null if no space is available, in which case errno is set to ENOMEM
691 If n is zero, malloc returns a minimum-sized chunk. (The minimum
692 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
693 systems.) Note that size_t is an unsigned type, so calls with
694 arguments that would be negative if signed are interpreted as
695 requests for huge amounts of space, which will often fail. The
696 maximum supported value of n differs across systems, but is in all
697 cases less than the maximum representable value of a size_t.
699 void* dlmalloc(size_t);
702 free(void* p)
703 Releases the chunk of memory pointed to by p, which had been previously
704 allocated using malloc or a related routine such as realloc.
705 It has no effect if p is null. If p was not malloced or already
706 freed, free(p) will by default cause the current program to abort.
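For example (an illustrative sketch only, not part of the original
documentation), a typical checked call sequence is:

    void* p = malloc(1024);
    if (p == 0) {
      // allocation failed; MALLOC_FAILURE_ACTION normally sets errno to ENOMEM
      handle_out_of_memory();          // hypothetical error handler
    }
    // ... use p ...
    free(p);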
711 calloc(size_t n_elements, size_t element_size);
712 Returns a pointer to n_elements * element_size bytes, with all locations set to zero.
715 void* dlcalloc(size_t, size_t);
718 realloc(void* p, size_t n)
719 Returns a pointer to a chunk of size n that contains the same data
720 as does chunk p up to the minimum of (n, p's size) bytes, or null
721 if no space is available.
723 The returned pointer may or may not be the same as p. The algorithm
724 prefers extending p in most cases when possible, otherwise it
725 employs the equivalent of a malloc-copy-free sequence.
727 If p is null, realloc is equivalent to malloc.
729 If space is not available, realloc returns null, errno is set (if on
730 ANSI) and p is NOT freed.
732 If n is for fewer bytes than already held by p, the newly unused
733 space is lopped off and freed if possible. realloc with a size
734 argument of zero (re)allocates a minimum-sized chunk.
736 The old unix realloc convention of allowing the last-free'd chunk
737 to be used as an argument to realloc is not supported.
740 void* dlrealloc(void*, size_t);
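/*
  An illustrative sketch (not part of the original documentation) of the
  usual safe-realloc pattern, relying on the guarantee above that p is
  NOT freed when realloc fails:

    void* tmp = realloc(p, newsize);
    if (tmp == 0)
      handle_failure(p);   // hypothetical handler; p is still valid here
    else
      p = tmp;
*/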
743 memalign(size_t alignment, size_t n);
744 Returns a pointer to a newly allocated chunk of n bytes, aligned
745 in accord with the alignment argument.
747 The alignment argument should be a power of two. If the argument is
748 not a power of two, the nearest greater power is used.
749 8-byte alignment is guaranteed by normal malloc calls, so don't
750 bother calling memalign with an argument of 8 or less.
752 Overreliance on memalign is a sure way to fragment space.
754 void* dlmemalign(size_t, size_t);
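/*
  Illustrative sketch (64 is an arbitrary power-of-two choice): obtaining
  and releasing a 64-byte-aligned block:

    void* buf = memalign(64, 1000);
    if (buf != 0) {
      assert(((size_t)buf & 63) == 0);   // aligned as requested
      free(buf);
    }
*/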
757 valloc(size_t n);
758 Equivalent to memalign(pagesize, n), where pagesize is the page
759 size of the system. If the pagesize is unknown, 4096 is used.
761 void* dlvalloc(size_t);
764 mallopt(int parameter_number, int parameter_value)
765 Sets tunable parameters. The format is to provide a
766 (parameter-number, parameter-value) pair. mallopt then sets the
767 corresponding parameter to the argument value if it can (i.e., so
768 long as the value is meaningful), and returns 1 if successful else
769 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
770 normally defined in malloc.h. None of these are used in this malloc,
771 so setting them has no effect. But this malloc also supports other
772 options in mallopt. See below for details. Briefly, supported
773 parameters are as follows (listed defaults are for "typical" configurations):
776 Symbol param # default allowed param values
777 M_TRIM_THRESHOLD -1 2*1024*1024 any (MAX_SIZE_T disables)
778 M_GRANULARITY -2 page size any power of 2 >= page size
779 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
781 int dlmallopt(int, int);
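/*
  Illustrative sketch of runtime tuning with the parameters listed above
  (the values are arbitrary examples, not recommendations):

    mallopt(M_TRIM_THRESHOLD, 1024*1024);  // trim when top free space exceeds 1MB
    mallopt(M_MMAP_THRESHOLD, 512*1024);   // mmap requests of 512K and up
    mallopt(M_GRANULARITY,    64*1024);    // obtain system memory 64K at a time
*/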
784 malloc_footprint();
785 Returns the number of bytes obtained from the system. The total
786 number of bytes allocated by malloc, realloc etc., is less than this
787 value. Unlike mallinfo, this function returns only a precomputed
788 result, so can be called frequently to monitor memory consumption.
789 Even if locks are otherwise defined, this function does not use them,
790 so results might not be up to date.
792 size_t dlmalloc_footprint(void);
795 malloc_max_footprint();
796 Returns the maximum number of bytes obtained from the system. This
797 value will be greater than current footprint if deallocated space
798 has been reclaimed by the system. The peak number of bytes allocated
799 by malloc, realloc etc., is less than this value. Unlike mallinfo,
800 this function returns only a precomputed result, so can be called
801 frequently to monitor memory consumption. Even if locks are
802 otherwise defined, this function does not use them, so results might not be up to date.
805 size_t dlmalloc_max_footprint(void);
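/*
  Illustrative sketch: sampling both values to monitor memory use
  (report_usage is a hypothetical logging hook):

    size_t now  = malloc_footprint();
    size_t peak = malloc_max_footprint();
    report_usage(now, peak);
*/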
809 mallinfo()
810 Returns (by copy) a struct containing various summary statistics:
812 arena: current total non-mmapped bytes allocated from system
813 ordblks: the number of free chunks
815 hblks: current number of mmapped regions
816 hblkhd: total bytes held in mmapped regions
817 usmblks: the maximum total allocated space. This will be greater
818 than current total if trimming has occurred.
820 uordblks: current total allocated space (normal or mmapped)
821 fordblks: total free space
822 keepcost: the maximum number of bytes that could ideally be released
823 back to system via malloc_trim. ("ideally" means that
824 it ignores page restrictions etc.)
826 Because these fields are ints, but internal bookkeeping may
827 be kept as longs, the reported values may wrap around zero and thus be inaccurate.
830 struct mallinfo dlmallinfo(void);
831 #endif /* NO_MALLINFO */
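/*
  Illustrative sketch (meaningful only when NO_MALLINFO is 0): printing a
  few of the fields described above:

    struct mallinfo mi = mallinfo();
    printf("system bytes: %lu, in use: %lu, free: %lu\n",
           (unsigned long)mi.arena, (unsigned long)mi.uordblks,
           (unsigned long)mi.fordblks);
*/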
834 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
836 independent_calloc is similar to calloc, but instead of returning a
837 single cleared space, it returns an array of pointers to n_elements
838 independent elements that can hold contents of size elem_size, each
839 of which starts out cleared, and can be independently freed,
840 realloc'ed etc. The elements are guaranteed to be adjacently
841 allocated (this is not guaranteed to occur with multiple callocs or
842 mallocs), which may also improve cache locality in some
845 The "chunks" argument is optional (i.e., may be null, which is
846 probably the most typical usage). If it is null, the returned array
847 is itself dynamically allocated and should also be freed when it is
848 no longer needed. Otherwise, the chunks array must be of at least
849 n_elements in length. It is filled in with the pointers to the
852 In either case, independent_calloc returns this pointer array, or
853 null if the allocation failed. If n_elements is zero and "chunks"
854 is null, it returns a chunk representing an array with zero elements
855 (which should be freed if not wanted).
857 Each element must be individually freed when it is no longer
858 needed. If you'd like to instead be able to free all at once, you
859 should instead use regular calloc and assign pointers into this
860 space to represent elements. (In this case though, you cannot
861 independently free elements.)
863 independent_calloc simplifies and speeds up implementations of many
864 kinds of pools. It may also be useful when constructing large data
865 structures that initially have a fixed number of fixed-sized nodes,
866 but the number is not known at compile time, and some of the nodes
867 may later need to be freed. For example:
869 struct Node { int item; struct Node* next; };
871 struct Node* build_list() {
872 struct Node** pool;
873 int i, n = read_number_of_nodes_needed();
874 if (n <= 0) return 0;
875 pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
876 if (pool == 0) die();
877 // organize into a linked list...
878 struct Node* first = pool[0];
879 for (i = 0; i < n-1; ++i)
880 pool[i]->next = pool[i+1];
881 free(pool); // Can now free the array (or not, if it is needed later)
882 return first;
883 }
885 void** dlindependent_calloc(size_t, size_t, void**);
888 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
890 independent_comalloc allocates, all at once, a set of n_elements
891 chunks with sizes indicated in the "sizes" array. It returns
892 an array of pointers to these elements, each of which can be
893 independently freed, realloc'ed etc. The elements are guaranteed to
894 be adjacently allocated (this is not guaranteed to occur with
895 multiple callocs or mallocs), which may also improve cache locality
896 in some applications.
898 The "chunks" argument is optional (i.e., may be null). If it is null
899 the returned array is itself dynamically allocated and should also
900 be freed when it is no longer needed. Otherwise, the chunks array
901 must be of at least n_elements in length. It is filled in with the
902 pointers to the chunks.
904 In either case, independent_comalloc returns this pointer array, or
905 null if the allocation failed. If n_elements is zero and chunks is
906 null, it returns a chunk representing an array with zero elements
907 (which should be freed if not wanted).
909 Each element must be individually freed when it is no longer
910 needed. If you'd like to instead be able to free all at once, you
911 should instead use a single regular malloc, and assign pointers at
912 particular offsets in the aggregate space. (In this case though, you
913 cannot independently free elements.)
915 independent_comalloc differs from independent_calloc in that each
916 element may have a different size, and also that it does not
917 automatically clear elements.
919 independent_comalloc can be used to speed up allocation in cases
920 where several structs or objects must always be allocated at the
921 same time. For example:
926 void send_message(char* msg) {
927 int msglen = strlen(msg);
928 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
929 void* chunks[3];
930 if (independent_comalloc(3, sizes, chunks) == 0)
931 die();
932 struct Head* head = (struct Head*)(chunks[0]);
933 char* body = (char*)(chunks[1]);
934 struct Foot* foot = (struct Foot*)(chunks[2]);
938 In general though, independent_comalloc is worth using only for
939 larger values of n_elements. For small values, you probably won't
940 detect enough difference from series of malloc calls to bother.
942 Overuse of independent_comalloc can increase overall memory usage,
943 since it cannot reuse existing noncontiguous small chunks that
944 might be available for some of the elements.
946 void** dlindependent_comalloc(size_t, size_t*, void**);
950 pvalloc(size_t n);
951 Equivalent to valloc(minimum-page-that-holds(n)), that is,
952 round up n to nearest pagesize.
954 void* dlpvalloc(size_t);
957 malloc_trim(size_t pad);
959 If possible, gives memory back to the system (via negative arguments
960 to sbrk) if there is unused memory at the `high' end of the malloc
961 pool or in unused MMAP segments. You can call this after freeing
962 large blocks of memory to potentially reduce the system-level memory
963 requirements of a program. However, it cannot guarantee to reduce
964 memory. Under some allocation patterns, some large free blocks of
965 memory will be locked between two used chunks, so they cannot be
966 given back to the system.
968 The `pad' argument to malloc_trim represents the amount of free
969 trailing space to leave untrimmed. If this argument is zero, only
970 the minimum amount of memory to maintain internal data structures
971 will be left. Non-zero arguments can be supplied to maintain enough
972 trailing space to service future expected allocations without having
973 to re-obtain memory from the system.
975 Malloc_trim returns 1 if it actually released any memory, else 0.
977 int dlmalloc_trim(size_t);
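/*
  Illustrative sketch: after freeing a large batch of memory, ask the
  allocator to return unused top space to the system, keeping 64K of
  slack for future requests (the pad value is an arbitrary example;
  release_cached_buffers is hypothetical application code):

    release_cached_buffers();
    if (malloc_trim((size_t)64 * 1024))
      ;  // some memory was actually returned to the system
*/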
980 malloc_usable_size(void* p);
982 Returns the number of bytes you can actually use in
983 an allocated chunk, which may be more than you requested (although
984 often not) due to alignment and minimum size constraints.
985 You can use this many bytes without worrying about
986 overwriting other allocated objects. This is not a particularly great
987 programming practice. malloc_usable_size can be more useful in
988 debugging and assertions, for example:
990 p = malloc(n);
991 assert(malloc_usable_size(p) >= 256);
993 size_t dlmalloc_usable_size(void*);
996 malloc_stats();
997 Prints on stderr the amount of space obtained from the system (both
998 via sbrk and mmap), the maximum amount (which may be more than
999 current if malloc_trim and/or munmap got called), and the current
1000 number of bytes allocated via malloc (or realloc, etc) but not yet
1001 freed. Note that this is the number of bytes allocated, not the
1002 number requested. It will be larger than the number requested
1003 because of alignment and bookkeeping overhead. Because it includes
1004 alignment wastage as being in use, this figure may be greater than
1005 zero even when no user-level chunks are allocated.
1007 The reported current and maximum system memory can be inaccurate if
1008 a program makes other calls to system memory allocation functions
1009 (normally sbrk) outside of malloc.
1011 malloc_stats prints only the most commonly interesting statistics.
1012 More information can be obtained by calling mallinfo.
1014 void dlmalloc_stats(void);
1016 #endif /* ONLY_MSPACES */
1021 mspace is an opaque type representing an independent
1022 region of space that supports mspace_malloc, etc.
1024 typedef void* mspace;
1027 create_mspace creates and returns a new independent space with the
1028 given initial capacity, or, if 0, the default granularity size. It
1029 returns null if there is no system memory available to create the
1030 space. If argument locked is non-zero, the space uses a separate
1031 lock to control access. The capacity of the space will grow
1032 dynamically as needed to service mspace_malloc requests. You can
1033 control the sizes of incremental increases of this space by
1034 compiling with a different DEFAULT_GRANULARITY or dynamically
1035 setting with mallopt(M_GRANULARITY, value).
1037 mspace create_mspace(size_t capacity, int locked);
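/*
  Illustrative sketch of an mspace lifecycle (sizes are arbitrary):

    mspace ms = create_mspace(0, 0);   // default capacity, no locking
    if (ms != 0) {
      void* p = mspace_malloc(ms, 128);
      mspace_free(ms, p);
      destroy_mspace(ms);              // releases all of its memory at once
    }
*/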
1040 destroy_mspace destroys the given space, and attempts to return all
1041 of its memory back to the system, returning the total number of
1042 bytes freed. After destruction, the results of access to all memory
1043 used by the space become undefined.
1045 size_t destroy_mspace(mspace msp);
1048 create_mspace_with_base uses the memory supplied as the initial base
1049 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1050 space is used for bookkeeping, so the capacity must be at least this
1051 large. (Otherwise 0 is returned.) When this initial space is
1052 exhausted, additional memory will be obtained from the system.
1053 Destroying this space will deallocate all additionally allocated
1054 space (if possible) but not the initial base.
1056 mspace create_mspace_with_base(void* base, size_t capacity, int locked);
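/*
  Illustrative sketch: carving an mspace out of a statically allocated
  arena (the 1MB size is an arbitrary example; as noted above, part of
  it is consumed by bookkeeping):

    static char arena[1024 * 1024];
    mspace ms = create_mspace_with_base(arena, sizeof(arena), 0);
    if (ms != 0) {
      void* p = mspace_malloc(ms, 256);
      mspace_free(ms, p);
    }
*/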
1059 mspace_malloc behaves as malloc, but operates within the given space.
1062 void* mspace_malloc(mspace msp, size_t bytes);
1065 mspace_free behaves as free, but operates within the given space.
1068 If compiled with FOOTERS==1, mspace_free is not actually needed.
1069 free may be called instead of mspace_free because freed chunks from
1070 any space are handled by their originating spaces.
1072 void mspace_free(mspace msp, void* mem);
1075 mspace_realloc behaves as realloc, but operates within the given space.
1078 If compiled with FOOTERS==1, mspace_realloc is not actually
1079 needed. realloc may be called instead of mspace_realloc because
1080 realloced chunks from any space are handled by their originating spaces.
1083 void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1086 mspace_calloc behaves as calloc, but operates within the given space.
1089 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1092 mspace_memalign behaves as memalign, but operates within the given space.
1095 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1098 mspace_independent_calloc behaves as independent_calloc, but
1099 operates within the given space.
1101 void** mspace_independent_calloc(mspace msp, size_t n_elements,
1102 size_t elem_size, void* chunks[]);
1105 mspace_independent_comalloc behaves as independent_comalloc, but
1106 operates within the given space.
1108 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1109 size_t sizes[], void* chunks[]);
1112 mspace_footprint() returns the number of bytes obtained from the
1113 system for this space.
1115 size_t mspace_footprint(mspace msp);
1118 mspace_max_footprint() returns the peak number of bytes obtained from the
1119 system for this space.
1121 size_t mspace_max_footprint(mspace msp);
1126 mspace_mallinfo behaves as mallinfo, but reports properties of the given space.
1129 struct mallinfo mspace_mallinfo(mspace msp);
1130 #endif /* NO_MALLINFO */
1133 mspace_malloc_stats behaves as malloc_stats, but reports
1134 properties of the given space.
1136 void mspace_malloc_stats(mspace msp);
1139 mspace_trim behaves as malloc_trim, but
1140 operates within the given space.
1142 int mspace_trim(mspace msp, size_t pad);
1145 An alias for mallopt.
1147 int mspace_mallopt(int, int);
1149 #endif /* MSPACES */
1152 }; /* end of extern "C" */
1153 #endif /* __cplusplus */
1156 ========================================================================
1157 To make a fully customizable malloc.h header file, cut everything
1158 above this line, put into file malloc.h, edit to suit, and #include it
1159 on the next line, as well as in programs that use this malloc.
1160 ========================================================================
1163 /* #include "malloc.h" */
1165 /*------------------------------ internal #includes ---------------------- */
1168 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1171 //#include <stdio.h> /* for printing in malloc_stats */
1173 #ifndef LACKS_ERRNO_H
1174 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1175 #endif /* LACKS_ERRNO_H */
1177 #include <time.h> /* for magic initialization */
1178 #endif /* FOOTERS */
1179 #ifndef LACKS_STDLIB_H
1180 #include <stdlib.h> /* for abort() */
1181 #endif /* LACKS_STDLIB_H */
1183 #if ABORT_ON_ASSERT_FAILURE
1184 #define assert(x) if(!(x)) ABORT
1185 #else /* ABORT_ON_ASSERT_FAILURE */
1186 //#include <assert.h>
1187 #define assert(x) if(!(x)) {printf("Assert failed: \"" #x "\"\n"); while(1);}
1188 #endif /* ABORT_ON_ASSERT_FAILURE */
1192 #ifndef LACKS_STRING_H
1193 #include <string.h> /* for memset etc */
1194 #endif /* LACKS_STRING_H */
1196 #ifndef LACKS_STRINGS_H
1197 #include <strings.h> /* for ffs */
1198 #endif /* LACKS_STRINGS_H */
1199 #endif /* USE_BUILTIN_FFS */
1201 #ifndef LACKS_SYS_MMAN_H
1202 #include <sys/mman.h> /* for mmap */
1203 #endif /* LACKS_SYS_MMAN_H */
1204 #ifndef LACKS_FCNTL_H
1206 #endif /* LACKS_FCNTL_H */
1207 #endif /* HAVE_MMAP */
1209 #ifndef LACKS_UNISTD_H
1210 #include <unistd.h> /* for sbrk */
1211 #else /* LACKS_UNISTD_H */
1212 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1213 extern void* sbrk(int);
1214 #endif /* FreeBSD etc */
1215 #endif /* LACKS_UNISTD_H */
1216 #endif /* HAVE_MMAP */
1219 #ifndef malloc_getpagesize
1220 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1221 # ifndef _SC_PAGE_SIZE
1222 # define _SC_PAGE_SIZE _SC_PAGESIZE
1225 # ifdef _SC_PAGE_SIZE
1226 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1228 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1229 extern size_t getpagesize();
1230 # define malloc_getpagesize getpagesize()
1232 # ifdef WIN32 /* use supplied emulation of getpagesize */
1233 # define malloc_getpagesize getpagesize()
1235 # ifndef LACKS_SYS_PARAM_H
1236 # include <sys/param.h>
1238 # ifdef EXEC_PAGESIZE
1239 # define malloc_getpagesize EXEC_PAGESIZE
1243 # define malloc_getpagesize NBPG
1245 # define malloc_getpagesize (NBPG * CLSIZE)
1249 # define malloc_getpagesize NBPC
1252 # define malloc_getpagesize PAGESIZE
1253 # else /* just guess */
1254 # define malloc_getpagesize ((size_t)4096U)
1265 /* ------------------- size_t and alignment properties -------------------- */
1267 /* The byte and bit size of a size_t */
1268 #define SIZE_T_SIZE (sizeof(size_t))
1269 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1271 /* Some constants coerced to size_t */
1272 /* Annoying but necessary to avoid errors on some platforms */
1273 #define SIZE_T_ZERO ((size_t)0)
1274 #define SIZE_T_ONE ((size_t)1)
1275 #define SIZE_T_TWO ((size_t)2)
1276 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1277 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1278 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1279 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1281 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1282 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1284 /* True if address a has acceptable alignment */
1285 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1287 /* the number of bytes to offset an address to align it */
1288 #define align_offset(A)\
1289 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1290 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
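/*
  For example, with the default 8-byte MALLOC_ALIGNMENT: an address whose
  low three bits are 0x5 needs align_offset = (8 - 5) & 7 = 3 bytes of
  padding, while an already 8-byte-aligned address needs 0.
*/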
1292 /* -------------------------- MMAP preliminaries ------------------------- */
1295 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1296 checks to fail so compiler optimizer can delete code rather than
1297 using so many "#if"s.
1301 /* MORECORE and MMAP must return MFAIL on failure */
1302 #define MFAIL ((void*)(MAX_SIZE_T))
1303 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1306 #define IS_MMAPPED_BIT (SIZE_T_ZERO)
1307 #define USE_MMAP_BIT (SIZE_T_ZERO)
1308 #define CALL_MMAP(s) MFAIL
1309 #define CALL_MUNMAP(a, s) (-1)
1310 #define DIRECT_MMAP(s) MFAIL
1312 #else /* HAVE_MMAP */
1313 #define IS_MMAPPED_BIT (SIZE_T_ONE)
1314 #define USE_MMAP_BIT (SIZE_T_ONE)
1317 #define CALL_MUNMAP(a, s) munmap((a), (s))
1318 #define MMAP_PROT (PROT_READ|PROT_WRITE)
1319 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1320 #define MAP_ANONYMOUS MAP_ANON
1321 #endif /* MAP_ANON */
1322 #ifdef MAP_ANONYMOUS
1323 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1324 #define CALL_MMAP(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1325 #else /* MAP_ANONYMOUS */
1327 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1328 is unlikely to be needed, but is supplied just in case.
1330 #define MMAP_FLAGS (MAP_PRIVATE)
1331 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1332 #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
1333 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1334 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1335 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1336 #endif /* MAP_ANONYMOUS */
1338 #define DIRECT_MMAP(s) CALL_MMAP(s)
1341 /* Win32 MMAP via VirtualAlloc */
1342 static void* win32mmap(size_t size) {
1343 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1344 return (ptr != 0)? ptr: MFAIL;
1347 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1348 static void* win32direct_mmap(size_t size) {
1349 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1351 return (ptr != 0)? ptr: MFAIL;
1354 /* This function supports releasing coalesced segments */
1355 static int win32munmap(void* ptr, size_t size) {
1356 MEMORY_BASIC_INFORMATION minfo;
1359 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1361 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1362 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1364 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1366 cptr += minfo.RegionSize;
1367 size -= minfo.RegionSize;
1372 #define CALL_MMAP(s) win32mmap(s)
1373 #define CALL_MUNMAP(a, s) win32munmap((a), (s))
1374 #define DIRECT_MMAP(s) win32direct_mmap(s)
1376 #endif /* HAVE_MMAP */
1378 #if HAVE_MMAP && HAVE_MREMAP
1379 #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1380 #else /* HAVE_MMAP && HAVE_MREMAP */
1381 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1382 #endif /* HAVE_MMAP && HAVE_MREMAP */
1385 #define CALL_MORECORE(S) MORECORE(S)
1386 #else /* HAVE_MORECORE */
1387 #define CALL_MORECORE(S) MFAIL
1388 #endif /* HAVE_MORECORE */
1390 /* mstate bit set if contiguous morecore disabled or failed */
1391 #define USE_NONCONTIGUOUS_BIT (4U)
1393 /* segment bit set in create_mspace_with_base */
1394 #define EXTERN_BIT (8U)
1397 /* --------------------------- Lock preliminaries ------------------------ */
1402 When locks are defined, there are up to two global locks:
1404 * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
1405 MORECORE. In many cases sys_alloc requires two calls, which should
1406 not be interleaved with calls by other threads. This does not
1407 protect against direct calls to MORECORE by other threads not
1408 using this lock, so there is still code to cope as best we can on detection and recovery.
1411 * magic_init_mutex ensures that mparams.magic and other
1412 unique mparams values are initialized only once.
1416 /* By default use posix locks */
1417 #include <pthread.h>
1418 #define MLOCK_T pthread_mutex_t
1419 #define INITIAL_LOCK(l) pthread_mutex_init(l, NULL)
1420 #define ACQUIRE_LOCK(l) pthread_mutex_lock(l)
1421 #define RELEASE_LOCK(l) pthread_mutex_unlock(l)
1424 static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
1425 #endif /* HAVE_MORECORE */
1427 static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
1431 Because lock-protected regions have bounded times, and there
1432 are no recursive lock calls, we can use simple spinlocks.
1435 #define MLOCK_T long
1436 static int win32_acquire_lock (MLOCK_T *sl) {
1438 #ifdef InterlockedCompareExchangePointer
1439 if (!InterlockedCompareExchange(sl, 1, 0))
1441 #else /* Use older void* version */
1442 if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
1444 #endif /* InterlockedCompareExchangePointer */
1449 static void win32_release_lock (MLOCK_T *sl) {
1450 InterlockedExchange (sl, 0);
1453 #define INITIAL_LOCK(l) *(l)=0
1454 #define ACQUIRE_LOCK(l) win32_acquire_lock(l)
1455 #define RELEASE_LOCK(l) win32_release_lock(l)
1457 static MLOCK_T morecore_mutex;
1458 #endif /* HAVE_MORECORE */
1459 static MLOCK_T magic_init_mutex;
1462 #define USE_LOCK_BIT (2U)
1463 #else /* USE_LOCKS */
1464 #define USE_LOCK_BIT (0U)
1465 #define INITIAL_LOCK(l)
1466 #endif /* USE_LOCKS */
1468 #if USE_LOCKS && HAVE_MORECORE
1469 #define ACQUIRE_MORECORE_LOCK() ACQUIRE_LOCK(&morecore_mutex);
1470 #define RELEASE_MORECORE_LOCK() RELEASE_LOCK(&morecore_mutex);
1471 #else /* USE_LOCKS && HAVE_MORECORE */
1472 #define ACQUIRE_MORECORE_LOCK()
1473 #define RELEASE_MORECORE_LOCK()
1474 #endif /* USE_LOCKS && HAVE_MORECORE */
1477 #define ACQUIRE_MAGIC_INIT_LOCK() ACQUIRE_LOCK(&magic_init_mutex);
1478 #define RELEASE_MAGIC_INIT_LOCK() RELEASE_LOCK(&magic_init_mutex);
1479 #else /* USE_LOCKS */
1480 #define ACQUIRE_MAGIC_INIT_LOCK()
1481 #define RELEASE_MAGIC_INIT_LOCK()
1482 #endif /* USE_LOCKS */
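/*
  Illustrative sketch (not the file's actual internal code) of the pattern
  these macros support: bracketing the one-time initialization of mparams
  and paired MORECORE calls.

    ACQUIRE_MAGIC_INIT_LOCK();
    // ... initialize mparams.magic and friends exactly once ...
    RELEASE_MAGIC_INIT_LOCK();

    ACQUIRE_MORECORE_LOCK();
    // ... a sequence of CALL_MORECORE() calls that must not interleave ...
    RELEASE_MORECORE_LOCK();
*/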
1485 /* ----------------------- Chunk representations ------------------------ */
1488 (The following includes lightly edited explanations by Colin Plumb.)
1490 The malloc_chunk declaration below is misleading (but accurate and
1491 necessary). It declares a "view" into memory allowing access to
1492 necessary fields at known offsets from a given base.
1494 Chunks of memory are maintained using a `boundary tag' method as
1495 originally described by Knuth. (See the paper by Paul Wilson
1496 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1497 techniques.) Sizes of free chunks are stored both in the front of
1498 each chunk and at the end. This makes consolidating fragmented
1499 chunks into bigger chunks fast. The head fields also hold bits
1500 representing whether chunks are free or in use.
1502 Here are some pictures to make it clearer. They are "exploded" to
1503 show that the state of a chunk can be thought of as extending from
1504 the high 31 bits of the head field of its header through the
1505 prev_foot and PINUSE_BIT bit of the following chunk header.
1507 A chunk that's in use looks like:
1509 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1510 | Size of previous chunk (if P = 1) |
1511 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1512 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1513 | Size of this chunk 1| +-+
1514 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1520 +- size - sizeof(size_t) available payload bytes -+
1524 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1525 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1526 | Size of next chunk (may or may not be in use) | +-+
1527 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1529 And if it's free, it looks like this:
1532 | User payload (must be in use, or we would have merged!) |
1533 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1534 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1535 | Size of this chunk 0| +-+
1536 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1538 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1540 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1542 +- size - sizeof(struct chunk) unused bytes -+
1544 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1545 | Size of this chunk |
1546 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1547 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1548 | Size of next chunk (must be in use, or we would have merged)| +-+
1549 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1553 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1556 Note that since we always merge adjacent free chunks, the chunks
1557 adjacent to a free chunk must be in use.
1559 Given a pointer to a chunk (which can be derived trivially from the
1560 payload pointer) we can, in O(1) time, find out whether the adjacent
1561 chunks are free, and if so, unlink them from the lists that they
1562 are on and merge them with the current chunk.
1564 Chunks always begin on even word boundaries, so the mem portion
1565 (which is returned to the user) is also on an even word boundary, and
1566 thus at least double-word aligned.
1568 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1569 chunk size (which is always a multiple of two words), is an in-use
1570 bit for the *previous* chunk. If that bit is *clear*, then the
1571 word before the current chunk size contains the previous chunk
1572 size, and can be used to find the front of the previous chunk.
1573 The very first chunk allocated always has this bit set, preventing
1574 access to non-existent (or non-owned) memory. If pinuse is set for
1575 any given chunk, then you CANNOT determine the size of the
1576 previous chunk, and might even get a memory addressing fault when trying to do so.
1579 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1580 the chunk size redundantly records whether the current chunk is
1581 inuse. This redundancy enables usage checks within free and realloc,
1582 and reduces indirection when freeing and consolidating chunks.
1584 Each freshly allocated chunk must have both cinuse and pinuse set.
1585 That is, each allocated chunk borders either a previously allocated
1586 and still in-use chunk, or the base of its memory arena. This is
1587 ensured by making all allocations from the `lowest' part of any
1588 found chunk. Further, no free chunk physically borders another one,
1589 so each free chunk is known to be preceded and followed by either
1590 inuse chunks or the ends of memory.
1592 Note that the `foot' of the current chunk is actually represented
1593 as the prev_foot of the NEXT chunk. This makes it easier to
1594 deal with alignments etc but can be very confusing when trying
1595 to extend or adapt this code.
1597 The exceptions to all this are
1599 1. The special chunk `top' is the top-most available chunk (i.e.,
1600 the one bordering the end of available memory). It is treated
1601 specially. Top is never included in any bin, is used only if
1602 no other chunk is available, and is released back to the
1603 system if it is very large (see M_TRIM_THRESHOLD). In effect,
1604 the top chunk is treated as larger (and thus less well
1605 fitting) than any other available chunk. The top chunk
1606 doesn't update its trailing size field since there is no next
1607 contiguous chunk that would have to index off it. However,
1608 space is still allocated for it (TOP_FOOT_SIZE) to enable
1609 separation or merging when space is extended.
1611 2. Chunks allocated via mmap, which have the lowest-order bit
1612 (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1613 PINUSE_BIT in their head fields. Because they are allocated
1614 one-by-one, each must carry its own prev_foot field, which is
1615 also used to hold the offset this chunk has within its mmapped
1616 region, which is needed to preserve alignment. Each mmapped
1617 chunk is trailed by the first two fields of a fake next-chunk
1618 for sake of usage checks.
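As a concrete illustration of the boundary-tag navigation described above, here is a standalone sketch (not part of this file; struct demo_chunk and the 32- and 16-byte sizes are invented for the example). The next physical chunk is reached by adding the size held in head, and a free chunk's size, stored at its foot, is visible as the following chunk's prev_foot, which is what makes backward consolidation O(1).

#include <stdio.h>
#include <stddef.h>

/* Illustrative sketch only -- a cut-down chunk header. */
struct demo_chunk { size_t prev_foot, head; };

int main(void) {
  size_t arena[16] = {0};                         /* flat, suitably aligned scratch space */
  struct demo_chunk* a = (struct demo_chunk*)arena;
  struct demo_chunk* b;
  a->head = 32;                                   /* pretend a is a 32-byte free chunk */
  b = (struct demo_chunk*)((char*)a + a->head);   /* next chunk: add this chunk's size */
  b->prev_foot = 32;                              /* a's foot lives in b's prev_foot */
  b->head = 16;
  printf("prev chunk of b is a: %d\n",
         (struct demo_chunk*)((char*)b - b->prev_foot) == a);  /* prints 1 */
  return 0;
}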
1622 struct malloc_chunk {
1623 size_t prev_foot; /* Size of previous chunk (if free). */
1624 size_t head; /* Size and inuse bits. */
1625 struct malloc_chunk* fd; /* double links -- used only if free. */
1626 struct malloc_chunk* bk;
1629 typedef struct malloc_chunk mchunk;
1630 typedef struct malloc_chunk* mchunkptr;
1631 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
1632 typedef unsigned int bindex_t; /* Described below */
1633 typedef unsigned int binmap_t; /* Described below */
1634 typedef unsigned int flag_t; /* The type of various bit flag sets */
1636 /* ------------------- Chunks sizes and alignments ----------------------- */
1638 #define MCHUNK_SIZE (sizeof(mchunk))
1641 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1643 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
1644 #endif /* FOOTERS */
1646 /* MMapped chunks need a second word of overhead ... */
1647 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1648 /* ... and additional padding for fake next-chunk at foot */
1649 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
1651 /* The smallest size we can malloc is an aligned minimal chunk */
1652 #define MIN_CHUNK_SIZE\
1653 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1655 /* conversion from malloc headers to user pointers, and back */
1656 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
1657 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1658 /* chunk associated with aligned address A */
1659 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
1661 /* Bounds on request (not chunk) sizes. */
1662 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
1663 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1665 /* pad request bytes into a usable size */
1666 #define pad_request(req) \
1667 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1669 /* pad request, checking for minimum (but not maximum) */
1670 #define request2size(req) \
1671 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
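As a worked example of this padding arithmetic, here is a standalone sketch (not part of this file) assuming 4-byte size_t, 8-byte alignment, a one-word CHUNK_OVERHEAD (no FOOTERS) and a 16-byte minimum chunk; under those assumptions a 13-byte request becomes a 24-byte chunk, and requests up to 12 bytes all map to the 16-byte minimum.

#include <stdio.h>
#include <stddef.h>

/* Illustrative sketch only; the DEMO_* constants mirror the defaults above. */
#define DEMO_ALIGN_MASK   ((size_t)7)    /* 8-byte alignment */
#define DEMO_OVERHEAD     ((size_t)4)    /* one 4-byte head word */
#define DEMO_MIN_CHUNK    ((size_t)16)
#define demo_pad_request(req) \
  (((req) + DEMO_OVERHEAD + DEMO_ALIGN_MASK) & ~DEMO_ALIGN_MASK)
#define demo_request2size(req) \
  (((req) < DEMO_MIN_CHUNK - DEMO_OVERHEAD - 1) ? DEMO_MIN_CHUNK : demo_pad_request(req))

int main(void) {
  size_t reqs[] = {0, 1, 12, 13, 24};
  size_t i;
  for (i = 0; i < sizeof reqs / sizeof reqs[0]; ++i)
    printf("request %2zu -> chunk size %2zu\n", reqs[i], demo_request2size(reqs[i]));
  return 0;
}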
1674 /* ------------------ Operations on head and foot fields ----------------- */
1677 The head field of a chunk is or'ed with PINUSE_BIT when the previous
1678 adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
1679 use. If the chunk was obtained with mmap, the prev_foot field has
1680 IS_MMAPPED_BIT set, and its remaining bits hold the offset of the base
1681 of the chunk from the base of its mmapped region.
1684 #define PINUSE_BIT (SIZE_T_ONE)
1685 #define CINUSE_BIT (SIZE_T_TWO)
1686 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
1688 /* Head value for fenceposts */
1689 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
1691 /* extraction of fields from head words */
1692 #define cinuse(p) ((p)->head & CINUSE_BIT)
1693 #define pinuse(p) ((p)->head & PINUSE_BIT)
1694 #define chunksize(p) ((p)->head & ~(INUSE_BITS))
1696 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
1697 #define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT)
1699 /* Treat space at ptr +/- offset as a chunk */
1700 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1701 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1703 /* Ptr to next or previous physical malloc_chunk. */
1704 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1705 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1707 /* extract next chunk's pinuse bit */
1708 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
1710 /* Get/set size at footer */
1711 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1712 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1714 /* Set size, pinuse bit, and foot */
1715 #define set_size_and_pinuse_of_free_chunk(p, s)\
1716 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1718 /* Set size, pinuse bit, foot, and clear next pinuse */
1719 #define set_free_with_pinuse(p, s, n)\
1720 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1722 #define is_mmapped(p)\
1723 (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1725 /* Get the internal overhead associated with chunk p */
1726 #define overhead_for(p)\
1727 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1729 /* Return true if malloced space is not necessarily cleared */
1731 #define calloc_must_clear(p) (!is_mmapped(p))
1732 #else /* MMAP_CLEARS */
1733 #define calloc_must_clear(p) (1)
1734 #endif /* MMAP_CLEARS */
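The following standalone sketch (not part of this file; the DEMO_ constants simply mirror PINUSE_BIT and CINUSE_BIT) shows the head-word packing that the extraction macros above rely on: because chunk sizes are always multiples of 8, the two low-order bits of head are free to carry the status flags.

#include <stdio.h>
#include <stddef.h>

/* Illustrative sketch only. */
#define DEMO_PINUSE ((size_t)1)
#define DEMO_CINUSE ((size_t)2)

int main(void) {
  size_t head = (size_t)48 | DEMO_PINUSE | DEMO_CINUSE;          /* a 48-byte in-use chunk */
  printf("size   = %zu\n", head & ~(DEMO_PINUSE | DEMO_CINUSE)); /* 48 */
  printf("cinuse = %d\n", (head & DEMO_CINUSE) != 0);            /* 1  */
  printf("pinuse = %d\n", (head & DEMO_PINUSE) != 0);            /* 1  */
  return 0;
}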
1736 /* ---------------------- Overlaid data structures ----------------------- */
1739 When chunks are not in use, they are treated as nodes of either
1742 "Small" chunks are stored in circular doubly-linked lists, and look
1745 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1746 | Size of previous chunk |
1747 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1748 `head:' | Size of chunk, in bytes |P|
1749 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1750 | Forward pointer to next chunk in list |
1751 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1752 | Back pointer to previous chunk in list |
1753 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1754 | Unused space (may be 0 bytes long) .
1757 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1758 `foot:' | Size of chunk, in bytes |
1759 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1761 Larger chunks are kept in a form of bitwise digital trees (aka
1762 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
1763 free chunks greater than 256 bytes, their size doesn't impose any
1764 constraints on user chunk sizes. Each node looks like:
1766 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1767 | Size of previous chunk |
1768 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1769 `head:' | Size of chunk, in bytes |P|
1770 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1771 | Forward pointer to next chunk of same size |
1772 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1773 | Back pointer to previous chunk of same size |
1774 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1775 | Pointer to left child (child[0]) |
1776 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1777 | Pointer to right child (child[1]) |
1778 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1779 | Pointer to parent |
1780 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1781 | bin index of this chunk |
1782 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1785 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1786 `foot:' | Size of chunk, in bytes |
1787 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1789 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
1790 of the same size are arranged in a circularly-linked list, with only
1791 the oldest chunk (the next to be used, in our FIFO ordering)
1792 actually in the tree. (Tree members are distinguished by a non-null
1793 parent pointer.) If a chunk with the same size as an existing node
1794 is inserted, it is linked off the existing node using pointers that
1795 work in the same way as fd/bk pointers of small chunks.
1797 Each tree contains a power of 2 sized range of chunk sizes (the
1798 smallest is 0x100 <= x < 0x180), which is divided in half at each
1799 tree level, with the chunks in the smaller half of the range (0x100
1800 <= x < 0x140 for the top node) in the left subtree and the larger
1801 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
1802 done by inspecting individual bits.
1804 Using these rules, each node's left subtree contains all smaller
1805 sizes than its right subtree. However, the node at the root of each
1806 subtree has no particular ordering relationship to either. (The
1807 dividing line between the subtree sizes is based on trie relation.)
1808 If we remove the last chunk of a given size from the interior of the
1809 tree, we need to replace it with a leaf node. The tree ordering
1810 rules permit a node to be replaced by any leaf below it.
1812 The smallest chunk in a tree (a common operation in a best-fit
1813 allocator) can be found by walking a path to the leftmost leaf in
1814 the tree. Unlike a usual binary tree, where we follow left child
1815 pointers until we reach a null, here we follow the right child
1816 pointer any time the left one is null, until we reach a leaf with
1817 both child pointers null. The smallest chunk in the tree will be
1818 somewhere along that path.
1820 The worst case number of steps to add, find, or remove a node is
1821 bounded by the number of bits differentiating chunks within
1822 bins. Under current bin calculations, this ranges from 6 up to 21
1823 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1824 is of course much better.
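The leftmost-path rule just described can be shown with a small standalone sketch (not part of this file; demo_tnode is a cut-down stand-in for malloc_tree_chunk, and the sizes are invented): follow the left child when present, otherwise the right child, and remember the smallest size seen along the path, since a node on that path may be smaller than its parent.

#include <stdio.h>
#include <stddef.h>

/* Illustrative sketch only. */
struct demo_tnode {
  size_t size;
  struct demo_tnode* child[2];   /* [0] = left, [1] = right */
};

static size_t demo_smallest(struct demo_tnode* t) {
  size_t least = t->size;
  while ((t = (t->child[0] != 0 ? t->child[0] : t->child[1])) != 0)
    if (t->size < least)
      least = t->size;
  return least;
}

int main(void) {
  struct demo_tnode d = {0x130, {0, 0}};
  struct demo_tnode c = {0x148, {0, &d}};   /* left child empty: the walk goes right */
  struct demo_tnode b = {0x170, {0, 0}};
  struct demo_tnode a = {0x160, {&c, &b}};
  printf("smallest size on the path: %#zx\n", demo_smallest(&a));  /* prints 0x130 */
  return 0;
}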
1827 struct malloc_tree_chunk {
1828 /* The first four fields must be compatible with malloc_chunk */
1831 struct malloc_tree_chunk* fd;
1832 struct malloc_tree_chunk* bk;
1834 struct malloc_tree_chunk* child[2];
1835 struct malloc_tree_chunk* parent;
1839 typedef struct malloc_tree_chunk tchunk;
1840 typedef struct malloc_tree_chunk* tchunkptr;
1841 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1843 /* A little helper macro for trees */
1844 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
1846 /* ----------------------------- Segments -------------------------------- */
1849 Each malloc space may include non-contiguous segments, held in a
1850 list headed by an embedded malloc_segment record representing the
1851 top-most space. Segments also include flags holding properties of
1852 the space. Large chunks that are directly allocated by mmap are not
1853 included in this list. They are instead independently created and
1854 destroyed without otherwise keeping track of them.
1856 Segment management mainly comes into play for spaces allocated by
1857 MMAP. Any call to MMAP might or might not return memory that is
1858 adjacent to an existing segment. MORECORE normally contiguously
1859 extends the current space, so this space is almost always adjacent,
1860 which is simpler and faster to deal with. (This is why MORECORE is
1861 used preferentially to MMAP when both are available -- see
1862 sys_alloc.) When allocating using MMAP, we don't use any of the
1863 hinting mechanisms (inconsistently) supported in various
1864 implementations of unix mmap, or distinguish reserving from
1865 committing memory. Instead, we just ask for space, and exploit
1866 contiguity when we get it. It is probably possible to do
1867 better than this on some systems, but no general scheme seems
1868 to be significantly better.
1870 Management entails a simpler variant of the consolidation scheme
1871 used for chunks to reduce fragmentation -- new adjacent memory is
1872 normally prepended or appended to an existing segment. However,
1873 there are limitations compared to chunk consolidation that mostly
1874 reflect the fact that segment processing is relatively infrequent
1875 (occurring only when getting memory from system) and that we
1876 don't expect to have huge numbers of segments:
1878 * Segments are not indexed, so traversal requires linear scans. (It
1879 would be possible to index these, but is not worth the extra
1880 overhead and complexity for most programs on most platforms.)
1881 * New segments are only appended to old ones when holding top-most
1882 memory; if they cannot be prepended to others, they are held in different segments.
1885 Except for the top-most segment of an mstate, each segment record
1886 is kept at the tail of its segment. Segments are added by pushing
1887 segment records onto the list headed by &mstate.seg for the containing mstate.
1890 Segment flags control allocation/merge/deallocation policies:
1891 * If EXTERN_BIT set, then we did not allocate this segment,
1892 and so should not try to deallocate or merge with others.
1893 (This currently holds only for the initial segment passed
1894 into create_mspace_with_base.)
1895 * If IS_MMAPPED_BIT set, the segment may be merged with
1896 other surrounding mmapped segments and trimmed/de-allocated using munmap.
1898 * If neither bit is set, then the segment was obtained using
1899 MORECORE so can be merged with surrounding MORECORE'd segments
1900 and deallocated/trimmed using MORECORE with negative arguments.
1903 struct malloc_segment {
1904 char* base; /* base address */
1905 size_t size; /* allocated size */
1906 struct malloc_segment* next; /* ptr to next segment */
1907 flag_t sflags; /* mmap and extern flag */
1910 #define is_mmapped_segment(S) ((S)->sflags & IS_MMAPPED_BIT)
1911 #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
1913 typedef struct malloc_segment msegment;
1914 typedef struct malloc_segment* msegmentptr;
1916 /* ---------------------------- malloc_state ----------------------------- */
1919 A malloc_state holds all of the bookkeeping for a space.
1920 The main fields are:
1923 The topmost chunk of the currently active segment. Its size is
1924 cached in topsize. The actual size of topmost space is
1925 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1926 fenceposts and segment records if necessary when getting more
1927 space from the system. The size at which to autotrim top is
1928 cached from mparams in trim_check, except that it is disabled if
1931 Designated victim (dv)
1932 This is the preferred chunk for servicing small requests that
1933 don't have exact fits. It is normally the chunk split off most
1934 recently to service another small request. Its size is cached in
1935 dvsize. The link fields of this chunk are not maintained since it
1936 is not kept in a bin.
1939 An array of bin headers for free chunks. These bins hold chunks
1940 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1941 chunks of all the same size, spaced 8 bytes apart. To simplify
1942 use in double-linked lists, each bin header acts as a malloc_chunk
1943 pointing to the real first node, if it exists (else pointing to
1944 itself). This avoids special-casing for headers. But to avoid
1945 waste, we allocate only the fd/bk pointers of bins, and then use
1946 repositioning tricks to treat these as the fields of a chunk.
1949 Treebins are pointers to the roots of trees holding a range of
1950 sizes. There are 2 equally spaced treebins for each power of two
1951 from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything larger.
1955 There is one bit map for small bins ("smallmap") and one for
1956 treebins ("treemap"). Each bin sets its bit when non-empty, and
1957 clears the bit when empty. Bit operations are then used to avoid
1958 bin-by-bin searching -- nearly all "search" is done without ever
1959 looking at bins that won't be selected. The bit maps
1960 conservatively use 32 bits per map word, even on a 64-bit system.
1961 For a good description of some of the bit-based techniques used
1962 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1963 supplement at http://hackersdelight.org/). Many of these are
1964 intended to reduce the branchiness of paths through malloc etc, as
1965 well as to reduce the number of memory locations read or written.
1968 A list of segments headed by an embedded malloc_segment record
1969 representing the initial space.
1971 Address check support
1972 The least_addr field is the least address ever obtained from
1973 MORECORE or MMAP. Attempted frees and reallocs of any address less
1974 than this are trapped (unless INSECURE is defined).
1977 A cross-check field that should always hold the same value as mparams.magic.
1980 Bits recording whether to use MMAP, locks, or contiguous MORECORE
1983 Each space keeps track of current and maximum system memory
1984 obtained via MORECORE or MMAP.
1987 If USE_LOCKS is defined, the "mutex" lock is acquired and released
1988 around every public call using this mspace.
1991 /* Bin types, widths and sizes */
1992 #define NSMALLBINS (32U)
1993 #define NTREEBINS (32U)
1994 #define SMALLBIN_SHIFT (3U)
1995 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
1996 #define TREEBIN_SHIFT (8U)
1997 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
1998 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
1999 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2001 struct malloc_state {
2011 mchunkptr smallbins[(NSMALLBINS+1)*2];
2012 tbinptr treebins[NTREEBINS];
2014 size_t max_footprint;
2017 MLOCK_T mutex; /* locate lock among fields that rarely change */
2018 #endif /* USE_LOCKS */
2022 typedef struct malloc_state* mstate;
2024 /* ------------- Global malloc_state and malloc_params ------------------- */
2027 malloc_params holds global properties, including those that can be
2028 dynamically set using mallopt. There is a single instance, mparams,
2029 initialized in init_mparams.
2032 struct malloc_params {
2036 size_t mmap_threshold;
2037 size_t trim_threshold;
2038 flag_t default_mflags;
2041 static struct malloc_params mparams;
2043 /* The global malloc_state used for all non-"mspace" calls */
2044 static struct malloc_state _gm_;
2046 #define is_global(M) ((M) == &_gm_)
2047 #define is_initialized(M) ((M)->top != 0)
2049 /* -------------------------- system alloc setup ------------------------- */
2051 /* Operations on mflags */
2053 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2054 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2055 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2057 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2058 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2059 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2061 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2062 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2064 #define set_lock(M,L)\
2065 ((M)->mflags = (L)?\
2066 ((M)->mflags | USE_LOCK_BIT) :\
2067 ((M)->mflags & ~USE_LOCK_BIT))
2069 /* page-align a size */
2070 #define page_align(S)\
2071 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
2073 /* granularity-align a size */
2074 #define granularity_align(S)\
2075 (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
2077 #define is_page_aligned(S)\
2078 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2079 #define is_granularity_aligned(S)\
2080 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2082 /* True if segment S holds address A */
2083 #define segment_holds(S, A)\
2084 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2086 /* Return segment holding given address */
2087 static msegmentptr segment_holding(mstate m, char* addr) {
2088 msegmentptr sp = &m->seg;
2090 if (addr >= sp->base && addr < sp->base + sp->size)
2092 if ((sp = sp->next) == 0)
2097 /* Return true if segment contains a segment link */
2098 static int has_segment_link(mstate m, msegmentptr ss) {
2099 msegmentptr sp = &m->seg;
2101 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2103 if ((sp = sp->next) == 0)
2108 #ifndef MORECORE_CANNOT_TRIM
2109 #define should_trim(M,s) ((s) > (M)->trim_check)
2110 #else /* MORECORE_CANNOT_TRIM */
2111 #define should_trim(M,s) (0)
2112 #endif /* MORECORE_CANNOT_TRIM */
2115 TOP_FOOT_SIZE is padding at the end of a segment, including space
2116 that may be needed to place segment records and fenceposts when new
2117 noncontiguous segments are added.
2119 #define TOP_FOOT_SIZE\
2120 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2123 /* ------------------------------- Hooks -------------------------------- */
2126 PREACTION should be defined to return 0 on success, and nonzero on
2127 failure. If you are not using locking, you can redefine these to do anything you like.
2133 /* Ensure locks are initialized */
2134 #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
2136 #define PREACTION(M) ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2137 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2138 #else /* USE_LOCKS */
2141 #define PREACTION(M) (0)
2142 #endif /* PREACTION */
2145 #define POSTACTION(M)
2146 #endif /* POSTACTION */
2148 #endif /* USE_LOCKS */
2151 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2152 USAGE_ERROR_ACTION is triggered on detected bad frees and
2153 reallocs. The argument p is an address that might have triggered the
2154 fault. It is ignored by the two predefined actions, but might be
2155 useful in custom actions that try to help diagnose errors.
2158 #if PROCEED_ON_ERROR
2160 /* A count of the number of corruption errors causing resets */
2161 int malloc_corruption_error_count;
2163 /* default corruption action */
2164 static void reset_on_error(mstate m);
2166 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2167 #define USAGE_ERROR_ACTION(m, p)
2169 #else /* PROCEED_ON_ERROR */
2171 #ifndef CORRUPTION_ERROR_ACTION
2172 #define CORRUPTION_ERROR_ACTION(m) ABORT
2173 #endif /* CORRUPTION_ERROR_ACTION */
2175 #ifndef USAGE_ERROR_ACTION
2176 #define USAGE_ERROR_ACTION(m,p) ABORT
2177 #endif /* USAGE_ERROR_ACTION */
2179 #endif /* PROCEED_ON_ERROR */
2181 /* -------------------------- Debugging setup ---------------------------- */
2185 #define check_free_chunk(M,P)
2186 #define check_inuse_chunk(M,P)
2187 #define check_malloced_chunk(M,P,N)
2188 #define check_mmapped_chunk(M,P)
2189 #define check_malloc_state(M)
2190 #define check_top_chunk(M,P)
2193 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2194 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2195 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2196 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2197 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2198 #define check_malloc_state(M) do_check_malloc_state(M)
2200 static void do_check_any_chunk(mstate m, mchunkptr p);
2201 static void do_check_top_chunk(mstate m, mchunkptr p);
2202 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2203 static void do_check_inuse_chunk(mstate m, mchunkptr p);
2204 static void do_check_free_chunk(mstate m, mchunkptr p);
2205 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2206 static void do_check_tree(mstate m, tchunkptr t);
2207 static void do_check_treebin(mstate m, bindex_t i);
2208 static void do_check_smallbin(mstate m, bindex_t i);
2209 static void do_check_malloc_state(mstate m);
2210 static int bin_find(mstate m, mchunkptr x);
2211 static size_t traverse_and_check(mstate m);
2214 /* ---------------------------- Indexing Bins ---------------------------- */
2216 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2217 #define small_index(s) ((s) >> SMALLBIN_SHIFT)
2218 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2219 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2221 /* addressing by index. See above about smallbin repositioning */
2222 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2223 #define treebin_at(M,i) (&((M)->treebins[i]))
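For illustration, a standalone sketch (not part of this file; the demo_ macros restate small_index and small_index2size under the default SMALLBIN_SHIFT of 3): bin i holds chunks of exactly 8*i bytes, so successive bins are spaced 8 bytes apart.

#include <stdio.h>
#include <stddef.h>

/* Illustrative sketch only. */
#define DEMO_SMALLBIN_SHIFT 3
#define demo_small_index(s)      ((s) >> DEMO_SMALLBIN_SHIFT)
#define demo_small_index2size(i) ((size_t)(i) << DEMO_SMALLBIN_SHIFT)

int main(void) {
  size_t sz;
  for (sz = 16; sz <= 64; sz += 8)
    printf("chunk %2zu bytes -> smallbin %2u (holds %2zu-byte chunks)\n",
           sz, (unsigned)demo_small_index(sz),
           demo_small_index2size(demo_small_index(sz)));
  return 0;
}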
2225 /* assign tree index for size S to variable I */
2226 #if defined(__GNUC__) && defined(i386)
2227 #define compute_tree_index(S, I)\
2229 size_t X = S >> TREEBIN_SHIFT;\
2232 else if (X > 0xFFFF)\
2236 __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm" (X));\
2237 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2241 #define compute_tree_index(S, I)\
2243 size_t X = S >> TREEBIN_SHIFT;\
2246 else if (X > 0xFFFF)\
2249 unsigned int Y = (unsigned int)X;\
2250 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2251 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2253 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2254 K = 14 - N + ((Y <<= K) >> 15);\
2255 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2260 /* Bit representing maximum resolved size in a treebin at i */
2261 #define bit_for_tree_index(i) \
2262 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2264 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2265 #define leftshift_for_tree_index(i) \
2266 ((i == NTREEBINS-1)? 0 : \
2267 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2269 /* The size of the smallest chunk held in bin with index i */
2270 #define minsize_for_tree_index(i) \
2271 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2272 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
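The portable branch of compute_tree_index can be restated as a standalone sketch (not part of this file; demo_tree_index uses a plain loop instead of the branch-free bit tricks, but follows the same rule under the default TREEBIN_SHIFT of 8 and NTREEBINS of 32): the highest bit of (size >> 8) picks the power-of-two group, and the next lower size bit picks the upper or lower of that group's two bins.

#include <stdio.h>
#include <stddef.h>

/* Illustrative sketch only. */
static unsigned demo_tree_index(size_t s) {
  size_t x = s >> 8;               /* TREEBIN_SHIFT */
  unsigned k = 0;
  if (x == 0) return 0;
  if (x > 0xFFFF) return 31;       /* NTREEBINS-1 */
  while ((x >>= 1) != 0)           /* k = floor(log2(s >> 8)) */
    ++k;
  return (k << 1) + (unsigned)((s >> (k + 7)) & 1);
}

int main(void) {
  size_t sizes[] = {0x100, 0x160, 0x180, 0x200, 0x300, 0x400};
  size_t i;
  for (i = 0; i < sizeof sizes / sizeof sizes[0]; ++i)
    printf("size %#6zx -> treebin %u\n", sizes[i], demo_tree_index(sizes[i]));
  /* prints treebins 0, 0, 1, 2, 3, 4 */
  return 0;
}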
2275 /* ------------------------ Operations on bin maps ----------------------- */
2277 /* bit corresponding to given index */
2278 #define idx2bit(i) ((binmap_t)(1) << (i))
2280 /* Mark/Clear bits with given index */
2281 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2282 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2283 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2285 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2286 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2287 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2289 /* index corresponding to given bit */
2291 #if defined(__GNUC__) && defined(i386)
2292 #define compute_bit2idx(X, I)\
2295 __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
2301 #define compute_bit2idx(X, I) I = ffs(X)-1
2303 #else /* USE_BUILTIN_FFS */
2304 #define compute_bit2idx(X, I)\
2306 unsigned int Y = X - 1;\
2307 unsigned int K = Y >> (16-4) & 16;\
2308 unsigned int N = K; Y >>= K;\
2309 N += K = Y >> (8-3) & 8; Y >>= K;\
2310 N += K = Y >> (4-2) & 4; Y >>= K;\
2311 N += K = Y >> (2-1) & 2; Y >>= K;\
2312 N += K = Y >> (1-0) & 1; Y >>= K;\
2313 I = (bindex_t)(N + Y);\
2315 #endif /* USE_BUILTIN_FFS */
2318 /* isolate the least set bit of a bitmap */
2319 #define least_bit(x) ((x) & -(x))
2321 /* mask with all bits to left of least bit of x on */
2322 #define left_bits(x) ((x<<1) | -(x<<1))
2324 /* mask with all bits to left of or equal to least bit of x on */
2325 #define same_or_left_bits(x) ((x) | -(x))
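A standalone sketch (not part of this file; the variables are invented for the example) of how these bit operations replace bin-by-bin scanning: mask the map so only bins at or above the wanted index survive, isolate the least remaining set bit, then convert that bit back into a bin index.

#include <stdio.h>

/* Illustrative sketch only. */
int main(void) {
  unsigned int smallmap = (1u << 3) | (1u << 9) | (1u << 12);  /* bins 3, 9, 12 non-empty */
  unsigned int idx = 5;                                        /* want a bin >= 5 */
  unsigned int bit = 1u << idx;
  unsigned int candidates = smallmap & (bit | (0u - bit));     /* same_or_left_bits(bit) */
  unsigned int leastbit = candidates & (0u - candidates);      /* least_bit() */
  unsigned int i = 0;
  while ((leastbit >>= 1) != 0)                                /* bit back to index */
    ++i;
  printf("first non-empty bin >= %u is bin %u\n", idx, i);     /* prints bin 9 */
  return 0;
}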
2328 /* ----------------------- Runtime Check Support ------------------------- */
2331 For security, the main invariant is that malloc/free/etc never
2332 writes to a static address other than malloc_state, unless static
2333 malloc_state itself has been corrupted, which cannot occur via
2334 malloc (because of these checks). In essence this means that we
2335 believe all pointers, sizes, maps etc held in malloc_state, but
2336 check all of those linked or offsetted from other embedded data
2337 structures. These checks are interspersed with main code in a way
2338 that tends to minimize their run-time cost.
2340 When FOOTERS is defined, in addition to range checking, we also
2341 verify footer fields of inuse chunks, which can be used to guarantee
2342 that the mstate controlling malloc/free is intact. This is a
2343 streamlined version of the approach described by William Robertson
2344 et al in "Run-time Detection of Heap-based Overflows" LISA'03
2345 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2346 of an inuse chunk holds the xor of its mstate and a random seed,
2347 that is checked upon calls to free() and realloc(). This is
2348 (probabilistically) unguessable from outside the program, but can be
2349 computed by any code successfully malloc'ing any chunk, so does not
2350 itself provide protection against code that has already broken
2351 security through some other means. Unlike Robertson et al, we
2352 always dynamically check addresses of all offset chunks (previous,
2353 next, etc). This turns out to be cheaper than relying on hashes.
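A minimal standalone sketch (not part of this file; demo_state and the constant stand in for an mstate and mparams.magic) of the footer scheme just described: the stored footer is the owning state pointer xor'ed with the per-run secret, so free() can recover and verify the owner without the pointer ever being stored in the clear.

#include <stdio.h>
#include <stdint.h>

/* Illustrative sketch only. */
struct demo_state { int dummy; };

int main(void) {
  struct demo_state arena;
  uintptr_t magic = (uintptr_t)0x5bd1e995u;                       /* stand-in for mparams.magic */
  uintptr_t foot  = (uintptr_t)&arena ^ magic;                    /* what mark_inuse_foot stores */
  struct demo_state* owner = (struct demo_state*)(foot ^ magic);  /* what get_mstate_for recovers */
  printf("owner recovered correctly: %d\n", owner == &arena);     /* prints 1 */
  return 0;
}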
2357 /* Check if address a is at least as high as any from MORECORE or MMAP */
2358 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2359 /* Check if address of next chunk n is higher than base chunk p */
2360 #define ok_next(p, n) ((char*)(p) < (char*)(n))
2361 /* Check if p has its cinuse bit on */
2362 #define ok_cinuse(p) cinuse(p)
2363 /* Check if p has its pinuse bit on */
2364 #define ok_pinuse(p) pinuse(p)
2366 #else /* !INSECURE */
2367 #define ok_address(M, a) (1)
2368 #define ok_next(b, n) (1)
2369 #define ok_cinuse(p) (1)
2370 #define ok_pinuse(p) (1)
2371 #endif /* !INSECURE */
2373 #if (FOOTERS && !INSECURE)
2374 /* Check if (alleged) mstate m has expected magic field */
2375 #define ok_magic(M) ((M)->magic == mparams.magic)
2376 #else /* (FOOTERS && !INSECURE) */
2377 #define ok_magic(M) (1)
2378 #endif /* (FOOTERS && !INSECURE) */
2381 /* In gcc, use __builtin_expect to minimize impact of checks */
2383 #if defined(__GNUC__) && __GNUC__ >= 3
2384 #define RTCHECK(e) __builtin_expect(e, 1)
2386 #define RTCHECK(e) (e)
2388 #else /* !INSECURE */
2389 #define RTCHECK(e) (1)
2390 #endif /* !INSECURE */
2392 /* macros to set up inuse chunks with or without footers */
2396 #define mark_inuse_foot(M,p,s)
2398 /* Set cinuse bit and pinuse bit of next chunk */
2399 #define set_inuse(M,p,s)\
2400 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2401 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2403 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2404 #define set_inuse_and_pinuse(M,p,s)\
2405 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2406 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2408 /* Set size, cinuse and pinuse bit of this chunk */
2409 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2410 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2414 /* Set foot of inuse chunk to be xor of mstate and seed */
2415 #define mark_inuse_foot(M,p,s)\
2416 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2418 #define get_mstate_for(p)\
2419 ((mstate)(((mchunkptr)((char*)(p) +\
2420 (chunksize(p))))->prev_foot ^ mparams.magic))
2422 #define set_inuse(M,p,s)\
2423 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2424 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2425 mark_inuse_foot(M,p,s))
2427 #define set_inuse_and_pinuse(M,p,s)\
2428 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2429 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2430 mark_inuse_foot(M,p,s))
2432 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2433 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2434 mark_inuse_foot(M, p, s))
2436 #endif /* !FOOTERS */
2438 /* ---------------------------- setting mparams -------------------------- */
2440 /* Initialize mparams */
2441 static int init_mparams(void) {
2442 if (mparams.page_size == 0) {
2445 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2446 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2447 #if MORECORE_CONTIGUOUS
2448 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2449 #else /* MORECORE_CONTIGUOUS */
2450 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2451 #endif /* MORECORE_CONTIGUOUS */
2453 #if (FOOTERS && !INSECURE)
2457 unsigned char buf[sizeof(size_t)];
2458 /* Try to use /dev/urandom, else fall back on using time */
2459 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
2460 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
2461 s = *((size_t *) buf);
2465 #endif /* USE_DEV_RANDOM */
2466 s = (size_t)(time(0) ^ (size_t)0x55555555U);
2468 s |= (size_t)8U; /* ensure nonzero */
2469 s &= ~(size_t)7U; /* improve chances of fault for bad values */
2472 #else /* (FOOTERS && !INSECURE) */
2473 s = (size_t)0x58585858U;
2474 #endif /* (FOOTERS && !INSECURE) */
2475 ACQUIRE_MAGIC_INIT_LOCK();
2476 if (mparams.magic == 0) {
2478 /* Set up lock for main malloc area */
2479 INITIAL_LOCK(&gm->mutex);
2480 gm->mflags = mparams.default_mflags;
2482 RELEASE_MAGIC_INIT_LOCK();
2485 mparams.page_size = malloc_getpagesize;
2486 mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
2487 DEFAULT_GRANULARITY : mparams.page_size);
2490 SYSTEM_INFO system_info;
2491 GetSystemInfo(&system_info);
2492 mparams.page_size = system_info.dwPageSize;
2493 mparams.granularity = system_info.dwAllocationGranularity;
2497 /* Sanity-check configuration:
2498 size_t must be unsigned and as wide as pointer type.
2499 ints must be at least 4 bytes.
2500 alignment must be at least 8.
2501 Alignment, min chunk size, and page size must all be powers of 2.
2503 if ((sizeof(size_t) != sizeof(char*)) ||
2504 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
2505 (sizeof(int) < 4) ||
2506 (MALLOC_ALIGNMENT < (size_t)8U) ||
2507 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
2508 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
2509 ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2510 ((mparams.page_size & (mparams.page_size-SIZE_T_ONE)) != 0))
2516 /* support for mallopt */
2517 static int change_mparam(int param_number, int value) {
2518 size_t val = (size_t)value;
2520 switch(param_number) {
2521 case M_TRIM_THRESHOLD:
2522 mparams.trim_threshold = val;
2525 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2526 mparams.granularity = val;
2531 case M_MMAP_THRESHOLD:
2532 mparams.mmap_threshold = val;
2540 /* ------------------------- Debugging Support --------------------------- */
2542 /* Check properties of any chunk, whether free, inuse, mmapped etc */
2543 static void do_check_any_chunk(mstate m, mchunkptr p) {
2544 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2545 assert(ok_address(m, p));
2548 /* Check properties of top chunk */
2549 static void do_check_top_chunk(mstate m, mchunkptr p) {
2550 msegmentptr sp = segment_holding(m, (char*)p);
2551 size_t sz = chunksize(p);
2553 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2554 assert(ok_address(m, p));
2555 assert(sz == m->topsize);
2557 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
2559 assert(!next_pinuse(p));
2562 /* Check properties of (inuse) mmapped chunks */
2563 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
2564 size_t sz = chunksize(p);
2565 size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
2566 assert(is_mmapped(p));
2567 assert(use_mmap(m));
2568 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2569 assert(ok_address(m, p));
2570 assert(!is_small(sz));
2571 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
2572 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
2573 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
2576 /* Check properties of inuse chunks */
2577 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
2578 do_check_any_chunk(m, p);
2580 assert(next_pinuse(p));
2581 /* If not pinuse and not mmapped, previous chunk has OK offset */
2582 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
2584 do_check_mmapped_chunk(m, p);
2587 /* Check properties of free chunks */
2588 static void do_check_free_chunk(mstate m, mchunkptr p) {
2589 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2590 mchunkptr next = chunk_plus_offset(p, sz);
2591 do_check_any_chunk(m, p);
2593 assert(!next_pinuse(p));
2594 assert (!is_mmapped(p));
2595 if (p != m->dv && p != m->top) {
2596 if (sz >= MIN_CHUNK_SIZE) {
2597 assert((sz & CHUNK_ALIGN_MASK) == 0);
2598 assert(is_aligned(chunk2mem(p)));
2599 assert(next->prev_foot == sz);
2601 assert (next == m->top || cinuse(next));
2602 assert(p->fd->bk == p);
2603 assert(p->bk->fd == p);
2605 else /* markers are always of size SIZE_T_SIZE */
2606 assert(sz == SIZE_T_SIZE);
2610 /* Check properties of malloced chunks at the point they are malloced */
2611 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
2613 mchunkptr p = mem2chunk(mem);
2614 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2615 do_check_inuse_chunk(m, p);
2616 assert((sz & CHUNK_ALIGN_MASK) == 0);
2617 assert(sz >= MIN_CHUNK_SIZE);
2619 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
2620 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
2624 /* Check a tree and its subtrees. */
2625 static void do_check_tree(mstate m, tchunkptr t) {
2628 bindex_t tindex = t->index;
2629 size_t tsize = chunksize(t);
2631 compute_tree_index(tsize, idx);
2632 assert(tindex == idx);
2633 assert(tsize >= MIN_LARGE_SIZE);
2634 assert(tsize >= minsize_for_tree_index(idx));
2635 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
2637 do { /* traverse through chain of same-sized nodes */
2638 do_check_any_chunk(m, ((mchunkptr)u));
2639 assert(u->index == tindex);
2640 assert(chunksize(u) == tsize);
2642 assert(!next_pinuse(u));
2643 assert(u->fd->bk == u);
2644 assert(u->bk->fd == u);
2645 if (u->parent == 0) {
2646 assert(u->child[0] == 0);
2647 assert(u->child[1] == 0);
2650 assert(head == 0); /* only one node on chain has parent */
2652 assert(u->parent != u);
2653 assert (u->parent->child[0] == u ||
2654 u->parent->child[1] == u ||
2655 *((tbinptr*)(u->parent)) == u);
2656 if (u->child[0] != 0) {
2657 assert(u->child[0]->parent == u);
2658 assert(u->child[0] != u);
2659 do_check_tree(m, u->child[0]);
2661 if (u->child[1] != 0) {
2662 assert(u->child[1]->parent == u);
2663 assert(u->child[1] != u);
2664 do_check_tree(m, u->child[1]);
2666 if (u->child[0] != 0 && u->child[1] != 0) {
2667 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2675 /* Check all the chunks in a treebin. */
2676 static void do_check_treebin(mstate m, bindex_t i) {
2677 tbinptr* tb = treebin_at(m, i);
2679 int empty = (m->treemap & (1U << i)) == 0;
2683 do_check_tree(m, t);
2686 /* Check all the chunks in a smallbin. */
2687 static void do_check_smallbin(mstate m, bindex_t i) {
2688 sbinptr b = smallbin_at(m, i);
2689 mchunkptr p = b->bk;
2690 unsigned int empty = (m->smallmap & (1U << i)) == 0;
2694 for (; p != b; p = p->bk) {
2695 size_t size = chunksize(p);
2697 /* each chunk claims to be free */
2698 do_check_free_chunk(m, p);
2699 /* chunk belongs in bin */
2700 assert(small_index(size) == i);
2701 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2702 /* chunk is followed by an inuse chunk */
2704 if (q->head != FENCEPOST_HEAD)
2705 do_check_inuse_chunk(m, q);
2710 /* Find x in a bin. Used in other check functions. */
2711 static int bin_find(mstate m, mchunkptr x) {
2712 size_t size = chunksize(x);
2713 if (is_small(size)) {
2714 bindex_t sidx = small_index(size);
2715 sbinptr b = smallbin_at(m, sidx);
2716 if (smallmap_is_marked(m, sidx)) {
2721 } while ((p = p->fd) != b);
2726 compute_tree_index(size, tidx);
2727 if (treemap_is_marked(m, tidx)) {
2728 tchunkptr t = *treebin_at(m, tidx);
2729 size_t sizebits = size << leftshift_for_tree_index(tidx);
2730 while (t != 0 && chunksize(t) != size) {
2731 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2737 if (u == (tchunkptr)x)
2739 } while ((u = u->fd) != t);
2746 /* Traverse each chunk and check it; return total */
2747 static size_t traverse_and_check(mstate m) {
2749 if (is_initialized(m)) {
2750 msegmentptr s = &m->seg;
2751 sum += m->topsize + TOP_FOOT_SIZE;
2753 mchunkptr q = align_as_chunk(s->base);
2754 mchunkptr lastq = 0;
2756 while (segment_holds(s, q) &&
2757 q != m->top && q->head != FENCEPOST_HEAD) {
2758 sum += chunksize(q);
2760 assert(!bin_find(m, q));
2761 do_check_inuse_chunk(m, q);
2764 assert(q == m->dv || bin_find(m, q));
2765 assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2766 do_check_free_chunk(m, q);
2777 /* Check all properties of malloc_state. */
2778 static void do_check_malloc_state(mstate m) {
2782 for (i = 0; i < NSMALLBINS; ++i)
2783 do_check_smallbin(m, i);
2784 for (i = 0; i < NTREEBINS; ++i)
2785 do_check_treebin(m, i);
2787 if (m->dvsize != 0) { /* check dv chunk */
2788 do_check_any_chunk(m, m->dv);
2789 assert(m->dvsize == chunksize(m->dv));
2790 assert(m->dvsize >= MIN_CHUNK_SIZE);
2791 assert(bin_find(m, m->dv) == 0);
2794 if (m->top != 0) { /* check top chunk */
2795 do_check_top_chunk(m, m->top);
2796 assert(m->topsize == chunksize(m->top));
2797 assert(m->topsize > 0);
2798 assert(bin_find(m, m->top) == 0);
2801 total = traverse_and_check(m);
2802 assert(total <= m->footprint);
2803 assert(m->footprint <= m->max_footprint);
2807 /* ----------------------------- statistics ------------------------------ */
2810 static struct mallinfo internal_mallinfo(mstate m) {
2811 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2812 if (!PREACTION(m)) {
2813 check_malloc_state(m);
2814 if (is_initialized(m)) {
2815 size_t nfree = SIZE_T_ONE; /* top always free */
2816 size_t mfree = m->topsize + TOP_FOOT_SIZE;
2818 msegmentptr s = &m->seg;
2820 mchunkptr q = align_as_chunk(s->base);
2821 while (segment_holds(s, q) &&
2822 q != m->top && q->head != FENCEPOST_HEAD) {
2823 size_t sz = chunksize(q);
2836 nm.hblkhd = m->footprint - sum;
2837 nm.usmblks = m->max_footprint;
2838 nm.uordblks = m->footprint - mfree;
2839 nm.fordblks = mfree;
2840 nm.keepcost = m->topsize;
2847 #endif /* !NO_MALLINFO */
2849 static void internal_malloc_stats(mstate m) {
2850 if (!PREACTION(m)) {
2854 check_malloc_state(m);
2855 if (is_initialized(m)) {
2856 msegmentptr s = &m->seg;
2857 maxfp = m->max_footprint;
2859 used = fp - (m->topsize + TOP_FOOT_SIZE);
2862 mchunkptr q = align_as_chunk(s->base);
2863 while (segment_holds(s, q) &&
2864 q != m->top && q->head != FENCEPOST_HEAD) {
2866 used -= chunksize(q);
2873 printf("max system bytes = %10lu\n", (unsigned long)(maxfp));
2874 printf("system bytes = %10lu\n", (unsigned long)(fp));
2875 printf("in use bytes = %10lu\n", (unsigned long)(used));
2881 /* ----------------------- Operations on smallbins ----------------------- */
2884 Various forms of linking and unlinking are defined as macros. Even
2885 the ones for trees, which are very long but have very short typical
2886 paths. This is ugly but reduces reliance on inlining support of compilers.
2890 /* Link a free chunk into a smallbin */
2891 #define insert_small_chunk(M, P, S) {\
2892 bindex_t I = small_index(S);\
2893 mchunkptr B = smallbin_at(M, I);\
2895 assert(S >= MIN_CHUNK_SIZE);\
2896 if (!smallmap_is_marked(M, I))\
2897 mark_smallmap(M, I);\
2898 else if (RTCHECK(ok_address(M, B->fd)))\
2901 CORRUPTION_ERROR_ACTION(M);\
2909 /* Unlink a chunk from a smallbin */
2910 #define unlink_small_chunk(M, P, S) {\
2911 mchunkptr F = P->fd;\
2912 mchunkptr B = P->bk;\
2913 bindex_t I = small_index(S);\
2916 assert(chunksize(P) == small_index2size(I));\
2918 clear_smallmap(M, I);\
2919 else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2920 (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2925 CORRUPTION_ERROR_ACTION(M);\
2929 /* Unlink the first chunk from a smallbin */
2930 #define unlink_first_small_chunk(M, B, P, I) {\
2931 mchunkptr F = P->fd;\
2934 assert(chunksize(P) == small_index2size(I));\
2936 clear_smallmap(M, I);\
2937 else if (RTCHECK(ok_address(M, F))) {\
2942 CORRUPTION_ERROR_ACTION(M);\
2946 /* Replace dv node, binning the old one */
2947 /* Used only when dvsize known to be small */
2948 #define replace_dv(M, P, S) {\
2949 size_t DVS = M->dvsize;\
2951 mchunkptr DV = M->dv;\
2952 assert(is_small(DVS));\
2953 insert_small_chunk(M, DV, DVS);\
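The list discipline behind these macros can be seen in a standalone sketch (not part of this file; demo_node and its helpers are invented): because the bin header is itself a node of the circular doubly-linked list, insertion at the front and unlinking need no empty-list special cases.

#include <stdio.h>

/* Illustrative sketch only. */
struct demo_node { struct demo_node *fd, *bk; int id; };

static void demo_insert(struct demo_node* bin, struct demo_node* p) {
  struct demo_node* f = bin->fd;   /* old first node (or the bin itself when empty) */
  p->fd = f; p->bk = bin;
  f->bk = p; bin->fd = p;
}

static void demo_unlink(struct demo_node* p) {
  p->fd->bk = p->bk;
  p->bk->fd = p->fd;
}

int main(void) {
  struct demo_node bin = {&bin, &bin, 0};   /* an empty bin points at itself */
  struct demo_node a = {0, 0, 1}, b = {0, 0, 2};
  demo_insert(&bin, &a);
  demo_insert(&bin, &b);
  demo_unlink(&a);
  printf("only node left in bin: %d\n", bin.fd->id);   /* prints 2 */
  return 0;
}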
2959 /* ------------------------- Operations on trees ------------------------- */
2961 /* Insert chunk into tree */
2962 #define insert_large_chunk(M, X, S) {\
2965 compute_tree_index(S, I);\
2966 H = treebin_at(M, I);\
2968 X->child[0] = X->child[1] = 0;\
2969 if (!treemap_is_marked(M, I)) {\
2970 mark_treemap(M, I);\
2972 X->parent = (tchunkptr)H;\
2977 size_t K = S << leftshift_for_tree_index(I);\
2979 if (chunksize(T) != S) {\
2980 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
2984 else if (RTCHECK(ok_address(M, C))) {\
2991 CORRUPTION_ERROR_ACTION(M);\
2996 tchunkptr F = T->fd;\
2997 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3005 CORRUPTION_ERROR_ACTION(M);\
3016 1. If x is a chained node, unlink it from its same-sized fd/bk links
3017 and choose its bk node as its replacement.
3018 2. If x was the last node of its size, but not a leaf node, it must
3019 be replaced with a leaf node (not merely one with an open left or
3020 right), to make sure that lefts and rights of descendants
3021 correspond properly to bit masks. We use the rightmost descendant
3022 of x. We could use any other leaf, but this is easy to locate and
3023 tends to counteract removal of leftmosts elsewhere, and so keeps
3024 paths shorter than minimally guaranteed. This doesn't loop much
3025 because on average a node in a tree is near the bottom.
3026 3. If x is the base of a chain (i.e., has parent links) relink
3027 x's parent and children to x's replacement (or null if none).
3030 #define unlink_large_chunk(M, X) {\
3031 tchunkptr XP = X->parent;\
3034 tchunkptr F = X->fd;\
3036 if (RTCHECK(ok_address(M, F))) {\
3041 CORRUPTION_ERROR_ACTION(M);\
3046 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3047 ((R = *(RP = &(X->child[0]))) != 0)) {\
3049 while ((*(CP = &(R->child[1])) != 0) ||\
3050 (*(CP = &(R->child[0])) != 0)) {\
3053 if (RTCHECK(ok_address(M, RP)))\
3056 CORRUPTION_ERROR_ACTION(M);\
3061 tbinptr* H = treebin_at(M, X->index);\
3063 if ((*H = R) == 0) \
3064 clear_treemap(M, X->index);\
3066 else if (RTCHECK(ok_address(M, XP))) {\
3067 if (XP->child[0] == X) \
3073 CORRUPTION_ERROR_ACTION(M);\
3075 if (RTCHECK(ok_address(M, R))) {\
3078 if ((C0 = X->child[0]) != 0) {\
3079 if (RTCHECK(ok_address(M, C0))) {\
3084 CORRUPTION_ERROR_ACTION(M);\
3086 if ((C1 = X->child[1]) != 0) {\
3087 if (RTCHECK(ok_address(M, C1))) {\
3092 CORRUPTION_ERROR_ACTION(M);\
3096 CORRUPTION_ERROR_ACTION(M);\
3101 /* Relays to large vs small bin operations */
3103 #define insert_chunk(M, P, S)\
3104 if (is_small(S)) insert_small_chunk(M, P, S)\
3105 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3107 #define unlink_chunk(M, P, S)\
3108 if (is_small(S)) unlink_small_chunk(M, P, S)\
3109 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3112 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3115 #define internal_malloc(m, b) mspace_malloc(m, b)
3116 #define internal_free(m, mem) mspace_free(m,mem);
3117 #else /* ONLY_MSPACES */
3119 #define internal_malloc(m, b)\
3120 (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3121 #define internal_free(m, mem)\
3122 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3124 #define internal_malloc(m, b) dlmalloc(b)
3125 #define internal_free(m, mem) dlfree(mem)
3126 #endif /* MSPACES */
3127 #endif /* ONLY_MSPACES */
3129 /* ----------------------- Direct-mmapping chunks ----------------------- */
3132 Directly mmapped chunks are set up with an offset to the start of
3133 the mmapped region stored in the prev_foot field of the chunk. This
3134 allows reconstruction of the required argument to MUNMAP when freed,
3135 and also allows adjustment of the returned chunk to meet alignment
3136 requirements (especially in memalign). There is also enough space
3137 allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3138 the PINUSE bit so frees can be checked.
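As a worked example of the sizing used below, here is a standalone sketch (not part of this file), assuming 4-byte size_t, 8-byte alignment and a 4096-byte granularity: the already padded request nb is grown by six size_t words (two of chunk overhead plus four of fake trailing-chunk padding) and alignment slack, then rounded up to whole granularity units.

#include <stdio.h>
#include <stddef.h>

/* Illustrative sketch only; the DEMO_* constants mirror the defaults above. */
#define DEMO_SIZE_T_SIZE  ((size_t)4)
#define DEMO_ALIGN_MASK   ((size_t)7)
#define DEMO_GRANULARITY  ((size_t)4096)
#define demo_granularity_align(s) \
  (((s) + DEMO_GRANULARITY) & ~(DEMO_GRANULARITY - 1))

int main(void) {
  size_t nb = 300000;   /* a padded request well above the mmap threshold */
  size_t mmsize = demo_granularity_align(nb + 6 * DEMO_SIZE_T_SIZE + DEMO_ALIGN_MASK);
  printf("request %zu -> mmap size %zu (%zu pages of 4096)\n",
         nb, mmsize, mmsize / 4096);   /* 300000 -> 303104 (74 pages) */
  return 0;
}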
3141 /* Malloc using mmap */
3142 static void* mmap_alloc(mstate m, size_t nb) {
3143 size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3144 if (mmsize > nb) { /* Check for wrap around 0 */
3145 char* mm = (char*)(DIRECT_MMAP(mmsize));
3147 size_t offset = align_offset(chunk2mem(mm));
3148 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3149 mchunkptr p = (mchunkptr)(mm + offset);
3150 p->prev_foot = offset | IS_MMAPPED_BIT;
3151 (p)->head = (psize|CINUSE_BIT);
3152 mark_inuse_foot(m, p, psize);
3153 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3154 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3156 if (mm < m->least_addr)
3158 if ((m->footprint += mmsize) > m->max_footprint)
3159 m->max_footprint = m->footprint;
3160 assert(is_aligned(chunk2mem(p)));
3161 check_mmapped_chunk(m, p);
3162 return chunk2mem(p);
3168 /* Realloc using mmap */
3169 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3170 size_t oldsize = chunksize(oldp);
3171 if (is_small(nb)) /* Can't shrink mmap regions below small size */
3173 /* Keep old chunk if big enough but not too big */
3174 if (oldsize >= nb + SIZE_T_SIZE &&
3175 (oldsize - nb) <= (mparams.granularity << 1))
3178 size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3179 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3180 size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
3182 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3183 oldmmsize, newmmsize, 1);
3185 mchunkptr newp = (mchunkptr)(cp + offset);
3186 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3187 newp->head = (psize|CINUSE_BIT);
3188 mark_inuse_foot(m, newp, psize);
3189 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3190 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3192 if (cp < m->least_addr)
3194 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3195 m->max_footprint = m->footprint;
3196 check_mmapped_chunk(m, newp);
3203 /* -------------------------- mspace management -------------------------- */
3205 /* Initialize top chunk and its size */
3206 static void init_top(mstate m, mchunkptr p, size_t psize) {
3207 /* Ensure alignment */
3208 size_t offset = align_offset(chunk2mem(p));
3209 p = (mchunkptr)((char*)p + offset);
3214 p->head = psize | PINUSE_BIT;
3215 /* set size of fake trailing chunk holding overhead space only once */
3216 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3217 m->trim_check = mparams.trim_threshold; /* reset on each update */
3220 /* Initialize bins for a new mstate that is otherwise zeroed out */
3221 static void init_bins(mstate m) {
3222 /* Establish circular links for smallbins */
3224 for (i = 0; i < NSMALLBINS; ++i) {
3225 sbinptr bin = smallbin_at(m,i);
3226 bin->fd = bin->bk = bin;
3230 #if PROCEED_ON_ERROR
3232 /* default corruption action */
3233 static void reset_on_error(mstate m) {
3235 ++malloc_corruption_error_count;
3236 /* Reinitialize fields to forget about all memory */
3237 m->smallbins = m->treebins = 0;
3238 m->dvsize = m->topsize = 0;
3243 for (i = 0; i < NTREEBINS; ++i)
3244 *treebin_at(m, i) = 0;
3247 #endif /* PROCEED_ON_ERROR */
3249 /* Allocate chunk and prepend remainder with chunk in successor base. */
3250 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3252 mchunkptr p = align_as_chunk(newbase);
3253 mchunkptr oldfirst = align_as_chunk(oldbase);
3254 size_t psize = (char*)oldfirst - (char*)p;
3255 mchunkptr q = chunk_plus_offset(p, nb);
3256 size_t qsize = psize - nb;
3257 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3259 assert((char*)oldfirst > (char*)q);
3260 assert(pinuse(oldfirst));
3261 assert(qsize >= MIN_CHUNK_SIZE);
3263 /* consolidate remainder with first chunk of old base */
3264 if (oldfirst == m->top) {
3265 size_t tsize = m->topsize += qsize;
3267 q->head = tsize | PINUSE_BIT;
3268 check_top_chunk(m, q);
3270 else if (oldfirst == m->dv) {
3271 size_t dsize = m->dvsize += qsize;
3273 set_size_and_pinuse_of_free_chunk(q, dsize);
3276 if (!cinuse(oldfirst)) {
3277 size_t nsize = chunksize(oldfirst);
3278 unlink_chunk(m, oldfirst, nsize);
3279 oldfirst = chunk_plus_offset(oldfirst, nsize);
3282 set_free_with_pinuse(q, qsize, oldfirst);
3283 insert_chunk(m, q, qsize);
3284 check_free_chunk(m, q);
3287 check_malloced_chunk(m, chunk2mem(p), nb);
3288 return chunk2mem(p);
3292 /* Add a segment to hold a new noncontiguous region */
3293 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3294 /* Determine locations and sizes of segment, fenceposts, old top */
3295 char* old_top = (char*)m->top;
3296 msegmentptr oldsp = segment_holding(m, old_top);
3297 char* old_end = oldsp->base + oldsp->size;
3298 size_t ssize = pad_request(sizeof(struct malloc_segment));
3299 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3300 size_t offset = align_offset(chunk2mem(rawsp));
3301 char* asp = rawsp + offset;
3302 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3303 mchunkptr sp = (mchunkptr)csp;
3304 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3305 mchunkptr tnext = chunk_plus_offset(sp, ssize);
3306 mchunkptr p = tnext;
3309 /* reset top to new space */
3310 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3312 /* Set up segment record */
3313 assert(is_aligned(ss));
3314 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3315 *ss = m->seg; /* Push current record */
3316 m->seg.base = tbase;
3317 m->seg.size = tsize;
3318 m->seg.sflags = mmapped;
3321 /* Insert trailing fenceposts */
3323 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3324 p->head = FENCEPOST_HEAD;
3326 if ((char*)(&(nextp->head)) < old_end)
3331 assert(nfences >= 2);
3333 /* Insert the rest of old top into a bin as an ordinary free chunk */
3334 if (csp != old_top) {
3335 mchunkptr q = (mchunkptr)old_top;
3336 size_t psize = csp - old_top;
3337 mchunkptr tn = chunk_plus_offset(q, psize);
3338 set_free_with_pinuse(q, psize, tn);
3339 insert_chunk(m, q, psize);
3342 check_top_chunk(m, m->top);
3345 /* -------------------------- System allocation -------------------------- */
3347 /* Get memory from system using MORECORE or MMAP */
3348 static void* sys_alloc(mstate m, size_t nb) {
3349 char* tbase = CMFAIL;
3351 flag_t mmap_flag = 0;
3355 /* Directly map large chunks */
3356 if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3357 void* mem = mmap_alloc(m, nb);
3363 Try getting memory in any of three ways (in most-preferred to
3364 least-preferred order):
3365 1. A call to MORECORE that can normally contiguously extend memory.
3366 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3367 main space is mmapped or a previous contiguous call failed)
3368 2. A call to MMAP new space (disabled if not HAVE_MMAP).
3369 Note that under the default settings, if MORECORE is unable to
3370 fulfill a request, and HAVE_MMAP is true, then mmap is
3371 used as a noncontiguous system allocator. This is a useful backup
3372 strategy for systems with holes in address spaces -- in this case
3373 sbrk cannot contiguously expand the heap, but mmap may be able to map noncontiguous space.
3375 3. A call to MORECORE that cannot usually contiguously extend memory.
3376 (disabled if not HAVE_MORECORE)
3379 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3381 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3383 ACQUIRE_MORECORE_LOCK();
3385 if (ss == 0) { /* First time through or recovery */
3386 char* base = (char*)CALL_MORECORE(0);
3387 if (base != CMFAIL) {
3388 asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3389 /* Adjust to end on a page boundary */
3390 if (!is_page_aligned(base))
3391 asize += (page_align((size_t)base) - (size_t)base);
3392 /* Can't call MORECORE if size is negative when treated as signed */
3393 if (asize < HALF_MAX_SIZE_T &&
3394 (br = (char*)(CALL_MORECORE(asize))) == base) {
3401 /* Subtract out existing available top space from MORECORE request. */
3402 asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
3403 /* Use mem here only if it did contiguously extend old space */
3404 if (asize < HALF_MAX_SIZE_T &&
3405 (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3411 if (tbase == CMFAIL) { /* Cope with partial failure */
3412 if (br != CMFAIL) { /* Try to use/extend the space we did get */
3413 if (asize < HALF_MAX_SIZE_T &&
3414 asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
3415 size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
3416 if (esize < HALF_MAX_SIZE_T) {
3417 char* end = (char*)CALL_MORECORE(esize);
3420 else { /* Can't use; try to release */
3421 CALL_MORECORE(-asize);
3427 if (br != CMFAIL) { /* Use the space we did get */
3432 disable_contiguous(m); /* Don't try contiguous path in the future */
3435 RELEASE_MORECORE_LOCK();
3438 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
3439 size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
3440 size_t rsize = granularity_align(req);
3441 if (rsize > nb) { /* Fail if wraps around zero */
3442 char* mp = (char*)(CALL_MMAP(rsize));
3446 mmap_flag = IS_MMAPPED_BIT;
3451 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
3452 size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3453 if (asize < HALF_MAX_SIZE_T) {
3456 ACQUIRE_MORECORE_LOCK();
3457 br = (char*)(CALL_MORECORE(asize));
3458 end = (char*)(CALL_MORECORE(0));
3459 RELEASE_MORECORE_LOCK();
3460 if (br != CMFAIL && end != CMFAIL && br < end) {
3461 size_t ssize = end - br;
3462 if (ssize > nb + TOP_FOOT_SIZE) {
3470 if (tbase != CMFAIL) {
3472 if ((m->footprint += tsize) > m->max_footprint)
3473 m->max_footprint = m->footprint;
3475 if (!is_initialized(m)) { /* first-time initialization */
3476 m->seg.base = m->least_addr = tbase;
3477 m->seg.size = tsize;
3478 m->seg.sflags = mmap_flag;
3479 m->magic = mparams.magic;
3482 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3484 /* Offset top by embedded malloc_state */
3485 mchunkptr mn = next_chunk(mem2chunk(m));
3486 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
3491 /* Try to merge with an existing segment */
3492 msegmentptr sp = &m->seg;
3493 while (sp != 0 && tbase != sp->base + sp->size)
3496 !is_extern_segment(sp) &&
3497 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
3498 segment_holds(sp, m->top)) { /* append */
3500 init_top(m, m->top, m->topsize + tsize);
3503 if (tbase < m->least_addr)
3504 m->least_addr = tbase;
3506 while (sp != 0 && sp->base != tbase + tsize)
3509 !is_extern_segment(sp) &&
3510 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
3511 char* oldbase = sp->base;
3514 return prepend_alloc(m, tbase, oldbase, nb);
3517 add_segment(m, tbase, tsize, mmap_flag);
3521 if (nb < m->topsize) { /* Allocate from new or extended top space */
3522 size_t rsize = m->topsize -= nb;
3523 mchunkptr p = m->top;
3524 mchunkptr r = m->top = chunk_plus_offset(p, nb);
3525 r->head = rsize | PINUSE_BIT;
3526 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3527 check_top_chunk(m, m->top);
3528 check_malloced_chunk(m, chunk2mem(p), nb);
3529 return chunk2mem(p);
3533 MALLOC_FAILURE_ACTION;
3537 /* ----------------------- system deallocation -------------------------- */
3539 /* Unmap and unlink any mmapped segments that don't contain used chunks */
3540 static size_t release_unused_segments(mstate m) {
3541 size_t released = 0;
3542 msegmentptr pred = &m->seg;
3543 msegmentptr sp = pred->next;
3545 char* base = sp->base;
3546 size_t size = sp->size;
3547 msegmentptr next = sp->next;
3548 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
3549 mchunkptr p = align_as_chunk(base);
3550 size_t psize = chunksize(p);
3551 /* Can unmap if first chunk holds entire segment and not pinned */
3552 if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
3553 tchunkptr tp = (tchunkptr)p;
3554 assert(segment_holds(sp, (char*)sp));
3560 unlink_large_chunk(m, tp);
3562 if (CALL_MUNMAP(base, size) == 0) {
3564 m->footprint -= size;
3565 /* unlink obsoleted record */
3569 else { /* back out if cannot unmap */
3570 insert_large_chunk(m, tp, psize);
3580 static int sys_trim(mstate m, size_t pad) {
3581 size_t released = 0;
3582 if (pad < MAX_REQUEST && is_initialized(m)) {
3583 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
3585 if (m->topsize > pad) {
3586 /* Shrink top space in granularity-size units, keeping at least one */
3587 size_t unit = mparams.granularity;
3588 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
3590 msegmentptr sp = segment_holding(m, (char*)m->top);
3592 if (!is_extern_segment(sp)) {
3593 if (is_mmapped_segment(sp)) {
3595 sp->size >= extra &&
3596 !has_segment_link(m, sp)) { /* can't shrink if pinned */
3597 /* Prefer mremap, fall back to munmap */
3598 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
3599 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
3604 else if (HAVE_MORECORE) {
3605 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
3606 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
3607 ACQUIRE_MORECORE_LOCK();
3609 /* Make sure end of memory is where we last set it. */
3610 char* old_br = (char*)(CALL_MORECORE(0));
3611 if (old_br == sp->base + sp->size) {
3612 char* rel_br = (char*)(CALL_MORECORE(-extra));
3613 char* new_br = (char*)(CALL_MORECORE(0));
3614 if (rel_br != CMFAIL && new_br < old_br)
3615 released = old_br - new_br;
3618 RELEASE_MORECORE_LOCK();
3622 if (released != 0) {
3623 sp->size -= released;
3624 m->footprint -= released;
3625 init_top(m, m->top, m->topsize - released);
3626 check_top_chunk(m, m->top);
3630 /* Unmap any unused mmapped segments */
3632 released += release_unused_segments(m);
3634 /* On failure, disable autotrim to avoid repeated failed future calls */
3636 m->trim_check = MAX_SIZE_T;
3639 return (released != 0)? 1 : 0;
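/* Illustrative use of the public trimming entry point defined below (a
   sketch, not from the original documentation): return unused top space to
   the system while keeping at least 128KB of slack for future requests.

     if (dlmalloc_trim((size_t)128 * 1024))
       ;  // some memory was returned to the system
*/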
3642 /* ---------------------------- malloc support --------------------------- */
3644 /* allocate a large request from the best fitting chunk in a treebin */
3645 static void* tmalloc_large(mstate m, size_t nb) {
3647 size_t rsize = -nb; /* Unsigned negation */
3650 compute_tree_index(nb, idx);
3652 if ((t = *treebin_at(m, idx)) != 0) {
3653 /* Traverse tree for this bin looking for node with size == nb */
3654 size_t sizebits = nb << leftshift_for_tree_index(idx);
3655 tchunkptr rst = 0; /* The deepest untaken right subtree */
3658 size_t trem = chunksize(t) - nb;
3661 if ((rsize = trem) == 0)
3665 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3666 if (rt != 0 && rt != t)
3669 t = rst; /* set t to least subtree holding sizes > nb */
3676 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3677 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3678 if (leftbits != 0) {
3680 binmap_t leastbit = least_bit(leftbits);
3681 compute_bit2idx(leastbit, i);
3682 t = *treebin_at(m, i);
3686 while (t != 0) { /* find smallest of tree or subtree */
3687 size_t trem = chunksize(t) - nb;
3692 t = leftmost_child(t);
3695 /* If dv is a better fit, return 0 so malloc will use it */
3696 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3697 if (RTCHECK(ok_address(m, v))) { /* split */
3698 mchunkptr r = chunk_plus_offset(v, nb);
3699 assert(chunksize(v) == rsize + nb);
3700 if (RTCHECK(ok_next(v, r))) {
3701 unlink_large_chunk(m, v);
3702 if (rsize < MIN_CHUNK_SIZE)
3703 set_inuse_and_pinuse(m, v, (rsize + nb));
3705 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3706 set_size_and_pinuse_of_free_chunk(r, rsize);
3707 insert_chunk(m, r, rsize);
3709 return chunk2mem(v);
3712 CORRUPTION_ERROR_ACTION(m);
3717 /* allocate a small request from the best fitting chunk in a treebin */
3718 static void* tmalloc_small(mstate m, size_t nb) {
3722 binmap_t leastbit = least_bit(m->treemap);
3723 compute_bit2idx(leastbit, i);
3725 v = t = *treebin_at(m, i);
3726 rsize = chunksize(t) - nb;
3728 while ((t = leftmost_child(t)) != 0) {
3729 size_t trem = chunksize(t) - nb;
3736 if (RTCHECK(ok_address(m, v))) {
3737 mchunkptr r = chunk_plus_offset(v, nb);
3738 assert(chunksize(v) == rsize + nb);
3739 if (RTCHECK(ok_next(v, r))) {
3740 unlink_large_chunk(m, v);
3741 if (rsize < MIN_CHUNK_SIZE)
3742 set_inuse_and_pinuse(m, v, (rsize + nb));
3744 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3745 set_size_and_pinuse_of_free_chunk(r, rsize);
3746 replace_dv(m, r, rsize);
3748 return chunk2mem(v);
3752 CORRUPTION_ERROR_ACTION(m);
3756 /* --------------------------- realloc support --------------------------- */
3758 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3759 if (bytes >= MAX_REQUEST) {
3760 MALLOC_FAILURE_ACTION;
3763 if (!PREACTION(m)) {
3764 mchunkptr oldp = mem2chunk(oldmem);
3765 size_t oldsize = chunksize(oldp);
3766 mchunkptr next = chunk_plus_offset(oldp, oldsize);
3770 /* Try to either shrink or extend into top. Else malloc-copy-free */
3772 if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3773 ok_next(oldp, next) && ok_pinuse(next))) {
3774 size_t nb = request2size(bytes);
3775 if (is_mmapped(oldp))
3776 newp = mmap_resize(m, oldp, nb);
3777 else if (oldsize >= nb) { /* already big enough */
3778 size_t rsize = oldsize - nb;
3780 if (rsize >= MIN_CHUNK_SIZE) {
3781 mchunkptr remainder = chunk_plus_offset(newp, nb);
3782 set_inuse(m, newp, nb);
3783 set_inuse(m, remainder, rsize);
3784 extra = chunk2mem(remainder);
3787 else if (next == m->top && oldsize + m->topsize > nb) {
3788 /* Expand into top */
3789 size_t newsize = oldsize + m->topsize;
3790 size_t newtopsize = newsize - nb;
3791 mchunkptr newtop = chunk_plus_offset(oldp, nb);
3792 set_inuse(m, oldp, nb);
3793 newtop->head = newtopsize |PINUSE_BIT;
3795 m->topsize = newtopsize;
3800 USAGE_ERROR_ACTION(m, oldmem);
3809 internal_free(m, extra);
3811 check_inuse_chunk(m, newp);
3812 return chunk2mem(newp);
3815 void* newmem = internal_malloc(m, bytes);
3817 size_t oc = oldsize - overhead_for(oldp);
3818 memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3819 internal_free(m, oldmem);
3827 /* --------------------------- memalign support -------------------------- */
3829 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3830 if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */
3831 return internal_malloc(m, bytes);
3832 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3833 alignment = MIN_CHUNK_SIZE;
3834 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3835 size_t a = MALLOC_ALIGNMENT << 1;
3836 while (a < alignment) a <<= 1;
3840 if (bytes >= MAX_REQUEST - alignment) {
3841 if (m != 0) { /* Test isn't needed but avoids compiler warning */
3842 MALLOC_FAILURE_ACTION;
3846 size_t nb = request2size(bytes);
3847 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3848 char* mem = (char*)internal_malloc(m, req);
3852 mchunkptr p = mem2chunk(mem);
3854 if (PREACTION(m)) return 0;
3855 if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3857 Find an aligned spot inside chunk. Since we need to give
3858 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3859 the first calculation places us at a spot with less than
3860 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3861 We've allocated enough total room so that this is always possible.
3864 char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3868 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3870 mchunkptr newp = (mchunkptr)pos;
3871 size_t leadsize = pos - (char*)(p);
3872 size_t newsize = chunksize(p) - leadsize;
3874 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3875 newp->prev_foot = p->prev_foot + leadsize;
3876 newp->head = (newsize|CINUSE_BIT);
3878 else { /* Otherwise, give back leader, use the rest */
3879 set_inuse(m, newp, newsize);
3880 set_inuse(m, p, leadsize);
3881 leader = chunk2mem(p);
3886 /* Give back spare room at the end */
3887 if (!is_mmapped(p)) {
3888 size_t size = chunksize(p);
3889 if (size > nb + MIN_CHUNK_SIZE) {
3890 size_t remainder_size = size - nb;
3891 mchunkptr remainder = chunk_plus_offset(p, nb);
3892 set_inuse(m, p, nb);
3893 set_inuse(m, remainder, remainder_size);
3894 trailer = chunk2mem(remainder);
3898 assert (chunksize(p) >= nb);
3899 assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3900 check_inuse_chunk(m, p);
3903 internal_free(m, leader);
3906 internal_free(m, trailer);
3908 return chunk2mem(p);
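/*
  Illustrative sketch (not part of the original sources): why the
  over-allocation in internal_memalign always leaves room for an aligned
  chunk. Assuming 4-byte size_t without FOOTERS (CHUNK_OVERHEAD == 4,
  MIN_CHUNK_SIZE == 16), a call with bytes == 100 and alignment == 64 gives

      nb  = request2size(100)                            == 104
      req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD
          = 104 + 64 + 16 - 4                            == 180

  Within any chunk of at least 180 payload bytes there is a 64-aligned
  position whose distance from the chunk start is either zero or at least
  MIN_CHUNK_SIZE, so the leader can be split off and freed while nb bytes
  still remain for the aligned chunk.
*/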
3914 /* ------------------------ comalloc/coalloc support --------------------- */
3916 static void** ialloc(mstate m,
3922 This provides common support for independent_X routines, handling
3923 all of the combinations that can result.
3926 bit 0 set if all elements are same size (using sizes[0])
3927 bit 1 set if elements should be zeroed
3930 size_t element_size; /* chunksize of each element, if all same */
3931 size_t contents_size; /* total size of elements */
3932 size_t array_size; /* request size of pointer array */
3933 void* mem; /* malloced aggregate space */
3934 mchunkptr p; /* corresponding chunk */
3935 size_t remainder_size; /* remaining bytes while splitting */
3936 void** marray; /* either "chunks" or malloced ptr array */
3937 mchunkptr array_chunk; /* chunk for malloced ptr array */
3938 flag_t was_enabled; /* to disable mmap */
3942 /* compute array length, if needed */
3944 if (n_elements == 0)
3945 return chunks; /* nothing to do */
3950 /* if empty req, must still return chunk representing empty array */
3951 if (n_elements == 0)
3952 return (void**)internal_malloc(m, 0);
3954 array_size = request2size(n_elements * (sizeof(void*)));
3957 /* compute total element size */
3958 if (opts & 0x1) { /* all-same-size */
3959 element_size = request2size(*sizes);
3960 contents_size = n_elements * element_size;
3962 else { /* add up all the sizes */
3965 for (i = 0; i != n_elements; ++i)
3966 contents_size += request2size(sizes[i]);
3969 size = contents_size + array_size;
3972 Allocate the aggregate chunk. First disable direct-mmapping so
3973 malloc won't use it, since we would not be able to later
3974 free/realloc space internal to a segregated mmap region.
3976 was_enabled = use_mmap(m);
3978 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
3984 if (PREACTION(m)) return 0;
3986 remainder_size = chunksize(p);
3988 assert(!is_mmapped(p));
3990 if (opts & 0x2) { /* optionally clear the elements */
3991 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
3994 /* If not provided, allocate the pointer array as final part of chunk */
3996 size_t array_chunk_size;
3997 array_chunk = chunk_plus_offset(p, contents_size);
3998 array_chunk_size = remainder_size - contents_size;
3999 marray = (void**) (chunk2mem(array_chunk));
4000 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4001 remainder_size = contents_size;
4004 /* split out elements */
4005 for (i = 0; ; ++i) {
4006 marray[i] = chunk2mem(p);
4007 if (i != n_elements-1) {
4008 if (element_size != 0)
4009 size = element_size;
4011 size = request2size(sizes[i]);
4012 remainder_size -= size;
4013 set_size_and_pinuse_of_inuse_chunk(m, p, size);
4014 p = chunk_plus_offset(p, size);
4016 else { /* the final element absorbs any overallocation slop */
4017 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4023 if (marray != chunks) {
4024 /* final element must have exactly exhausted chunk */
4025 if (element_size != 0) {
4026 assert(remainder_size == element_size);
4029 assert(remainder_size == request2size(sizes[i]));
4031 check_inuse_chunk(m, mem2chunk(marray));
4033 for (i = 0; i != n_elements; ++i)
4034 check_inuse_chunk(m, mem2chunk(marray[i]));
4043 /* -------------------------- public routines ---------------------------- */
4047 void* dlmalloc(size_t bytes) {
4050 If a small request (< 256 bytes minus per-chunk overhead):
4051 1. If one exists, use a remainderless chunk in associated smallbin.
4052 (Remainderless means that there are too few excess bytes to
4053 represent as a chunk.)
4054 2. If it is big enough, use the dv chunk, which is normally the
4055 chunk adjacent to the one used for the most recent small request.
4056 3. If one exists, split the smallest available chunk in a bin,
4057 saving remainder in dv.
4058 4. If it is big enough, use the top chunk.
4059 5. If available, get memory from system and use it
4060 Otherwise, for a large request:
4061 1. Find the smallest available binned chunk that fits, and use it
4062 if it is better fitting than dv chunk, splitting if necessary.
4063 2. If better fitting than any binned chunk, use the dv chunk.
4064 3. If it is big enough, use the top chunk.
4065 4. If request size >= mmap threshold, try to directly mmap this chunk.
4066 5. If available, get memory from system and use it
4068 The ugly goto's here ensure that postaction occurs along all paths.
4071 if (!PREACTION(gm)) {
4074 if (bytes <= MAX_SMALL_REQUEST) {
4077 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4078 idx = small_index(nb);
4079 smallbits = gm->smallmap >> idx;
4081 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4083 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4084 b = smallbin_at(gm, idx);
4086 assert(chunksize(p) == small_index2size(idx));
4087 unlink_first_small_chunk(gm, b, p, idx);
4088 set_inuse_and_pinuse(gm, p, small_index2size(idx));
4090 check_malloced_chunk(gm, mem, nb);
4094 else if (nb > gm->dvsize) {
4095 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4099 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4100 binmap_t leastbit = least_bit(leftbits);
4101 compute_bit2idx(leastbit, i);
4102 b = smallbin_at(gm, i);
4104 assert(chunksize(p) == small_index2size(i));
4105 unlink_first_small_chunk(gm, b, p, i);
4106 rsize = small_index2size(i) - nb;
4107 /* Fit here cannot be remainderless if 4byte sizes */
4108 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4109 set_inuse_and_pinuse(gm, p, small_index2size(i));
4111 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4112 r = chunk_plus_offset(p, nb);
4113 set_size_and_pinuse_of_free_chunk(r, rsize);
4114 replace_dv(gm, r, rsize);
4117 check_malloced_chunk(gm, mem, nb);
4121 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4122 check_malloced_chunk(gm, mem, nb);
4127 else if (bytes >= MAX_REQUEST)
4128 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4130 nb = pad_request(bytes);
4131 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4132 check_malloced_chunk(gm, mem, nb);
4137 if (nb <= gm->dvsize) {
4138 size_t rsize = gm->dvsize - nb;
4139 mchunkptr p = gm->dv;
4140 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4141 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4143 set_size_and_pinuse_of_free_chunk(r, rsize);
4144 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4146 else { /* exhaust dv */
4147 size_t dvs = gm->dvsize;
4150 set_inuse_and_pinuse(gm, p, dvs);
4153 check_malloced_chunk(gm, mem, nb);
4157 else if (nb < gm->topsize) { /* Split top */
4158 size_t rsize = gm->topsize -= nb;
4159 mchunkptr p = gm->top;
4160 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4161 r->head = rsize | PINUSE_BIT;
4162 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4164 check_top_chunk(gm, gm->top);
4165 check_malloced_chunk(gm, mem, nb);
4169 mem = sys_alloc(gm, nb);
4179 void dlfree(void* mem) {
4181 Consolidate freed chunks with preceding or succeeding bordering
4182 free chunks, if they exist, and then place in a bin. Intermixed
4183 with special cases for top, dv, mmapped chunks, and usage errors.
4187 mchunkptr p = mem2chunk(mem);
4189 mstate fm = get_mstate_for(p);
4190 if (!ok_magic(fm)) {
4191 USAGE_ERROR_ACTION(fm, p);
4196 #endif /* FOOTERS */
4197 if (!PREACTION(fm)) {
4198 check_inuse_chunk(fm, p);
4199 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4200 size_t psize = chunksize(p);
4201 mchunkptr next = chunk_plus_offset(p, psize);
4203 size_t prevsize = p->prev_foot;
4204 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4205 prevsize &= ~IS_MMAPPED_BIT;
4206 psize += prevsize + MMAP_FOOT_PAD;
4207 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4208 fm->footprint -= psize;
4212 mchunkptr prev = chunk_minus_offset(p, prevsize);
4215 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4217 unlink_chunk(fm, p, prevsize);
4219 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4221 set_free_with_pinuse(p, psize, next);
4230 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4231 if (!cinuse(next)) { /* consolidate forward */
4232 if (next == fm->top) {
4233 size_t tsize = fm->topsize += psize;
4235 p->head = tsize | PINUSE_BIT;
4240 if (should_trim(fm, tsize))
4244 else if (next == fm->dv) {
4245 size_t dsize = fm->dvsize += psize;
4247 set_size_and_pinuse_of_free_chunk(p, dsize);
4251 size_t nsize = chunksize(next);
4253 unlink_chunk(fm, next, nsize);
4254 set_size_and_pinuse_of_free_chunk(p, psize);
4262 set_free_with_pinuse(p, psize, next);
4263 insert_chunk(fm, p, psize);
4264 check_free_chunk(fm, p);
4269 USAGE_ERROR_ACTION(fm, p);
4276 #endif /* FOOTERS */
4279 void* dlcalloc(size_t n_elements, size_t elem_size) {
4282 if (n_elements != 0) {
4283 req = n_elements * elem_size;
4284 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4285 (req / n_elements != elem_size))
4286 req = MAX_SIZE_T; /* force downstream failure on overflow */
4288 mem = dlmalloc(req);
4289 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4290 memset(mem, 0, req);
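/*
  Note on the overflow guard in dlcalloc above (illustrative, not from the
  original comments): the division check is skipped when both n_elements
  and elem_size fit in 16 bits, since their product then cannot overflow a
  32-bit or wider size_t. With a 32-bit size_t, n_elements == 0x10000 and
  elem_size == 0x10001 would wrap to req == 0x10000; the check
  req / n_elements != elem_size detects the wrap and forces req to
  MAX_SIZE_T, so dlmalloc fails instead of returning an undersized block.
*/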
4294 void* dlrealloc(void* oldmem, size_t bytes) {
4296 return dlmalloc(bytes);
4297 #ifdef REALLOC_ZERO_BYTES_FREES
4302 #endif /* REALLOC_ZERO_BYTES_FREES */
4307 mstate m = get_mstate_for(mem2chunk(oldmem));
4309 USAGE_ERROR_ACTION(m, oldmem);
4312 #endif /* FOOTERS */
4313 return internal_realloc(m, oldmem, bytes);
4317 void* dlmemalign(size_t alignment, size_t bytes) {
4318 return internal_memalign(gm, alignment, bytes);
4321 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4323 size_t sz = elem_size; /* serves as 1-element array */
4324 return ialloc(gm, n_elements, &sz, 3, chunks);
4327 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
4329 return ialloc(gm, n_elements, sizes, 0, chunks);
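/*
  Illustrative use of dlindependent_calloc (a sketch, not taken from the
  original documentation): allocate ten equal-sized, zeroed elements in one
  underlying request, link them, and later free each one individually.

    struct node { int value; struct node* next; };

    static void build_list(void) {
      void* mem[10];
      int i;
      if (dlindependent_calloc(10, sizeof(struct node), mem) != 0) {
        for (i = 0; i < 10; ++i) {
          struct node* n = (struct node*)mem[i];
          n->next = (i + 1 < 10)? (struct node*)mem[i + 1] : 0;
        }
        for (i = 0; i < 10; ++i)
          dlfree(mem[i]);   // each element is an ordinary chunk
      }
    }
*/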
4332 void* dlvalloc(size_t bytes) {
4335 pagesz = mparams.page_size;
4336 return dlmemalign(pagesz, bytes);
4339 void* dlpvalloc(size_t bytes) {
4342 pagesz = mparams.page_size;
4343 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
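/* Worked example (assuming mparams.page_size == 4096): dlpvalloc(5000)
   rounds the request up to 8192 and returns a page-aligned block, while
   dlvalloc(5000) returns a page-aligned block of at least 5000 usable
   bytes. */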
4346 int dlmalloc_trim(size_t pad) {
4348 if (!PREACTION(gm)) {
4349 result = sys_trim(gm, pad);
4355 size_t dlmalloc_footprint(void) {
4356 return gm->footprint;
4359 size_t dlmalloc_max_footprint(void) {
4360 return gm->max_footprint;
4364 struct mallinfo dlmallinfo(void) {
4365 return internal_mallinfo(gm);
4367 #endif /* NO_MALLINFO */
4369 void dlmalloc_stats() {
4370 internal_malloc_stats(gm);
4373 size_t dlmalloc_usable_size(void* mem) {
4375 mchunkptr p = mem2chunk(mem);
4377 return chunksize(p) - overhead_for(p);
4382 int dlmallopt(int param_number, int value) {
4383 return change_mparam(param_number, value);
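/* Illustrative use (assuming the usual dlmalloc parameter names
   M_TRIM_THRESHOLD and M_MMAP_THRESHOLD): disable automatic trimming and
   raise the mmap threshold to 1MB. dlmallopt returns 1 on success and 0 if
   the parameter number or value is not accepted.

     dlmallopt(M_TRIM_THRESHOLD, -1);         // never trim back to the system
     dlmallopt(M_MMAP_THRESHOLD, 1024*1024);  // mmap only requests >= 1MB
*/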
4386 #endif /* !ONLY_MSPACES */
4388 /* ----------------------------- user mspaces ---------------------------- */
4392 static mstate init_user_mstate(char* tbase, size_t tsize) {
4393 size_t msize = pad_request(sizeof(struct malloc_state));
4395 mchunkptr msp = align_as_chunk(tbase);
4396 mstate m = (mstate)(chunk2mem(msp));
4397 memset(m, 0, msize);
4398 INITIAL_LOCK(&m->mutex);
4399 msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
4400 m->seg.base = m->least_addr = tbase;
4401 m->seg.size = m->footprint = m->max_footprint = tsize;
4402 m->magic = mparams.magic;
4403 m->mflags = mparams.default_mflags;
4404 disable_contiguous(m);
4406 mn = next_chunk(mem2chunk(m));
4407 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
4408 check_top_chunk(m, m->top);
4412 mspace create_mspace(size_t capacity, int locked) {
4414 size_t msize = pad_request(sizeof(struct malloc_state));
4415 init_mparams(); /* Ensure pagesize etc initialized */
4417 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4418 size_t rs = ((capacity == 0)? mparams.granularity :
4419 (capacity + TOP_FOOT_SIZE + msize));
4420 size_t tsize = granularity_align(rs);
4421 char* tbase = (char*)(CALL_MMAP(tsize));
4422 if (tbase != CMFAIL) {
4423 m = init_user_mstate(tbase, tsize);
4424 m->seg.sflags = IS_MMAPPED_BIT;
4425 set_lock(m, locked);
4431 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
4433 size_t msize = pad_request(sizeof(struct malloc_state));
4434 init_mparams(); /* Ensure pagesize etc initialized */
4436 if (capacity > msize + TOP_FOOT_SIZE &&
4437 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4438 m = init_user_mstate((char*)base, capacity);
4439 m->seg.sflags = EXTERN_BIT;
4440 set_lock(m, locked);
4445 size_t destroy_mspace(mspace msp) {
4447 mstate ms = (mstate)msp;
4449 msegmentptr sp = &ms->seg;
4451 char* base = sp->base;
4452 size_t size = sp->size;
4453 flag_t flag = sp->sflags;
4455 if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
4456 CALL_MUNMAP(base, size) == 0)
4461 USAGE_ERROR_ACTION(ms,ms);
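/*
  Illustrative mspace lifecycle (a sketch, not from the original
  documentation): create a private heap, allocate from it, and release the
  entire region in one call.

    static void use_private_heap(void) {
      mspace msp = create_mspace(0, 0);   // default capacity, no locking
      if (msp != 0) {
        void* a = mspace_malloc(msp, 128);
        void* b = mspace_calloc(msp, 16, sizeof(double));
        mspace_free(msp, a);              // per-object frees are optional...
        destroy_mspace(msp);              // ...this releases b as well
        (void)b;
      }
    }
*/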
4467 mspace versions of routines are near-clones of the global
4468 versions. This is not so nice but better than the alternatives.
4472 void* mspace_malloc(mspace msp, size_t bytes) {
4473 mstate ms = (mstate)msp;
4474 if (!ok_magic(ms)) {
4475 USAGE_ERROR_ACTION(ms,ms);
4478 if (!PREACTION(ms)) {
4481 if (bytes <= MAX_SMALL_REQUEST) {
4484 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4485 idx = small_index(nb);
4486 smallbits = ms->smallmap >> idx;
4488 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4490 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4491 b = smallbin_at(ms, idx);
4493 assert(chunksize(p) == small_index2size(idx));
4494 unlink_first_small_chunk(ms, b, p, idx);
4495 set_inuse_and_pinuse(ms, p, small_index2size(idx));
4497 check_malloced_chunk(ms, mem, nb);
4501 else if (nb > ms->dvsize) {
4502 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4506 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4507 binmap_t leastbit = least_bit(leftbits);
4508 compute_bit2idx(leastbit, i);
4509 b = smallbin_at(ms, i);
4511 assert(chunksize(p) == small_index2size(i));
4512 unlink_first_small_chunk(ms, b, p, i);
4513 rsize = small_index2size(i) - nb;
4514 /* Fit here cannot be remainderless if 4byte sizes */
4515 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4516 set_inuse_and_pinuse(ms, p, small_index2size(i));
4518 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4519 r = chunk_plus_offset(p, nb);
4520 set_size_and_pinuse_of_free_chunk(r, rsize);
4521 replace_dv(ms, r, rsize);
4524 check_malloced_chunk(ms, mem, nb);
4528 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
4529 check_malloced_chunk(ms, mem, nb);
4534 else if (bytes >= MAX_REQUEST)
4535 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4537 nb = pad_request(bytes);
4538 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
4539 check_malloced_chunk(ms, mem, nb);
4544 if (nb <= ms->dvsize) {
4545 size_t rsize = ms->dvsize - nb;
4546 mchunkptr p = ms->dv;
4547 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4548 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
4550 set_size_and_pinuse_of_free_chunk(r, rsize);
4551 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4553 else { /* exhaust dv */
4554 size_t dvs = ms->dvsize;
4557 set_inuse_and_pinuse(ms, p, dvs);
4560 check_malloced_chunk(ms, mem, nb);
4564 else if (nb < ms->topsize) { /* Split top */
4565 size_t rsize = ms->topsize -= nb;
4566 mchunkptr p = ms->top;
4567 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
4568 r->head = rsize | PINUSE_BIT;
4569 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4571 check_top_chunk(ms, ms->top);
4572 check_malloced_chunk(ms, mem, nb);
4576 mem = sys_alloc(ms, nb);
4586 void mspace_free(mspace msp, void* mem) {
4588 mchunkptr p = mem2chunk(mem);
4590 mstate fm = get_mstate_for(p);
4592 mstate fm = (mstate)msp;
4593 #endif /* FOOTERS */
4594 if (!ok_magic(fm)) {
4595 USAGE_ERROR_ACTION(fm, p);
4598 if (!PREACTION(fm)) {
4599 check_inuse_chunk(fm, p);
4600 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4601 size_t psize = chunksize(p);
4602 mchunkptr next = chunk_plus_offset(p, psize);
4604 size_t prevsize = p->prev_foot;
4605 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4606 prevsize &= ~IS_MMAPPED_BIT;
4607 psize += prevsize + MMAP_FOOT_PAD;
4608 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4609 fm->footprint -= psize;
4613 mchunkptr prev = chunk_minus_offset(p, prevsize);
4616 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4618 unlink_chunk(fm, p, prevsize);
4620 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4622 set_free_with_pinuse(p, psize, next);
4631 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4632 if (!cinuse(next)) { /* consolidate forward */
4633 if (next == fm->top) {
4634 size_t tsize = fm->topsize += psize;
4636 p->head = tsize | PINUSE_BIT;
4641 if (should_trim(fm, tsize))
4645 else if (next == fm->dv) {
4646 size_t dsize = fm->dvsize += psize;
4648 set_size_and_pinuse_of_free_chunk(p, dsize);
4652 size_t nsize = chunksize(next);
4654 unlink_chunk(fm, next, nsize);
4655 set_size_and_pinuse_of_free_chunk(p, psize);
4663 set_free_with_pinuse(p, psize, next);
4664 insert_chunk(fm, p, psize);
4665 check_free_chunk(fm, p);
4670 USAGE_ERROR_ACTION(fm, p);
4677 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4680 mstate ms = (mstate)msp;
4681 if (!ok_magic(ms)) {
4682 USAGE_ERROR_ACTION(ms,ms);
4685 if (n_elements != 0) {
4686 req = n_elements * elem_size;
4687 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4688 (req / n_elements != elem_size))
4689 req = MAX_SIZE_T; /* force downstream failure on overflow */
4691 mem = internal_malloc(ms, req);
4692 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4693 memset(mem, 0, req);
4697 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4699 return mspace_malloc(msp, bytes);
4700 #ifdef REALLOC_ZERO_BYTES_FREES
4702 mspace_free(msp, oldmem);
4705 #endif /* REALLOC_ZERO_BYTES_FREES */
4708 mchunkptr p = mem2chunk(oldmem);
4709 mstate ms = get_mstate_for(p);
4711 mstate ms = (mstate)msp;
4712 #endif /* FOOTERS */
4713 if (!ok_magic(ms)) {
4714 USAGE_ERROR_ACTION(ms,ms);
4717 return internal_realloc(ms, oldmem, bytes);
4721 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
4722 mstate ms = (mstate)msp;
4723 if (!ok_magic(ms)) {
4724 USAGE_ERROR_ACTION(ms,ms);
4727 return internal_memalign(ms, alignment, bytes);
4730 void** mspace_independent_calloc(mspace msp, size_t n_elements,
4731 size_t elem_size, void* chunks[]) {
4732 size_t sz = elem_size; /* serves as 1-element array */
4733 mstate ms = (mstate)msp;
4734 if (!ok_magic(ms)) {
4735 USAGE_ERROR_ACTION(ms,ms);
4738 return ialloc(ms, n_elements, &sz, 3, chunks);
4741 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
4742 size_t sizes[], void* chunks[]) {
4743 mstate ms = (mstate)msp;
4744 if (!ok_magic(ms)) {
4745 USAGE_ERROR_ACTION(ms,ms);
4748 return ialloc(ms, n_elements, sizes, 0, chunks);
4751 int mspace_trim(mspace msp, size_t pad) {
4753 mstate ms = (mstate)msp;
4755 if (!PREACTION(ms)) {
4756 result = sys_trim(ms, pad);
4761 USAGE_ERROR_ACTION(ms,ms);
4766 void mspace_malloc_stats(mspace msp) {
4767 mstate ms = (mstate)msp;
4769 internal_malloc_stats(ms);
4772 USAGE_ERROR_ACTION(ms,ms);
4776 size_t mspace_footprint(mspace msp) {
4778 mstate ms = (mstate)msp;
4780 result = ms->footprint;
4782 USAGE_ERROR_ACTION(ms,ms);
4787 size_t mspace_max_footprint(mspace msp) {
4789 mstate ms = (mstate)msp;
4791 result = ms->max_footprint;
4793 USAGE_ERROR_ACTION(ms,ms);
4799 struct mallinfo mspace_mallinfo(mspace msp) {
4800 mstate ms = (mstate)msp;
4801 if (!ok_magic(ms)) {
4802 USAGE_ERROR_ACTION(ms,ms);
4804 return internal_mallinfo(ms);
4806 #endif /* NO_MALLINFO */
4808 int mspace_mallopt(int param_number, int value) {
4809 return change_mparam(param_number, value);
4812 #endif /* MSPACES */
4814 /* -------------------- Alternative MORECORE functions ------------------- */
4817 Guidelines for creating a custom version of MORECORE:
4819 * For best performance, MORECORE should allocate in multiples of pagesize.
4820 * MORECORE may allocate more memory than requested. (Or even less,
4821 but this will usually result in a malloc failure.)
4822 * MORECORE must not allocate memory when given argument zero, but
4823 instead return one past the end address of memory from the previous nonzero call.
4825 * For best performance, consecutive calls to MORECORE with positive
4826 arguments should return increasing addresses, indicating that
4827 space has been contiguously extended.
4828 * Even though consecutive calls to MORECORE need not return contiguous
4829 addresses, it must be OK for malloc'ed chunks to span multiple
4830 regions in those cases where they do happen to be contiguous.
4831 * MORECORE need not handle negative arguments -- it may instead
4832 just return MFAIL when given negative arguments.
4833 Negative arguments are always multiples of pagesize. MORECORE
4834 must not misinterpret negative args as large positive unsigned
4835 args. You can suppress all such calls from even occurring by defining
4836 MORECORE_CANNOT_TRIM.
4838 As an example alternative MORECORE, here is a custom allocator
4839 kindly contributed for pre-OSX macOS. It uses virtually but not
4840 necessarily physically contiguous non-paged memory (locked in,
4841 present and won't get swapped out). You can use it by uncommenting
4842 this section, adding some #includes, and setting up the appropriate
4845 #define MORECORE osMoreCore
4847 There is also a shutdown routine that should somehow be called for
4848 cleanup upon program exit.
4850 #define MAX_POOL_ENTRIES 100
4851 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
4852 static int next_os_pool;
4853 void *our_os_pools[MAX_POOL_ENTRIES];
4855 void *osMoreCore(int size)
4858 static void *sbrk_top = 0;
4862 if (size < MINIMUM_MORECORE_SIZE)
4863 size = MINIMUM_MORECORE_SIZE;
4864 if (CurrentExecutionLevel() == kTaskLevel)
4865 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4868 return (void *) MFAIL;
4870 // save ptrs so they can be freed during cleanup
4871 our_os_pools[next_os_pool] = ptr;
4873 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4874 sbrk_top = (char *) ptr + size;
4879 // we don't currently support shrink behavior
4880 return (void *) MFAIL;
4888 // cleanup any allocated memory pools
4889 // called as last thing before shutting down driver
4891 void osCleanupMem(void)
4895 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4898 PoolDeallocate(*ptr);
4906 /* -----------------------------------------------------------------------
4908 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
4909 * Add max_footprint functions
4910 * Ensure all appropriate literals are size_t
4911 * Fix conditional compilation problem for some #define settings
4912 * Avoid concatenating segments with the one provided
4913 in create_mspace_with_base
4914 * Rename some variables to avoid compiler shadowing warnings
4915 * Use explicit lock initialization.
4916 * Better handling of sbrk interference.
4917 * Simplify and fix segment insertion, trimming and mspace_destroy
4918 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
4919 * Thanks especially to Dennis Flanagan for help on these.
4921 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
4922 * Fix memalign brace error.
4924 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
4925 * Fix improper #endif nesting in C++
4926 * Add explicit casts needed for C++
4928 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
4929 * Use trees for large bins
4931 * Use segments to unify sbrk-based and mmap-based system allocation,
4932 removing need for emulation on most platforms without sbrk.
4933 * Default safety checks
4934 * Optional footer checks. Thanks to William Robertson for the idea.
4935 * Internal code refactoring
4936 * Incorporate suggestions and platform-specific changes.
4937 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
4938 Aaron Bachmann, Emery Berger, and others.
4939 * Speed up non-fastbin processing enough to remove fastbins.
4940 * Remove useless cfree() to avoid conflicts with other apps.
4941 * Remove internal memcpy, memset. Compilers handle builtins better.
4942 * Remove some options that no one ever used and rename others.
4944 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
4945 * Fix malloc_state bitmap array misdeclaration
4947 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
4948 * Allow tuning of FIRST_SORTED_BIN_SIZE
4949 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
4950 * Better detection and support for non-contiguousness of MORECORE.
4951 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
4952 * Bypass most of malloc if no frees. Thanks to Emery Berger.
4953 * Fix freeing of old top non-contiguous chunk in sysmalloc.
4954 * Raised default trim and map thresholds to 256K.
4955 * Fix mmap-related #defines. Thanks to Lubos Lunak.
4956 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
4957 * Branch-free bin calculation
4958 * Default trim and mmap thresholds now 256K.
4960 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
4961 * Introduce independent_comalloc and independent_calloc.
4962 Thanks to Michael Pachos for motivation and help.
4963 * Make optional .h file available
4964 * Allow > 2GB requests on 32bit systems.
4965 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
4966 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
4968 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for suggesting it.)
4970 * memalign: check alignment arg
4971 * realloc: don't try to shift chunks backwards, since this
4972 leads to more fragmentation in some programs and doesn't
4973 seem to help in any others.
4974 * Collect all cases in malloc requiring system memory into sysmalloc
4975 * Use mmap as backup to sbrk
4976 * Place all internal state in malloc_state
4977 * Introduce fastbins (although similar to 2.5.1)
4978 * Many minor tunings and cosmetic improvements
4979 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
4980 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
4981 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
4982 * Include errno.h to support default failure action.
4984 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
4985 * return null for negative arguments
4986 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
4987 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
4988 (e.g. WIN32 platforms)
4989 * Cleanup header file inclusion for WIN32 platforms
4990 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
4991 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
4992 memory allocation routines
4993 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
4994 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
4995 usage of 'assert' in non-WIN32 code
4996 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
4998 * Always call 'fREe()' rather than 'free()'
5000 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
5001 * Fixed ordering problem with boundary-stamping
5003 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
5004 * Added pvalloc, as recommended by H.J. Liu
5005 * Added 64bit pointer support mainly from Wolfram Gloger
5006 * Added anonymously donated WIN32 sbrk emulation
5007 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5008 * malloc_extend_top: fix mask error that caused wastage after
5010 * Add linux mremap support code from HJ Liu
5012 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
5013 * Integrated most documentation with the code.
5014 * Add support for mmap, with help from
5015 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5016 * Use last_remainder in more cases.
5017 * Pack bins using idea from colin@nyx10.cs.du.edu
5018 * Use ordered bins instead of best-fit threshold
5019 * Eliminate block-local decls to simplify tracing and debugging.
5020 * Support another case of realloc via move into top
5021 * Fix error occurring when initial sbrk_base not word-aligned.
5022 * Rely on page size for units instead of SBRK_UNIT to
5023 avoid surprises about sbrk alignment conventions.
5024 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5025 (raymond@es.ele.tue.nl) for the suggestion.
5026 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5027 * More precautions for cases where other routines call sbrk,
5028 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5029 * Added macros etc., allowing use in linux libc from
5030 H.J. Lu (hjl@gnu.ai.mit.edu)
5031 * Inverted this history list
5033 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
5034 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5035 * Removed all preallocation code since under current scheme
5036 the work required to undo bad preallocations exceeds
5037 the work saved in good cases for most test programs.
5038 * No longer use return list or unconsolidated bins since
5039 no scheme using them consistently outperforms those that don't
5040 given above changes.
5041 * Use best fit for very large chunks to prevent some worst-cases.
5042 * Added some support for debugging
5044 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
5045 * Removed footers when chunks are in use. Thanks to
5046 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5048 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
5049 * Added malloc_trim, with help from Wolfram Gloger
5050 (wmglo@Dent.MED.Uni-Muenchen.DE).
5052 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
5054 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
5055 * realloc: try to expand in both directions
5056 * malloc: swap order of clean-bin strategy;
5057 * realloc: only conditionally expand backwards
5058 * Try not to scavenge used bins
5059 * Use bin counts as a guide to preallocation
5060 * Occasionally bin return list chunks in first scan
5061 * Add a few optimizations from colin@nyx10.cs.du.edu
5063 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
5064 * faster bin computation & slightly different binning
5065 * merged all consolidations to one part of malloc proper
5066 (eliminating old malloc_find_space & malloc_clean_bin)
5067 * Scan 2 returns chunks (not just 1)
5068 * Propagate failure in realloc if malloc returns 0
5069 * Add stuff to allow compilation on non-ANSI compilers
5070 from kpv@research.att.com
5072 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
5073 * removed potential for odd address access in prev_chunk
5074 * removed dependency on getpagesize.h
5075 * misc cosmetics and a bit more internal documentation
5076 * anticosmetics: mangled names in macros to evade debugger strangeness
5077 * tested on sparc, hp-700, dec-mips, rs6000
5078 with gcc & native cc (hp, dec only) allowing
5079 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5081 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
5082 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5083 structure of old version, but most details differ.)