2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain, as explained at
4 http://creativecommons.org/licenses/publicdomain. Send questions,
5 comments, complaints, performance data, etc to dl@cs.oswego.edu
7 * Version 2.8.3 Thu Sep 22 11:16:15 2005 Doug Lea (dl at gee)
9 Note: There may be an updated version of this malloc obtainable at
10 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11 Check before installing!
15 * Modifications made to the original version for mono:
16 * - added PROT_EXEC to MMAP_PROT
17 * - added PAGE_EXECUTE_READWRITE to the win32mmap and win32direct_mmap
18 * - a large portion of functions is #ifdef'ed out to make the native code smaller
22 #define USE_DL_PREFIX 1
24 /* Use mmap for allocating memory */
25 #define HAVE_MORECORE 0
31 This library is all in one file to simplify the most common usage:
32 ftp it, compile it (-O3), and link it into another program. All of
33 the compile-time options default to reasonable values for use on
34 most platforms. You might later want to step through various
35 compile-time and dynamic tuning options.
37 For convenience, an include file for code using this malloc is at:
38 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
39 You don't really need this .h file unless you call functions not
40 defined in your system include files. The .h file contains only the
41 excerpts from this file needed for using this malloc on ANSI C/C++
42 systems, so long as you haven't changed compile-time options about
43 naming and tuning parameters. If you do, then you can create your
44 own malloc.h that does include all settings by cutting at the point
45 indicated below. Note that you may already by default be using a C
46 library containing a malloc that is based on some version of this
47 malloc (for example in linux). You might still want to use the one
48 in this file to customize settings or to avoid overheads associated
49 with library versions.
53 Supported pointer/size_t representation: 4 or 8 bytes
54 size_t MUST be an unsigned type of the same width as
55 pointers. (If you are using an ancient system that declares
56 size_t as a signed type, or need it to be a different width
57 than pointers, you can use a previous release of this malloc
58 (e.g. 2.7.2) supporting these.)
60 Alignment: 8 bytes (default)
61 This suffices for nearly all current machines and C compilers.
62 However, you can define MALLOC_ALIGNMENT to be wider than this
if necessary (up to 128 bytes), at the expense of using more space.
65 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
66 8 or 16 bytes (if 8byte sizes)
67 Each malloced chunk has a hidden word of overhead holding size
68 and status information, and additional cross-check word
69 if FOOTERS is defined.
71 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
72 8-byte ptrs: 32 bytes (including overhead)
74 Even a request for zero bytes (i.e., malloc(0)) returns a
75 pointer to something of the minimum allocatable size.
The maximum overhead wastage (i.e., the number of extra bytes
allocated beyond those requested in malloc) is less than or equal
78 to the minimum size, except for requests >= mmap_threshold that
79 are serviced via mmap(), where the worst case wastage is about
80 32 bytes plus the remainder from a system page (the minimal
81 mmap unit); typically 4096 or 8192 bytes.
83 Security: static-safe; optionally more or less
84 The "security" of malloc refers to the ability of malicious
85 code to accentuate the effects of errors (for example, freeing
86 space that is not currently malloc'ed or overwriting past the
87 ends of chunks) in code that calls malloc. This malloc
88 guarantees not to modify any memory locations below the base of
89 heap, i.e., static variables, even in the presence of usage
90 errors. The routines additionally detect most improper frees
91 and reallocs. All this holds as long as the static bookkeeping
92 for malloc itself is not corrupted by some other means. This
93 is only one aspect of security -- these checks do not, and
94 cannot, detect all possible programming errors.
96 If FOOTERS is defined nonzero, then each allocated chunk
97 carries an additional check word to verify that it was malloced
98 from its space. These check words are the same within each
99 execution of a program using malloc, but differ across
100 executions, so externally crafted fake chunks cannot be
101 freed. This improves security by rejecting frees/reallocs that
102 could corrupt heap memory, in addition to the checks preventing
103 writes to statics that are always on. This may further improve
104 security at the expense of time and space overhead. (Note that
105 FOOTERS may also be worth using with MSPACES.)
107 By default detected errors cause the program to abort (calling
108 "abort()"). You can override this to instead proceed past
109 errors by defining PROCEED_ON_ERROR. In this case, a bad free
110 has no effect, and a malloc that encounters a bad address
111 caused by user overwrites will ignore the bad address by
112 dropping pointers and indices to all known memory. This may
113 be appropriate for programs that should continue if at all
114 possible in the face of programming errors, although they may
115 run out of memory because dropped memory is never reclaimed.
117 If you don't like either of these options, you can define
118 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
else. And if you are sure that your program using malloc has
120 no errors or vulnerabilities, you can define INSECURE to 1,
121 which might (or might not) provide a small performance improvement.
123 Thread-safety: NOT thread-safe unless USE_LOCKS defined
124 When USE_LOCKS is defined, each public call to malloc, free,
125 etc is surrounded with either a pthread mutex or a win32
126 spinlock (depending on WIN32). This is not especially fast, and
127 can be a major bottleneck. It is designed only to provide
128 minimal protection in concurrent environments, and to provide a
129 basis for extensions. If you are using malloc in a concurrent
130 program, consider instead using ptmalloc, which is derived from
131 a version of this malloc. (See http://www.malloc.de).
133 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
134 This malloc can use unix sbrk or any emulation (invoked using
135 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
136 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
137 memory. On most unix systems, it tends to work best if both
138 MORECORE and MMAP are enabled. On Win32, it uses emulations
based on VirtualAlloc. It also uses common C library functions like memset.
142 Compliance: I believe it is compliant with the Single Unix Specification
(See http://www.unix.org). Also SVID/XPG, ANSI C, and probably other standards as well.
146 * Overview of algorithms
148 This is not the fastest, most space-conserving, most portable, or
149 most tunable malloc ever written. However it is among the fastest
150 while also being among the most space-conserving, portable and
151 tunable. Consistent balance across these factors results in a good
152 general-purpose allocator for malloc-intensive programs.
154 In most ways, this malloc is a best-fit allocator. Generally, it
155 chooses the best-fitting existing chunk for a request, with ties
156 broken in approximately least-recently-used order. (This strategy
157 normally maintains low fragmentation.) However, for requests less
than 256 bytes, it deviates from best-fit when there is not an
159 exactly fitting available chunk by preferring to use space adjacent
160 to that used for the previous small request, as well as by breaking
161 ties in approximately most-recently-used order. (These enhance
162 locality of series of small allocations.) And for very large requests
163 (>= 256Kb by default), it relies on system memory mapping
164 facilities, if supported. (This helps avoid carrying around and
165 possibly fragmenting memory used only for large chunks.)
167 All operations (except malloc_stats and mallinfo) have execution
168 times that are bounded by a constant factor of the number of bits in
169 a size_t, not counting any clearing in calloc or copying in realloc,
170 or actions surrounding MORECORE and MMAP that have times
171 proportional to the number of non-contiguous regions returned by
172 system allocation routines, which is often just 1.
174 The implementation is not very modular and seriously overuses
175 macros. Perhaps someday all C compilers will do as good a job
176 inlining modular code as can now be done by brute-force expansion,
177 but now, enough of them seem not to.
179 Some compilers issue a lot of warnings about code that is
180 dead/unreachable only on some platforms, and also about intentional
uses of negation on unsigned types. All known cases of each can be ignored.
184 For a longer but out of date high-level description, see
185 http://gee.cs.oswego.edu/dl/html/malloc.html
188 If MSPACES is defined, then in addition to malloc, free, etc.,
189 this file also defines mspace_malloc, mspace_free, etc. These
190 are versions of malloc routines that take an "mspace" argument
191 obtained using create_mspace, to control all internal bookkeeping.
192 If ONLY_MSPACES is defined, only these versions are compiled.
193 So if you would like to use this allocator for only some allocations,
194 and your system malloc for others, you can compile with
195 ONLY_MSPACES and then do something like...
196 static mspace mymspace = create_mspace(0,0); // for example
197 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
199 (Note: If you only need one instance of an mspace, you can instead
200 use "USE_DL_PREFIX" to relabel the global malloc.)
202 You can similarly create thread-local allocators by storing
203 mspaces as thread-locals. For example:
204 static __thread mspace tlms = 0;
205 void* tlmalloc(size_t bytes) {
206 if (tlms == 0) tlms = create_mspace(0, 0);
return mspace_malloc(tlms, bytes);
}
void tlfree(void* mem) { mspace_free(tlms, mem); }
211 Unless FOOTERS is defined, each mspace is completely independent.
212 You cannot allocate from one and free to another (although
213 conformance is only weakly checked, so usage errors are not always
214 caught). If FOOTERS is defined, then each chunk carries around a tag
indicating its originating mspace, and frees are directed to their originating spaces.
218 ------------------------- Compile-time options ---------------------------
220 Be careful in setting #define values for numerical constants of type
221 size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast.
224 WIN32 default: defined if _WIN32 defined
225 Defining WIN32 sets up defaults for MS environment and compilers.
226 Otherwise defaults are for unix.
228 MALLOC_ALIGNMENT default: (size_t)8
229 Controls the minimum alignment for malloc'ed chunks. It must be a
230 power of two and at least 8, even on machines for which smaller
231 alignments would suffice. It may be defined as larger than this
232 though. Note however that code and data structures are optimized for
233 the case of 8-byte alignment.
235 MSPACES default: 0 (false)
236 If true, compile in support for independent allocation spaces.
237 This is only supported if HAVE_MMAP is true.
239 ONLY_MSPACES default: 0 (false)
240 If true, only compile in mspace versions, not regular versions.
242 USE_LOCKS default: 0 (false)
243 Causes each call to each public routine to be surrounded with
244 pthread or WIN32 mutex lock/unlock. (If set true, this can be
245 overridden on a per-mspace basis for mspace versions.)
248 If true, provide extra checking and dispatching by placing
249 information in the footers of allocated chunks. This adds
250 space and time overhead.
253 If true, omit checks for usage errors and heap space overwrites.
255 USE_DL_PREFIX default: NOT defined
256 Causes compiler to prefix all public routines with the string 'dl'.
257 This can be useful when you only want to use this malloc in one part
258 of a program, using your regular system malloc elsewhere.
260 ABORT default: defined as abort()
261 Defines how to abort on failed checks. On most systems, a failed
262 check cannot die with an "assert" or even print an informative
263 message, because the underlying print routines in turn call malloc,
264 which will fail again. Generally, the best policy is to simply call
265 abort(). It's not very useful to do more than this because many
266 errors due to overwriting will show up as address faults (null, odd
267 addresses etc) rather than malloc-triggered checks, so will also
268 abort. Also, most compilers know that abort() does not return, so
269 can better optimize code conditionally calling it.
271 PROCEED_ON_ERROR default: defined as 0 (false)
Controls whether detected bad addresses cause them to be bypassed
273 rather than aborting. If set, detected bad arguments to free and
274 realloc are ignored. And all bookkeeping information is zeroed out
275 upon a detected overwrite of freed heap space, thus losing the
276 ability to ever return it from malloc again, but enabling the
277 application to proceed. If PROCEED_ON_ERROR is defined, the
278 static variable malloc_corruption_error_count is compiled in
279 and can be examined to see if errors have occurred. This option
280 generates slower code than the default abort policy.
282 DEBUG default: NOT defined
283 The DEBUG setting is mainly intended for people trying to modify
284 this code or diagnose problems when porting to new platforms.
285 However, it may also be able to better isolate user errors than just
286 using runtime checks. The assertions in the check routines spell
287 out in more detail the assumptions and invariants underlying the
288 algorithms. The checking is fairly extensive, and will slow down
289 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
290 set will attempt to check every non-mmapped allocated and free chunk
291 in the course of computing the summaries.
293 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
294 Debugging assertion failures can be nearly impossible if your
295 version of the assert macro causes malloc to be called, which will
296 lead to a cascade of further failures, blowing the runtime stack.
ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
298 which will usually make debugging easier.
300 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
301 The action to take before "return 0" when malloc fails to be able to
302 return memory because there is none available.
304 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
305 True if this system supports sbrk or an emulation of it.
307 MORECORE default: sbrk
308 The name of the sbrk-style system routine to call to obtain more
309 memory. See below for guidance on writing custom MORECORE
310 functions. The type of the argument to sbrk/MORECORE varies across
311 systems. It cannot be size_t, because it supports negative
312 arguments, so it is normally the signed type of the same width as
313 size_t (sometimes declared as "intptr_t"). It doesn't much matter
314 though. Internally, we only call it with arguments less than half
315 the max value of a size_t, which should work across all reasonable
316 possibilities, although sometimes generating compiler warnings. See
near the end of this file for guidelines for creating a custom MORECORE function.
320 MORECORE_CONTIGUOUS default: 1 (true)
321 If true, take advantage of fact that consecutive calls to MORECORE
322 with positive arguments always return contiguous increasing
323 addresses. This is true of unix sbrk. It does not hurt too much to
324 set it true anyway, since malloc copes with non-contiguities.
325 Setting it false when definitely non-contiguous saves time
326 and possibly wasted space it would take to discover this though.
328 MORECORE_CANNOT_TRIM default: NOT defined
329 True if MORECORE cannot release space back to the system when given
330 negative arguments. This is generally necessary only if you are
using a hand-crafted MORECORE function that cannot handle negative arguments.
334 HAVE_MMAP default: 1 (true)
335 True if this system supports mmap or an emulation of it. If so, and
336 HAVE_MORECORE is not true, MMAP is used for all system
337 allocation. If set and HAVE_MORECORE is true as well, MMAP is
338 primarily used to directly allocate very large blocks. It is also
339 used as a backup strategy in cases where MORECORE fails to provide
340 space from system. Note: A single call to MUNMAP is assumed to be
able to unmap memory that may have been allocated using multiple calls
342 to MMAP, so long as they are adjacent.
344 HAVE_MREMAP default: 1 on linux, else 0
345 If true realloc() uses mremap() to re-allocate large blocks and
346 extend or shrink allocation spaces.
348 MMAP_CLEARS default: 1 on unix
349 True if mmap clears memory so calloc doesn't need to. This is true
350 for standard unix mmap using /dev/zero.
352 USE_BUILTIN_FFS default: 0 (i.e., not used)
353 Causes malloc to use the builtin ffs() function to compute indices.
354 Some compilers may recognize and intrinsify ffs to be faster than the
355 supplied C version. Also, the case of x86 using gcc is special-cased
356 to an asm instruction, so is already as fast as it can be, and so
357 this setting has no effect. (On most x86s, the asm version is only
358 slightly faster than the C version.)
360 malloc_getpagesize default: derive from system includes, or 4096.
361 The system page size. To the extent possible, this malloc manages
362 memory from the system in page-size units. This may be (and
363 usually is) a function rather than a constant. This is ignored
if WIN32, where page size is determined using GetSystemInfo during initialization.
367 USE_DEV_RANDOM default: 0 (i.e., not used)
368 Causes malloc to use /dev/random to initialize secure magic seed for
369 stamping footers. Otherwise, the current time is used.
371 NO_MALLINFO default: 0
372 If defined, don't compile "mallinfo". This can be a simple way
of dealing with mismatches between system declarations and your own.
376 MALLINFO_FIELD_TYPE default: size_t
377 The type of the fields in the mallinfo struct. This was originally
378 defined as "int" in SVID etc, but is more usefully defined as
379 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set
381 REALLOC_ZERO_BYTES_FREES default: not defined
382 This should be set if a call to realloc with zero bytes should
383 be the same as a call to free. Some people think it should. Otherwise,
since this malloc returns a unique pointer for malloc(0), so does realloc(p, 0).
387 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
388 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
389 LACKS_STDLIB_H default: NOT defined unless on WIN32
390 Define these if your system does not have these header files.
391 You might need to manually insert some of the declarations they provide.
393 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
system_info.dwAllocationGranularity in WIN32, otherwise 64K.
396 Also settable using mallopt(M_GRANULARITY, x)
397 The unit for allocating and deallocating memory from the system. On
398 most systems with contiguous MORECORE, there is no reason to
399 make this more than a page. However, systems with MMAP tend to
400 either require or encourage larger granularities. You can increase
this value to prevent system allocation functions from being called so
402 often, especially if they are slow. The value must be at least one
403 page and must be a power of two. Setting to 0 causes initialization
404 to either page size or win32 region size. (Note: In previous
405 versions of malloc, the equivalent of this option was called
408 DEFAULT_TRIM_THRESHOLD default: 2MB
409 Also settable using mallopt(M_TRIM_THRESHOLD, x)
410 The maximum amount of unused top-most memory to keep before
411 releasing via malloc_trim in free(). Automatic trimming is mainly
412 useful in long-lived programs using contiguous MORECORE. Because
413 trimming via sbrk can be slow on some systems, and can sometimes be
414 wasteful (in cases where programs immediately afterward allocate
415 more large chunks) the value should be high enough so that your
416 overall system performance would improve by releasing this much
417 memory. As a rough guide, you might set to a value close to the
418 average size of a process (program) running on your system.
419 Releasing this much memory would allow such a process to run in
420 memory. Generally, it is worth tuning trim thresholds when a
421 program undergoes phases where several large chunks are allocated
422 and released in ways that can reuse each other's storage, perhaps
423 mixed with phases where there are no such chunks at all. The trim
424 value must be greater than page size to have any useful effect. To
425 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
426 some people use of mallocing a huge space and then freeing it at
427 program startup, in an attempt to reserve system memory, doesn't
428 have the intended effect under automatic trimming, since that memory
429 will immediately be returned to the system.
431 DEFAULT_MMAP_THRESHOLD default: 256K
432 Also settable using mallopt(M_MMAP_THRESHOLD, x)
433 The request size threshold for using MMAP to directly service a
434 request. Requests of at least this size that cannot be allocated
435 using already-existing space will be serviced via mmap. (If enough
436 normal freed space already exists it is used instead.) Using mmap
437 segregates relatively large chunks of memory so that they can be
438 individually obtained and released from the host system. A request
439 serviced through mmap is never reused by any other request (at least
440 not directly; the system may just so happen to remap successive
441 requests to the same locations). Segregating space in this way has
442 the benefits that: Mmapped space can always be individually released
443 back to the system, which helps keep the system level memory demands
444 of a long-lived program low. Also, mapped memory doesn't become
445 `locked' between other chunks, as can happen with normally allocated
446 chunks, which means that even trimming via malloc_trim would not
447 release them. However, it has the disadvantage that the space
448 cannot be reclaimed, consolidated, and then used to service later
449 requests, as happens with normal chunks. The advantages of mmap
450 nearly always outweigh disadvantages for "large" chunks, but the
451 value of "large" may vary across systems. The default is an
452 empirically derived value that works well in most systems. You can
453 disable mmap by setting to MAX_SIZE_T.
463 #define WIN32_LEAN_AND_MEAN
466 #define HAVE_MORECORE 0
467 #define LACKS_UNISTD_H
468 #define LACKS_SYS_PARAM_H
469 #define LACKS_SYS_MMAN_H
470 #define LACKS_STRING_H
471 #define LACKS_STRINGS_H
472 #define LACKS_SYS_TYPES_H
473 #define LACKS_ERRNO_H
474 #define MALLOC_FAILURE_ACTION
475 #define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
478 #if defined(DARWIN) || defined(_DARWIN)
479 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
480 #ifndef HAVE_MORECORE
481 #define HAVE_MORECORE 0
483 #endif /* HAVE_MORECORE */
486 #if defined(__native_client__)
490 #define HAVE_MREMAP 0
493 #ifndef LACKS_SYS_TYPES_H
494 #include <sys/types.h> /* For size_t */
495 #endif /* LACKS_SYS_TYPES_H */
497 /* The maximum possible size_t value has all bits set */
498 #define MAX_SIZE_T (~(size_t)0)
501 #define ONLY_MSPACES 0
502 #endif /* ONLY_MSPACES */
506 #else /* ONLY_MSPACES */
508 #endif /* ONLY_MSPACES */
510 #ifndef MALLOC_ALIGNMENT
511 #define MALLOC_ALIGNMENT ((size_t)8U)
512 #endif /* MALLOC_ALIGNMENT */
517 #define ABORT abort()
519 #ifndef ABORT_ON_ASSERT_FAILURE
520 #define ABORT_ON_ASSERT_FAILURE 1
521 #endif /* ABORT_ON_ASSERT_FAILURE */
522 #ifndef PROCEED_ON_ERROR
523 #define PROCEED_ON_ERROR 0
524 #endif /* PROCEED_ON_ERROR */
527 #endif /* USE_LOCKS */
530 #endif /* INSECURE */
533 #endif /* HAVE_MMAP */
535 #define MMAP_CLEARS 1
536 #endif /* MMAP_CLEARS */
539 #define HAVE_MREMAP 1
541 #define HAVE_MREMAP 0
543 #endif /* HAVE_MREMAP */
544 #ifndef MALLOC_FAILURE_ACTION
545 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
546 #endif /* MALLOC_FAILURE_ACTION */
547 #ifndef HAVE_MORECORE
549 #define HAVE_MORECORE 0
550 #else /* ONLY_MSPACES */
551 #define HAVE_MORECORE 1
552 #endif /* ONLY_MSPACES */
553 #endif /* HAVE_MORECORE */
555 #define MORECORE_CONTIGUOUS 0
556 #else /* !HAVE_MORECORE */
558 #define MORECORE sbrk
559 #endif /* MORECORE */
560 #ifndef MORECORE_CONTIGUOUS
561 #define MORECORE_CONTIGUOUS 1
562 #endif /* MORECORE_CONTIGUOUS */
563 #endif /* HAVE_MORECORE */
564 #ifndef DEFAULT_GRANULARITY
565 #if MORECORE_CONTIGUOUS
566 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
567 #else /* MORECORE_CONTIGUOUS */
568 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
569 #endif /* MORECORE_CONTIGUOUS */
570 #endif /* DEFAULT_GRANULARITY */
571 #ifndef DEFAULT_TRIM_THRESHOLD
572 #ifndef MORECORE_CANNOT_TRIM
573 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
574 #else /* MORECORE_CANNOT_TRIM */
575 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
576 #endif /* MORECORE_CANNOT_TRIM */
577 #endif /* DEFAULT_TRIM_THRESHOLD */
578 #ifndef DEFAULT_MMAP_THRESHOLD
580 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
581 #else /* HAVE_MMAP */
582 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
583 #endif /* HAVE_MMAP */
584 #endif /* DEFAULT_MMAP_THRESHOLD */
585 #ifndef USE_BUILTIN_FFS
586 #define USE_BUILTIN_FFS 0
587 #endif /* USE_BUILTIN_FFS */
588 #ifndef USE_DEV_RANDOM
589 #define USE_DEV_RANDOM 0
590 #endif /* USE_DEV_RANDOM */
592 #define NO_MALLINFO 0
593 #endif /* NO_MALLINFO */
594 #ifndef MALLINFO_FIELD_TYPE
595 #define MALLINFO_FIELD_TYPE size_t
596 #endif /* MALLINFO_FIELD_TYPE */
599 mallopt tuning options. SVID/XPG defines four standard parameter
600 numbers for mallopt, normally defined in malloc.h. None of these
601 are used in this malloc, so setting them has no effect. But this
602 malloc does support the following options.
605 #define M_TRIM_THRESHOLD (-1)
606 #define M_GRANULARITY (-2)
607 #define M_MMAP_THRESHOLD (-3)
609 /* ------------------------ Mallinfo declarations ------------------------ */
613 This version of malloc supports the standard SVID/XPG mallinfo
614 routine that returns a struct containing usage properties and
615 statistics. It should work on any system that has a
616 /usr/include/malloc.h defining struct mallinfo. The main
617 declaration needed is the mallinfo struct that is returned (by-copy)
by mallinfo(). The mallinfo struct contains a bunch of fields that
are not even meaningful in this version of malloc. These fields are
instead filled by mallinfo() with other numbers that might be of use.
623 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
624 /usr/include/malloc.h file that includes a declaration of struct
625 mallinfo. If so, it is included; else a compliant version is
626 declared below. These must be precisely the same for mallinfo() to
627 work. The original SVID version of this struct, defined on most
628 systems with mallinfo, declares all fields as ints. But some others
629 define as unsigned long. If your system defines the fields using a
630 type of different width than listed here, you MUST #include your
631 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
634 /* #define HAVE_USR_INCLUDE_MALLOC_H */
636 #ifdef HAVE_USR_INCLUDE_MALLOC_H
637 #include "/usr/include/malloc.h"
638 #else /* HAVE_USR_INCLUDE_MALLOC_H */
641 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
642 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
643 MALLINFO_FIELD_TYPE smblks; /* always 0 */
644 MALLINFO_FIELD_TYPE hblks; /* always 0 */
645 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
646 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
647 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
648 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
649 MALLINFO_FIELD_TYPE fordblks; /* total free space */
650 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
653 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
654 #endif /* NO_MALLINFO */
658 #endif /* __cplusplus */
662 /* ------------------- Declarations of public routines ------------------- */
664 #ifndef USE_DL_PREFIX
665 #define dlcalloc calloc
667 #define dlmalloc malloc
668 #define dlmemalign memalign
669 #define dlrealloc realloc
670 #define dlvalloc valloc
671 #define dlpvalloc pvalloc
672 #define dlmallinfo mallinfo
673 #define dlmallopt mallopt
674 #define dlmalloc_trim malloc_trim
675 #define dlmalloc_stats malloc_stats
676 #define dlmalloc_usable_size malloc_usable_size
677 #define dlmalloc_footprint malloc_footprint
678 #define dlmalloc_max_footprint malloc_max_footprint
679 #define dlindependent_calloc independent_calloc
680 #define dlindependent_comalloc independent_comalloc
681 #endif /* USE_DL_PREFIX */
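/*
  Because this mono-modified copy defines USE_DL_PREFIX (see the top of
  this file), the mappings above are skipped and callers use the
  prefixed names directly. A minimal illustrative sketch:

    void* p = dlmalloc(256);
    if (p != 0) {
      // ... use the 256-byte block ...
      dlfree(p);
    }
*/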
686 Returns a pointer to a newly allocated chunk of at least n bytes, or
687 null if no space is available, in which case errno is set to ENOMEM
690 If n is zero, malloc returns a minimum-sized chunk. (The minimum
691 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
692 systems.) Note that size_t is an unsigned type, so calls with
693 arguments that would be negative if signed are interpreted as
694 requests for huge amounts of space, which will often fail. The
695 maximum supported value of n differs across systems, but is in all
696 cases less than the maximum representable value of a size_t.
698 void* dlmalloc(size_t);
702 Releases the chunk of memory pointed to by p, that had been previously
703 allocated using malloc or a related routine such as realloc.
704 It has no effect if p is null. If p was not malloced or already
705 freed, free(p) will by default cause the current program to abort.
710 calloc(size_t n_elements, size_t element_size);
Returns a pointer to n_elements * element_size bytes, with all locations set to zero.
714 void* dlcalloc(size_t, size_t);
717 realloc(void* p, size_t n)
718 Returns a pointer to a chunk of size n that contains the same data
719 as does chunk p up to the minimum of (n, p's size) bytes, or null
720 if no space is available.
722 The returned pointer may or may not be the same as p. The algorithm
723 prefers extending p in most cases when possible, otherwise it
724 employs the equivalent of a malloc-copy-free sequence.
726 If p is null, realloc is equivalent to malloc.
728 If space is not available, realloc returns null, errno is set (if on
729 ANSI) and p is NOT freed.
If n is for fewer bytes than already held by p, the newly unused
732 space is lopped off and freed if possible. realloc with a size
733 argument of zero (re)allocates a minimum-sized chunk.
735 The old unix realloc convention of allowing the last-free'd chunk
736 to be used as an argument to realloc is not supported.
739 void* dlrealloc(void*, size_t);
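/*
  Usage sketch (illustrative only): because a failed realloc returns
  null and leaves p allocated, assign the result to a temporary before
  overwriting the old pointer.

    void* q = dlrealloc(p, newsize);
    if (q != 0)
      p = q;              // success; old block was extended or copied and freed
    else
      handle_oom();       // hypothetical handler; p is still valid here
*/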
742 memalign(size_t alignment, size_t n);
743 Returns a pointer to a newly allocated chunk of n bytes, aligned
744 in accord with the alignment argument.
746 The alignment argument should be a power of two. If the argument is
747 not a power of two, the nearest greater power is used.
748 8-byte alignment is guaranteed by normal malloc calls, so don't
749 bother calling memalign with an argument of 8 or less.
751 Overreliance on memalign is a sure way to fragment space.
753 void* dlmemalign(size_t, size_t);
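/*
  For example (illustrative), to obtain a 64-byte-aligned 4K buffer:

    void* buf = dlmemalign((size_t)64, (size_t)4096);
    if (buf != 0) {
      // ... use buf ...
      dlfree(buf);        // memalign'ed chunks are released with free
    }
*/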
757 Equivalent to memalign(pagesize, n), where pagesize is the page
758 size of the system. If the pagesize is unknown, 4096 is used.
760 void* dlvalloc(size_t);
763 mallopt(int parameter_number, int parameter_value)
Sets tunable parameters. The format is to provide a
765 (parameter-number, parameter-value) pair. mallopt then sets the
766 corresponding parameter to the argument value if it can (i.e., so
767 long as the value is meaningful), and returns 1 if successful else
768 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
normally defined in malloc.h. None of these are used in this malloc,
770 so setting them has no effect. But this malloc also supports other
771 options in mallopt. See below for details. Briefly, supported
parameters are as follows (listed defaults are for "typical" configurations).
775 Symbol param # default allowed param values
776 M_TRIM_THRESHOLD -1 2*1024*1024 any (MAX_SIZE_T disables)
777 M_GRANULARITY -2 page size any power of 2 >= page size
778 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
780 int dlmallopt(int, int);
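/*
  For example (values are illustrative), to raise the mmap threshold to
  1MB so that only requests of at least that size go directly to mmap:

    if (dlmallopt(M_MMAP_THRESHOLD, 1024*1024) == 0) {
      // the value was rejected; the previous setting remains in effect
    }
*/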
784 Returns the number of bytes obtained from the system. The total
785 number of bytes allocated by malloc, realloc etc., is less than this
786 value. Unlike mallinfo, this function returns only a precomputed
787 result, so can be called frequently to monitor memory consumption.
788 Even if locks are otherwise defined, this function does not use them,
789 so results might not be up to date.
791 size_t dlmalloc_footprint(void);
794 malloc_max_footprint();
795 Returns the maximum number of bytes obtained from the system. This
796 value will be greater than current footprint if deallocated space
797 has been reclaimed by the system. The peak number of bytes allocated
798 by malloc, realloc etc., is less than this value. Unlike mallinfo,
799 this function returns only a precomputed result, so can be called
800 frequently to monitor memory consumption. Even if locks are
otherwise defined, this function does not use them, so results might not be up to date.
804 size_t dlmalloc_max_footprint(void);
809 Returns (by copy) a struct containing various summary statistics:
811 arena: current total non-mmapped bytes allocated from system
812 ordblks: the number of free chunks
814 hblks: current number of mmapped regions
815 hblkhd: total bytes held in mmapped regions
816 usmblks: the maximum total allocated space. This will be greater
817 than current total if trimming has occurred.
819 uordblks: current total allocated space (normal or mmapped)
820 fordblks: total free space
821 keepcost: the maximum number of bytes that could ideally be released
822 back to system via malloc_trim. ("ideally" means that
823 it ignores page restrictions etc.)
825 Because these fields are ints, but internal bookkeeping may
be kept as longs, the reported values may wrap around zero and thus be inaccurate.
829 struct mallinfo dlmallinfo(void);
830 #endif /* NO_MALLINFO */
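/*
  A small monitoring sketch (illustrative; assumes NO_MALLINFO is 0 so
  that dlmallinfo is compiled in):

    struct mallinfo mi = dlmallinfo();
    fprintf(stderr, "allocated: %lu  free: %lu  mmapped: %lu\n",
            (unsigned long)mi.uordblks,
            (unsigned long)mi.fordblks,
            (unsigned long)mi.hblkhd);
*/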
833 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
835 independent_calloc is similar to calloc, but instead of returning a
836 single cleared space, it returns an array of pointers to n_elements
837 independent elements that can hold contents of size elem_size, each
838 of which starts out cleared, and can be independently freed,
839 realloc'ed etc. The elements are guaranteed to be adjacently
840 allocated (this is not guaranteed to occur with multiple callocs or
mallocs), which may also improve cache locality in some applications.
844 The "chunks" argument is optional (i.e., may be null, which is
845 probably the most typical usage). If it is null, the returned array
846 is itself dynamically allocated and should also be freed when it is
847 no longer needed. Otherwise, the chunks array must be of at least
n_elements in length. It is filled in with the pointers to the chunks.
851 In either case, independent_calloc returns this pointer array, or
852 null if the allocation failed. If n_elements is zero and "chunks"
853 is null, it returns a chunk representing an array with zero elements
854 (which should be freed if not wanted).
856 Each element must be individually freed when it is no longer
857 needed. If you'd like to instead be able to free all at once, you
858 should instead use regular calloc and assign pointers into this
859 space to represent elements. (In this case though, you cannot
860 independently free elements.)
862 independent_calloc simplifies and speeds up implementations of many
863 kinds of pools. It may also be useful when constructing large data
864 structures that initially have a fixed number of fixed-sized nodes,
865 but the number is not known at compile time, and some of the nodes
866 may later need to be freed. For example:
868 struct Node { int item; struct Node* next; };
struct Node* build_list() {
  struct Node** pool;
  int n = read_number_of_nodes_needed();
  int i;
  if (n <= 0) return 0;
  pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
  if (pool == 0) die();
  // organize into a linked list...
  struct Node* first = pool[0];
  for (i = 0; i < n-1; ++i)
    pool[i]->next = pool[i+1];
  free(pool); // Can now free the array (or not, if it is needed later)
  return first;
}
884 void** dlindependent_calloc(size_t, size_t, void**);
887 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
889 independent_comalloc allocates, all at once, a set of n_elements
890 chunks with sizes indicated in the "sizes" array. It returns
891 an array of pointers to these elements, each of which can be
892 independently freed, realloc'ed etc. The elements are guaranteed to
893 be adjacently allocated (this is not guaranteed to occur with
894 multiple callocs or mallocs), which may also improve cache locality
895 in some applications.
897 The "chunks" argument is optional (i.e., may be null). If it is null
898 the returned array is itself dynamically allocated and should also
899 be freed when it is no longer needed. Otherwise, the chunks array
900 must be of at least n_elements in length. It is filled in with the
901 pointers to the chunks.
903 In either case, independent_comalloc returns this pointer array, or
904 null if the allocation failed. If n_elements is zero and chunks is
905 null, it returns a chunk representing an array with zero elements
906 (which should be freed if not wanted).
908 Each element must be individually freed when it is no longer
909 needed. If you'd like to instead be able to free all at once, you
910 should instead use a single regular malloc, and assign pointers at
911 particular offsets in the aggregate space. (In this case though, you
912 cannot independently free elements.)
independent_comalloc differs from independent_calloc in that each
915 element may have a different size, and also that it does not
916 automatically clear elements.
918 independent_comalloc can be used to speed up allocation in cases
919 where several structs or objects must always be allocated at the
920 same time. For example:
925 void send_message(char* msg) {
926 int msglen = strlen(msg);
size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
void* chunks[3];
if (independent_comalloc(3, sizes, chunks) == 0)
  die();
struct Head* head = (struct Head*)(chunks[0]);
char* body = (char*)(chunks[1]);
struct Foot* foot = (struct Foot*)(chunks[2]);
// ... fill in and send the three pieces ...
}
937 In general though, independent_comalloc is worth using only for
938 larger values of n_elements. For small values, you probably won't
939 detect enough difference from series of malloc calls to bother.
941 Overuse of independent_comalloc can increase overall memory usage,
942 since it cannot reuse existing noncontiguous small chunks that
943 might be available for some of the elements.
945 void** dlindependent_comalloc(size_t, size_t*, void**);
950 Equivalent to valloc(minimum-page-that-holds(n)), that is,
951 round up n to nearest pagesize.
953 void* dlpvalloc(size_t);
956 malloc_trim(size_t pad);
958 If possible, gives memory back to the system (via negative arguments
959 to sbrk) if there is unused memory at the `high' end of the malloc
960 pool or in unused MMAP segments. You can call this after freeing
961 large blocks of memory to potentially reduce the system-level memory
962 requirements of a program. However, it cannot guarantee to reduce
963 memory. Under some allocation patterns, some large free blocks of
964 memory will be locked between two used chunks, so they cannot be
965 given back to the system.
967 The `pad' argument to malloc_trim represents the amount of free
968 trailing space to leave untrimmed. If this argument is zero, only
969 the minimum amount of memory to maintain internal data structures
970 will be left. Non-zero arguments can be supplied to maintain enough
971 trailing space to service future expected allocations without having
972 to re-obtain memory from the system.
974 Malloc_trim returns 1 if it actually released any memory, else 0.
976 int dlmalloc_trim(size_t);
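/*
  For example (illustrative), after freeing a large working set you can
  ask the allocator to return unused top-most memory, keeping 64K of
  slack for upcoming allocations:

    release_cached_objects();               // hypothetical application code
    if (dlmalloc_trim((size_t)64 * 1024) != 0) {
      // some memory was actually returned to the system
    }
*/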
979 malloc_usable_size(void* p);
981 Returns the number of bytes you can actually use in
982 an allocated chunk, which may be more than you requested (although
983 often not) due to alignment and minimum size constraints.
984 You can use this many bytes without worrying about
985 overwriting other allocated objects. This is not a particularly great
986 programming practice. malloc_usable_size can be more useful in
987 debugging and assertions, for example:
p = malloc(n);
assert(malloc_usable_size(p) >= 256);
992 size_t dlmalloc_usable_size(void*);
996 Prints on stderr the amount of space obtained from the system (both
997 via sbrk and mmap), the maximum amount (which may be more than
998 current if malloc_trim and/or munmap got called), and the current
999 number of bytes allocated via malloc (or realloc, etc) but not yet
1000 freed. Note that this is the number of bytes allocated, not the
1001 number requested. It will be larger than the number requested
1002 because of alignment and bookkeeping overhead. Because it includes
1003 alignment wastage as being in use, this figure may be greater than
1004 zero even when no user-level chunks are allocated.
1006 The reported current and maximum system memory can be inaccurate if
1007 a program makes other calls to system memory allocation functions
1008 (normally sbrk) outside of malloc.
1010 malloc_stats prints only the most commonly interesting statistics.
1011 More information can be obtained by calling mallinfo.
1013 void dlmalloc_stats(void);
1015 #endif /* ONLY_MSPACES */
1020 mspace is an opaque type representing an independent
1021 region of space that supports mspace_malloc, etc.
1023 typedef void* mspace;
1026 create_mspace creates and returns a new independent space with the
1027 given initial capacity, or, if 0, the default granularity size. It
1028 returns null if there is no system memory available to create the
1029 space. If argument locked is non-zero, the space uses a separate
1030 lock to control access. The capacity of the space will grow
1031 dynamically as needed to service mspace_malloc requests. You can
1032 control the sizes of incremental increases of this space by
1033 compiling with a different DEFAULT_GRANULARITY or dynamically
1034 setting with mallopt(M_GRANULARITY, value).
1036 mspace create_mspace(size_t capacity, int locked);
1039 destroy_mspace destroys the given space, and attempts to return all
1040 of its memory back to the system, returning the total number of
1041 bytes freed. After destruction, the results of access to all memory
1042 used by the space become undefined.
1044 size_t destroy_mspace(mspace msp);
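/*
  A minimal lifecycle sketch (illustrative; assumes MSPACES is enabled):

    mspace ms = create_mspace(0, 0);         // default capacity, no locking
    if (ms != 0) {
      void* p = mspace_malloc(ms, 128);
      // ... use p ...
      mspace_free(ms, p);
      size_t released = destroy_mspace(ms);  // bytes handed back to the system
    }
*/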
1047 create_mspace_with_base uses the memory supplied as the initial base
1048 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1049 space is used for bookkeeping, so the capacity must be at least this
1050 large. (Otherwise 0 is returned.) When this initial space is
1051 exhausted, additional memory will be obtained from the system.
1052 Destroying this space will deallocate all additionally allocated
1053 space (if possible) but not the initial base.
1055 mspace create_mspace_with_base(void* base, size_t capacity, int locked);
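/*
  For example (illustrative), carving an mspace out of a static buffer;
  note that part of the buffer is consumed by bookkeeping, so the
  capacity must be at least 128*sizeof(size_t):

    static char arena[64 * 1024];
    mspace ms = create_mspace_with_base(arena, sizeof(arena), 0);
    if (ms != 0) {
      void* p = mspace_malloc(ms, 100);
      // ... later, destroy_mspace(ms) releases any additionally obtained
      // segments, but never the arena itself ...
    }
*/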
mspace_malloc behaves as malloc, but operates within the given space.
1061 void* mspace_malloc(mspace msp, size_t bytes);
mspace_free behaves as free, but operates within the given space.
1067 If compiled with FOOTERS==1, mspace_free is not actually needed.
1068 free may be called instead of mspace_free because freed chunks from
1069 any space are handled by their originating spaces.
1071 void mspace_free(mspace msp, void* mem);
mspace_realloc behaves as realloc, but operates within the given space.
1077 If compiled with FOOTERS==1, mspace_realloc is not actually
1078 needed. realloc may be called instead of mspace_realloc because
realloced chunks from any space are handled by their originating spaces.
1082 void* mspace_realloc(mspace msp, void* mem, size_t newsize);
mspace_calloc behaves as calloc, but operates within the given space.
1088 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
mspace_memalign behaves as memalign, but operates within the given space.
1094 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1097 mspace_independent_calloc behaves as independent_calloc, but
1098 operates within the given space.
1100 void** mspace_independent_calloc(mspace msp, size_t n_elements,
1101 size_t elem_size, void* chunks[]);
1104 mspace_independent_comalloc behaves as independent_comalloc, but
1105 operates within the given space.
1107 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1108 size_t sizes[], void* chunks[]);
1111 mspace_footprint() returns the number of bytes obtained from the
1112 system for this space.
1114 size_t mspace_footprint(mspace msp);
1117 mspace_max_footprint() returns the peak number of bytes obtained from the
1118 system for this space.
1120 size_t mspace_max_footprint(mspace msp);
mspace_mallinfo behaves as mallinfo, but reports properties of the given space.
1128 struct mallinfo mspace_mallinfo(mspace msp);
1129 #endif /* NO_MALLINFO */
1132 mspace_malloc_stats behaves as malloc_stats, but reports
1133 properties of the given space.
1135 void mspace_malloc_stats(mspace msp);
1138 mspace_trim behaves as malloc_trim, but
1139 operates within the given space.
1141 int mspace_trim(mspace msp, size_t pad);
1144 An alias for mallopt.
1146 int mspace_mallopt(int, int);
1148 #endif /* MSPACES */
1151 }; /* end of extern "C" */
1152 #endif /* __cplusplus */
1155 ========================================================================
1156 To make a fully customizable malloc.h header file, cut everything
1157 above this line, put into file malloc.h, edit to suit, and #include it
1158 on the next line, as well as in programs that use this malloc.
1159 ========================================================================
1162 /* #include "malloc.h" */
1164 /*------------------------------ internal #includes ---------------------- */
1167 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1170 #include <stdio.h> /* for printing in malloc_stats */
1172 #ifndef LACKS_ERRNO_H
1173 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1174 #endif /* LACKS_ERRNO_H */
1176 #include <time.h> /* for magic initialization */
1177 #endif /* FOOTERS */
1178 #ifndef LACKS_STDLIB_H
1179 #include <stdlib.h> /* for abort() */
1180 #endif /* LACKS_STDLIB_H */
1182 #if ABORT_ON_ASSERT_FAILURE
1183 #define assert(x) if(!(x)) ABORT
1184 #else /* ABORT_ON_ASSERT_FAILURE */
1186 #endif /* ABORT_ON_ASSERT_FAILURE */
1190 #ifndef LACKS_STRING_H
1191 #include <string.h> /* for memset etc */
1192 #endif /* LACKS_STRING_H */
1194 #ifndef LACKS_STRINGS_H
1195 #include <strings.h> /* for ffs */
1196 #endif /* LACKS_STRINGS_H */
1197 #endif /* USE_BUILTIN_FFS */
1199 #ifndef LACKS_SYS_MMAN_H
1200 #include <sys/mman.h> /* for mmap */
1201 #endif /* LACKS_SYS_MMAN_H */
1202 #ifndef LACKS_FCNTL_H
1204 #endif /* LACKS_FCNTL_H */
1205 #endif /* HAVE_MMAP */
1207 #ifndef LACKS_UNISTD_H
1208 #include <unistd.h> /* for sbrk */
1209 #else /* LACKS_UNISTD_H */
1210 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1211 extern void* sbrk(ptrdiff_t);
1212 #endif /* FreeBSD etc */
1213 #endif /* LACKS_UNISTD_H */
1214 #endif /* HAVE_MMAP */
1217 #ifndef malloc_getpagesize
1218 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1219 # ifndef _SC_PAGE_SIZE
1220 # define _SC_PAGE_SIZE _SC_PAGESIZE
1223 # ifdef _SC_PAGE_SIZE
1224 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1226 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1227 extern size_t getpagesize();
1228 # define malloc_getpagesize getpagesize()
1230 # ifdef WIN32 /* use supplied emulation of getpagesize */
1231 # define malloc_getpagesize getpagesize()
1233 # ifndef LACKS_SYS_PARAM_H
1234 # include <sys/param.h>
1236 # ifdef EXEC_PAGESIZE
1237 # define malloc_getpagesize EXEC_PAGESIZE
1241 # define malloc_getpagesize NBPG
1243 # define malloc_getpagesize (NBPG * CLSIZE)
1247 # define malloc_getpagesize NBPC
1250 # define malloc_getpagesize PAGESIZE
1251 # else /* just guess */
1252 # define malloc_getpagesize ((size_t)4096U)
1263 /* ------------------- size_t and alignment properties -------------------- */
1265 /* The byte and bit size of a size_t */
1266 #define SIZE_T_SIZE (sizeof(size_t))
1267 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1269 /* Some constants coerced to size_t */
/* Annoying but necessary to avoid errors on some platforms */
1271 #define SIZE_T_ZERO ((size_t)0)
1272 #define SIZE_T_ONE ((size_t)1)
1273 #define SIZE_T_TWO ((size_t)2)
1274 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1275 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1276 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1277 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1279 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1280 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1282 /* True if address a has acceptable alignment */
1283 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1285 /* the number of bytes to offset an address to align it */
1286 #define align_offset(A)\
1287 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1288 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
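/*
  Worked example: with the default MALLOC_ALIGNMENT of 8,
  CHUNK_ALIGN_MASK is 7. An address ending in 0x8 is already aligned
  (align_offset == 0), while an address ending in 0xc needs
  align_offset == 4 to reach the next 8-byte boundary.
*/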
1290 /* -------------------------- MMAP preliminaries ------------------------- */
1293 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1294 checks to fail so compiler optimizer can delete code rather than
1295 using so many "#if"s.
1299 /* MORECORE and MMAP must return MFAIL on failure */
1300 #define MFAIL ((void*)(MAX_SIZE_T))
1301 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1304 #define IS_MMAPPED_BIT (SIZE_T_ZERO)
1305 #define USE_MMAP_BIT (SIZE_T_ZERO)
1306 #define CALL_MMAP(s) MFAIL
1307 #define CALL_MUNMAP(a, s) (-1)
1308 #define DIRECT_MMAP(s) MFAIL
1310 #else /* HAVE_MMAP */
1311 #define IS_MMAPPED_BIT (SIZE_T_ONE)
1312 #define USE_MMAP_BIT (SIZE_T_ONE)
1315 #define CALL_MUNMAP(a, s) munmap((a), (s))
1316 #define MMAP_PROT (PROT_READ|PROT_WRITE|PROT_EXEC)
1317 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1318 #define MAP_ANONYMOUS MAP_ANON
1319 #endif /* MAP_ANON */
1320 #ifdef MAP_ANONYMOUS
1321 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1322 #define CALL_MMAP(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1323 #else /* MAP_ANONYMOUS */
1325 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1326 is unlikely to be needed, but is supplied just in case.
1328 #define MMAP_FLAGS (MAP_PRIVATE)
1329 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1330 #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
1331 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1332 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1333 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1334 #endif /* MAP_ANONYMOUS */
1336 #define DIRECT_MMAP(s) CALL_MMAP(s)
1339 /* Win32 MMAP via VirtualAlloc */
1340 static void* win32mmap(size_t size) {
1341 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_EXECUTE_READWRITE);
1342 return (ptr != 0)? ptr: MFAIL;
1345 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1346 static void* win32direct_mmap(size_t size) {
1347 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1348 PAGE_EXECUTE_READWRITE);
1349 return (ptr != 0)? ptr: MFAIL;
/* This function supports releasing coalesced segments */
1353 static int win32munmap(void* ptr, size_t size) {
1354 MEMORY_BASIC_INFORMATION minfo;
1357 if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1359 if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1360 minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1362 if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1364 cptr += minfo.RegionSize;
1365 size -= minfo.RegionSize;
1370 #define CALL_MMAP(s) win32mmap(s)
1371 #define CALL_MUNMAP(a, s) win32munmap((a), (s))
1372 #define DIRECT_MMAP(s) win32direct_mmap(s)
1374 #endif /* HAVE_MMAP */
1376 #if HAVE_MMAP && HAVE_MREMAP
1377 #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1378 #else /* HAVE_MMAP && HAVE_MREMAP */
1379 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1380 #endif /* HAVE_MMAP && HAVE_MREMAP */
1383 #define CALL_MORECORE(S) MORECORE(S)
1384 #else /* HAVE_MORECORE */
1385 #define CALL_MORECORE(S) MFAIL
1386 #endif /* HAVE_MORECORE */
/* mstate bit set if contiguous morecore disabled or failed */
1389 #define USE_NONCONTIGUOUS_BIT (4U)
1391 /* segment bit set in create_mspace_with_base */
1392 #define EXTERN_BIT (8U)
1395 /* --------------------------- Lock preliminaries ------------------------ */
1400 When locks are defined, there are up to two global locks:
1402 * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
1403 MORECORE. In many cases sys_alloc requires two calls, that should
1404 not be interleaved with calls by other threads. This does not
1405 protect against direct calls to MORECORE by other threads not
using this lock, so there is still code to cope the best we can on failure.
1409 * magic_init_mutex ensures that mparams.magic and other
1410 unique mparams values are initialized only once.
1414 /* By default use posix locks */
1415 #include <pthread.h>
1416 #define MLOCK_T pthread_mutex_t
1417 #define INITIAL_LOCK(l) pthread_mutex_init(l, NULL)
1418 #define ACQUIRE_LOCK(l) pthread_mutex_lock(l)
1419 #define RELEASE_LOCK(l) pthread_mutex_unlock(l)
1422 static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
1423 #endif /* HAVE_MORECORE */
1425 static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
1429 Because lock-protected regions have bounded times, and there
1430 are no recursive lock calls, we can use simple spinlocks.
1433 #define MLOCK_T long
1434 static int win32_acquire_lock (MLOCK_T *sl) {
1436 #ifdef InterlockedCompareExchangePointer
1437 if (!InterlockedCompareExchange(sl, 1, 0))
1439 #else /* Use older void* version */
1440 if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
1442 #endif /* InterlockedCompareExchangePointer */
1447 static void win32_release_lock (MLOCK_T *sl) {
1448 InterlockedExchange (sl, 0);
1451 #define INITIAL_LOCK(l) *(l)=0
1452 #define ACQUIRE_LOCK(l) win32_acquire_lock(l)
1453 #define RELEASE_LOCK(l) win32_release_lock(l)
1455 static MLOCK_T morecore_mutex;
1456 #endif /* HAVE_MORECORE */
1457 static MLOCK_T magic_init_mutex;
1460 #define USE_LOCK_BIT (2U)
1461 #else /* USE_LOCKS */
1462 #define USE_LOCK_BIT (0U)
1463 #define INITIAL_LOCK(l)
1464 #endif /* USE_LOCKS */
1466 #if USE_LOCKS && HAVE_MORECORE
1467 #define ACQUIRE_MORECORE_LOCK() ACQUIRE_LOCK(&morecore_mutex);
1468 #define RELEASE_MORECORE_LOCK() RELEASE_LOCK(&morecore_mutex);
1469 #else /* USE_LOCKS && HAVE_MORECORE */
1470 #define ACQUIRE_MORECORE_LOCK()
1471 #define RELEASE_MORECORE_LOCK()
1472 #endif /* USE_LOCKS && HAVE_MORECORE */
1475 #define ACQUIRE_MAGIC_INIT_LOCK() ACQUIRE_LOCK(&magic_init_mutex);
1476 #define RELEASE_MAGIC_INIT_LOCK() RELEASE_LOCK(&magic_init_mutex);
1477 #else /* USE_LOCKS */
1478 #define ACQUIRE_MAGIC_INIT_LOCK()
1479 #define RELEASE_MAGIC_INIT_LOCK()
1480 #endif /* USE_LOCKS */
1483 /* ----------------------- Chunk representations ------------------------ */
1486 (The following includes lightly edited explanations by Colin Plumb.)
1488 The malloc_chunk declaration below is misleading (but accurate and
1489 necessary). It declares a "view" into memory allowing access to
1490 necessary fields at known offsets from a given base.
1492 Chunks of memory are maintained using a `boundary tag' method as
1493 originally described by Knuth. (See the paper by Paul Wilson
1494 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1495 techniques.) Sizes of free chunks are stored both in the front of
1496 each chunk and at the end. This makes consolidating fragmented
1497 chunks into bigger chunks fast. The head fields also hold bits
1498 representing whether chunks are free or in use.
1500 Here are some pictures to make it clearer. They are "exploded" to
1501 show that the state of a chunk can be thought of as extending from
1502 the high 31 bits of the head field of its header through the
1503 prev_foot and PINUSE_BIT bit of the following chunk header.
1505 A chunk that's in use looks like:
1507 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1508 | Size of previous chunk (if P = 0) |
1509 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1510 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1511 | Size of this chunk 1| +-+
1512 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1518 +- size - sizeof(size_t) available payload bytes -+
1522 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1523 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1524 | Size of next chunk (may or may not be in use) | +-+
1525 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1527 And if it's free, it looks like this:
1530 | User payload (must be in use, or we would have merged!) |
1531 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1532 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1533 | Size of this chunk 0| +-+
1534 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1536 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1538 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1540 +- size - sizeof(struct chunk) unused bytes -+
1542 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1543 | Size of this chunk |
1544 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1545 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1546 | Size of next chunk (must be in use, or we would have merged)| +-+
1547 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1551 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1554 Note that since we always merge adjacent free chunks, the chunks
1555 adjacent to a free chunk must be in use.
1557 Given a pointer to a chunk (which can be derived trivially from the
1558 payload pointer) we can, in O(1) time, find out whether the adjacent
1559 chunks are free, and if so, unlink them from the lists that they
1560 are on and merge them with the current chunk.
1562 Chunks always begin on even word boundaries, so the mem portion
1563 (which is returned to the user) is also on an even word boundary, and
1564 thus at least double-word aligned.
1566 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1567 chunk size (which is always a multiple of two words), is an in-use
1568 bit for the *previous* chunk. If that bit is *clear*, then the
1569 word before the current chunk size contains the previous chunk
1570 size, and can be used to find the front of the previous chunk.
1571 The very first chunk allocated always has this bit set, preventing
1572 access to non-existent (or non-owned) memory. If pinuse is set for
1573 any given chunk, then you CANNOT determine the size of the
1574 previous chunk, and might even get a memory addressing fault when trying to.
1577 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1578 the chunk size redundantly records whether the current chunk is
1579 inuse. This redundancy enables usage checks within free and realloc,
1580 and reduces indirection when freeing and consolidating chunks.
1582 Each freshly allocated chunk must have both cinuse and pinuse set.
1583 That is, each allocated chunk borders either a previously allocated
1584 and still in-use chunk, or the base of its memory arena. This is
1585 ensured by making all allocations from the `lowest' part of any
1586 found chunk. Further, no free chunk physically borders another one,
1587 so each free chunk is known to be preceded and followed by either
1588 inuse chunks or the ends of memory.
1590 Note that the `foot' of the current chunk is actually represented
1591 as the prev_foot of the NEXT chunk. This makes it easier to
1592 deal with alignments etc but can be very confusing when trying
1593 to extend or adapt this code.
1595 The exceptions to all this are
1597 1. The special chunk `top' is the top-most available chunk (i.e.,
1598 the one bordering the end of available memory). It is treated
1599 specially. Top is never included in any bin, is used only if
1600 no other chunk is available, and is released back to the
1601 system if it is very large (see M_TRIM_THRESHOLD). In effect,
1602 the top chunk is treated as larger (and thus less well
1603 fitting) than any other available chunk. The top chunk
1604 doesn't update its trailing size field since there is no next
1605 contiguous chunk that would have to index off it. However,
1606 space is still allocated for it (TOP_FOOT_SIZE) to enable
1607 separation or merging when space is extended.
1609 2. Chunks allocated via mmap, which have the lowest-order bit
1610 (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1611 PINUSE_BIT in their head fields. Because they are allocated
1612 one-by-one, each must carry its own prev_foot field, which is
1613 also used to hold the offset this chunk has within its mmapped
1614 region, which is needed to preserve alignment. Each mmapped
1615 chunk is trailed by the first two fields of a fake next-chunk
1616 for sake of usage checks.
1620 struct malloc_chunk {
1621 size_t prev_foot; /* Size of previous chunk (if free). */
1622 size_t head; /* Size and inuse bits. */
1623 struct malloc_chunk* fd; /* double links -- used only if free. */
1624 struct malloc_chunk* bk;
1627 typedef struct malloc_chunk mchunk;
1628 typedef struct malloc_chunk* mchunkptr;
1629 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
1630 typedef unsigned int bindex_t; /* Described below */
1631 typedef unsigned int binmap_t; /* Described below */
1632 typedef unsigned int flag_t; /* The type of various bit flag sets */
1634 /* ------------------- Chunks sizes and alignments ----------------------- */
1636 #define MCHUNK_SIZE (sizeof(mchunk))
1639 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1641 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
1642 #endif /* FOOTERS */
1644 /* MMapped chunks need a second word of overhead ... */
1645 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1646 /* ... and additional padding for fake next-chunk at foot */
1647 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
1649 /* The smallest size we can malloc is an aligned minimal chunk */
1650 #define MIN_CHUNK_SIZE\
1651 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1653 /* conversion from malloc headers to user pointers, and back */
1654 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
1655 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1656 /* chunk associated with aligned address A */
1657 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
1659 /* Bounds on request (not chunk) sizes. */
1660 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
1661 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1663 /* pad request bytes into a usable size */
1664 #define pad_request(req) \
1665 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1667 /* pad request, checking for minimum (but not maximum) */
1668 #define request2size(req) \
1669 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
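/*
  Illustrative sketch, not part of dlmalloc: the hypothetical helper below
  shows how request2size() pads a user request.  Assuming 4-byte size_t,
  the default 8-byte alignment and FOOTERS disabled, CHUNK_OVERHEAD is 4
  and CHUNK_ALIGN_MASK is 7.
*/
static void example_request_padding(void) {
  size_t padded = request2size(20); /* (20 + 4 + 7) & ~7 == 24 under the above assumptions */
  size_t tiny   = request2size(1);  /* below MIN_REQUEST, so rounded up to MIN_CHUNK_SIZE  */
  (void)padded; (void)tiny;
}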
1672 /* ------------------ Operations on head and foot fields ----------------- */
1675 The head field of a chunk is or'ed with PINUSE_BIT when previous
1676 adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
1677 use. If the chunk was obtained with mmap, the prev_foot field has
1678 IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
1679 mmapped region to the base of the chunk.
1682 #define PINUSE_BIT (SIZE_T_ONE)
1683 #define CINUSE_BIT (SIZE_T_TWO)
1684 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
1686 /* Head value for fenceposts */
1687 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
1689 /* extraction of fields from head words */
1690 #define cinuse(p) ((p)->head & CINUSE_BIT)
1691 #define pinuse(p) ((p)->head & PINUSE_BIT)
1692 #define chunksize(p) ((p)->head & ~(INUSE_BITS))
1694 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
1695 #define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT)
1697 /* Treat space at ptr +/- offset as a chunk */
1698 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1699 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1701 /* Ptr to next or previous physical malloc_chunk. */
1702 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1703 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1705 /* extract next chunk's pinuse bit */
1706 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
1708 /* Get/set size at footer */
1709 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1710 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1712 /* Set size, pinuse bit, and foot */
1713 #define set_size_and_pinuse_of_free_chunk(p, s)\
1714 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1716 /* Set size, pinuse bit, foot, and clear next pinuse */
1717 #define set_free_with_pinuse(p, s, n)\
1718 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1720 #define is_mmapped(p)\
1721 (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1723 /* Get the internal overhead associated with chunk p */
1724 #define overhead_for(p)\
1725 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1727 /* Return true if malloced space is not necessarily cleared */
1729 #define calloc_must_clear(p) (!is_mmapped(p))
1730 #else /* MMAP_CLEARS */
1731 #define calloc_must_clear(p) (1)
1732 #endif /* MMAP_CLEARS */
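/*
  Illustrative sketch, not part of dlmalloc: the hypothetical helper below
  shows how the head/prev_foot macros navigate physically adjacent chunks.
  next_chunk() always works from the current size; prev_chunk() is valid
  only when pinuse is clear (and the chunk is not mmapped), because only
  then does prev_foot hold the previous chunk's size.
*/
static void example_chunk_walk(mchunkptr p) {
  mchunkptr n = next_chunk(p);          /* physically following chunk        */
  if (!pinuse(p) && !is_mmapped(p)) {   /* previous chunk is free, so ...    */
    mchunkptr q = prev_chunk(p);        /* ... prev_foot records its size    */
    assert(next_chunk(q) == p);         /* and stepping forward returns here */
    (void)q;
  }
  (void)n;
}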
1734 /* ---------------------- Overlaid data structures ----------------------- */
1737 When chunks are not in use, they are treated as nodes of either lists or trees.
1740 "Small" chunks are stored in circular doubly-linked lists, and look like this:
1743 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1744 | Size of previous chunk |
1745 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1746 `head:' | Size of chunk, in bytes |P|
1747 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1748 | Forward pointer to next chunk in list |
1749 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1750 | Back pointer to previous chunk in list |
1751 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1752 | Unused space (may be 0 bytes long) .
1755 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1756 `foot:' | Size of chunk, in bytes |
1757 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1759 Larger chunks are kept in a form of bitwise digital trees (aka
1760 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
1761 free chunks greater than 256 bytes, their size doesn't impose any
1762 constraints on user chunk sizes. Each node looks like:
1764 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1765 | Size of previous chunk |
1766 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1767 `head:' | Size of chunk, in bytes |P|
1768 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1769 | Forward pointer to next chunk of same size |
1770 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1771 | Back pointer to previous chunk of same size |
1772 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1773 | Pointer to left child (child[0]) |
1774 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1775 | Pointer to right child (child[1]) |
1776 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1777 | Pointer to parent |
1778 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1779 | bin index of this chunk |
1780 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1783 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1784 `foot:' | Size of chunk, in bytes |
1785 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1787 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
1788 of the same size are arranged in a circularly-linked list, with only
1789 the oldest chunk (the next to be used, in our FIFO ordering)
1790 actually in the tree. (Tree members are distinguished by a non-null
1791 parent pointer.) If a chunk with the same size as an existing node
1792 is inserted, it is linked off the existing node using pointers that
1793 work in the same way as fd/bk pointers of small chunks.
1795 Each tree contains a power of 2 sized range of chunk sizes (the
1796 smallest is 0x100 <= x < 0x180), which is divided in half at each
1797 tree level, with the chunks in the smaller half of the range (0x100
1798 <= x < 0x140 for the top node) in the left subtree and the larger
1799 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
1800 done by inspecting individual bits.
1802 Using these rules, each node's left subtree contains all smaller
1803 sizes than its right subtree. However, the node at the root of each
1804 subtree has no particular ordering relationship to either. (The
1805 dividing line between the subtree sizes is based on trie relation.)
1806 If we remove the last chunk of a given size from the interior of the
1807 tree, we need to replace it with a leaf node. The tree ordering
1808 rules permit a node to be replaced by any leaf below it.
1810 The smallest chunk in a tree (a common operation in a best-fit
1811 allocator) can be found by walking a path to the leftmost leaf in
1812 the tree. Unlike a usual binary tree, where we follow left child
1813 pointers until we reach a null, here we follow the right child
1814 pointer any time the left one is null, until we reach a leaf with
1815 both child pointers null. The smallest chunk in the tree will be
1816 somewhere along that path.
1818 The worst case number of steps to add, find, or remove a node is
1819 bounded by the number of bits differentiating chunks within
1820 bins. Under current bin calculations, this ranges from 6 up to 21
1821 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1822 is of course much better.
1825 struct malloc_tree_chunk {
1826 /* The first four fields must be compatible with malloc_chunk */
1829 struct malloc_tree_chunk* fd;
1830 struct malloc_tree_chunk* bk;
1832 struct malloc_tree_chunk* child[2];
1833 struct malloc_tree_chunk* parent;
1837 typedef struct malloc_tree_chunk tchunk;
1838 typedef struct malloc_tree_chunk* tchunkptr;
1839 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1841 /* A little helper macro for trees */
1842 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
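/*
  Illustrative sketch, not part of dlmalloc: the hypothetical helper below
  performs the walk described above -- follow the left child when present,
  otherwise the right -- and remembers the smallest size seen.  Per the
  comment above, the smallest chunk in a tree lies somewhere on this path;
  the internal tree-malloc routines do essentially this while also tracking
  the best-fitting remainder.
*/
static tchunkptr example_smallest_in_tree(tchunkptr t) {
  tchunkptr best = t;
  while (t != 0) {
    if (chunksize(t) < chunksize(best))
      best = t;                     /* smallest size seen along the path */
    t = leftmost_child(t);          /* left child if present, else right */
  }
  return best;
}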
1844 /* ----------------------------- Segments -------------------------------- */
1847 Each malloc space may include non-contiguous segments, held in a
1848 list headed by an embedded malloc_segment record representing the
1849 top-most space. Segments also include flags holding properties of
1850 the space. Large chunks that are directly allocated by mmap are not
1851 included in this list. They are instead independently created and
1852 destroyed without otherwise keeping track of them.
1854 Segment management mainly comes into play for spaces allocated by
1855 MMAP. Any call to MMAP might or might not return memory that is
1856 adjacent to an existing segment. MORECORE normally contiguously
1857 extends the current space, so this space is almost always adjacent,
1858 which is simpler and faster to deal with. (This is why MORECORE is
1859 used preferentially to MMAP when both are available -- see
1860 sys_alloc.) When allocating using MMAP, we don't use any of the
1861 hinting mechanisms (inconsistently) supported in various
1862 implementations of unix mmap, or distinguish reserving from
1863 committing memory. Instead, we just ask for space, and exploit
1864 contiguity when we get it. It is probably possible to do
1865 better than this on some systems, but no general scheme seems
1866 to be significantly better.
1868 Management entails a simpler variant of the consolidation scheme
1869 used for chunks to reduce fragmentation -- new adjacent memory is
1870 normally prepended or appended to an existing segment. However,
1871 there are limitations compared to chunk consolidation that mostly
1872 reflect the fact that segment processing is relatively infrequent
1873 (occurring only when getting memory from system) and that we
1874 don't expect to have huge numbers of segments:
1876 * Segments are not indexed, so traversal requires linear scans. (It
1877 would be possible to index these, but is not worth the extra
1878 overhead and complexity for most programs on most platforms.)
1879 * New segments are only appended to old ones when holding top-most
1880 memory; if they cannot be prepended to others, they are held in separate segments.
1883 Except for the top-most segment of an mstate, each segment record
1884 is kept at the tail of its segment. Segments are added by pushing
1885 segment records onto the list headed by &mstate.seg for the containing mstate.
1888 Segment flags control allocation/merge/deallocation policies:
1889 * If EXTERN_BIT set, then we did not allocate this segment,
1890 and so should not try to deallocate or merge with others.
1891 (This currently holds only for the initial segment passed
1892 into create_mspace_with_base.)
1893 * If IS_MMAPPED_BIT set, the segment may be merged with
1894 other surrounding mmapped segments and trimmed/de-allocated using munmap.
1896 * If neither bit is set, then the segment was obtained using
1897 MORECORE so can be merged with surrounding MORECORE'd segments
1898 and deallocated/trimmed using MORECORE with negative arguments.
1901 struct malloc_segment {
1902 char* base; /* base address */
1903 size_t size; /* allocated size */
1904 struct malloc_segment* next; /* ptr to next segment */
1905 flag_t sflags; /* mmap and extern flag */
1908 #define is_mmapped_segment(S) ((S)->sflags & IS_MMAPPED_BIT)
1909 #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
1911 typedef struct malloc_segment msegment;
1912 typedef struct malloc_segment* msegmentptr;
1914 /* ---------------------------- malloc_state ----------------------------- */
1917 A malloc_state holds all of the bookkeeping for a space.
1918 The main fields are:
1921 The topmost chunk of the currently active segment. Its size is
1922 cached in topsize. The actual size of topmost space is
1923 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1924 fenceposts and segment records if necessary when getting more
1925 space from the system. The size at which to autotrim top is
1926 cached from mparams in trim_check, except that it is disabled if an autotrim fails.
1929 Designated victim (dv)
1930 This is the preferred chunk for servicing small requests that
1931 don't have exact fits. It is normally the chunk split off most
1932 recently to service another small request. Its size is cached in
1933 dvsize. The link fields of this chunk are not maintained since it
1934 is not kept in a bin.
1937 An array of bin headers for free chunks. These bins hold chunks
1938 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1939 chunks of all the same size, spaced 8 bytes apart. To simplify
1940 use in double-linked lists, each bin header acts as a malloc_chunk
1941 pointing to the real first node, if it exists (else pointing to
1942 itself). This avoids special-casing for headers. But to avoid
1943 waste, we allocate only the fd/bk pointers of bins, and then use
1944 repositioning tricks to treat these as the fields of a chunk.
1947 Treebins are pointers to the roots of trees holding a range of
1948 sizes. There are 2 equally spaced treebins for each power of two
1949 from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything larger.
1953 There is one bit map for small bins ("smallmap") and one for
1954 treebins ("treemap"). Each bin sets its bit when non-empty, and
1955 clears the bit when empty. Bit operations are then used to avoid
1956 bin-by-bin searching -- nearly all "search" is done without ever
1957 looking at bins that won't be selected. The bit maps
1958 conservatively use 32 bits per map word, even on a 64-bit system.
1959 For a good description of some of the bit-based techniques used
1960 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1961 supplement at http://hackersdelight.org/). Many of these are
1962 intended to reduce the branchiness of paths through malloc etc, as
1963 well as to reduce the number of memory locations read or written.
1966 A list of segments headed by an embedded malloc_segment record
1967 representing the initial space.
1969 Address check support
1970 The least_addr field is the least address ever obtained from
1971 MORECORE or MMAP. Attempted frees and reallocs of any address less
1972 than this are trapped (unless INSECURE is defined).
1975 A cross-check field that should always hold the same value as mparams.magic.
1978 Bits recording whether to use MMAP, locks, or contiguous MORECORE
1981 Each space keeps track of current and maximum system memory
1982 obtained via MORECORE or MMAP.
1985 If USE_LOCKS is defined, the "mutex" lock is acquired and released
1986 around every public call using this mspace.
1989 /* Bin types, widths and sizes */
1990 #define NSMALLBINS (32U)
1991 #define NTREEBINS (32U)
1992 #define SMALLBIN_SHIFT (3U)
1993 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
1994 #define TREEBIN_SHIFT (8U)
1995 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
1996 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
1997 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
1999 struct malloc_state {
2009 mchunkptr smallbins[(NSMALLBINS+1)*2];
2010 tbinptr treebins[NTREEBINS];
2012 size_t max_footprint;
2015 MLOCK_T mutex; /* locate lock among fields that rarely change */
2016 #endif /* USE_LOCKS */
2020 typedef struct malloc_state* mstate;
2022 /* ------------- Global malloc_state and malloc_params ------------------- */
2025 malloc_params holds global properties, including those that can be
2026 dynamically set using mallopt. There is a single instance, mparams,
2027 initialized in init_mparams.
2030 struct malloc_params {
2034 size_t mmap_threshold;
2035 size_t trim_threshold;
2036 flag_t default_mflags;
2039 static struct malloc_params mparams;
2041 /* The global malloc_state used for all non-"mspace" calls */
2042 static struct malloc_state _gm_;
2044 #define is_global(M) ((M) == &_gm_)
2045 #define is_initialized(M) ((M)->top != 0)
2047 /* -------------------------- system alloc setup ------------------------- */
2049 /* Operations on mflags */
2051 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2052 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2053 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2055 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2056 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2057 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2059 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2060 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2062 #define set_lock(M,L)\
2063 ((M)->mflags = (L)?\
2064 ((M)->mflags | USE_LOCK_BIT) :\
2065 ((M)->mflags & ~USE_LOCK_BIT))
2067 /* page-align a size */
2068 #define page_align(S)\
2069 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
2071 /* granularity-align a size */
2072 #define granularity_align(S)\
2073 (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
2075 #define is_page_aligned(S)\
2076 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2077 #define is_granularity_aligned(S)\
2078 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
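/*
  Illustrative sketch, not part of dlmalloc: the hypothetical helper below
  shows what page_align() produces, assuming a 4096-byte page.  Note that,
  as written above, a size that is already page-aligned is still bumped up
  by one full page.
*/
static void example_page_align(void) {
  size_t a = page_align(1);    /* 4096 when mparams.page_size is 4096         */
  size_t b = page_align(4096); /* 8192: an already-aligned size gains a page  */
  (void)a; (void)b;
}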
2080 /* True if segment S holds address A */
2081 #define segment_holds(S, A)\
2082 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2084 /* Return segment holding given address */
2085 static msegmentptr segment_holding(mstate m, char* addr) {
2086 msegmentptr sp = &m->seg;
2088 if (addr >= sp->base && addr < sp->base + sp->size)
2090 if ((sp = sp->next) == 0)
2095 /* Return true if segment contains a segment link */
2096 static int has_segment_link(mstate m, msegmentptr ss) {
2097 msegmentptr sp = &m->seg;
2099 if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2101 if ((sp = sp->next) == 0)
2106 #ifndef MORECORE_CANNOT_TRIM
2107 #define should_trim(M,s) ((s) > (M)->trim_check)
2108 #else /* MORECORE_CANNOT_TRIM */
2109 #define should_trim(M,s) (0)
2110 #endif /* MORECORE_CANNOT_TRIM */
2113 TOP_FOOT_SIZE is padding at the end of a segment, including space
2114 that may be needed to place segment records and fenceposts when new
2115 noncontiguous segments are added.
2117 #define TOP_FOOT_SIZE\
2118 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2121 /* ------------------------------- Hooks -------------------------------- */
2124 PREACTION should be defined to return 0 on success, and nonzero on
2125 failure. If you are not using locking, you can redefine these to do anything you like.
2131 /* Ensure locks are initialized */
2132 #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
2134 #define PREACTION(M) ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2135 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2136 #else /* USE_LOCKS */
2139 #define PREACTION(M) (0)
2140 #endif /* PREACTION */
2143 #define POSTACTION(M)
2144 #endif /* POSTACTION */
2146 #endif /* USE_LOCKS */
2149 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2150 USAGE_ERROR_ACTION is triggered on detected bad frees and
2151 reallocs. The argument p is an address that might have triggered the
2152 fault. It is ignored by the two predefined actions, but might be
2153 useful in custom actions that try to help diagnose errors.
2156 #if PROCEED_ON_ERROR
2158 /* A count of the number of corruption errors causing resets */
2159 int malloc_corruption_error_count;
2161 /* default corruption action */
2162 static void reset_on_error(mstate m);
2164 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2165 #define USAGE_ERROR_ACTION(m, p)
2167 #else /* PROCEED_ON_ERROR */
2169 #ifndef CORRUPTION_ERROR_ACTION
2170 #define CORRUPTION_ERROR_ACTION(m) ABORT
2171 #endif /* CORRUPTION_ERROR_ACTION */
2173 #ifndef USAGE_ERROR_ACTION
2174 #define USAGE_ERROR_ACTION(m,p) ABORT
2175 #endif /* USAGE_ERROR_ACTION */
2177 #endif /* PROCEED_ON_ERROR */
2179 /* -------------------------- Debugging setup ---------------------------- */
2183 #define check_free_chunk(M,P)
2184 #define check_inuse_chunk(M,P)
2185 #define check_malloced_chunk(M,P,N)
2186 #define check_mmapped_chunk(M,P)
2187 #define check_malloc_state(M)
2188 #define check_top_chunk(M,P)
2191 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2192 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2193 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2194 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2195 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2196 #define check_malloc_state(M) do_check_malloc_state(M)
2198 static void do_check_any_chunk(mstate m, mchunkptr p);
2199 static void do_check_top_chunk(mstate m, mchunkptr p);
2200 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2201 static void do_check_inuse_chunk(mstate m, mchunkptr p);
2202 static void do_check_free_chunk(mstate m, mchunkptr p);
2203 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2204 static void do_check_tree(mstate m, tchunkptr t);
2205 static void do_check_treebin(mstate m, bindex_t i);
2206 static void do_check_smallbin(mstate m, bindex_t i);
2207 static void do_check_malloc_state(mstate m);
2208 static int bin_find(mstate m, mchunkptr x);
2209 static size_t traverse_and_check(mstate m);
2212 /* ---------------------------- Indexing Bins ---------------------------- */
2214 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2215 #define small_index(s) ((s) >> SMALLBIN_SHIFT)
2216 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2217 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2219 /* addressing by index. See above about smallbin repositioning */
2220 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2221 #define treebin_at(M,i) (&((M)->treebins[i]))
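/*
  A worked example, purely illustrative and not part of dlmalloc: with the
  default SMALLBIN_SHIFT of 3, small bins are spaced 8 bytes apart, so a
  24-byte chunk maps to small bin 3 and back.  The helper name is
  hypothetical.
*/
static void example_small_index(void) {
  assert(is_small(24));
  assert(small_index(24) == 3);
  assert(small_index2size(3) == 24);
}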
2223 /* assign tree index for size S to variable I */
2224 #if defined(__GNUC__) && defined(i386)
2225 #define compute_tree_index(S, I)\
2227 size_t X = S >> TREEBIN_SHIFT;\
2230 else if (X > 0xFFFF)\
2234 __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm" (X));\
2235 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2239 #define compute_tree_index(S, I)\
2241 size_t X = S >> TREEBIN_SHIFT;\
2244 else if (X > 0xFFFF)\
2247 unsigned int Y = (unsigned int)X;\
2248 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2249 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2251 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2252 K = 14 - N + ((Y <<= K) >> 15);\
2253 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2258 /* Bit representing maximum resolved size in a treebin at i */
2259 #define bit_for_tree_index(i) \
2260 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2262 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2263 #define leftshift_for_tree_index(i) \
2264 ((i == NTREEBINS-1)? 0 : \
2265 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2267 /* The size of the smallest chunk held in bin with index i */
2268 #define minsize_for_tree_index(i) \
2269 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2270 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
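/*
  A worked example, purely illustrative and not part of dlmalloc: with the
  default TREEBIN_SHIFT of 8, treebin 0 covers chunk sizes [0x100, 0x180)
  and treebin 1 covers [0x180, 0x200), i.e. two bins per power-of-two
  range, as described above.  The helper name is hypothetical.
*/
static void example_tree_index(void) {
  bindex_t i0, i1;
  compute_tree_index((size_t)0x123, i0);  /* 0x100 <= 0x123 < 0x180 -> bin 0 */
  compute_tree_index((size_t)0x180, i1);  /* 0x180 <= 0x180 < 0x200 -> bin 1 */
  assert(i0 == 0 && i1 == 1);
  assert(minsize_for_tree_index(1) == 0x180);
  (void)i0; (void)i1;
}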
2273 /* ------------------------ Operations on bin maps ----------------------- */
2275 /* bit corresponding to given index */
2276 #define idx2bit(i) ((binmap_t)(1) << (i))
2278 /* Mark/Clear bits with given index */
2279 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2280 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2281 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2283 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2284 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2285 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2287 /* index corresponding to given bit */
2289 #if defined(__GNUC__) && defined(i386)
2290 #define compute_bit2idx(X, I)\
2293 __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
2299 #define compute_bit2idx(X, I) I = ffs(X)-1
2301 #else /* USE_BUILTIN_FFS */
2302 #define compute_bit2idx(X, I)\
2304 unsigned int Y = X - 1;\
2305 unsigned int K = Y >> (16-4) & 16;\
2306 unsigned int N = K; Y >>= K;\
2307 N += K = Y >> (8-3) & 8; Y >>= K;\
2308 N += K = Y >> (4-2) & 4; Y >>= K;\
2309 N += K = Y >> (2-1) & 2; Y >>= K;\
2310 N += K = Y >> (1-0) & 1; Y >>= K;\
2311 I = (bindex_t)(N + Y);\
2313 #endif /* USE_BUILTIN_FFS */
2316 /* isolate the least set bit of a bitmap */
2317 #define least_bit(x) ((x) & -(x))
2319 /* mask with all bits to left of least bit of x on */
2320 #define left_bits(x) ((x<<1) | -(x<<1))
2322 /* mask with all bits to left of or equal to least bit of x on */
2323 #define same_or_left_bits(x) ((x) | -(x))
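/*
  Illustrative sketch, not part of dlmalloc: the hypothetical helper below
  shows how the bit-map macros above locate the first non-empty small bin
  at or above index i without scanning bin by bin -- mask off the lower
  bits, isolate the least remaining bit, and convert it back to an index.
*/
static bindex_t example_next_nonempty_smallbin(binmap_t smallmap, bindex_t i) {
  binmap_t candidates = smallmap & same_or_left_bits(idx2bit(i)); /* bins >= i */
  bindex_t j;
  if (candidates == 0)
    return NSMALLBINS;                        /* no suitable bin               */
  compute_bit2idx(least_bit(candidates), j);  /* lowest candidate bit -> index */
  return j;
}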
2326 /* ----------------------- Runtime Check Support ------------------------- */
2329 For security, the main invariant is that malloc/free/etc never
2330 writes to a static address other than malloc_state, unless static
2331 malloc_state itself has been corrupted, which cannot occur via
2332 malloc (because of these checks). In essence this means that we
2333 believe all pointers, sizes, maps etc held in malloc_state, but
2334 check all of those linked or offsetted from other embedded data
2335 structures. These checks are interspersed with main code in a way
2336 that tends to minimize their run-time cost.
2338 When FOOTERS is defined, in addition to range checking, we also
2339 verify footer fields of inuse chunks, which can be used to guarantee
2340 that the mstate controlling malloc/free is intact. This is a
2341 streamlined version of the approach described by William Robertson
2342 et al in "Run-time Detection of Heap-based Overflows" LISA'03
2343 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2344 of an inuse chunk holds the xor of its mstate and a random seed,
2345 that is checked upon calls to free() and realloc(). This is
2346 (probabilistically) unguessable from outside the program, but can be
2347 computed by any code successfully malloc'ing any chunk, so does not
2348 itself provide protection against code that has already broken
2349 security through some other means. Unlike Robertson et al, we
2350 always dynamically check addresses of all offset chunks (previous,
2351 next, etc). This turns out to be cheaper than relying on hashes.
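/*
  Illustrative sketch, not part of dlmalloc: the footer check described
  above is just an xor against the per-process random magic value, so the
  owning mstate can be recovered on free()/realloc() and validated.  The
  helper name is hypothetical; mark_inuse_foot/get_mstate_for below do the
  real work.
*/
static int example_footer_roundtrip(mstate m, size_t magic) {
  size_t stored = (size_t)m ^ magic;        /* what gets written in the footer */
  return ((mstate)(stored ^ magic)) == m;   /* what free() recovers and checks */
}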
2355 /* Check if address a is at least as high as any from MORECORE or MMAP */
2356 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2357 /* Check if address of next chunk n is higher than base chunk p */
2358 #define ok_next(p, n) ((char*)(p) < (char*)(n))
2359 /* Check if p has its cinuse bit on */
2360 #define ok_cinuse(p) cinuse(p)
2361 /* Check if p has its pinuse bit on */
2362 #define ok_pinuse(p) pinuse(p)
2364 #else /* !INSECURE */
2365 #define ok_address(M, a) (1)
2366 #define ok_next(b, n) (1)
2367 #define ok_cinuse(p) (1)
2368 #define ok_pinuse(p) (1)
2369 #endif /* !INSECURE */
2371 #if (FOOTERS && !INSECURE)
2372 /* Check if (alleged) mstate m has expected magic field */
2373 #define ok_magic(M) ((M)->magic == mparams.magic)
2374 #else /* (FOOTERS && !INSECURE) */
2375 #define ok_magic(M) (1)
2376 #endif /* (FOOTERS && !INSECURE) */
2379 /* In gcc, use __builtin_expect to minimize impact of checks */
2381 #if defined(__GNUC__) && __GNUC__ >= 3
2382 #define RTCHECK(e) __builtin_expect(e, 1)
2384 #define RTCHECK(e) (e)
2386 #else /* !INSECURE */
2387 #define RTCHECK(e) (1)
2388 #endif /* !INSECURE */
2390 /* macros to set up inuse chunks with or without footers */
2394 #define mark_inuse_foot(M,p,s)
2396 /* Set cinuse bit and pinuse bit of next chunk */
2397 #define set_inuse(M,p,s)\
2398 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2399 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2401 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2402 #define set_inuse_and_pinuse(M,p,s)\
2403 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2404 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2406 /* Set size, cinuse and pinuse bit of this chunk */
2407 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2408 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2412 /* Set foot of inuse chunk to be xor of mstate and seed */
2413 #define mark_inuse_foot(M,p,s)\
2414 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2416 #define get_mstate_for(p)\
2417 ((mstate)(((mchunkptr)((char*)(p) +\
2418 (chunksize(p))))->prev_foot ^ mparams.magic))
2420 #define set_inuse(M,p,s)\
2421 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2422 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2423 mark_inuse_foot(M,p,s))
2425 #define set_inuse_and_pinuse(M,p,s)\
2426 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2427 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2428 mark_inuse_foot(M,p,s))
2430 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2431 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2432 mark_inuse_foot(M, p, s))
2434 #endif /* !FOOTERS */
2436 /* ---------------------------- setting mparams -------------------------- */
2438 /* Initialize mparams */
2439 static int init_mparams(void) {
2440 if (mparams.page_size == 0) {
2443 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2444 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2445 #if MORECORE_CONTIGUOUS
2446 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2447 #else /* MORECORE_CONTIGUOUS */
2448 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2449 #endif /* MORECORE_CONTIGUOUS */
2451 #if (FOOTERS && !INSECURE)
2455 unsigned char buf[sizeof(size_t)];
2456 /* Try to use /dev/urandom, else fall back on using time */
2457 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
2458 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
2459 s = *((size_t *) buf);
2463 #endif /* USE_DEV_RANDOM */
2464 s = (size_t)(time(0) ^ (size_t)0x55555555U);
2466 s |= (size_t)8U; /* ensure nonzero */
2467 s &= ~(size_t)7U; /* improve chances of fault for bad values */
2470 #else /* (FOOTERS && !INSECURE) */
2471 s = (size_t)0x58585858U;
2472 #endif /* (FOOTERS && !INSECURE) */
2473 ACQUIRE_MAGIC_INIT_LOCK();
2474 if (mparams.magic == 0) {
2476 /* Set up lock for main malloc area */
2477 INITIAL_LOCK(&gm->mutex);
2478 gm->mflags = mparams.default_mflags;
2480 RELEASE_MAGIC_INIT_LOCK();
2483 mparams.page_size = malloc_getpagesize;
2484 mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
2485 DEFAULT_GRANULARITY : mparams.page_size);
2488 SYSTEM_INFO system_info;
2489 GetSystemInfo(&system_info);
2490 mparams.page_size = system_info.dwPageSize;
2491 mparams.granularity = system_info.dwAllocationGranularity;
2495 /* Sanity-check configuration:
2496 size_t must be unsigned and as wide as pointer type.
2497 ints must be at least 4 bytes.
2498 alignment must be at least 8.
2499 Alignment, min chunk size, and page size must all be powers of 2.
2501 if ((sizeof(size_t) != sizeof(char*)) ||
2502 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
2503 (sizeof(int) < 4) ||
2504 (MALLOC_ALIGNMENT < (size_t)8U) ||
2505 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
2506 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
2507 ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2508 ((mparams.page_size & (mparams.page_size-SIZE_T_ONE)) != 0))
2514 /* support for mallopt */
2515 static int change_mparam(int param_number, int value) {
2516 size_t val = (size_t)value;
2518 switch(param_number) {
2519 case M_TRIM_THRESHOLD:
2520 mparams.trim_threshold = val;
2523 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2524 mparams.granularity = val;
2529 case M_MMAP_THRESHOLD:
2530 mparams.mmap_threshold = val;
2538 /* ------------------------- Debugging Support --------------------------- */
2540 /* Check properties of any chunk, whether free, inuse, mmapped etc */
2541 static void do_check_any_chunk(mstate m, mchunkptr p) {
2542 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2543 assert(ok_address(m, p));
2546 /* Check properties of top chunk */
2547 static void do_check_top_chunk(mstate m, mchunkptr p) {
2548 msegmentptr sp = segment_holding(m, (char*)p);
2549 size_t sz = chunksize(p);
2551 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2552 assert(ok_address(m, p));
2553 assert(sz == m->topsize);
2555 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
2557 assert(!next_pinuse(p));
2560 /* Check properties of (inuse) mmapped chunks */
2561 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
2562 size_t sz = chunksize(p);
2563 size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
2564 assert(is_mmapped(p));
2565 assert(use_mmap(m));
2566 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2567 assert(ok_address(m, p));
2568 assert(!is_small(sz));
2569 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
2570 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
2571 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
2574 /* Check properties of inuse chunks */
2575 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
2576 do_check_any_chunk(m, p);
2578 assert(next_pinuse(p));
2579 /* If not pinuse and not mmapped, previous chunk has OK offset */
2580 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
2582 do_check_mmapped_chunk(m, p);
2585 /* Check properties of free chunks */
2586 static void do_check_free_chunk(mstate m, mchunkptr p) {
2587 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2588 mchunkptr next = chunk_plus_offset(p, sz);
2589 do_check_any_chunk(m, p);
2591 assert(!next_pinuse(p));
2592 assert (!is_mmapped(p));
2593 if (p != m->dv && p != m->top) {
2594 if (sz >= MIN_CHUNK_SIZE) {
2595 assert((sz & CHUNK_ALIGN_MASK) == 0);
2596 assert(is_aligned(chunk2mem(p)));
2597 assert(next->prev_foot == sz);
2599 assert (next == m->top || cinuse(next));
2600 assert(p->fd->bk == p);
2601 assert(p->bk->fd == p);
2603 else /* markers are always of size SIZE_T_SIZE */
2604 assert(sz == SIZE_T_SIZE);
2608 /* Check properties of malloced chunks at the point they are malloced */
2609 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
2611 mchunkptr p = mem2chunk(mem);
2612 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2613 do_check_inuse_chunk(m, p);
2614 assert((sz & CHUNK_ALIGN_MASK) == 0);
2615 assert(sz >= MIN_CHUNK_SIZE);
2617 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
2618 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
2622 /* Check a tree and its subtrees. */
2623 static void do_check_tree(mstate m, tchunkptr t) {
2626 bindex_t tindex = t->index;
2627 size_t tsize = chunksize(t);
2629 compute_tree_index(tsize, idx);
2630 assert(tindex == idx);
2631 assert(tsize >= MIN_LARGE_SIZE);
2632 assert(tsize >= minsize_for_tree_index(idx));
2633 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
2635 do { /* traverse through chain of same-sized nodes */
2636 do_check_any_chunk(m, ((mchunkptr)u));
2637 assert(u->index == tindex);
2638 assert(chunksize(u) == tsize);
2640 assert(!next_pinuse(u));
2641 assert(u->fd->bk == u);
2642 assert(u->bk->fd == u);
2643 if (u->parent == 0) {
2644 assert(u->child[0] == 0);
2645 assert(u->child[1] == 0);
2648 assert(head == 0); /* only one node on chain has parent */
2650 assert(u->parent != u);
2651 assert (u->parent->child[0] == u ||
2652 u->parent->child[1] == u ||
2653 *((tbinptr*)(u->parent)) == u);
2654 if (u->child[0] != 0) {
2655 assert(u->child[0]->parent == u);
2656 assert(u->child[0] != u);
2657 do_check_tree(m, u->child[0]);
2659 if (u->child[1] != 0) {
2660 assert(u->child[1]->parent == u);
2661 assert(u->child[1] != u);
2662 do_check_tree(m, u->child[1]);
2664 if (u->child[0] != 0 && u->child[1] != 0) {
2665 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2673 /* Check all the chunks in a treebin. */
2674 static void do_check_treebin(mstate m, bindex_t i) {
2675 tbinptr* tb = treebin_at(m, i);
2677 int empty = (m->treemap & (1U << i)) == 0;
2681 do_check_tree(m, t);
2684 /* Check all the chunks in a smallbin. */
2685 static void do_check_smallbin(mstate m, bindex_t i) {
2686 sbinptr b = smallbin_at(m, i);
2687 mchunkptr p = b->bk;
2688 unsigned int empty = (m->smallmap & (1U << i)) == 0;
2692 for (; p != b; p = p->bk) {
2693 size_t size = chunksize(p);
2695 /* each chunk claims to be free */
2696 do_check_free_chunk(m, p);
2697 /* chunk belongs in bin */
2698 assert(small_index(size) == i);
2699 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2700 /* chunk is followed by an inuse chunk */
2702 if (q->head != FENCEPOST_HEAD)
2703 do_check_inuse_chunk(m, q);
2708 /* Find x in a bin. Used in other check functions. */
2709 static int bin_find(mstate m, mchunkptr x) {
2710 size_t size = chunksize(x);
2711 if (is_small(size)) {
2712 bindex_t sidx = small_index(size);
2713 sbinptr b = smallbin_at(m, sidx);
2714 if (smallmap_is_marked(m, sidx)) {
2719 } while ((p = p->fd) != b);
2724 compute_tree_index(size, tidx);
2725 if (treemap_is_marked(m, tidx)) {
2726 tchunkptr t = *treebin_at(m, tidx);
2727 size_t sizebits = size << leftshift_for_tree_index(tidx);
2728 while (t != 0 && chunksize(t) != size) {
2729 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2735 if (u == (tchunkptr)x)
2737 } while ((u = u->fd) != t);
2744 /* Traverse each chunk and check it; return total */
2745 static size_t traverse_and_check(mstate m) {
2747 if (is_initialized(m)) {
2748 msegmentptr s = &m->seg;
2749 sum += m->topsize + TOP_FOOT_SIZE;
2751 mchunkptr q = align_as_chunk(s->base);
2752 mchunkptr lastq = 0;
2754 while (segment_holds(s, q) &&
2755 q != m->top && q->head != FENCEPOST_HEAD) {
2756 sum += chunksize(q);
2758 assert(!bin_find(m, q));
2759 do_check_inuse_chunk(m, q);
2762 assert(q == m->dv || bin_find(m, q));
2763 assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2764 do_check_free_chunk(m, q);
2775 /* Check all properties of malloc_state. */
2776 static void do_check_malloc_state(mstate m) {
2780 for (i = 0; i < NSMALLBINS; ++i)
2781 do_check_smallbin(m, i);
2782 for (i = 0; i < NTREEBINS; ++i)
2783 do_check_treebin(m, i);
2785 if (m->dvsize != 0) { /* check dv chunk */
2786 do_check_any_chunk(m, m->dv);
2787 assert(m->dvsize == chunksize(m->dv));
2788 assert(m->dvsize >= MIN_CHUNK_SIZE);
2789 assert(bin_find(m, m->dv) == 0);
2792 if (m->top != 0) { /* check top chunk */
2793 do_check_top_chunk(m, m->top);
2794 assert(m->topsize == chunksize(m->top));
2795 assert(m->topsize > 0);
2796 assert(bin_find(m, m->top) == 0);
2799 total = traverse_and_check(m);
2800 assert(total <= m->footprint);
2801 assert(m->footprint <= m->max_footprint);
2805 /* ----------------------------- statistics ------------------------------ */
2808 static struct mallinfo internal_mallinfo(mstate m) {
2809 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2810 if (!PREACTION(m)) {
2811 check_malloc_state(m);
2812 if (is_initialized(m)) {
2813 size_t nfree = SIZE_T_ONE; /* top always free */
2814 size_t mfree = m->topsize + TOP_FOOT_SIZE;
2816 msegmentptr s = &m->seg;
2818 mchunkptr q = align_as_chunk(s->base);
2819 while (segment_holds(s, q) &&
2820 q != m->top && q->head != FENCEPOST_HEAD) {
2821 size_t sz = chunksize(q);
2834 nm.hblkhd = m->footprint - sum;
2835 nm.usmblks = m->max_footprint;
2836 nm.uordblks = m->footprint - mfree;
2837 nm.fordblks = mfree;
2838 nm.keepcost = m->topsize;
2845 #endif /* !NO_MALLINFO */
2847 static void internal_malloc_stats(mstate m) {
2848 if (!PREACTION(m)) {
2852 check_malloc_state(m);
2853 if (is_initialized(m)) {
2854 msegmentptr s = &m->seg;
2855 maxfp = m->max_footprint;
2857 used = fp - (m->topsize + TOP_FOOT_SIZE);
2860 mchunkptr q = align_as_chunk(s->base);
2861 while (segment_holds(s, q) &&
2862 q != m->top && q->head != FENCEPOST_HEAD) {
2864 used -= chunksize(q);
2871 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2872 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
2873 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
2879 /* ----------------------- Operations on smallbins ----------------------- */
2882 Various forms of linking and unlinking are defined as macros. Even
2883 the ones for trees, which are very long but have very short typical
2884 paths. This is ugly but reduces reliance on inlining support of compilers.
2888 /* Link a free chunk into a smallbin */
2889 #define insert_small_chunk(M, P, S) {\
2890 bindex_t I = small_index(S);\
2891 mchunkptr B = smallbin_at(M, I);\
2893 assert(S >= MIN_CHUNK_SIZE);\
2894 if (!smallmap_is_marked(M, I))\
2895 mark_smallmap(M, I);\
2896 else if (RTCHECK(ok_address(M, B->fd)))\
2899 CORRUPTION_ERROR_ACTION(M);\
2907 /* Unlink a chunk from a smallbin */
2908 #define unlink_small_chunk(M, P, S) {\
2909 mchunkptr F = P->fd;\
2910 mchunkptr B = P->bk;\
2911 bindex_t I = small_index(S);\
2914 assert(chunksize(P) == small_index2size(I));\
2916 clear_smallmap(M, I);\
2917 else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2918 (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2923 CORRUPTION_ERROR_ACTION(M);\
2927 /* Unlink the first chunk from a smallbin */
2928 #define unlink_first_small_chunk(M, B, P, I) {\
2929 mchunkptr F = P->fd;\
2932 assert(chunksize(P) == small_index2size(I));\
2934 clear_smallmap(M, I);\
2935 else if (RTCHECK(ok_address(M, F))) {\
2940 CORRUPTION_ERROR_ACTION(M);\
2944 /* Replace dv node, binning the old one */
2945 /* Used only when dvsize known to be small */
2946 #define replace_dv(M, P, S) {\
2947 size_t DVS = M->dvsize;\
2949 mchunkptr DV = M->dv;\
2950 assert(is_small(DVS));\
2951 insert_small_chunk(M, DV, DVS);\
2957 /* ------------------------- Operations on trees ------------------------- */
2959 /* Insert chunk into tree */
2960 #define insert_large_chunk(M, X, S) {\
2963 compute_tree_index(S, I);\
2964 H = treebin_at(M, I);\
2966 X->child[0] = X->child[1] = 0;\
2967 if (!treemap_is_marked(M, I)) {\
2968 mark_treemap(M, I);\
2970 X->parent = (tchunkptr)H;\
2975 size_t K = S << leftshift_for_tree_index(I);\
2977 if (chunksize(T) != S) {\
2978 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
2982 else if (RTCHECK(ok_address(M, C))) {\
2989 CORRUPTION_ERROR_ACTION(M);\
2994 tchunkptr F = T->fd;\
2995 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3003 CORRUPTION_ERROR_ACTION(M);\
3014 1. If x is a chained node, unlink it from its same-sized fd/bk links
3015 and choose its bk node as its replacement.
3016 2. If x was the last node of its size, but not a leaf node, it must
3017 be replaced with a leaf node (not merely one with an open left or
3018 right), to make sure that lefts and rights of descendants
3019 correspond properly to bit masks. We use the rightmost descendent
3020 of x. We could use any other leaf, but this is easy to locate and
3021 tends to counteract removal of leftmosts elsewhere, and so keeps
3022 paths shorter than minimally guaranteed. This doesn't loop much
3023 because on average a node in a tree is near the bottom.
3024 3. If x is the base of a chain (i.e., has parent links) relink
3025 x's parent and children to x's replacement (or null if none).
3028 #define unlink_large_chunk(M, X) {\
3029 tchunkptr XP = X->parent;\
3032 tchunkptr F = X->fd;\
3034 if (RTCHECK(ok_address(M, F))) {\
3039 CORRUPTION_ERROR_ACTION(M);\
3044 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3045 ((R = *(RP = &(X->child[0]))) != 0)) {\
3047 while ((*(CP = &(R->child[1])) != 0) ||\
3048 (*(CP = &(R->child[0])) != 0)) {\
3051 if (RTCHECK(ok_address(M, RP)))\
3054 CORRUPTION_ERROR_ACTION(M);\
3059 tbinptr* H = treebin_at(M, X->index);\
3061 if ((*H = R) == 0) \
3062 clear_treemap(M, X->index);\
3064 else if (RTCHECK(ok_address(M, XP))) {\
3065 if (XP->child[0] == X) \
3071 CORRUPTION_ERROR_ACTION(M);\
3073 if (RTCHECK(ok_address(M, R))) {\
3076 if ((C0 = X->child[0]) != 0) {\
3077 if (RTCHECK(ok_address(M, C0))) {\
3082 CORRUPTION_ERROR_ACTION(M);\
3084 if ((C1 = X->child[1]) != 0) {\
3085 if (RTCHECK(ok_address(M, C1))) {\
3090 CORRUPTION_ERROR_ACTION(M);\
3094 CORRUPTION_ERROR_ACTION(M);\
3099 /* Relays to large vs small bin operations */
3101 #define insert_chunk(M, P, S)\
3102 if (is_small(S)) insert_small_chunk(M, P, S)\
3103 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3105 #define unlink_chunk(M, P, S)\
3106 if (is_small(S)) unlink_small_chunk(M, P, S)\
3107 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3110 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3113 #define internal_malloc(m, b) mspace_malloc(m, b)
3114 #define internal_free(m, mem) mspace_free(m,mem);
3115 #else /* ONLY_MSPACES */
3117 #define internal_malloc(m, b)\
3118 (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3119 #define internal_free(m, mem)\
3120 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3122 #define internal_malloc(m, b) dlmalloc(b)
3123 #define internal_free(m, mem) dlfree(mem)
3124 #endif /* MSPACES */
3125 #endif /* ONLY_MSPACES */
3127 /* ----------------------- Direct-mmapping chunks ----------------------- */
3130 Directly mmapped chunks are set up with an offset to the start of
3131 the mmapped region stored in the prev_foot field of the chunk. This
3132 allows reconstruction of the required argument to MUNMAP when freed,
3133 and also allows adjustment of the returned chunk to meet alignment
3134 requirements (especially in memalign). There is also enough space
3135 allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3136 the PINUSE bit so frees can be checked.
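/*
  Illustrative sketch, not part of dlmalloc: the hypothetical helper below
  shows how the offset stored in prev_foot lets free() reconstruct the
  original mmap base address and mapped length before unmapping.
*/
static void example_mmap_region(mchunkptr p, char** base, size_t* len) {
  size_t offset = p->prev_foot & ~IS_MMAPPED_BIT;   /* chunk start -> mmap base  */
  *base = (char*)p - offset;                        /* address later unmapped    */
  *len  = chunksize(p) + offset + MMAP_FOOT_PAD;    /* full length of the region */
}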
3139 /* Malloc using mmap */
3140 static void* mmap_alloc(mstate m, size_t nb) {
3141 size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3142 if (mmsize > nb) { /* Check for wrap around 0 */
3143 char* mm = (char*)(DIRECT_MMAP(mmsize));
3145 size_t offset = align_offset(chunk2mem(mm));
3146 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3147 mchunkptr p = (mchunkptr)(mm + offset);
3148 p->prev_foot = offset | IS_MMAPPED_BIT;
3149 (p)->head = (psize|CINUSE_BIT);
3150 mark_inuse_foot(m, p, psize);
3151 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3152 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3154 if (mm < m->least_addr)
3156 if ((m->footprint += mmsize) > m->max_footprint)
3157 m->max_footprint = m->footprint;
3158 assert(is_aligned(chunk2mem(p)));
3159 check_mmapped_chunk(m, p);
3160 return chunk2mem(p);
3168 /* Realloc using mmap */
3169 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3170 size_t oldsize = chunksize(oldp);
3171 if (is_small(nb)) /* Can't shrink mmap regions below small size */
3173 /* Keep old chunk if big enough but not too big */
3174 if (oldsize >= nb + SIZE_T_SIZE &&
3175 (oldsize - nb) <= (mparams.granularity << 1))
3178 size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3179 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3180 size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
3182 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3183 oldmmsize, newmmsize, 1);
3185 mchunkptr newp = (mchunkptr)(cp + offset);
3186 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3187 newp->head = (psize|CINUSE_BIT);
3188 mark_inuse_foot(m, newp, psize);
3189 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3190 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3192 if (cp < m->least_addr)
3194 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3195 m->max_footprint = m->footprint;
3196 check_mmapped_chunk(m, newp);
3205 /* -------------------------- mspace management -------------------------- */
3207 /* Initialize top chunk and its size */
3208 static void init_top(mstate m, mchunkptr p, size_t psize) {
3209 /* Ensure alignment */
3210 size_t offset = align_offset(chunk2mem(p));
3211 p = (mchunkptr)((char*)p + offset);
3216 p->head = psize | PINUSE_BIT;
3217 /* set size of fake trailing chunk holding overhead space only once */
3218 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3219 m->trim_check = mparams.trim_threshold; /* reset on each update */
3222 /* Initialize bins for a new mstate that is otherwise zeroed out */
3223 static void init_bins(mstate m) {
3224 /* Establish circular links for smallbins */
3226 for (i = 0; i < NSMALLBINS; ++i) {
3227 sbinptr bin = smallbin_at(m,i);
3228 bin->fd = bin->bk = bin;
3232 #if PROCEED_ON_ERROR
3234 /* default corruption action */
3235 static void reset_on_error(mstate m) {
3237 ++malloc_corruption_error_count;
3238 /* Reinitialize fields to forget about all memory */
3239 m->smallbins = m->treebins = 0;
3240 m->dvsize = m->topsize = 0;
3245 for (i = 0; i < NTREEBINS; ++i)
3246 *treebin_at(m, i) = 0;
3249 #endif /* PROCEED_ON_ERROR */
3251 /* Allocate chunk and prepend remainder with chunk in successor base. */
3252 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3254 mchunkptr p = align_as_chunk(newbase);
3255 mchunkptr oldfirst = align_as_chunk(oldbase);
3256 size_t psize = (char*)oldfirst - (char*)p;
3257 mchunkptr q = chunk_plus_offset(p, nb);
3258 size_t qsize = psize - nb;
3259 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3261 assert((char*)oldfirst > (char*)q);
3262 assert(pinuse(oldfirst));
3263 assert(qsize >= MIN_CHUNK_SIZE);
3265 /* consolidate remainder with first chunk of old base */
3266 if (oldfirst == m->top) {
3267 size_t tsize = m->topsize += qsize;
3269 q->head = tsize | PINUSE_BIT;
3270 check_top_chunk(m, q);
3272 else if (oldfirst == m->dv) {
3273 size_t dsize = m->dvsize += qsize;
3275 set_size_and_pinuse_of_free_chunk(q, dsize);
3278 if (!cinuse(oldfirst)) {
3279 size_t nsize = chunksize(oldfirst);
3280 unlink_chunk(m, oldfirst, nsize);
3281 oldfirst = chunk_plus_offset(oldfirst, nsize);
3284 set_free_with_pinuse(q, qsize, oldfirst);
3285 insert_chunk(m, q, qsize);
3286 check_free_chunk(m, q);
3289 check_malloced_chunk(m, chunk2mem(p), nb);
3290 return chunk2mem(p);
3294 /* Add a segment to hold a new noncontiguous region */
3295 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3296 /* Determine locations and sizes of segment, fenceposts, old top */
3297 char* old_top = (char*)m->top;
3298 msegmentptr oldsp = segment_holding(m, old_top);
3299 char* old_end = oldsp->base + oldsp->size;
3300 size_t ssize = pad_request(sizeof(struct malloc_segment));
3301 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3302 size_t offset = align_offset(chunk2mem(rawsp));
3303 char* asp = rawsp + offset;
3304 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3305 mchunkptr sp = (mchunkptr)csp;
3306 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3307 mchunkptr tnext = chunk_plus_offset(sp, ssize);
3308 mchunkptr p = tnext;
3311 /* reset top to new space */
3312 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3314 /* Set up segment record */
3315 assert(is_aligned(ss));
3316 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3317 *ss = m->seg; /* Push current record */
3318 m->seg.base = tbase;
3319 m->seg.size = tsize;
3320 m->seg.sflags = mmapped;
3323 /* Insert trailing fenceposts */
3325 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3326 p->head = FENCEPOST_HEAD;
3328 if ((char*)(&(nextp->head)) < old_end)
3333 assert(nfences >= 2);
3335 /* Insert the rest of old top into a bin as an ordinary free chunk */
3336 if (csp != old_top) {
3337 mchunkptr q = (mchunkptr)old_top;
3338 size_t psize = csp - old_top;
3339 mchunkptr tn = chunk_plus_offset(q, psize);
3340 set_free_with_pinuse(q, psize, tn);
3341 insert_chunk(m, q, psize);
3344 check_top_chunk(m, m->top);
3347 /* -------------------------- System allocation -------------------------- */
3349 /* Get memory from system using MORECORE or MMAP */
3350 static void* sys_alloc(mstate m, size_t nb) {
3351 char* tbase = CMFAIL;
3353 flag_t mmap_flag = 0;
3357 /* Directly map large chunks */
3358 if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3359 void* mem = mmap_alloc(m, nb);
3365 Try getting memory in any of three ways (in most-preferred to
3366 least-preferred order):
3367 1. A call to MORECORE that can normally contiguously extend memory.
3368 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3369 main space is mmapped or a previous contiguous call failed)
3370 2. A call to MMAP new space (disabled if not HAVE_MMAP).
3371 Note that under the default settings, if MORECORE is unable to
3372 fulfill a request, and HAVE_MMAP is true, then mmap is
3373 used as a noncontiguous system allocator. This is a useful backup
3374 strategy for systems with holes in address spaces -- in this case
3375 sbrk cannot contiguously expand the heap, but mmap may be able to
3377 3. A call to MORECORE that cannot usually contiguously extend memory.
3378 (disabled if not HAVE_MORECORE)
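In rough outline, the code below tries these in order (a sketch of the
control flow only, not additional logic):

  if MORECORE_CONTIGUOUS and contiguous calls are not disabled:
      try to extend the segment holding top via MORECORE (sbrk-style);
  if still nothing and HAVE_MMAP:
      mmap a fresh, granularity-aligned region;
  if still nothing and HAVE_MORECORE:
      accept whatever MORECORE returns as a new noncontiguous segment.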
3381 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3383 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3385 ACQUIRE_MORECORE_LOCK();
3387 if (ss == 0) { /* First time through or recovery */
3388 char* base = (char*)CALL_MORECORE(0);
3389 if (base != CMFAIL) {
3390 asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3391 /* Adjust to end on a page boundary */
3392 if (!is_page_aligned(base))
3393 asize += (page_align((size_t)base) - (size_t)base);
3394 /* Can't call MORECORE if size is negative when treated as signed */
3395 if (asize < HALF_MAX_SIZE_T &&
3396 (br = (char*)(CALL_MORECORE(asize))) == base) {
3403 /* Subtract out existing available top space from MORECORE request. */
3404 asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
3405 /* Use mem here only if it did contiguously extend old space */
3406 if (asize < HALF_MAX_SIZE_T &&
3407 (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3413 if (tbase == CMFAIL) { /* Cope with partial failure */
3414 if (br != CMFAIL) { /* Try to use/extend the space we did get */
3415 if (asize < HALF_MAX_SIZE_T &&
3416 asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
3417 size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
3418 if (esize < HALF_MAX_SIZE_T) {
3419 char* end = (char*)CALL_MORECORE(esize);
3422 else { /* Can't use; try to release */
3423 CALL_MORECORE(-asize);
3429 if (br != CMFAIL) { /* Use the space we did get */
3434 disable_contiguous(m); /* Don't try contiguous path in the future */
3437 RELEASE_MORECORE_LOCK();
3440 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
3441 size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
3442 size_t rsize = granularity_align(req);
3443 if (rsize > nb) { /* Fail if wraps around zero */
3444 char* mp = (char*)(CALL_MMAP(rsize));
3448 mmap_flag = IS_MMAPPED_BIT;
3453 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
3454 size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3455 if (asize < HALF_MAX_SIZE_T) {
3458 ACQUIRE_MORECORE_LOCK();
3459 br = (char*)(CALL_MORECORE(asize));
3460 end = (char*)(CALL_MORECORE(0));
3461 RELEASE_MORECORE_LOCK();
3462 if (br != CMFAIL && end != CMFAIL && br < end) {
3463 size_t ssize = end - br;
3464 if (ssize > nb + TOP_FOOT_SIZE) {
3472 if (tbase != CMFAIL) {
3474 if ((m->footprint += tsize) > m->max_footprint)
3475 m->max_footprint = m->footprint;
3477 if (!is_initialized(m)) { /* first-time initialization */
3478 m->seg.base = m->least_addr = tbase;
3479 m->seg.size = tsize;
3480 m->seg.sflags = mmap_flag;
3481 m->magic = mparams.magic;
3484 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3486 /* Offset top by embedded malloc_state */
3487 mchunkptr mn = next_chunk(mem2chunk(m));
3488 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
3493 /* Try to merge with an existing segment */
3494 msegmentptr sp = &m->seg;
3495 while (sp != 0 && tbase != sp->base + sp->size)
3498 !is_extern_segment(sp) &&
3499 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
3500 segment_holds(sp, m->top)) { /* append */
3502 init_top(m, m->top, m->topsize + tsize);
3505 if (tbase < m->least_addr)
3506 m->least_addr = tbase;
3508 while (sp != 0 && sp->base != tbase + tsize)
3511 !is_extern_segment(sp) &&
3512 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
3513 char* oldbase = sp->base;
3516 return prepend_alloc(m, tbase, oldbase, nb);
3519 add_segment(m, tbase, tsize, mmap_flag);
3523 if (nb < m->topsize) { /* Allocate from new or extended top space */
3524 size_t rsize = m->topsize -= nb;
3525 mchunkptr p = m->top;
3526 mchunkptr r = m->top = chunk_plus_offset(p, nb);
3527 r->head = rsize | PINUSE_BIT;
3528 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3529 check_top_chunk(m, m->top);
3530 check_malloced_chunk(m, chunk2mem(p), nb);
3531 return chunk2mem(p);
3535 MALLOC_FAILURE_ACTION;
3539 /* ----------------------- system deallocation -------------------------- */
3541 /* Unmap and unlink any mmapped segments that don't contain used chunks */
3542 static size_t release_unused_segments(mstate m) {
3543 size_t released = 0;
3544 msegmentptr pred = &m->seg;
3545 msegmentptr sp = pred->next;
3547 char* base = sp->base;
3548 size_t size = sp->size;
3549 msegmentptr next = sp->next;
3550 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
3551 mchunkptr p = align_as_chunk(base);
3552 size_t psize = chunksize(p);
3553 /* Can unmap if first chunk holds entire segment and not pinned */
3554 if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
3555 tchunkptr tp = (tchunkptr)p;
3556 assert(segment_holds(sp, (char*)sp));
3562 unlink_large_chunk(m, tp);
3564 if (CALL_MUNMAP(base, size) == 0) {
3566 m->footprint -= size;
3567 /* unlink obsoleted record */
3571 else { /* back out if cannot unmap */
3572 insert_large_chunk(m, tp, psize);
3582 static int sys_trim(mstate m, size_t pad) {
3583 size_t released = 0;
3584 if (pad < MAX_REQUEST && is_initialized(m)) {
3585 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
3587 if (m->topsize > pad) {
3588 /* Shrink top space in granularity-size units, keeping at least one */
3589 size_t unit = mparams.granularity;
3590 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
3592 msegmentptr sp = segment_holding(m, (char*)m->top);
3594 if (!is_extern_segment(sp)) {
3595 if (is_mmapped_segment(sp)) {
3597 sp->size >= extra &&
3598 !has_segment_link(m, sp)) { /* can't shrink if pinned */
3599 size_t newsize = sp->size - extra;
3600 /* Prefer mremap, fall back to munmap */
3601 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
3602 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
3607 else if (HAVE_MORECORE) {
3608 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
3609 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
3610 ACQUIRE_MORECORE_LOCK();
3612 /* Make sure end of memory is where we last set it. */
3613 char* old_br = (char*)(CALL_MORECORE(0));
3614 if (old_br == sp->base + sp->size) {
3615 char* rel_br = (char*)(CALL_MORECORE(-extra));
3616 char* new_br = (char*)(CALL_MORECORE(0));
3617 if (rel_br != CMFAIL && new_br < old_br)
3618 released = old_br - new_br;
3621 RELEASE_MORECORE_LOCK();
3625 if (released != 0) {
3626 sp->size -= released;
3627 m->footprint -= released;
3628 init_top(m, m->top, m->topsize - released);
3629 check_top_chunk(m, m->top);
3633 /* Unmap any unused mmapped segments */
3635 released += release_unused_segments(m);
3637 /* On failure, disable autotrim to avoid repeated failed future calls */
3639 m->trim_check = MAX_SIZE_T;
3642 return (released != 0)? 1 : 0;
3645 /* ---------------------------- malloc support --------------------------- */
3647 /* allocate a large request from the best fitting chunk in a treebin */
3648 static void* tmalloc_large(mstate m, size_t nb) {
3650 size_t rsize = -nb; /* Unsigned negation */
3653 compute_tree_index(nb, idx);
3655 if ((t = *treebin_at(m, idx)) != 0) {
3656 /* Traverse tree for this bin looking for node with size == nb */
3657 size_t sizebits = nb << leftshift_for_tree_index(idx);
3658 tchunkptr rst = 0; /* The deepest untaken right subtree */
3661 size_t trem = chunksize(t) - nb;
3664 if ((rsize = trem) == 0)
3668 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3669 if (rt != 0 && rt != t)
3672 t = rst; /* set t to least subtree holding sizes > nb */
3679 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3680 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3681 if (leftbits != 0) {
3683 binmap_t leastbit = least_bit(leftbits);
3684 compute_bit2idx(leastbit, i);
3685 t = *treebin_at(m, i);
3689 while (t != 0) { /* find smallest of tree or subtree */
3690 size_t trem = chunksize(t) - nb;
3695 t = leftmost_child(t);
3698 /* If dv is a better fit, return 0 so malloc will use it */
3699 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3700 if (RTCHECK(ok_address(m, v))) { /* split */
3701 mchunkptr r = chunk_plus_offset(v, nb);
3702 assert(chunksize(v) == rsize + nb);
3703 if (RTCHECK(ok_next(v, r))) {
3704 unlink_large_chunk(m, v);
3705 if (rsize < MIN_CHUNK_SIZE)
3706 set_inuse_and_pinuse(m, v, (rsize + nb));
3708 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3709 set_size_and_pinuse_of_free_chunk(r, rsize);
3710 insert_chunk(m, r, rsize);
3712 return chunk2mem(v);
3715 CORRUPTION_ERROR_ACTION(m);
3720 /* allocate a small request from the best fitting chunk in a treebin */
3721 static void* tmalloc_small(mstate m, size_t nb) {
3725 binmap_t leastbit = least_bit(m->treemap);
3726 compute_bit2idx(leastbit, i);
3728 v = t = *treebin_at(m, i);
3729 rsize = chunksize(t) - nb;
3731 while ((t = leftmost_child(t)) != 0) {
3732 size_t trem = chunksize(t) - nb;
3739 if (RTCHECK(ok_address(m, v))) {
3740 mchunkptr r = chunk_plus_offset(v, nb);
3741 assert(chunksize(v) == rsize + nb);
3742 if (RTCHECK(ok_next(v, r))) {
3743 unlink_large_chunk(m, v);
3744 if (rsize < MIN_CHUNK_SIZE)
3745 set_inuse_and_pinuse(m, v, (rsize + nb));
3747 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3748 set_size_and_pinuse_of_free_chunk(r, rsize);
3749 replace_dv(m, r, rsize);
3751 return chunk2mem(v);
3755 CORRUPTION_ERROR_ACTION(m);
3759 /* --------------------------- realloc support --------------------------- */
3763 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3764 if (bytes >= MAX_REQUEST) {
3765 MALLOC_FAILURE_ACTION;
3768 if (!PREACTION(m)) {
3769 mchunkptr oldp = mem2chunk(oldmem);
3770 size_t oldsize = chunksize(oldp);
3771 mchunkptr next = chunk_plus_offset(oldp, oldsize);
3775 /* Try to either shrink or extend into top. Else malloc-copy-free */
3777 if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3778 ok_next(oldp, next) && ok_pinuse(next))) {
3779 size_t nb = request2size(bytes);
3780 if (is_mmapped(oldp))
3781 newp = mmap_resize(m, oldp, nb);
3782 else if (oldsize >= nb) { /* already big enough */
3783 size_t rsize = oldsize - nb;
3785 if (rsize >= MIN_CHUNK_SIZE) {
3786 mchunkptr remainder = chunk_plus_offset(newp, nb);
3787 set_inuse(m, newp, nb);
3788 set_inuse(m, remainder, rsize);
3789 extra = chunk2mem(remainder);
3792 else if (next == m->top && oldsize + m->topsize > nb) {
3793 /* Expand into top */
3794 size_t newsize = oldsize + m->topsize;
3795 size_t newtopsize = newsize - nb;
3796 mchunkptr newtop = chunk_plus_offset(oldp, nb);
3797 set_inuse(m, oldp, nb);
3798 newtop->head = newtopsize | PINUSE_BIT;
3800 m->topsize = newtopsize;
3805 USAGE_ERROR_ACTION(m, oldmem);
3814 internal_free(m, extra);
3816 check_inuse_chunk(m, newp);
3817 return chunk2mem(newp);
3820 void* newmem = internal_malloc(m, bytes);
3822 size_t oc = oldsize - overhead_for(oldp);
3823 memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3824 internal_free(m, oldmem);
3834 /* --------------------------- memalign support -------------------------- */
3836 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3837 if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */
3838 return internal_malloc(m, bytes);
3839 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3840 alignment = MIN_CHUNK_SIZE;
3841 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3842 size_t a = MALLOC_ALIGNMENT << 1;
3843 while (a < alignment) a <<= 1;
3847 if (bytes >= MAX_REQUEST - alignment) {
3848 if (m != 0) { /* Test isn't needed but avoids compiler warning */
3849 MALLOC_FAILURE_ACTION;
3853 size_t nb = request2size(bytes);
3854 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3855 char* mem = (char*)internal_malloc(m, req);
3859 mchunkptr p = mem2chunk(mem);
3861 if (PREACTION(m)) return 0;
3862 if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3864 Find an aligned spot inside chunk. Since we need to give
3865 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3866 the first calculation places us at a spot with less than
3867 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3868 We've allocated enough total room so that this is always possible.
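For example, with alignment == 64, if the first 64-aligned spot lies only
8 bytes past the start of the malloc'ed chunk, that 8-byte leader is too
small to hand back as a chunk (MIN_CHUNK_SIZE is at least 16 bytes), so
the code moves to the next aligned spot 64 bytes further on; the extra
alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD requested above guarantees
there is still room for nb bytes after doing so.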
3871 char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3875 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3877 mchunkptr newp = (mchunkptr)pos;
3878 size_t leadsize = pos - (char*)(p);
3879 size_t newsize = chunksize(p) - leadsize;
3881 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3882 newp->prev_foot = p->prev_foot + leadsize;
3883 newp->head = (newsize|CINUSE_BIT);
3885 else { /* Otherwise, give back leader, use the rest */
3886 set_inuse(m, newp, newsize);
3887 set_inuse(m, p, leadsize);
3888 leader = chunk2mem(p);
3893 /* Give back spare room at the end */
3894 if (!is_mmapped(p)) {
3895 size_t size = chunksize(p);
3896 if (size > nb + MIN_CHUNK_SIZE) {
3897 size_t remainder_size = size - nb;
3898 mchunkptr remainder = chunk_plus_offset(p, nb);
3899 set_inuse(m, p, nb);
3900 set_inuse(m, remainder, remainder_size);
3901 trailer = chunk2mem(remainder);
3905 assert (chunksize(p) >= nb);
3906 assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3907 check_inuse_chunk(m, p);
3910 internal_free(m, leader);
3913 internal_free(m, trailer);
3915 return chunk2mem(p);
3923 /* ------------------------ comalloc/coalloc support --------------------- */
3925 static void** ialloc(mstate m,
3931 This provides common support for independent_X routines, handling
3932 all of the combinations that can result.
3935 bit 0 set if all elements are same size (using sizes[0])
3936 bit 1 set if elements should be zeroed
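For reference, the two public entry points map onto these bits as
follows: independent_calloc calls ialloc with opts == 3 (all elements
the same size, zeroed), while independent_comalloc calls it with
opts == 0 (per-element sizes, not zeroed); see the dl/mspace wrappers
further below.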
3939 size_t element_size; /* chunksize of each element, if all same */
3940 size_t contents_size; /* total size of elements */
3941 size_t array_size; /* request size of pointer array */
3942 void* mem; /* malloced aggregate space */
3943 mchunkptr p; /* corresponding chunk */
3944 size_t remainder_size; /* remaining bytes while splitting */
3945 void** marray; /* either "chunks" or malloced ptr array */
3946 mchunkptr array_chunk; /* chunk for malloced ptr array */
3947 flag_t was_enabled; /* to disable mmap */
3951 /* compute array length, if needed */
3953 if (n_elements == 0)
3954 return chunks; /* nothing to do */
3959 /* if empty req, must still return chunk representing empty array */
3960 if (n_elements == 0)
3961 return (void**)internal_malloc(m, 0);
3963 array_size = request2size(n_elements * (sizeof(void*)));
3966 /* compute total element size */
3967 if (opts & 0x1) { /* all-same-size */
3968 element_size = request2size(*sizes);
3969 contents_size = n_elements * element_size;
3971 else { /* add up all the sizes */
3974 for (i = 0; i != n_elements; ++i)
3975 contents_size += request2size(sizes[i]);
3978 size = contents_size + array_size;
3981 Allocate the aggregate chunk. First disable direct-mmapping so
3982 malloc won't use it, since we would not be able to later
3983 free/realloc space internal to a segregated mmap region.
3985 was_enabled = use_mmap(m);
3987 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
3993 if (PREACTION(m)) return 0;
3995 remainder_size = chunksize(p);
3997 assert(!is_mmapped(p));
3999 if (opts & 0x2) { /* optionally clear the elements */
4000 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
4003 /* If not provided, allocate the pointer array as final part of chunk */
4005 size_t array_chunk_size;
4006 array_chunk = chunk_plus_offset(p, contents_size);
4007 array_chunk_size = remainder_size - contents_size;
4008 marray = (void**) (chunk2mem(array_chunk));
4009 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4010 remainder_size = contents_size;
4013 /* split out elements */
4014 for (i = 0; ; ++i) {
4015 marray[i] = chunk2mem(p);
4016 if (i != n_elements-1) {
4017 if (element_size != 0)
4018 size = element_size;
4020 size = request2size(sizes[i]);
4021 remainder_size -= size;
4022 set_size_and_pinuse_of_inuse_chunk(m, p, size);
4023 p = chunk_plus_offset(p, size);
4025 else { /* the final element absorbs any overallocation slop */
4026 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4032 if (marray != chunks) {
4033 /* final element must have exactly exhausted chunk */
4034 if (element_size != 0) {
4035 assert(remainder_size == element_size);
4038 assert(remainder_size == request2size(sizes[i]));
4040 check_inuse_chunk(m, mem2chunk(marray));
4042 for (i = 0; i != n_elements; ++i)
4043 check_inuse_chunk(m, mem2chunk(marray[i]));
4053 /* -------------------------- public routines ---------------------------- */
4057 void* dlmalloc(size_t bytes) {
4060 If a small request (< 256 bytes minus per-chunk overhead):
4061 1. If one exists, use a remainderless chunk in associated smallbin.
4062 (Remainderless means that there are too few excess bytes to
4063 represent as a chunk.)
4064 2. If it is big enough, use the dv chunk, which is normally the
4065 chunk adjacent to the one used for the most recent small request.
4066 3. If one exists, split the smallest available chunk in a bin,
4067 saving remainder in dv.
4068 4. If it is big enough, use the top chunk.
4069 5. If available, get memory from system and use it
4070 Otherwise, for a large request:
4071 1. Find the smallest available binned chunk that fits, and use it
4072 if it is better fitting than dv chunk, splitting if necessary.
4073 2. If better fitting than any binned chunk, use the dv chunk.
4074 3. If it is big enough, use the top chunk.
4075 4. If request size >= mmap threshold, try to directly mmap this chunk.
4076 5. If available, get memory from system and use it
4078 The ugly goto's here ensure that postaction occurs along all paths.
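As a worked example (assuming 4-byte size_t and FOOTERS undefined, so
per-chunk overhead is one word): a request for 20 bytes pads to
nb == 24 (20 + 4, rounded up to an 8-byte multiple), maps to smallbin
index 3 (24 >> 3), and is satisfied remainderlessly by step 1 whenever
that bin or the next one is nonempty.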
4081 if (!PREACTION(gm)) {
4084 if (bytes <= MAX_SMALL_REQUEST) {
4087 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4088 idx = small_index(nb);
4089 smallbits = gm->smallmap >> idx;
4091 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4093 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4094 b = smallbin_at(gm, idx);
4096 assert(chunksize(p) == small_index2size(idx));
4097 unlink_first_small_chunk(gm, b, p, idx);
4098 set_inuse_and_pinuse(gm, p, small_index2size(idx));
4100 check_malloced_chunk(gm, mem, nb);
4104 else if (nb > gm->dvsize) {
4105 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4109 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4110 binmap_t leastbit = least_bit(leftbits);
4111 compute_bit2idx(leastbit, i);
4112 b = smallbin_at(gm, i);
4114 assert(chunksize(p) == small_index2size(i));
4115 unlink_first_small_chunk(gm, b, p, i);
4116 rsize = small_index2size(i) - nb;
4117 /* Fit here cannot be remainderless if 4byte sizes */
4118 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4119 set_inuse_and_pinuse(gm, p, small_index2size(i));
4121 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4122 r = chunk_plus_offset(p, nb);
4123 set_size_and_pinuse_of_free_chunk(r, rsize);
4124 replace_dv(gm, r, rsize);
4127 check_malloced_chunk(gm, mem, nb);
4131 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4132 check_malloced_chunk(gm, mem, nb);
4137 else if (bytes >= MAX_REQUEST)
4138 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4140 nb = pad_request(bytes);
4141 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4142 check_malloced_chunk(gm, mem, nb);
4147 if (nb <= gm->dvsize) {
4148 size_t rsize = gm->dvsize - nb;
4149 mchunkptr p = gm->dv;
4150 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4151 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4153 set_size_and_pinuse_of_free_chunk(r, rsize);
4154 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4156 else { /* exhaust dv */
4157 size_t dvs = gm->dvsize;
4160 set_inuse_and_pinuse(gm, p, dvs);
4163 check_malloced_chunk(gm, mem, nb);
4167 else if (nb < gm->topsize) { /* Split top */
4168 size_t rsize = gm->topsize -= nb;
4169 mchunkptr p = gm->top;
4170 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4171 r->head = rsize | PINUSE_BIT;
4172 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4174 check_top_chunk(gm, gm->top);
4175 check_malloced_chunk(gm, mem, nb);
4179 mem = sys_alloc(gm, nb);
4189 void dlfree(void* mem) {
4191 Consolidate freed chunks with preceding or succeeding bordering
4192 free chunks, if they exist, and then place in a bin. Intermixed
4193 with special cases for top, dv, mmapped chunks, and usage errors.
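In outline: if the previous chunk is free, the freed chunk is first
merged backward (or, for an individually mmapped chunk, the whole
mapping is returned to the system via munmap); if the following chunk
is free it is merged forward, folding into top or dv when the neighbor
is one of those; any other result is placed into a small or tree bin
via insert_chunk.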
4197 mchunkptr p = mem2chunk(mem);
4199 mstate fm = get_mstate_for(p);
4200 if (!ok_magic(fm)) {
4201 USAGE_ERROR_ACTION(fm, p);
4206 #endif /* FOOTERS */
4207 if (!PREACTION(fm)) {
4208 check_inuse_chunk(fm, p);
4209 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4210 size_t psize = chunksize(p);
4211 mchunkptr next = chunk_plus_offset(p, psize);
4213 size_t prevsize = p->prev_foot;
4214 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4215 prevsize &= ~IS_MMAPPED_BIT;
4216 psize += prevsize + MMAP_FOOT_PAD;
4217 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4218 fm->footprint -= psize;
4222 mchunkptr prev = chunk_minus_offset(p, prevsize);
4225 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4227 unlink_chunk(fm, p, prevsize);
4229 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4231 set_free_with_pinuse(p, psize, next);
4240 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4241 if (!cinuse(next)) { /* consolidate forward */
4242 if (next == fm->top) {
4243 size_t tsize = fm->topsize += psize;
4245 p->head = tsize | PINUSE_BIT;
4250 if (should_trim(fm, tsize))
4254 else if (next == fm->dv) {
4255 size_t dsize = fm->dvsize += psize;
4257 set_size_and_pinuse_of_free_chunk(p, dsize);
4261 size_t nsize = chunksize(next);
4263 unlink_chunk(fm, next, nsize);
4264 set_size_and_pinuse_of_free_chunk(p, psize);
4272 set_free_with_pinuse(p, psize, next);
4273 insert_chunk(fm, p, psize);
4274 check_free_chunk(fm, p);
4279 USAGE_ERROR_ACTION(fm, p);
4286 #endif /* FOOTERS */
4291 void* dlcalloc(size_t n_elements, size_t elem_size) {
4294 if (n_elements != 0) {
4295 req = n_elements * elem_size;
4296 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4297 (req / n_elements != elem_size))
4298 req = MAX_SIZE_T; /* force downstream failure on overflow */
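/* The (n_elements | elem_size) test cheaply skips the division when both
   factors fit in 16 bits: their product then fits in 32 bits, which cannot
   overflow size_t (4 or 8 bytes here), so only larger operands pay for the
   divide-and-compare overflow check. */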
4300 mem = dlmalloc(req);
4301 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4302 memset(mem, 0, req);
4306 void* dlrealloc(void* oldmem, size_t bytes) {
4308 return dlmalloc(bytes);
4309 #ifdef REALLOC_ZERO_BYTES_FREES
4314 #endif /* REALLOC_ZERO_BYTES_FREES */
4319 mstate m = get_mstate_for(mem2chunk(oldmem));
4321 USAGE_ERROR_ACTION(m, oldmem);
4324 #endif /* FOOTERS */
4325 return internal_realloc(m, oldmem, bytes);
4331 void* dlmemalign(size_t alignment, size_t bytes) {
4332 return internal_memalign(gm, alignment, bytes);
4337 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4339 size_t sz = elem_size; /* serves as 1-element array */
4340 return ialloc(gm, n_elements, &sz, 3, chunks);
4343 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
4345 return ialloc(gm, n_elements, sizes, 0, chunks);
4348 void* dlvalloc(size_t bytes) {
4351 pagesz = mparams.page_size;
4352 return dlmemalign(pagesz, bytes);
4355 void* dlpvalloc(size_t bytes) {
4358 pagesz = mparams.page_size;
4359 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
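/* For example, with a 4096-byte page size, dlpvalloc(5000) rounds the
   request up to 8192 ((5000 + 4095) & ~4095) before calling dlmemalign,
   so the result is both page-aligned and a whole number of pages long. */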
4362 int dlmalloc_trim(size_t pad) {
4364 if (!PREACTION(gm)) {
4365 result = sys_trim(gm, pad);
4371 size_t dlmalloc_footprint(void) {
4372 return gm->footprint;
4375 size_t dlmalloc_max_footprint(void) {
4376 return gm->max_footprint;
4380 struct mallinfo dlmallinfo(void) {
4381 return internal_mallinfo(gm);
4383 #endif /* NO_MALLINFO */
4385 void dlmalloc_stats() {
4386 internal_malloc_stats(gm);
4389 size_t dlmalloc_usable_size(void* mem) {
4391 mchunkptr p = mem2chunk(mem);
4393 return chunksize(p) - overhead_for(p);
4398 int dlmallopt(int param_number, int value) {
4399 return change_mparam(param_number, value);
4404 #endif /* !ONLY_MSPACES */
4406 /* ----------------------------- user mspaces ---------------------------- */
4410 static mstate init_user_mstate(char* tbase, size_t tsize) {
4411 size_t msize = pad_request(sizeof(struct malloc_state));
4413 mchunkptr msp = align_as_chunk(tbase);
4414 mstate m = (mstate)(chunk2mem(msp));
4415 memset(m, 0, msize);
4416 INITIAL_LOCK(&m->mutex);
4417 msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
4418 m->seg.base = m->least_addr = tbase;
4419 m->seg.size = m->footprint = m->max_footprint = tsize;
4420 m->magic = mparams.magic;
4421 m->mflags = mparams.default_mflags;
4422 disable_contiguous(m);
4424 mn = next_chunk(mem2chunk(m));
4425 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
4426 check_top_chunk(m, m->top);
4430 mspace create_mspace(size_t capacity, int locked) {
4432 size_t msize = pad_request(sizeof(struct malloc_state));
4433 init_mparams(); /* Ensure pagesize etc initialized */
4435 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4436 size_t rs = ((capacity == 0)? mparams.granularity :
4437 (capacity + TOP_FOOT_SIZE + msize));
4438 size_t tsize = granularity_align(rs);
4439 char* tbase = (char*)(CALL_MMAP(tsize));
4440 if (tbase != CMFAIL) {
4441 m = init_user_mstate(tbase, tsize);
4442 m->seg.sflags = IS_MMAPPED_BIT;
4443 set_lock(m, locked);
4449 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
4451 size_t msize = pad_request(sizeof(struct malloc_state));
4452 init_mparams(); /* Ensure pagesize etc initialized */
4454 if (capacity > msize + TOP_FOOT_SIZE &&
4455 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4456 m = init_user_mstate((char*)base, capacity);
4457 m->seg.sflags = EXTERN_BIT;
4458 set_lock(m, locked);
4463 size_t destroy_mspace(mspace msp) {
4465 mstate ms = (mstate)msp;
4467 msegmentptr sp = &ms->seg;
4469 char* base = sp->base;
4470 size_t size = sp->size;
4471 flag_t flag = sp->sflags;
4473 if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
4474 CALL_MUNMAP(base, size) == 0)
4479 USAGE_ERROR_ACTION(ms,ms);
4485 mspace versions of routines are near-clones of the global
4486 versions. This is not so nice but better than the alternatives.
4489 void* mspace_malloc(mspace msp, size_t bytes) {
4490 mstate ms = (mstate)msp;
4491 if (!ok_magic(ms)) {
4492 USAGE_ERROR_ACTION(ms,ms);
4495 if (!PREACTION(ms)) {
4498 if (bytes <= MAX_SMALL_REQUEST) {
4501 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4502 idx = small_index(nb);
4503 smallbits = ms->smallmap >> idx;
4505 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4507 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4508 b = smallbin_at(ms, idx);
4510 assert(chunksize(p) == small_index2size(idx));
4511 unlink_first_small_chunk(ms, b, p, idx);
4512 set_inuse_and_pinuse(ms, p, small_index2size(idx));
4514 check_malloced_chunk(ms, mem, nb);
4518 else if (nb > ms->dvsize) {
4519 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4523 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4524 binmap_t leastbit = least_bit(leftbits);
4525 compute_bit2idx(leastbit, i);
4526 b = smallbin_at(ms, i);
4528 assert(chunksize(p) == small_index2size(i));
4529 unlink_first_small_chunk(ms, b, p, i);
4530 rsize = small_index2size(i) - nb;
4531 /* Fit here cannot be remainderless if 4byte sizes */
4532 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4533 set_inuse_and_pinuse(ms, p, small_index2size(i));
4535 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4536 r = chunk_plus_offset(p, nb);
4537 set_size_and_pinuse_of_free_chunk(r, rsize);
4538 replace_dv(ms, r, rsize);
4541 check_malloced_chunk(ms, mem, nb);
4545 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
4546 check_malloced_chunk(ms, mem, nb);
4551 else if (bytes >= MAX_REQUEST)
4552 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4554 nb = pad_request(bytes);
4555 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
4556 check_malloced_chunk(ms, mem, nb);
4561 if (nb <= ms->dvsize) {
4562 size_t rsize = ms->dvsize - nb;
4563 mchunkptr p = ms->dv;
4564 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4565 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
4567 set_size_and_pinuse_of_free_chunk(r, rsize);
4568 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4570 else { /* exhaust dv */
4571 size_t dvs = ms->dvsize;
4574 set_inuse_and_pinuse(ms, p, dvs);
4577 check_malloced_chunk(ms, mem, nb);
4581 else if (nb < ms->topsize) { /* Split top */
4582 size_t rsize = ms->topsize -= nb;
4583 mchunkptr p = ms->top;
4584 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
4585 r->head = rsize | PINUSE_BIT;
4586 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4588 check_top_chunk(ms, ms->top);
4589 check_malloced_chunk(ms, mem, nb);
4593 mem = sys_alloc(ms, nb);
4603 void mspace_free(mspace msp, void* mem) {
4605 mchunkptr p = mem2chunk(mem);
4607 mstate fm = get_mstate_for(p);
4609 mstate fm = (mstate)msp;
4610 #endif /* FOOTERS */
4611 if (!ok_magic(fm)) {
4612 USAGE_ERROR_ACTION(fm, p);
4615 if (!PREACTION(fm)) {
4616 check_inuse_chunk(fm, p);
4617 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4618 size_t psize = chunksize(p);
4619 mchunkptr next = chunk_plus_offset(p, psize);
4621 size_t prevsize = p->prev_foot;
4622 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4623 prevsize &= ~IS_MMAPPED_BIT;
4624 psize += prevsize + MMAP_FOOT_PAD;
4625 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4626 fm->footprint -= psize;
4630 mchunkptr prev = chunk_minus_offset(p, prevsize);
4633 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4635 unlink_chunk(fm, p, prevsize);
4637 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4639 set_free_with_pinuse(p, psize, next);
4648 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4649 if (!cinuse(next)) { /* consolidate forward */
4650 if (next == fm->top) {
4651 size_t tsize = fm->topsize += psize;
4653 p->head = tsize | PINUSE_BIT;
4658 if (should_trim(fm, tsize))
4662 else if (next == fm->dv) {
4663 size_t dsize = fm->dvsize += psize;
4665 set_size_and_pinuse_of_free_chunk(p, dsize);
4669 size_t nsize = chunksize(next);
4671 unlink_chunk(fm, next, nsize);
4672 set_size_and_pinuse_of_free_chunk(p, psize);
4680 set_free_with_pinuse(p, psize, next);
4681 insert_chunk(fm, p, psize);
4682 check_free_chunk(fm, p);
4687 USAGE_ERROR_ACTION(fm, p);
4694 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4697 mstate ms = (mstate)msp;
4698 if (!ok_magic(ms)) {
4699 USAGE_ERROR_ACTION(ms,ms);
4702 if (n_elements != 0) {
4703 req = n_elements * elem_size;
4704 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4705 (req / n_elements != elem_size))
4706 req = MAX_SIZE_T; /* force downstream failure on overflow */
4708 mem = internal_malloc(ms, req);
4709 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4710 memset(mem, 0, req);
4714 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4716 return mspace_malloc(msp, bytes);
4717 #ifdef REALLOC_ZERO_BYTES_FREES
4719 mspace_free(msp, oldmem);
4722 #endif /* REALLOC_ZERO_BYTES_FREES */
4725 mchunkptr p = mem2chunk(oldmem);
4726 mstate ms = get_mstate_for(p);
4728 mstate ms = (mstate)msp;
4729 #endif /* FOOTERS */
4730 if (!ok_magic(ms)) {
4731 USAGE_ERROR_ACTION(ms,ms);
4734 return internal_realloc(ms, oldmem, bytes);
4738 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
4739 mstate ms = (mstate)msp;
4740 if (!ok_magic(ms)) {
4741 USAGE_ERROR_ACTION(ms,ms);
4744 return internal_memalign(ms, alignment, bytes);
4747 void** mspace_independent_calloc(mspace msp, size_t n_elements,
4748 size_t elem_size, void* chunks[]) {
4749 size_t sz = elem_size; /* serves as 1-element array */
4750 mstate ms = (mstate)msp;
4751 if (!ok_magic(ms)) {
4752 USAGE_ERROR_ACTION(ms,ms);
4755 return ialloc(ms, n_elements, &sz, 3, chunks);
4758 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
4759 size_t sizes[], void* chunks[]) {
4760 mstate ms = (mstate)msp;
4761 if (!ok_magic(ms)) {
4762 USAGE_ERROR_ACTION(ms,ms);
4765 return ialloc(ms, n_elements, sizes, 0, chunks);
4768 int mspace_trim(mspace msp, size_t pad) {
4770 mstate ms = (mstate)msp;
4772 if (!PREACTION(ms)) {
4773 result = sys_trim(ms, pad);
4778 USAGE_ERROR_ACTION(ms,ms);
4783 void mspace_malloc_stats(mspace msp) {
4784 mstate ms = (mstate)msp;
4786 internal_malloc_stats(ms);
4789 USAGE_ERROR_ACTION(ms,ms);
4793 size_t mspace_footprint(mspace msp) {
4795 mstate ms = (mstate)msp;
4797 result = ms->footprint;
4799 USAGE_ERROR_ACTION(ms,ms);
4804 size_t mspace_max_footprint(mspace msp) {
4806 mstate ms = (mstate)msp;
4808 result = ms->max_footprint;
4810 USAGE_ERROR_ACTION(ms,ms);
4816 struct mallinfo mspace_mallinfo(mspace msp) {
4817 mstate ms = (mstate)msp;
4818 if (!ok_magic(ms)) {
4819 USAGE_ERROR_ACTION(ms,ms);
4821 return internal_mallinfo(ms);
4823 #endif /* NO_MALLINFO */
4825 int mspace_mallopt(int param_number, int value) {
4826 return change_mparam(param_number, value);
4829 #endif /* MSPACES */
4831 /* -------------------- Alternative MORECORE functions ------------------- */
4834 Guidelines for creating a custom version of MORECORE:
4836 * For best performance, MORECORE should allocate in multiples of pagesize.
4837 * MORECORE may allocate more memory than requested. (Or even less,
4838 but this will usually result in a malloc failure.)
4839 * MORECORE must not allocate memory when given argument zero, but
4840 instead return one past the end address of memory from previous
4842 * For best performance, consecutive calls to MORECORE with positive
4843 arguments should return increasing addresses, indicating that
4844 space has been contiguously extended.
4845 * Even though consecutive calls to MORECORE need not return contiguous
4846 addresses, it must be OK for malloc'ed chunks to span multiple
4847 regions in those cases where they do happen to be contiguous.
4848 * MORECORE need not handle negative arguments -- it may instead
4849 just return MFAIL when given them.
4850 Negative arguments are always multiples of pagesize. MORECORE
4851 must not misinterpret negative args as large positive unsigned
4852 args. You can suppress all such calls from even occurring by defining
4853 MORECORE_CANNOT_TRIM.
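A minimal sketch of a custom MORECORE built over a fixed static arena,
following the guidelines above (hypothetical code, not part of this file;
the arena size and the names static_morecore/ARENA_SIZE are illustrative
only):

  #define ARENA_SIZE (1024 * 1024)
  static char arena[ARENA_SIZE];
  static size_t arena_used = 0;

  void* static_morecore(int increment)
  {
    if (increment == 0)
      return arena + arena_used;      // one past the end of previously returned memory
    if (increment < 0)
      return (void *) MFAIL;          // no trimming; also define MORECORE_CANNOT_TRIM
    if ((size_t) increment > ARENA_SIZE - arena_used)
      return (void *) MFAIL;          // arena exhausted
    {
      char* prev_end = arena + arena_used;
      arena_used += (size_t) increment;
      return prev_end;                // contiguous, increasing addresses
    }
  }

  #define MORECORE static_morecore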
4855 As an example alternative MORECORE, here is a custom allocator
4856 kindly contributed for pre-OSX macOS. It uses virtually but not
4857 necessarily physically contiguous non-paged memory (locked in,
4858 present and won't get swapped out). You can use it by uncommenting
4859 this section, adding some #includes, and setting up the appropriate
4862 #define MORECORE osMoreCore
4864 There is also a shutdown routine that should somehow be called for
4865 cleanup upon program exit.
4867 #define MAX_POOL_ENTRIES 100
4868 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
4869 static int next_os_pool;
4870 void *our_os_pools[MAX_POOL_ENTRIES];
4872 void *osMoreCore(int size)
4875 static void *sbrk_top = 0;
4879 if (size < MINIMUM_MORECORE_SIZE)
4880 size = MINIMUM_MORECORE_SIZE;
4881 if (CurrentExecutionLevel() == kTaskLevel)
4882 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4885 return (void *) MFAIL;
4887 // save ptrs so they can be freed during cleanup
4888 our_os_pools[next_os_pool] = ptr;
4890 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4891 sbrk_top = (char *) ptr + size;
4896 // we don't currently support shrink behavior
4897 return (void *) MFAIL;
4905 // cleanup any allocated memory pools
4906 // called as last thing before shutting down driver
4908 void osCleanupMem(void)
4912 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4915 PoolDeallocate(*ptr);
4923 /* -----------------------------------------------------------------------
4925 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
4926 * Add max_footprint functions
4927 * Ensure all appropriate literals are size_t
4928 * Fix conditional compilation problem for some #define settings
4929 * Avoid concatenating segments with the one provided
4930 in create_mspace_with_base
4931 * Rename some variables to avoid compiler shadowing warnings
4932 * Use explicit lock initialization.
4933 * Better handling of sbrk interference.
4934 * Simplify and fix segment insertion, trimming and mspace_destroy
4935 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
4936 * Thanks especially to Dennis Flanagan for help on these.
4938 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
4939 * Fix memalign brace error.
4941 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
4942 * Fix improper #endif nesting in C++
4943 * Add explicit casts needed for C++
4945 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
4946 * Use trees for large bins
4948 * Use segments to unify sbrk-based and mmap-based system allocation,
4949 removing need for emulation on most platforms without sbrk.
4950 * Default safety checks
4951 * Optional footer checks. Thanks to William Robertson for the idea.
4952 * Internal code refactoring
4953 * Incorporate suggestions and platform-specific changes.
4954 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
4955 Aaron Bachmann, Emery Berger, and others.
4956 * Speed up non-fastbin processing enough to remove fastbins.
4957 * Remove useless cfree() to avoid conflicts with other apps.
4958 * Remove internal memcpy, memset. Compilers handle builtins better.
4959 * Remove some options that no one ever used and rename others.
4961 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
4962 * Fix malloc_state bitmap array misdeclaration
4964 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
4965 * Allow tuning of FIRST_SORTED_BIN_SIZE
4966 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
4967 * Better detection and support for non-contiguousness of MORECORE.
4968 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
4969 * Bypass most of malloc if no frees. Thanks To Emery Berger.
4970 * Fix freeing of old top non-contiguous chunk in sysmalloc.
4971 * Raised default trim and map thresholds to 256K.
4972 * Fix mmap-related #defines. Thanks to Lubos Lunak.
4973 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
4974 * Branch-free bin calculation
4975 * Default trim and mmap thresholds now 256K.
4977 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
4978 * Introduce independent_comalloc and independent_calloc.
4979 Thanks to Michael Pachos for motivation and help.
4980 * Make optional .h file available
4981 * Allow > 2GB requests on 32bit systems.
4982 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
4983 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
4985 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
4987 * memalign: check alignment arg
4988 * realloc: don't try to shift chunks backwards, since this
4989 leads to more fragmentation in some programs and doesn't
4990 seem to help in any others.
4991 * Collect all cases in malloc requiring system memory into sysmalloc
4992 * Use mmap as backup to sbrk
4993 * Place all internal state in malloc_state
4994 * Introduce fastbins (although similar to 2.5.1)
4995 * Many minor tunings and cosmetic improvements
4996 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
4997 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
4998 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
4999 * Include errno.h to support default failure action.
5001 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
5002 * return null for negative arguments
5003 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5004 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5005 (e.g. WIN32 platforms)
5006 * Cleanup header file inclusion for WIN32 platforms
5007 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5008 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5009 memory allocation routines
5010 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5011 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5012 usage of 'assert' in non-WIN32 code
5013 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5015 * Always call 'fREe()' rather than 'free()'
5017 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
5018 * Fixed ordering problem with boundary-stamping
5020 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
5021 * Added pvalloc, as recommended by H.J. Liu
5022 * Added 64bit pointer support mainly from Wolfram Gloger
5023 * Added anonymously donated WIN32 sbrk emulation
5024 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5025 * malloc_extend_top: fix mask error that caused wastage after
5027 * Add linux mremap support code from HJ Liu
5029 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
5030 * Integrated most documentation with the code.
5031 * Add support for mmap, with help from
5032 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5033 * Use last_remainder in more cases.
5034 * Pack bins using idea from colin@nyx10.cs.du.edu
5035 * Use ordered bins instead of best-fit threshold
5036 * Eliminate block-local decls to simplify tracing and debugging.
5037 * Support another case of realloc via move into top
5038 * Fix error occurring when initial sbrk_base not word-aligned.
5039 * Rely on page size for units instead of SBRK_UNIT to
5040 avoid surprises about sbrk alignment conventions.
5041 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5042 (raymond@es.ele.tue.nl) for the suggestion.
5043 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5044 * More precautions for cases where other routines call sbrk,
5045 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5046 * Added macros etc., allowing use in linux libc from
5047 H.J. Lu (hjl@gnu.ai.mit.edu)
5048 * Inverted this history list
5050 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
5051 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5052 * Removed all preallocation code since under current scheme
5053 the work required to undo bad preallocations exceeds
5054 the work saved in good cases for most test programs.
5055 * No longer use return list or unconsolidated bins since
5056 no scheme using them consistently outperforms those that don't
5057 given above changes.
5058 * Use best fit for very large chunks to prevent some worst-cases.
5059 * Added some support for debugging
5061 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
5062 * Removed footers when chunks are in use. Thanks to
5063 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5065 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
5066 * Added malloc_trim, with help from Wolfram Gloger
5067 (wmglo@Dent.MED.Uni-Muenchen.DE).
5069 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
5071 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
5072 * realloc: try to expand in both directions
5073 * malloc: swap order of clean-bin strategy;
5074 * realloc: only conditionally expand backwards
5075 * Try not to scavenge used bins
5076 * Use bin counts as a guide to preallocation
5077 * Occasionally bin return list chunks in first scan
5078 * Add a few optimizations from colin@nyx10.cs.du.edu
5080 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
5081 * faster bin computation & slightly different binning
5082 * merged all consolidations to one part of malloc proper
5083 (eliminating old malloc_find_space & malloc_clean_bin)
5084 * Scan 2 returns chunks (not just 1)
5085 * Propagate failure in realloc if malloc returns 0
5086 * Add stuff to allow compilation on non-ANSI compilers
5087 from kpv@research.att.com
5089 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
5090 * removed potential for odd address access in prev_chunk
5091 * removed dependency on getpagesize.h
5092 * misc cosmetics and a bit more internal documentation
5093 * anticosmetics: mangled names in macros to evade debugger strangeness
5094 * tested on sparc, hp-700, dec-mips, rs6000
5095 with gcc & native cc (hp, dec only) allowing
5096 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5098 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
5099 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5100 structure of old version, but most details differ.)