2 This is a version (aka dlmalloc) of malloc/free/realloc written by
3 Doug Lea and released to the public domain, as explained at
4 http://creativecommons.org/licenses/publicdomain. Send questions,
5 comments, complaints, performance data, etc to dl@cs.oswego.edu
7 * Version 2.8.3 Thu Sep 22 11:16:15 2005 Doug Lea (dl at gee)
9 Note: There may be an updated version of this malloc obtainable at
10 ftp://gee.cs.oswego.edu/pub/misc/malloc.c
11 Check before installing!
15 * Modifications made to the original version for mono:
16 * - added PROT_EXEC to MMAP_PROT
17 * - added PAGE_EXECUTE_READWRITE to the win32mmap and win32direct_mmap
18 * - a large portion of functions is #ifdef'ed out to make the native code smaller
22 #define USE_DL_PREFIX 1
24 /* Use mmap for allocating memory */
25 #define HAVE_MORECORE 0
27 #include <mono/utils/dlmalloc.h>
32 This library is all in one file to simplify the most common usage:
33 ftp it, compile it (-O3), and link it into another program. All of
34 the compile-time options default to reasonable values for use on
35 most platforms. You might later want to step through various
36 compile-time and dynamic tuning options.
38 For convenience, an include file for code using this malloc is at:
39 ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
40 You don't really need this .h file unless you call functions not
41 defined in your system include files. The .h file contains only the
42 excerpts from this file needed for using this malloc on ANSI C/C++
43 systems, so long as you haven't changed compile-time options about
44 naming and tuning parameters. If you do, then you can create your
45 own malloc.h that does include all settings by cutting at the point
46 indicated below. Note that you may already by default be using a C
47 library containing a malloc that is based on some version of this
48 malloc (for example in linux). You might still want to use the one
49 in this file to customize settings or to avoid overheads associated
50 with library versions.
54 Supported pointer/size_t representation: 4 or 8 bytes
55 size_t MUST be an unsigned type of the same width as
56 pointers. (If you are using an ancient system that declares
57 size_t as a signed type, or need it to be a different width
58 than pointers, you can use a previous release of this malloc
59 (e.g. 2.7.2) supporting these.)
61 Alignment: 8 bytes (default)
62 This suffices for nearly all current machines and C compilers.
63 However, you can define MALLOC_ALIGNMENT to be wider than this
if necessary (up to 128 bytes), at the expense of using more space.
66 Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
67 8 or 16 bytes (if 8byte sizes)
68 Each malloced chunk has a hidden word of overhead holding size
69 and status information, and additional cross-check word
70 if FOOTERS is defined.
72 Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead)
73 8-byte ptrs: 32 bytes (including overhead)
75 Even a request for zero bytes (i.e., malloc(0)) returns a
76 pointer to something of the minimum allocatable size.
77 The maximum overhead wastage (i.e., number of extra bytes
allocated beyond those requested in malloc) is less than or equal
79 to the minimum size, except for requests >= mmap_threshold that
80 are serviced via mmap(), where the worst case wastage is about
81 32 bytes plus the remainder from a system page (the minimal
82 mmap unit); typically 4096 or 8192 bytes.
84 Security: static-safe; optionally more or less
85 The "security" of malloc refers to the ability of malicious
86 code to accentuate the effects of errors (for example, freeing
87 space that is not currently malloc'ed or overwriting past the
88 ends of chunks) in code that calls malloc. This malloc
89 guarantees not to modify any memory locations below the base of
90 heap, i.e., static variables, even in the presence of usage
91 errors. The routines additionally detect most improper frees
92 and reallocs. All this holds as long as the static bookkeeping
93 for malloc itself is not corrupted by some other means. This
94 is only one aspect of security -- these checks do not, and
95 cannot, detect all possible programming errors.
97 If FOOTERS is defined nonzero, then each allocated chunk
98 carries an additional check word to verify that it was malloced
99 from its space. These check words are the same within each
100 execution of a program using malloc, but differ across
101 executions, so externally crafted fake chunks cannot be
102 freed. This improves security by rejecting frees/reallocs that
103 could corrupt heap memory, in addition to the checks preventing
104 writes to statics that are always on. This may further improve
105 security at the expense of time and space overhead. (Note that
106 FOOTERS may also be worth using with MSPACES.)
108 By default detected errors cause the program to abort (calling
109 "abort()"). You can override this to instead proceed past
110 errors by defining PROCEED_ON_ERROR. In this case, a bad free
111 has no effect, and a malloc that encounters a bad address
112 caused by user overwrites will ignore the bad address by
113 dropping pointers and indices to all known memory. This may
114 be appropriate for programs that should continue if at all
115 possible in the face of programming errors, although they may
116 run out of memory because dropped memory is never reclaimed.
118 If you don't like either of these options, you can define
119 CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
else. And if you are sure that your program using malloc has
121 no errors or vulnerabilities, you can define INSECURE to 1,
122 which might (or might not) provide a small performance improvement.
124 Thread-safety: NOT thread-safe unless USE_LOCKS defined
125 When USE_LOCKS is defined, each public call to malloc, free,
126 etc is surrounded with either a pthread mutex or a win32
127 spinlock (depending on WIN32). This is not especially fast, and
128 can be a major bottleneck. It is designed only to provide
129 minimal protection in concurrent environments, and to provide a
130 basis for extensions. If you are using malloc in a concurrent
131 program, consider instead using ptmalloc, which is derived from
132 a version of this malloc. (See http://www.malloc.de).
134 System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
135 This malloc can use unix sbrk or any emulation (invoked using
136 the CALL_MORECORE macro) and/or mmap/munmap or any emulation
137 (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
138 memory. On most unix systems, it tends to work best if both
139 MORECORE and MMAP are enabled. On Win32, it uses emulations
based on VirtualAlloc. It also uses common C library functions like memset.
143 Compliance: I believe it is compliant with the Single Unix Specification
(See http://www.unix.org). Also SVID/XPG, ANSI C, and probably others as well.
147 * Overview of algorithms
149 This is not the fastest, most space-conserving, most portable, or
150 most tunable malloc ever written. However it is among the fastest
151 while also being among the most space-conserving, portable and
152 tunable. Consistent balance across these factors results in a good
153 general-purpose allocator for malloc-intensive programs.
155 In most ways, this malloc is a best-fit allocator. Generally, it
156 chooses the best-fitting existing chunk for a request, with ties
157 broken in approximately least-recently-used order. (This strategy
158 normally maintains low fragmentation.) However, for requests less
than 256 bytes, it deviates from best-fit when there is not an
160 exactly fitting available chunk by preferring to use space adjacent
161 to that used for the previous small request, as well as by breaking
162 ties in approximately most-recently-used order. (These enhance
163 locality of series of small allocations.) And for very large requests
164 (>= 256Kb by default), it relies on system memory mapping
165 facilities, if supported. (This helps avoid carrying around and
166 possibly fragmenting memory used only for large chunks.)
168 All operations (except malloc_stats and mallinfo) have execution
169 times that are bounded by a constant factor of the number of bits in
170 a size_t, not counting any clearing in calloc or copying in realloc,
171 or actions surrounding MORECORE and MMAP that have times
172 proportional to the number of non-contiguous regions returned by
173 system allocation routines, which is often just 1.
175 The implementation is not very modular and seriously overuses
176 macros. Perhaps someday all C compilers will do as good a job
177 inlining modular code as can now be done by brute-force expansion,
178 but now, enough of them seem not to.
180 Some compilers issue a lot of warnings about code that is
181 dead/unreachable only on some platforms, and also about intentional
uses of negation on unsigned types. All known cases of each can be ignored.
185 For a longer but out of date high-level description, see
186 http://gee.cs.oswego.edu/dl/html/malloc.html
189 If MSPACES is defined, then in addition to malloc, free, etc.,
190 this file also defines mspace_malloc, mspace_free, etc. These
191 are versions of malloc routines that take an "mspace" argument
192 obtained using create_mspace, to control all internal bookkeeping.
193 If ONLY_MSPACES is defined, only these versions are compiled.
194 So if you would like to use this allocator for only some allocations,
195 and your system malloc for others, you can compile with
196 ONLY_MSPACES and then do something like...
197 static mspace mymspace = create_mspace(0,0); // for example
198 #define mymalloc(bytes) mspace_malloc(mymspace, bytes)
200 (Note: If you only need one instance of an mspace, you can instead
201 use "USE_DL_PREFIX" to relabel the global malloc.)
203 You can similarly create thread-local allocators by storing
204 mspaces as thread-locals. For example:
205 static __thread mspace tlms = 0;
206 void* tlmalloc(size_t bytes) {
207 if (tlms == 0) tlms = create_mspace(0, 0);
208 return mspace_malloc(tlms, bytes);
210 void tlfree(void* mem) { mspace_free(tlms, mem); }
212 Unless FOOTERS is defined, each mspace is completely independent.
213 You cannot allocate from one and free to another (although
214 conformance is only weakly checked, so usage errors are not always
215 caught). If FOOTERS is defined, then each chunk carries around a tag
indicating its originating mspace, and frees are directed to their originating spaces.
219 ------------------------- Compile-time options ---------------------------
221 Be careful in setting #define values for numerical constants of type
222 size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast.
225 WIN32 default: defined if _WIN32 defined
226 Defining WIN32 sets up defaults for MS environment and compilers.
227 Otherwise defaults are for unix.
229 MALLOC_ALIGNMENT default: (size_t)8
230 Controls the minimum alignment for malloc'ed chunks. It must be a
231 power of two and at least 8, even on machines for which smaller
232 alignments would suffice. It may be defined as larger than this
233 though. Note however that code and data structures are optimized for
234 the case of 8-byte alignment.
236 MSPACES default: 0 (false)
237 If true, compile in support for independent allocation spaces.
238 This is only supported if HAVE_MMAP is true.
240 ONLY_MSPACES default: 0 (false)
241 If true, only compile in mspace versions, not regular versions.
243 USE_LOCKS default: 0 (false)
244 Causes each call to each public routine to be surrounded with
245 pthread or WIN32 mutex lock/unlock. (If set true, this can be
246 overridden on a per-mspace basis for mspace versions.)
FOOTERS default: 0 (false)
  If true, provide extra checking and dispatching by placing
250 information in the footers of allocated chunks. This adds
251 space and time overhead.
INSECURE default: 0 (false)
  If true, omit checks for usage errors and heap space overwrites.
256 USE_DL_PREFIX default: NOT defined
257 Causes compiler to prefix all public routines with the string 'dl'.
258 This can be useful when you only want to use this malloc in one part
259 of a program, using your regular system malloc elsewhere.
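  For illustration only (a minimal sketch, not text from the original
  description; the request sizes are arbitrary): with USE_DL_PREFIX defined,
  code that should use this allocator calls the 'dl'-prefixed names, while
  the rest of the program keeps using the system malloc:

    void* buf = dlmalloc(64);   // served by this allocator
    dlfree(buf);
    char* tmp = malloc(64);     // served by the system's malloc
    free(tmp);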
261 ABORT default: defined as abort()
262 Defines how to abort on failed checks. On most systems, a failed
263 check cannot die with an "assert" or even print an informative
264 message, because the underlying print routines in turn call malloc,
265 which will fail again. Generally, the best policy is to simply call
266 abort(). It's not very useful to do more than this because many
267 errors due to overwriting will show up as address faults (null, odd
268 addresses etc) rather than malloc-triggered checks, so will also
269 abort. Also, most compilers know that abort() does not return, so
270 can better optimize code conditionally calling it.
272 PROCEED_ON_ERROR default: defined as 0 (false)
Controls whether detected bad addresses cause them to be bypassed
274 rather than aborting. If set, detected bad arguments to free and
275 realloc are ignored. And all bookkeeping information is zeroed out
276 upon a detected overwrite of freed heap space, thus losing the
277 ability to ever return it from malloc again, but enabling the
278 application to proceed. If PROCEED_ON_ERROR is defined, the
279 static variable malloc_corruption_error_count is compiled in
280 and can be examined to see if errors have occurred. This option
281 generates slower code than the default abort policy.
283 DEBUG default: NOT defined
284 The DEBUG setting is mainly intended for people trying to modify
285 this code or diagnose problems when porting to new platforms.
286 However, it may also be able to better isolate user errors than just
287 using runtime checks. The assertions in the check routines spell
288 out in more detail the assumptions and invariants underlying the
289 algorithms. The checking is fairly extensive, and will slow down
290 execution noticeably. Calling malloc_stats or mallinfo with DEBUG
291 set will attempt to check every non-mmapped allocated and free chunk
292 in the course of computing the summaries.
294 ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
295 Debugging assertion failures can be nearly impossible if your
296 version of the assert macro causes malloc to be called, which will
297 lead to a cascade of further failures, blowing the runtime stack.
ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
299 which will usually make debugging easier.
301 MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
302 The action to take before "return 0" when malloc fails to be able to
303 return memory because there is none available.
305 HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
306 True if this system supports sbrk or an emulation of it.
308 MORECORE default: sbrk
309 The name of the sbrk-style system routine to call to obtain more
310 memory. See below for guidance on writing custom MORECORE
311 functions. The type of the argument to sbrk/MORECORE varies across
312 systems. It cannot be size_t, because it supports negative
313 arguments, so it is normally the signed type of the same width as
314 size_t (sometimes declared as "intptr_t"). It doesn't much matter
315 though. Internally, we only call it with arguments less than half
316 the max value of a size_t, which should work across all reasonable
317 possibilities, although sometimes generating compiler warnings. See
near the end of this file for guidelines for creating a custom version of MORECORE.
321 MORECORE_CONTIGUOUS default: 1 (true)
322 If true, take advantage of fact that consecutive calls to MORECORE
323 with positive arguments always return contiguous increasing
324 addresses. This is true of unix sbrk. It does not hurt too much to
325 set it true anyway, since malloc copes with non-contiguities.
326 Setting it false when definitely non-contiguous saves time
327 and possibly wasted space it would take to discover this though.
329 MORECORE_CANNOT_TRIM default: NOT defined
330 True if MORECORE cannot release space back to the system when given
331 negative arguments. This is generally necessary only if you are
using a hand-crafted MORECORE function that cannot handle negative arguments.
335 HAVE_MMAP default: 1 (true)
336 True if this system supports mmap or an emulation of it. If so, and
337 HAVE_MORECORE is not true, MMAP is used for all system
338 allocation. If set and HAVE_MORECORE is true as well, MMAP is
339 primarily used to directly allocate very large blocks. It is also
340 used as a backup strategy in cases where MORECORE fails to provide
341 space from system. Note: A single call to MUNMAP is assumed to be
able to unmap memory that may have been allocated using multiple calls
343 to MMAP, so long as they are adjacent.
345 HAVE_MREMAP default: 1 on linux, else 0
346 If true realloc() uses mremap() to re-allocate large blocks and
347 extend or shrink allocation spaces.
349 MMAP_CLEARS default: 1 on unix
350 True if mmap clears memory so calloc doesn't need to. This is true
351 for standard unix mmap using /dev/zero.
353 USE_BUILTIN_FFS default: 0 (i.e., not used)
354 Causes malloc to use the builtin ffs() function to compute indices.
355 Some compilers may recognize and intrinsify ffs to be faster than the
356 supplied C version. Also, the case of x86 using gcc is special-cased
357 to an asm instruction, so is already as fast as it can be, and so
358 this setting has no effect. (On most x86s, the asm version is only
359 slightly faster than the C version.)
361 malloc_getpagesize default: derive from system includes, or 4096.
362 The system page size. To the extent possible, this malloc manages
363 memory from the system in page-size units. This may be (and
364 usually is) a function rather than a constant. This is ignored
if WIN32, where page size is determined using GetSystemInfo during initialization.
368 USE_DEV_RANDOM default: 0 (i.e., not used)
369 Causes malloc to use /dev/random to initialize secure magic seed for
370 stamping footers. Otherwise, the current time is used.
372 NO_MALLINFO default: 0
373 If defined, don't compile "mallinfo". This can be a simple way
of dealing with mismatches between system declarations and those in this file.
377 MALLINFO_FIELD_TYPE default: size_t
378 The type of the fields in the mallinfo struct. This was originally
379 defined as "int" in SVID etc, but is more usefully defined as
380 size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set
382 REALLOC_ZERO_BYTES_FREES default: not defined
383 This should be set if a call to realloc with zero bytes should
384 be the same as a call to free. Some people think it should. Otherwise,
since this malloc returns a unique pointer for malloc(0), so does realloc(p, 0).
388 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
389 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
390 LACKS_STDLIB_H default: NOT defined unless on WIN32
391 Define these if your system does not have these header files.
392 You might need to manually insert some of the declarations they provide.
394 DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS,
system_info.dwAllocationGranularity in WIN32, otherwise 64K.
397 Also settable using mallopt(M_GRANULARITY, x)
398 The unit for allocating and deallocating memory from the system. On
399 most systems with contiguous MORECORE, there is no reason to
400 make this more than a page. However, systems with MMAP tend to
401 either require or encourage larger granularities. You can increase
this value to prevent system allocation functions from being called so
403 often, especially if they are slow. The value must be at least one
404 page and must be a power of two. Setting to 0 causes initialization
405 to either page size or win32 region size. (Note: In previous
406 versions of malloc, the equivalent of this option was called
409 DEFAULT_TRIM_THRESHOLD default: 2MB
410 Also settable using mallopt(M_TRIM_THRESHOLD, x)
411 The maximum amount of unused top-most memory to keep before
412 releasing via malloc_trim in free(). Automatic trimming is mainly
413 useful in long-lived programs using contiguous MORECORE. Because
414 trimming via sbrk can be slow on some systems, and can sometimes be
415 wasteful (in cases where programs immediately afterward allocate
416 more large chunks) the value should be high enough so that your
417 overall system performance would improve by releasing this much
418 memory. As a rough guide, you might set to a value close to the
419 average size of a process (program) running on your system.
420 Releasing this much memory would allow such a process to run in
421 memory. Generally, it is worth tuning trim thresholds when a
422 program undergoes phases where several large chunks are allocated
423 and released in ways that can reuse each other's storage, perhaps
424 mixed with phases where there are no such chunks at all. The trim
425 value must be greater than page size to have any useful effect. To
426 disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
427 some people use of mallocing a huge space and then freeing it at
428 program startup, in an attempt to reserve system memory, doesn't
429 have the intended effect under automatic trimming, since that memory
430 will immediately be returned to the system.
432 DEFAULT_MMAP_THRESHOLD default: 256K
433 Also settable using mallopt(M_MMAP_THRESHOLD, x)
434 The request size threshold for using MMAP to directly service a
435 request. Requests of at least this size that cannot be allocated
436 using already-existing space will be serviced via mmap. (If enough
437 normal freed space already exists it is used instead.) Using mmap
438 segregates relatively large chunks of memory so that they can be
439 individually obtained and released from the host system. A request
440 serviced through mmap is never reused by any other request (at least
441 not directly; the system may just so happen to remap successive
442 requests to the same locations). Segregating space in this way has
443 the benefits that: Mmapped space can always be individually released
444 back to the system, which helps keep the system level memory demands
445 of a long-lived program low. Also, mapped memory doesn't become
446 `locked' between other chunks, as can happen with normally allocated
447 chunks, which means that even trimming via malloc_trim would not
448 release them. However, it has the disadvantage that the space
449 cannot be reclaimed, consolidated, and then used to service later
450 requests, as happens with normal chunks. The advantages of mmap
451 nearly always outweigh disadvantages for "large" chunks, but the
452 value of "large" may vary across systems. The default is an
453 empirically derived value that works well in most systems. You can
454 disable mmap by setting to MAX_SIZE_T.
464 #define WIN32_LEAN_AND_MEAN
467 #define HAVE_MORECORE 0
468 #define LACKS_UNISTD_H
469 #define LACKS_SYS_PARAM_H
470 #define LACKS_SYS_MMAN_H
471 #define LACKS_STRING_H
472 #define LACKS_STRINGS_H
473 #define LACKS_SYS_TYPES_H
474 #define LACKS_ERRNO_H
475 #define MALLOC_FAILURE_ACTION
476 #define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
479 #if defined(DARWIN) || defined(_DARWIN)
480 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
481 #ifndef HAVE_MORECORE
482 #define HAVE_MORECORE 0
484 #endif /* HAVE_MORECORE */
487 #if defined(__native_client__)
491 #define HAVE_MREMAP 0
494 #ifndef LACKS_SYS_TYPES_H
495 #include <sys/types.h> /* For size_t */
496 #endif /* LACKS_SYS_TYPES_H */
498 /* The maximum possible size_t value has all bits set */
499 #define MAX_SIZE_T (~(size_t)0)
502 #define ONLY_MSPACES 0
503 #endif /* ONLY_MSPACES */
507 #else /* ONLY_MSPACES */
509 #endif /* ONLY_MSPACES */
511 #ifndef MALLOC_ALIGNMENT
512 #define MALLOC_ALIGNMENT ((size_t)8U)
513 #endif /* MALLOC_ALIGNMENT */
518 #define ABORT abort()
520 #ifndef ABORT_ON_ASSERT_FAILURE
521 #define ABORT_ON_ASSERT_FAILURE 1
522 #endif /* ABORT_ON_ASSERT_FAILURE */
523 #ifndef PROCEED_ON_ERROR
524 #define PROCEED_ON_ERROR 0
525 #endif /* PROCEED_ON_ERROR */
528 #endif /* USE_LOCKS */
531 #endif /* INSECURE */
534 #endif /* HAVE_MMAP */
536 #define MMAP_CLEARS 1
537 #endif /* MMAP_CLEARS */
540 #define HAVE_MREMAP 1
542 #define HAVE_MREMAP 0
544 #endif /* HAVE_MREMAP */
545 #ifndef MALLOC_FAILURE_ACTION
546 #define MALLOC_FAILURE_ACTION errno = ENOMEM;
547 #endif /* MALLOC_FAILURE_ACTION */
548 #ifndef HAVE_MORECORE
550 #define HAVE_MORECORE 0
551 #else /* ONLY_MSPACES */
552 #define HAVE_MORECORE 1
553 #endif /* ONLY_MSPACES */
554 #endif /* HAVE_MORECORE */
556 #define MORECORE_CONTIGUOUS 0
557 #else /* !HAVE_MORECORE */
559 #define MORECORE sbrk
560 #endif /* MORECORE */
561 #ifndef MORECORE_CONTIGUOUS
562 #define MORECORE_CONTIGUOUS 1
563 #endif /* MORECORE_CONTIGUOUS */
564 #endif /* HAVE_MORECORE */
565 #ifndef DEFAULT_GRANULARITY
566 #if MORECORE_CONTIGUOUS
567 #define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */
568 #else /* MORECORE_CONTIGUOUS */
569 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
570 #endif /* MORECORE_CONTIGUOUS */
571 #endif /* DEFAULT_GRANULARITY */
572 #ifndef DEFAULT_TRIM_THRESHOLD
573 #ifndef MORECORE_CANNOT_TRIM
574 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
575 #else /* MORECORE_CANNOT_TRIM */
576 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
577 #endif /* MORECORE_CANNOT_TRIM */
578 #endif /* DEFAULT_TRIM_THRESHOLD */
579 #ifndef DEFAULT_MMAP_THRESHOLD
581 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
582 #else /* HAVE_MMAP */
583 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
584 #endif /* HAVE_MMAP */
585 #endif /* DEFAULT_MMAP_THRESHOLD */
586 #ifndef USE_BUILTIN_FFS
587 #define USE_BUILTIN_FFS 0
588 #endif /* USE_BUILTIN_FFS */
589 #ifndef USE_DEV_RANDOM
590 #define USE_DEV_RANDOM 0
591 #endif /* USE_DEV_RANDOM */
593 #define NO_MALLINFO 0
594 #endif /* NO_MALLINFO */
595 #ifndef MALLINFO_FIELD_TYPE
596 #define MALLINFO_FIELD_TYPE size_t
597 #endif /* MALLINFO_FIELD_TYPE */
600 mallopt tuning options. SVID/XPG defines four standard parameter
601 numbers for mallopt, normally defined in malloc.h. None of these
602 are used in this malloc, so setting them has no effect. But this
603 malloc does support the following options.
606 #define M_TRIM_THRESHOLD (-1)
607 #define M_GRANULARITY (-2)
608 #define M_MMAP_THRESHOLD (-3)
610 /* ------------------------ Mallinfo declarations ------------------------ */
614 This version of malloc supports the standard SVID/XPG mallinfo
615 routine that returns a struct containing usage properties and
616 statistics. It should work on any system that has a
617 /usr/include/malloc.h defining struct mallinfo. The main
618 declaration needed is the mallinfo struct that is returned (by-copy)
by mallinfo(). The mallinfo struct contains a bunch of fields that
are not even meaningful in this version of malloc. These fields are
instead filled by mallinfo() with other numbers that might be of interest.
624 HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
625 /usr/include/malloc.h file that includes a declaration of struct
626 mallinfo. If so, it is included; else a compliant version is
627 declared below. These must be precisely the same for mallinfo() to
628 work. The original SVID version of this struct, defined on most
629 systems with mallinfo, declares all fields as ints. But some others
630 define as unsigned long. If your system defines the fields using a
631 type of different width than listed here, you MUST #include your
632 system version and #define HAVE_USR_INCLUDE_MALLOC_H.
635 /* #define HAVE_USR_INCLUDE_MALLOC_H */
637 #ifdef HAVE_USR_INCLUDE_MALLOC_H
638 #include "/usr/include/malloc.h"
639 #else /* HAVE_USR_INCLUDE_MALLOC_H */
642 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
643 MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */
644 MALLINFO_FIELD_TYPE smblks; /* always 0 */
645 MALLINFO_FIELD_TYPE hblks; /* always 0 */
646 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
647 MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */
648 MALLINFO_FIELD_TYPE fsmblks; /* always 0 */
649 MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
650 MALLINFO_FIELD_TYPE fordblks; /* total free space */
651 MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
654 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
655 #endif /* NO_MALLINFO */
659 #endif /* __cplusplus */
663 /* ------------------- Declarations of public routines ------------------- */
665 #ifndef USE_DL_PREFIX
666 #define dlcalloc calloc
668 #define dlmalloc malloc
669 #define dlmemalign memalign
670 #define dlrealloc realloc
671 #define dlvalloc valloc
672 #define dlpvalloc pvalloc
673 #define dlmallinfo mallinfo
674 #define dlmallopt mallopt
675 #define dlmalloc_trim malloc_trim
676 #define dlmalloc_stats malloc_stats
677 #define dlmalloc_usable_size malloc_usable_size
678 #define dlmalloc_footprint malloc_footprint
679 #define dlmalloc_max_footprint malloc_max_footprint
680 #define dlindependent_calloc independent_calloc
681 #define dlindependent_comalloc independent_comalloc
682 #endif /* USE_DL_PREFIX */
687 Returns a pointer to a newly allocated chunk of at least n bytes, or
688 null if no space is available, in which case errno is set to ENOMEM
691 If n is zero, malloc returns a minimum-sized chunk. (The minimum
692 size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
693 systems.) Note that size_t is an unsigned type, so calls with
694 arguments that would be negative if signed are interpreted as
695 requests for huge amounts of space, which will often fail. The
696 maximum supported value of n differs across systems, but is in all
697 cases less than the maximum representable value of a size_t.
699 void* dlmalloc(size_t);
703 Releases the chunk of memory pointed to by p, that had been previously
704 allocated using malloc or a related routine such as realloc.
705 It has no effect if p is null. If p was not malloced or already
706 freed, free(p) will by default cause the current program to abort.
711 calloc(size_t n_elements, size_t element_size);
Returns a pointer to n_elements * element_size bytes, with all locations set to zero.
715 void* dlcalloc(size_t, size_t);
718 realloc(void* p, size_t n)
719 Returns a pointer to a chunk of size n that contains the same data
720 as does chunk p up to the minimum of (n, p's size) bytes, or null
721 if no space is available.
723 The returned pointer may or may not be the same as p. The algorithm
724 prefers extending p in most cases when possible, otherwise it
725 employs the equivalent of a malloc-copy-free sequence.
727 If p is null, realloc is equivalent to malloc.
729 If space is not available, realloc returns null, errno is set (if on
730 ANSI) and p is NOT freed.
If n is for fewer bytes than already held by p, the newly unused
733 space is lopped off and freed if possible. realloc with a size
734 argument of zero (re)allocates a minimum-sized chunk.
736 The old unix realloc convention of allowing the last-free'd chunk
737 to be used as an argument to realloc is not supported.
740 void* dlrealloc(void*, size_t);
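/*
  Usage sketch (illustrative, not part of the original documentation):
  because a failed realloc returns null without freeing p, keep the old
  pointer until the new one is known to be valid.

    void* p = dlmalloc(100);
    void* q = dlrealloc(p, 200);
    if (q != 0)
      p = q;        // success; the old pointer must no longer be used
    else
      dlfree(p);    // growing failed; p is still valid and still owned
*/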
743 memalign(size_t alignment, size_t n);
744 Returns a pointer to a newly allocated chunk of n bytes, aligned
745 in accord with the alignment argument.
747 The alignment argument should be a power of two. If the argument is
748 not a power of two, the nearest greater power is used.
749 8-byte alignment is guaranteed by normal malloc calls, so don't
750 bother calling memalign with an argument of 8 or less.
752 Overreliance on memalign is a sure way to fragment space.
754 void* dlmemalign(size_t, size_t);
758 Equivalent to memalign(pagesize, n), where pagesize is the page
759 size of the system. If the pagesize is unknown, 4096 is used.
761 void* dlvalloc(size_t);
764 mallopt(int parameter_number, int parameter_value)
Sets tunable parameters. The format is to provide a
766 (parameter-number, parameter-value) pair. mallopt then sets the
767 corresponding parameter to the argument value if it can (i.e., so
768 long as the value is meaningful), and returns 1 if successful else
769 0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
normally defined in malloc.h. None of these are used in this malloc,
771 so setting them has no effect. But this malloc also supports other
772 options in mallopt. See below for details. Briefly, supported
parameters are as follows (listed defaults are for "typical" configurations):
776 Symbol param # default allowed param values
777 M_TRIM_THRESHOLD -1 2*1024*1024 any (MAX_SIZE_T disables)
778 M_GRANULARITY -2 page size any power of 2 >= page size
779 M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
781 int dlmallopt(int, int);
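/*
  For example (an illustrative sketch using the parameters listed above; the
  values are arbitrary), a long-lived program that wants memory returned to
  the system sooner and large requests mmapped earlier might call:

    dlmallopt(M_TRIM_THRESHOLD, 128 * 1024);  // trim when 128K is unused at the top
    dlmallopt(M_MMAP_THRESHOLD,  64 * 1024);  // mmap requests of 64K and larger
*/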
785 Returns the number of bytes obtained from the system. The total
786 number of bytes allocated by malloc, realloc etc., is less than this
787 value. Unlike mallinfo, this function returns only a precomputed
788 result, so can be called frequently to monitor memory consumption.
789 Even if locks are otherwise defined, this function does not use them,
790 so results might not be up to date.
792 size_t dlmalloc_footprint(void);
795 malloc_max_footprint();
796 Returns the maximum number of bytes obtained from the system. This
797 value will be greater than current footprint if deallocated space
798 has been reclaimed by the system. The peak number of bytes allocated
799 by malloc, realloc etc., is less than this value. Unlike mallinfo,
800 this function returns only a precomputed result, so can be called
801 frequently to monitor memory consumption. Even if locks are
otherwise defined, this function does not use them, so results might not be up to date.
805 size_t dlmalloc_max_footprint(void);
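/*
  Both footprint routines are cheap enough to poll. A sketch (the casts
  simply assume size_t values fit in an unsigned long for printing):

    printf("heap: %lu bytes now, %lu bytes at peak\n",
           (unsigned long)dlmalloc_footprint(),
           (unsigned long)dlmalloc_max_footprint());
*/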
810 Returns (by copy) a struct containing various summary statistics:
812 arena: current total non-mmapped bytes allocated from system
813 ordblks: the number of free chunks
815 hblks: current number of mmapped regions
816 hblkhd: total bytes held in mmapped regions
817 usmblks: the maximum total allocated space. This will be greater
818 than current total if trimming has occurred.
820 uordblks: current total allocated space (normal or mmapped)
821 fordblks: total free space
822 keepcost: the maximum number of bytes that could ideally be released
823 back to system via malloc_trim. ("ideally" means that
824 it ignores page restrictions etc.)
826 Because these fields are ints, but internal bookkeeping may
be kept as longs, the reported values may wrap around zero and thus be inaccurate.
830 struct mallinfo dlmallinfo(void);
831 #endif /* NO_MALLINFO */
834 independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
836 independent_calloc is similar to calloc, but instead of returning a
837 single cleared space, it returns an array of pointers to n_elements
838 independent elements that can hold contents of size elem_size, each
839 of which starts out cleared, and can be independently freed,
840 realloc'ed etc. The elements are guaranteed to be adjacently
841 allocated (this is not guaranteed to occur with multiple callocs or
mallocs), which may also improve cache locality in some applications.
845 The "chunks" argument is optional (i.e., may be null, which is
846 probably the most typical usage). If it is null, the returned array
847 is itself dynamically allocated and should also be freed when it is
848 no longer needed. Otherwise, the chunks array must be of at least
n_elements in length. It is filled in with the pointers to the chunks.
852 In either case, independent_calloc returns this pointer array, or
853 null if the allocation failed. If n_elements is zero and "chunks"
854 is null, it returns a chunk representing an array with zero elements
855 (which should be freed if not wanted).
857 Each element must be individually freed when it is no longer
858 needed. If you'd like to instead be able to free all at once, you
859 should instead use regular calloc and assign pointers into this
860 space to represent elements. (In this case though, you cannot
861 independently free elements.)
863 independent_calloc simplifies and speeds up implementations of many
864 kinds of pools. It may also be useful when constructing large data
865 structures that initially have a fixed number of fixed-sized nodes,
866 but the number is not known at compile time, and some of the nodes
867 may later need to be freed. For example:
869 struct Node { int item; struct Node* next; };
871 struct Node* build_list() {
  struct Node** pool;
  int n = read_number_of_nodes_needed();
  int i;
  if (n <= 0) return 0;
  pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
  if (pool == 0) die();
  // organize into a linked list...
  struct Node* first = pool[0];
  for (i = 0; i < n-1; ++i)
    pool[i]->next = pool[i+1];
  free(pool); // Can now free the array (or not, if it is needed later)
  return first;
}
885 void** dlindependent_calloc(size_t, size_t, void**);
888 independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
890 independent_comalloc allocates, all at once, a set of n_elements
891 chunks with sizes indicated in the "sizes" array. It returns
892 an array of pointers to these elements, each of which can be
893 independently freed, realloc'ed etc. The elements are guaranteed to
894 be adjacently allocated (this is not guaranteed to occur with
895 multiple callocs or mallocs), which may also improve cache locality
896 in some applications.
898 The "chunks" argument is optional (i.e., may be null). If it is null
899 the returned array is itself dynamically allocated and should also
900 be freed when it is no longer needed. Otherwise, the chunks array
901 must be of at least n_elements in length. It is filled in with the
902 pointers to the chunks.
904 In either case, independent_comalloc returns this pointer array, or
905 null if the allocation failed. If n_elements is zero and chunks is
906 null, it returns a chunk representing an array with zero elements
907 (which should be freed if not wanted).
909 Each element must be individually freed when it is no longer
910 needed. If you'd like to instead be able to free all at once, you
911 should instead use a single regular malloc, and assign pointers at
912 particular offsets in the aggregate space. (In this case though, you
913 cannot independently free elements.)
independent_comalloc differs from independent_calloc in that each
916 element may have a different size, and also that it does not
917 automatically clear elements.
919 independent_comalloc can be used to speed up allocation in cases
920 where several structs or objects must always be allocated at the
921 same time. For example:
926 void send_message(char* msg) {
927 int msglen = strlen(msg);
928 size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
  void* chunks[3];
  if (independent_comalloc(3, sizes, chunks) == 0)
    die();
  struct Head* head = (struct Head*)(chunks[0]);
  char*        body = (char*)(chunks[1]);
  struct Foot* foot = (struct Foot*)(chunks[2]);
  // ...
}
938 In general though, independent_comalloc is worth using only for
939 larger values of n_elements. For small values, you probably won't
940 detect enough difference from series of malloc calls to bother.
942 Overuse of independent_comalloc can increase overall memory usage,
943 since it cannot reuse existing noncontiguous small chunks that
944 might be available for some of the elements.
946 void** dlindependent_comalloc(size_t, size_t*, void**);
951 Equivalent to valloc(minimum-page-that-holds(n)), that is,
952 round up n to nearest pagesize.
954 void* dlpvalloc(size_t);
957 malloc_trim(size_t pad);
959 If possible, gives memory back to the system (via negative arguments
960 to sbrk) if there is unused memory at the `high' end of the malloc
961 pool or in unused MMAP segments. You can call this after freeing
962 large blocks of memory to potentially reduce the system-level memory
963 requirements of a program. However, it cannot guarantee to reduce
964 memory. Under some allocation patterns, some large free blocks of
965 memory will be locked between two used chunks, so they cannot be
966 given back to the system.
968 The `pad' argument to malloc_trim represents the amount of free
969 trailing space to leave untrimmed. If this argument is zero, only
970 the minimum amount of memory to maintain internal data structures
971 will be left. Non-zero arguments can be supplied to maintain enough
972 trailing space to service future expected allocations without having
973 to re-obtain memory from the system.
975 Malloc_trim returns 1 if it actually released any memory, else 0.
977 int dlmalloc_trim(size_t);
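/*
  For instance (a sketch; big_buffer stands for some large allocation made
  earlier, and the pad value is arbitrary), after releasing a large working
  set you can ask for unused memory at the top of the heap to be handed
  back, keeping a little slack for upcoming allocations:

    dlfree(big_buffer);
    if (dlmalloc_trim(64 * 1024) == 0) {
      // nothing could be released, e.g. the top of the heap is still in use
    }
*/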
980 malloc_usable_size(void* p);
982 Returns the number of bytes you can actually use in
983 an allocated chunk, which may be more than you requested (although
984 often not) due to alignment and minimum size constraints.
985 You can use this many bytes without worrying about
986 overwriting other allocated objects. This is not a particularly great
987 programming practice. malloc_usable_size can be more useful in
988 debugging and assertions, for example:
p = malloc(n);
assert(malloc_usable_size(p) >= 256);
993 size_t dlmalloc_usable_size(void*);
997 Prints on stderr the amount of space obtained from the system (both
998 via sbrk and mmap), the maximum amount (which may be more than
999 current if malloc_trim and/or munmap got called), and the current
1000 number of bytes allocated via malloc (or realloc, etc) but not yet
1001 freed. Note that this is the number of bytes allocated, not the
1002 number requested. It will be larger than the number requested
1003 because of alignment and bookkeeping overhead. Because it includes
1004 alignment wastage as being in use, this figure may be greater than
1005 zero even when no user-level chunks are allocated.
1007 The reported current and maximum system memory can be inaccurate if
1008 a program makes other calls to system memory allocation functions
1009 (normally sbrk) outside of malloc.
1011 malloc_stats prints only the most commonly interesting statistics.
1012 More information can be obtained by calling mallinfo.
1014 void dlmalloc_stats(void);
1016 #endif /* ONLY_MSPACES */
1021 mspace is an opaque type representing an independent
1022 region of space that supports mspace_malloc, etc.
1024 typedef void* mspace;
1027 create_mspace creates and returns a new independent space with the
1028 given initial capacity, or, if 0, the default granularity size. It
1029 returns null if there is no system memory available to create the
1030 space. If argument locked is non-zero, the space uses a separate
1031 lock to control access. The capacity of the space will grow
1032 dynamically as needed to service mspace_malloc requests. You can
1033 control the sizes of incremental increases of this space by
1034 compiling with a different DEFAULT_GRANULARITY or dynamically
1035 setting with mallopt(M_GRANULARITY, value).
1037 mspace create_mspace(size_t capacity, int locked);
1040 destroy_mspace destroys the given space, and attempts to return all
1041 of its memory back to the system, returning the total number of
1042 bytes freed. After destruction, the results of access to all memory
1043 used by the space become undefined.
1045 size_t destroy_mspace(mspace msp);
1048 create_mspace_with_base uses the memory supplied as the initial base
1049 of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1050 space is used for bookkeeping, so the capacity must be at least this
1051 large. (Otherwise 0 is returned.) When this initial space is
1052 exhausted, additional memory will be obtained from the system.
1053 Destroying this space will deallocate all additionally allocated
1054 space (if possible) but not the initial base.
1056 mspace create_mspace_with_base(void* base, size_t capacity, int locked);
mspace_malloc behaves as malloc, but operates within the given space.
1062 void* mspace_malloc(mspace msp, size_t bytes);
mspace_free behaves as free, but operates within the given space.
1068 If compiled with FOOTERS==1, mspace_free is not actually needed.
1069 free may be called instead of mspace_free because freed chunks from
1070 any space are handled by their originating spaces.
1072 void mspace_free(mspace msp, void* mem);
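/*
  A typical lifecycle, sketched with the calls declared above (the initial
  capacity and request size are arbitrary example values):

    mspace ms = create_mspace(1024 * 1024, 0);  // unlocked, ~1MB starting capacity
    if (ms != 0) {
      void* p = mspace_malloc(ms, 128);
      // ... use p ...
      mspace_free(ms, p);
      destroy_mspace(ms);   // returns the space's memory to the system
    }
*/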
mspace_realloc behaves as realloc, but operates within the given space.
1078 If compiled with FOOTERS==1, mspace_realloc is not actually
1079 needed. realloc may be called instead of mspace_realloc because
realloced chunks from any space are handled by their originating spaces.
1083 void* mspace_realloc(mspace msp, void* mem, size_t newsize);
mspace_calloc behaves as calloc, but operates within the given space.
1089 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
mspace_memalign behaves as memalign, but operates within the given space.
1095 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1098 mspace_independent_calloc behaves as independent_calloc, but
1099 operates within the given space.
1101 void** mspace_independent_calloc(mspace msp, size_t n_elements,
1102 size_t elem_size, void* chunks[]);
1105 mspace_independent_comalloc behaves as independent_comalloc, but
1106 operates within the given space.
1108 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1109 size_t sizes[], void* chunks[]);
1112 mspace_footprint() returns the number of bytes obtained from the
1113 system for this space.
1115 size_t mspace_footprint(mspace msp);
1118 mspace_max_footprint() returns the peak number of bytes obtained from the
1119 system for this space.
1121 size_t mspace_max_footprint(mspace msp);
mspace_mallinfo behaves as mallinfo, but reports properties of the given space.
1129 struct mallinfo mspace_mallinfo(mspace msp);
1130 #endif /* NO_MALLINFO */
1133 mspace_malloc_stats behaves as malloc_stats, but reports
1134 properties of the given space.
1136 void mspace_malloc_stats(mspace msp);
1139 mspace_trim behaves as malloc_trim, but
1140 operates within the given space.
1142 int mspace_trim(mspace msp, size_t pad);
1145 An alias for mallopt.
1147 int mspace_mallopt(int, int);
1149 #endif /* MSPACES */
1152 }; /* end of extern "C" */
1153 #endif /* __cplusplus */
1156 ========================================================================
1157 To make a fully customizable malloc.h header file, cut everything
1158 above this line, put into file malloc.h, edit to suit, and #include it
1159 on the next line, as well as in programs that use this malloc.
1160 ========================================================================
1163 /* #include "malloc.h" */
1165 /*------------------------------ internal #includes ---------------------- */
1168 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1171 #include <stdio.h> /* for printing in malloc_stats */
1173 #ifndef LACKS_ERRNO_H
1174 #include <errno.h> /* for MALLOC_FAILURE_ACTION */
1175 #endif /* LACKS_ERRNO_H */
1177 #include <time.h> /* for magic initialization */
1178 #endif /* FOOTERS */
1179 #ifndef LACKS_STDLIB_H
1180 #include <stdlib.h> /* for abort() */
1181 #endif /* LACKS_STDLIB_H */
1183 #if ABORT_ON_ASSERT_FAILURE
1184 #define assert(x) if(!(x)) ABORT
1185 #else /* ABORT_ON_ASSERT_FAILURE */
1187 #endif /* ABORT_ON_ASSERT_FAILURE */
1191 #ifndef LACKS_STRING_H
1192 #include <string.h> /* for memset etc */
1193 #endif /* LACKS_STRING_H */
1195 #ifndef LACKS_STRINGS_H
1196 #include <strings.h> /* for ffs */
1197 #endif /* LACKS_STRINGS_H */
1198 #endif /* USE_BUILTIN_FFS */
1200 #ifndef LACKS_SYS_MMAN_H
1201 #include <sys/mman.h> /* for mmap */
1202 #endif /* LACKS_SYS_MMAN_H */
1203 #ifndef LACKS_FCNTL_H
1205 #endif /* LACKS_FCNTL_H */
1206 #endif /* HAVE_MMAP */
1208 #ifndef LACKS_UNISTD_H
1209 #include <unistd.h> /* for sbrk */
1210 #else /* LACKS_UNISTD_H */
1211 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1212 extern void* sbrk(ptrdiff_t);
1213 #endif /* FreeBSD etc */
1214 #endif /* LACKS_UNISTD_H */
1215 #endif /* HAVE_MMAP */
1218 #ifndef malloc_getpagesize
1219 # ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
1220 # ifndef _SC_PAGE_SIZE
1221 # define _SC_PAGE_SIZE _SC_PAGESIZE
1224 # ifdef _SC_PAGE_SIZE
1225 # define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1227 # if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1228 extern size_t getpagesize();
1229 # define malloc_getpagesize getpagesize()
1231 # ifdef WIN32 /* use supplied emulation of getpagesize */
1232 # define malloc_getpagesize getpagesize()
1234 # ifndef LACKS_SYS_PARAM_H
1235 # include <sys/param.h>
1237 # ifdef EXEC_PAGESIZE
1238 # define malloc_getpagesize EXEC_PAGESIZE
1242 # define malloc_getpagesize NBPG
1244 # define malloc_getpagesize (NBPG * CLSIZE)
1248 # define malloc_getpagesize NBPC
1251 # define malloc_getpagesize PAGESIZE
1252 # else /* just guess */
1253 # define malloc_getpagesize ((size_t)4096U)
1264 /* ------------------- size_t and alignment properties -------------------- */
1266 /* The byte and bit size of a size_t */
1267 #define SIZE_T_SIZE (sizeof(size_t))
1268 #define SIZE_T_BITSIZE (sizeof(size_t) << 3)
1270 /* Some constants coerced to size_t */
/* Annoying but necessary to avoid errors on some platforms */
1272 #define SIZE_T_ZERO ((size_t)0)
1273 #define SIZE_T_ONE ((size_t)1)
1274 #define SIZE_T_TWO ((size_t)2)
1275 #define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
1276 #define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
1277 #define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1278 #define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
1280 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1281 #define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
1283 /* True if address a has acceptable alignment */
1284 #define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1286 /* the number of bytes to offset an address to align it */
1287 #define align_offset(A)\
1288 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1289 ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
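/*
  For example, with the default 8-byte MALLOC_ALIGNMENT, an address ending
  in 5 needs 3 more bytes to reach the next aligned boundary (a worked
  illustration of the macro above, not code from the original):

    align_offset((void*)0x1005)   ->  (8 - (0x1005 & 7)) & 7  ==  3
    align_offset((void*)0x1008)   ->  0   (already aligned)
*/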
1291 /* -------------------------- MMAP preliminaries ------------------------- */
1294 If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1295 checks to fail so compiler optimizer can delete code rather than
1296 using so many "#if"s.
1300 /* MORECORE and MMAP must return MFAIL on failure */
1301 #define MFAIL ((void*)(MAX_SIZE_T))
1302 #define CMFAIL ((char*)(MFAIL)) /* defined for convenience */
1305 #define IS_MMAPPED_BIT (SIZE_T_ZERO)
1306 #define USE_MMAP_BIT (SIZE_T_ZERO)
1307 #define CALL_MMAP(s) MFAIL
1308 #define CALL_MUNMAP(a, s) (-1)
1309 #define DIRECT_MMAP(s) MFAIL
1311 #else /* HAVE_MMAP */
1312 #define IS_MMAPPED_BIT (SIZE_T_ONE)
1313 #define USE_MMAP_BIT (SIZE_T_ONE)
1316 #define CALL_MUNMAP(a, s) munmap((a), (s))
1317 #define MMAP_PROT (PROT_READ|PROT_WRITE|PROT_EXEC)
1318 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1319 #define MAP_ANONYMOUS MAP_ANON
1320 #endif /* MAP_ANON */
1321 #ifdef MAP_ANONYMOUS
1322 #define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS)
1323 #define CALL_MMAP(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1324 #else /* MAP_ANONYMOUS */
1326 Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1327 is unlikely to be needed, but is supplied just in case.
1329 #define MMAP_FLAGS (MAP_PRIVATE)
1330 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1331 #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
1332 (dev_zero_fd = open("/dev/zero", O_RDWR), \
1333 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1334 mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1335 #endif /* MAP_ANONYMOUS */
1337 #define DIRECT_MMAP(s) CALL_MMAP(s)
1340 /* Win32 MMAP via VirtualAlloc */
1341 static void* win32mmap(size_t size) {
1342 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_EXECUTE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}
1346 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1347 static void* win32direct_mmap(size_t size) {
1348 void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1349 PAGE_EXECUTE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}
/* This function supports releasing coalesced segments */
1354 static int win32munmap(void* ptr, size_t size) {
  MEMORY_BASIC_INFORMATION minfo;
  char* cptr = (char*)ptr;
  while (size) {
    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
      return -1;
    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
      return -1;
    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
      return -1;
    cptr += minfo.RegionSize;
    size -= minfo.RegionSize;
  }
  return 0;
}
1371 #define CALL_MMAP(s) win32mmap(s)
1372 #define CALL_MUNMAP(a, s) win32munmap((a), (s))
1373 #define DIRECT_MMAP(s) win32direct_mmap(s)
1375 #endif /* HAVE_MMAP */
1377 #if HAVE_MMAP && HAVE_MREMAP
1378 #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1379 #else /* HAVE_MMAP && HAVE_MREMAP */
1380 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1381 #endif /* HAVE_MMAP && HAVE_MREMAP */
1384 #define CALL_MORECORE(S) MORECORE(S)
1385 #else /* HAVE_MORECORE */
1386 #define CALL_MORECORE(S) MFAIL
1387 #endif /* HAVE_MORECORE */
/* mstate bit set if contiguous morecore disabled or failed */
1390 #define USE_NONCONTIGUOUS_BIT (4U)
1392 /* segment bit set in create_mspace_with_base */
1393 #define EXTERN_BIT (8U)
1396 /* --------------------------- Lock preliminaries ------------------------ */
1401 When locks are defined, there are up to two global locks:
1403 * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
1404 MORECORE. In many cases sys_alloc requires two calls, that should
1405 not be interleaved with calls by other threads. This does not
1406 protect against direct calls to MORECORE by other threads not
using this lock, so there is still code to cope the best we can on failure.
1410 * magic_init_mutex ensures that mparams.magic and other
1411 unique mparams values are initialized only once.
1415 /* By default use posix locks */
1416 #include <pthread.h>
1417 #define MLOCK_T pthread_mutex_t
1418 #define INITIAL_LOCK(l) pthread_mutex_init(l, NULL)
1419 #define ACQUIRE_LOCK(l) pthread_mutex_lock(l)
1420 #define RELEASE_LOCK(l) pthread_mutex_unlock(l)
1423 static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
1424 #endif /* HAVE_MORECORE */
1426 static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
1430 Because lock-protected regions have bounded times, and there
1431 are no recursive lock calls, we can use simple spinlocks.
1434 #define MLOCK_T long
static int win32_acquire_lock (MLOCK_T *sl) {
  for (;;) {
#ifdef InterlockedCompareExchangePointer
    if (!InterlockedCompareExchange(sl, 1, 0))
      return 0;
#else /* Use older void* version */
    if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
      return 0;
#endif /* InterlockedCompareExchangePointer */
    Sleep (0);
  }
}
1448 static void win32_release_lock (MLOCK_T *sl) {
  InterlockedExchange (sl, 0);
}
1452 #define INITIAL_LOCK(l) *(l)=0
1453 #define ACQUIRE_LOCK(l) win32_acquire_lock(l)
1454 #define RELEASE_LOCK(l) win32_release_lock(l)
1456 static MLOCK_T morecore_mutex;
1457 #endif /* HAVE_MORECORE */
1458 static MLOCK_T magic_init_mutex;
1461 #define USE_LOCK_BIT (2U)
1462 #else /* USE_LOCKS */
1463 #define USE_LOCK_BIT (0U)
1464 #define INITIAL_LOCK(l)
1465 #endif /* USE_LOCKS */
1467 #if USE_LOCKS && HAVE_MORECORE
1468 #define ACQUIRE_MORECORE_LOCK() ACQUIRE_LOCK(&morecore_mutex);
1469 #define RELEASE_MORECORE_LOCK() RELEASE_LOCK(&morecore_mutex);
1470 #else /* USE_LOCKS && HAVE_MORECORE */
1471 #define ACQUIRE_MORECORE_LOCK()
1472 #define RELEASE_MORECORE_LOCK()
1473 #endif /* USE_LOCKS && HAVE_MORECORE */
1476 #define ACQUIRE_MAGIC_INIT_LOCK() ACQUIRE_LOCK(&magic_init_mutex);
1477 #define RELEASE_MAGIC_INIT_LOCK() RELEASE_LOCK(&magic_init_mutex);
1478 #else /* USE_LOCKS */
1479 #define ACQUIRE_MAGIC_INIT_LOCK()
1480 #define RELEASE_MAGIC_INIT_LOCK()
1481 #endif /* USE_LOCKS */
1484 /* ----------------------- Chunk representations ------------------------ */
1487 (The following includes lightly edited explanations by Colin Plumb.)
1489 The malloc_chunk declaration below is misleading (but accurate and
1490 necessary). It declares a "view" into memory allowing access to
1491 necessary fields at known offsets from a given base.
1493 Chunks of memory are maintained using a `boundary tag' method as
1494 originally described by Knuth. (See the paper by Paul Wilson
1495 ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
1496 techniques.) Sizes of free chunks are stored both in the front of
1497 each chunk and at the end. This makes consolidating fragmented
1498 chunks into bigger chunks fast. The head fields also hold bits
1499 representing whether chunks are free or in use.
1501 Here are some pictures to make it clearer. They are "exploded" to
1502 show that the state of a chunk can be thought of as extending from
1503 the high 31 bits of the head field of its header through the
1504 prev_foot and PINUSE_BIT bit of the following chunk header.
1506 A chunk that's in use looks like:
1508 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1509 | Size of previous chunk (if P = 0)                             |
1510 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1511 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1512 | Size of this chunk 1| +-+
1513 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1519 +- size - sizeof(size_t) available payload bytes -+
1523 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1524 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
1525 | Size of next chunk (may or may not be in use) | +-+
1526 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1528 And if it's free, it looks like this:
1531 | User payload (must be in use, or we would have merged!) |
1532 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1533 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
1534 | Size of this chunk 0| +-+
1535 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1537 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1539 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1541 +- size - sizeof(struct chunk) unused bytes -+
1543 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1544 | Size of this chunk |
1545 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1546 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
1547 | Size of next chunk (must be in use, or we would have merged)| +-+
1548 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1552 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1555 Note that since we always merge adjacent free chunks, the chunks
1556 adjacent to a free chunk must be in use.
1558 Given a pointer to a chunk (which can be derived trivially from the
1559 payload pointer) we can, in O(1) time, find out whether the adjacent
1560 chunks are free, and if so, unlink them from the lists that they
1561 are on and merge them with the current chunk.
1563 Chunks always begin on even word boundaries, so the mem portion
1564 (which is returned to the user) is also on an even word boundary, and
1565 thus at least double-word aligned.
1567 The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
1568 chunk size (which is always a multiple of two words), is an in-use
1569 bit for the *previous* chunk. If that bit is *clear*, then the
1570 word before the current chunk size contains the previous chunk
1571 size, and can be used to find the front of the previous chunk.
1572 The very first chunk allocated always has this bit set, preventing
1573 access to non-existent (or non-owned) memory. If pinuse is set for
1574 any given chunk, then you CANNOT determine the size of the
1575 previous chunk, and might even get a memory addressing fault when trying to do so.
1578 The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
1579 the chunk size redundantly records whether the current chunk is
1580 inuse. This redundancy enables usage checks within free and realloc,
1581 and reduces indirection when freeing and consolidating chunks.
1583 Each freshly allocated chunk must have both cinuse and pinuse set.
1584 That is, each allocated chunk borders either a previously allocated
1585 and still in-use chunk, or the base of its memory arena. This is
1586 ensured by making all allocations from the `lowest' part of any
1587 found chunk. Further, no free chunk physically borders another one,
1588 so each free chunk is known to be preceded and followed by either
1589 inuse chunks or the ends of memory.
1591 Note that the `foot' of the current chunk is actually represented
1592 as the prev_foot of the NEXT chunk. This makes it easier to
1593 deal with alignments etc but can be very confusing when trying
1594 to extend or adapt this code.
1596 The exceptions to all this are
1598 1. The special chunk `top' is the top-most available chunk (i.e.,
1599 the one bordering the end of available memory). It is treated
1600 specially. Top is never included in any bin, is used only if
1601 no other chunk is available, and is released back to the
1602 system if it is very large (see M_TRIM_THRESHOLD). In effect,
1603 the top chunk is treated as larger (and thus less well
1604 fitting) than any other available chunk. The top chunk
1605 doesn't update its trailing size field since there is no next
1606 contiguous chunk that would have to index off it. However,
1607 space is still allocated for it (TOP_FOOT_SIZE) to enable
1608 separation or merging when space is extended.
1610 2. Chunks allocated via mmap, which have the lowest-order bit
1611 (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
1612 PINUSE_BIT in their head fields. Because they are allocated
1613 one-by-one, each must carry its own prev_foot field, which is
1614 also used to hold the offset this chunk has within its mmapped
1615 region, which is needed to preserve alignment. Each mmapped
1616 chunk is trailed by the first two fields of a fake next-chunk
1617 for sake of usage checks.
1621 struct malloc_chunk {
1622 size_t prev_foot; /* Size of previous chunk (if free). */
1623 size_t head; /* Size and inuse bits. */
1624 struct malloc_chunk* fd; /* double links -- used only if free. */
1625 struct malloc_chunk* bk;
1626 };
1628 typedef struct malloc_chunk mchunk;
1629 typedef struct malloc_chunk* mchunkptr;
1630 typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */
1631 typedef unsigned int bindex_t; /* Described below */
1632 typedef unsigned int binmap_t; /* Described below */
1633 typedef unsigned int flag_t; /* The type of various bit flag sets */
1635 /* ------------------- Chunks sizes and alignments ----------------------- */
1637 #define MCHUNK_SIZE (sizeof(mchunk))
1640 #define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1642 #define CHUNK_OVERHEAD (SIZE_T_SIZE)
1643 #endif /* FOOTERS */
1645 /* MMapped chunks need a second word of overhead ... */
1646 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
1647 /* ... and additional padding for fake next-chunk at foot */
1648 #define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES)
1650 /* The smallest size we can malloc is an aligned minimal chunk */
1651 #define MIN_CHUNK_SIZE\
1652 ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1654 /* conversion from malloc headers to user pointers, and back */
1655 #define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES))
1656 #define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
1657 /* chunk associated with aligned address A */
1658 #define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A)))
1660 /* Bounds on request (not chunk) sizes. */
1661 #define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2)
1662 #define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
1664 /* pad request bytes into a usable size */
1665 #define pad_request(req) \
1666 (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
1668 /* pad request, checking for minimum (but not maximum) */
1669 #define request2size(req) \
1670 (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
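/*
  A quick worked example (assuming 4-byte size_t, 8-byte alignment, and
  FOOTERS disabled, so CHUNK_OVERHEAD == 4, CHUNK_ALIGN_MASK == 7,
  MIN_CHUNK_SIZE == 16 and MIN_REQUEST == 11):

    request2size(1)  == MIN_CHUNK_SIZE    == 16   (request below MIN_REQUEST)
    request2size(13) == (13 + 4 + 7) & ~7 == 24
    request2size(20) == (20 + 4 + 7) & ~7 == 24
    request2size(21) == (21 + 4 + 7) & ~7 == 32
*/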
1673 /* ------------------ Operations on head and foot fields ----------------- */
1676 The head field of a chunk is or'ed with PINUSE_BIT when the previous
1677 adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
1678 use. If the chunk was obtained with mmap, the prev_foot field has
1679 IS_MMAPPED_BIT set, and its remaining bits hold the offset of the base
1680 of the chunk from the base of the mmapped region.
1683 #define PINUSE_BIT (SIZE_T_ONE)
1684 #define CINUSE_BIT (SIZE_T_TWO)
1685 #define INUSE_BITS (PINUSE_BIT|CINUSE_BIT)
1687 /* Head value for fenceposts */
1688 #define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE)
1690 /* extraction of fields from head words */
1691 #define cinuse(p) ((p)->head & CINUSE_BIT)
1692 #define pinuse(p) ((p)->head & PINUSE_BIT)
1693 #define chunksize(p) ((p)->head & ~(INUSE_BITS))
1695 #define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT)
1696 #define clear_cinuse(p) ((p)->head &= ~CINUSE_BIT)
1698 /* Treat space at ptr +/- offset as a chunk */
1699 #define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s)))
1700 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
1702 /* Ptr to next or previous physical malloc_chunk. */
1703 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
1704 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
1706 /* extract next chunk's pinuse bit */
1707 #define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT)
1709 /* Get/set size at footer */
1710 #define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot)
1711 #define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
1713 /* Set size, pinuse bit, and foot */
1714 #define set_size_and_pinuse_of_free_chunk(p, s)\
1715 ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
1717 /* Set size, pinuse bit, foot, and clear next pinuse */
1718 #define set_free_with_pinuse(p, s, n)\
1719 (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
1721 #define is_mmapped(p)\
1722 (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
1724 /* Get the internal overhead associated with chunk p */
1725 #define overhead_for(p)\
1726 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
1728 /* Return true if malloced space is not necessarily cleared */
1730 #define calloc_must_clear(p) (!is_mmapped(p))
1731 #else /* MMAP_CLEARS */
1732 #define calloc_must_clear(p) (1)
1733 #endif /* MMAP_CLEARS */
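/*
  A sketch of how the macros above combine (illustrative only; assumes an
  ordinary, non-mmapped chunk p sitting inside a segment). This is the O(1)
  neighbor test that boundary tags make possible and that free/consolidation
  relies on:

    static void show_neighbors(mchunkptr p) {
      size_t psize = chunksize(p);
      mchunkptr next = chunk_plus_offset(p, psize);
      if (!pinuse(p)) {
        mchunkptr prev = chunk_minus_offset(p, p->prev_foot);
        // previous chunk is free; its size sits in our prev_foot, so it could
        // be unlinked from its bin and merged with p here
        (void)prev;
      }
      if (!cinuse(next)) {
        // following chunk is free and could be merged as well
        (void)next;
      }
    }

  The real free/realloc paths additionally special-case top, dv, and mmapped
  chunks, but the neighbor discovery is exactly this.
*/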
1735 /* ---------------------- Overlaid data structures ----------------------- */
1738 When chunks are not in use, they are treated as nodes of either lists or trees.
1741 "Small" chunks are stored in circular doubly-linked lists, and look like this:
1744 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1745 | Size of previous chunk |
1746 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1747 `head:' | Size of chunk, in bytes |P|
1748 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1749 | Forward pointer to next chunk in list |
1750 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1751 | Back pointer to previous chunk in list |
1752 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1753 | Unused space (may be 0 bytes long) .
1756 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1757 `foot:' | Size of chunk, in bytes |
1758 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1760 Larger chunks are kept in a form of bitwise digital trees (aka
1761 tries) keyed on chunksizes. Because malloc_tree_chunks are only for
1762 free chunks greater than 256 bytes, their size doesn't impose any
1763 constraints on user chunk sizes. Each node looks like:
1765 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1766 | Size of previous chunk |
1767 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1768 `head:' | Size of chunk, in bytes |P|
1769 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1770 | Forward pointer to next chunk of same size |
1771 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1772 | Back pointer to previous chunk of same size |
1773 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1774 | Pointer to left child (child[0]) |
1775 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1776 | Pointer to right child (child[1]) |
1777 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1778 | Pointer to parent |
1779 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1780 | bin index of this chunk |
1781 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1784 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1785 `foot:' | Size of chunk, in bytes |
1786 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
1788 Each tree holding treenodes is a tree of unique chunk sizes. Chunks
1789 of the same size are arranged in a circularly-linked list, with only
1790 the oldest chunk (the next to be used, in our FIFO ordering)
1791 actually in the tree. (Tree members are distinguished by a non-null
1792 parent pointer.) If a chunk with the same size as an existing node
1793 is inserted, it is linked off the existing node using pointers that
1794 work in the same way as fd/bk pointers of small chunks.
1796 Each tree contains a power of 2 sized range of chunk sizes (the
1797 smallest is 0x100 <= x < 0x180), which is divided in half at each
1798 tree level, with the chunks in the smaller half of the range (0x100
1799 <= x < 0x140 for the top node) in the left subtree and the larger
1800 half (0x140 <= x < 0x180) in the right subtree. This is, of course,
1801 done by inspecting individual bits.
1803 Using these rules, each node's left subtree contains all smaller
1804 sizes than its right subtree. However, the node at the root of each
1805 subtree has no particular ordering relationship to either. (The
1806 dividing line between the subtree sizes is based on trie relation.)
1807 If we remove the last chunk of a given size from the interior of the
1808 tree, we need to replace it with a leaf node. The tree ordering
1809 rules permit a node to be replaced by any leaf below it.
1811 The smallest chunk in a tree (a common operation in a best-fit
1812 allocator) can be found by walking a path to the leftmost leaf in
1813 the tree. Unlike a usual binary tree, where we follow left child
1814 pointers until we reach a null, here we follow the right child
1815 pointer any time the left one is null, until we reach a leaf with
1816 both child pointers null. The smallest chunk in the tree will be
1817 somewhere along that path.
1819 The worst case number of steps to add, find, or remove a node is
1820 bounded by the number of bits differentiating chunks within
1821 bins. Under current bin calculations, this ranges from 6 up to 21
1822 (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
1823 is of course much better.
1826 struct malloc_tree_chunk {
1827 /* The first four fields must be compatible with malloc_chunk */
1828 size_t                    prev_foot;
1829 size_t                    head;
1830 struct malloc_tree_chunk* fd;
1831 struct malloc_tree_chunk* bk;
1833 struct malloc_tree_chunk* child[2];
1834 struct malloc_tree_chunk* parent;
1835 bindex_t                  index;
1836 };
1838 typedef struct malloc_tree_chunk tchunk;
1839 typedef struct malloc_tree_chunk* tchunkptr;
1840 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
1842 /* A little helper macro for trees */
1843 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
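/*
  For illustration (a sketch, not the code the allocator uses): the
  "leftmost leaf" walk described above visits child[0] when it exists and
  child[1] otherwise; the smallest chunk of the tree lies somewhere on that
  path, so taking the minimum while descending finds it:

    static tchunkptr smallest_in_tree(tchunkptr t) {
      tchunkptr best = t;
      tchunkptr u = leftmost_child(t);
      while (u != 0) {
        if (chunksize(u) < chunksize(best))
          best = u;                  // smaller candidate along the path
        u = leftmost_child(u);
      }
      return best;
    }

  The actual best-fit code folds this walk into its fit-finding loops instead
  of calling a helper like this.
*/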
1845 /* ----------------------------- Segments -------------------------------- */
1848 Each malloc space may include non-contiguous segments, held in a
1849 list headed by an embedded malloc_segment record representing the
1850 top-most space. Segments also include flags holding properties of
1851 the space. Large chunks that are directly allocated by mmap are not
1852 included in this list. They are instead independently created and
1853 destroyed without otherwise keeping track of them.
1855 Segment management mainly comes into play for spaces allocated by
1856 MMAP. Any call to MMAP might or might not return memory that is
1857 adjacent to an existing segment. MORECORE normally contiguously
1858 extends the current space, so this space is almost always adjacent,
1859 which is simpler and faster to deal with. (This is why MORECORE is
1860 used preferentially to MMAP when both are available -- see
1861 sys_alloc.) When allocating using MMAP, we don't use any of the
1862 hinting mechanisms (inconsistently) supported in various
1863 implementations of unix mmap, or distinguish reserving from
1864 committing memory. Instead, we just ask for space, and exploit
1865 contiguity when we get it. It is probably possible to do
1866 better than this on some systems, but no general scheme seems
1867 to be significantly better.
1869 Management entails a simpler variant of the consolidation scheme
1870 used for chunks to reduce fragmentation -- new adjacent memory is
1871 normally prepended or appended to an existing segment. However,
1872 there are limitations compared to chunk consolidation that mostly
1873 reflect the fact that segment processing is relatively infrequent
1874 (occurring only when getting memory from system) and that we
1875 don't expect to have huge numbers of segments:
1877 * Segments are not indexed, so traversal requires linear scans. (It
1878 would be possible to index these, but is not worth the extra
1879 overhead and complexity for most programs on most platforms.)
1880 * New segments are only appended to old ones when holding top-most
1881 memory; if they cannot be prepended to others, they are held in
1884 Except for the top-most segment of an mstate, each segment record
1885 is kept at the tail of its segment. Segments are added by pushing
1886 segment records onto the list headed by &mstate.seg for the
1889 Segment flags control allocation/merge/deallocation policies:
1890 * If EXTERN_BIT set, then we did not allocate this segment,
1891 and so should not try to deallocate or merge with others.
1892 (This currently holds only for the initial segment passed
1893 into create_mspace_with_base.)
1894 * If IS_MMAPPED_BIT set, the segment may be merged with
1895 other surrounding mmapped segments and trimmed/de-allocated using munmap.
1897 * If neither bit is set, then the segment was obtained using
1898 MORECORE so can be merged with surrounding MORECORE'd segments
1899 and deallocated/trimmed using MORECORE with negative arguments.
1902 struct malloc_segment {
1903 char* base; /* base address */
1904 size_t size; /* allocated size */
1905 struct malloc_segment* next; /* ptr to next segment */
1906 flag_t sflags; /* mmap and extern flag */
1907 };
1909 #define is_mmapped_segment(S) ((S)->sflags & IS_MMAPPED_BIT)
1910 #define is_extern_segment(S) ((S)->sflags & EXTERN_BIT)
1912 typedef struct malloc_segment msegment;
1913 typedef struct malloc_segment* msegmentptr;
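/*
  Because segments are not indexed, every lookup is a linear walk of the list
  rooted at the mstate's embedded seg record. For illustration, totalling the
  bytes held in listed segments looks like this (sketch only):

    static size_t segment_bytes(msegmentptr sp) {
      size_t total = 0;
      while (sp != 0) {
        total += sp->size;   // bytes in this MORECORE'd or mmapped region
        sp = sp->next;       // list is terminated by a null next pointer
      }
      return total;
    }

  segment_holding and has_segment_link below perform the same walk, stopping
  when the address or record of interest falls within sp->base..sp->base+sp->size.
*/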
1915 /* ---------------------------- malloc_state ----------------------------- */
1918 A malloc_state holds all of the bookkeeping for a space.
1919 The main fields are:
1922 The topmost chunk of the currently active segment. Its size is
1923 cached in topsize. The actual size of topmost space is
1924 topsize+TOP_FOOT_SIZE, which includes space reserved for adding
1925 fenceposts and segment records if necessary when getting more
1926 space from the system. The size at which to autotrim top is
1927 cached from mparams in trim_check, except that it is disabled if
1930 Designated victim (dv)
1931 This is the preferred chunk for servicing small requests that
1932 don't have exact fits. It is normally the chunk split off most
1933 recently to service another small request. Its size is cached in
1934 dvsize. The link fields of this chunk are not maintained since it
1935 is not kept in a bin.
1938 An array of bin headers for free chunks. These bins hold chunks
1939 with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
1940 chunks of all the same size, spaced 8 bytes apart. To simplify
1941 use in double-linked lists, each bin header acts as a malloc_chunk
1942 pointing to the real first node, if it exists (else pointing to
1943 itself). This avoids special-casing for headers. But to avoid
1944 waste, we allocate only the fd/bk pointers of bins, and then use
1945 repositioning tricks to treat these as the fields of a chunk.
1948 Treebins are pointers to the roots of trees holding a range of
1949 sizes. There are 2 equally spaced treebins for each power of two
1950 from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything larger.
1954 There is one bit map for small bins ("smallmap") and one for
1955 treebins ("treemap"). Each bin sets its bit when non-empty, and
1956 clears the bit when empty. Bit operations are then used to avoid
1957 bin-by-bin searching -- nearly all "search" is done without ever
1958 looking at bins that won't be selected. The bit maps
1959 conservatively use 32 bits per map word, even on a 64-bit system.
1960 For a good description of some of the bit-based techniques used
1961 here, see Henry S. Warren Jr's book "Hacker's Delight" (and
1962 supplement at http://hackersdelight.org/). Many of these are
1963 intended to reduce the branchiness of paths through malloc etc, as
1964 well as to reduce the number of memory locations read or written.
1967 A list of segments headed by an embedded malloc_segment record
1968 representing the initial space.
1970 Address check support
1971 The least_addr field is the least address ever obtained from
1972 MORECORE or MMAP. Attempted frees and reallocs of any address less
1973 than this are trapped (unless INSECURE is defined).
1976 A cross-check field that should always hold the same value as mparams.magic.
1979 Bits recording whether to use MMAP, locks, or contiguous MORECORE
1982 Each space keeps track of current and maximum system memory
1983 obtained via MORECORE or MMAP.
1986 If USE_LOCKS is defined, the "mutex" lock is acquired and released
1987 around every public call using this mspace.
1990 /* Bin types, widths and sizes */
1991 #define NSMALLBINS (32U)
1992 #define NTREEBINS (32U)
1993 #define SMALLBIN_SHIFT (3U)
1994 #define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT)
1995 #define TREEBIN_SHIFT (8U)
1996 #define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT)
1997 #define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE)
1998 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
2000 struct malloc_state {
2010 mchunkptr smallbins[(NSMALLBINS+1)*2];
2011 tbinptr treebins[NTREEBINS];
2013 size_t max_footprint;
2016 MLOCK_T mutex; /* locate lock among fields that rarely change */
2017 #endif /* USE_LOCKS */
2021 typedef struct malloc_state* mstate;
2023 /* ------------- Global malloc_state and malloc_params ------------------- */
2026 malloc_params holds global properties, including those that can be
2027 dynamically set using mallopt. There is a single instance, mparams,
2028 initialized in init_mparams.
2031 struct malloc_params {
2032   size_t magic;
2033   size_t page_size;
2034   size_t granularity;
2035   size_t mmap_threshold;
2036   size_t trim_threshold;
2037   flag_t default_mflags;
2038 };
2040 static struct malloc_params mparams;
2042 /* The global malloc_state used for all non-"mspace" calls */
2043 static struct malloc_state _gm_;
2045 #define is_global(M) ((M) == &_gm_)
2046 #define is_initialized(M) ((M)->top != 0)
2048 /* -------------------------- system alloc setup ------------------------- */
2050 /* Operations on mflags */
2052 #define use_lock(M) ((M)->mflags & USE_LOCK_BIT)
2053 #define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT)
2054 #define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT)
2056 #define use_mmap(M) ((M)->mflags & USE_MMAP_BIT)
2057 #define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT)
2058 #define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT)
2060 #define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT)
2061 #define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT)
2063 #define set_lock(M,L)\
2064 ((M)->mflags = (L)?\
2065 ((M)->mflags | USE_LOCK_BIT) :\
2066 ((M)->mflags & ~USE_LOCK_BIT))
2068 /* page-align a size */
2069 #define page_align(S)\
2070 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
2072 /* granularity-align a size */
2073 #define granularity_align(S)\
2074 (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
2076 #define is_page_aligned(S)\
2077 (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2078 #define is_granularity_aligned(S)\
2079 (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
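/*
  For example, with a 4096-byte page, page_align(1000) == 4096 and
  page_align(5000) == 8192; granularity_align works the same way with
  mparams.granularity (the mmap allocation unit) in place of the page size.
*/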
2081 /* True if segment S holds address A */
2082 #define segment_holds(S, A)\
2083 ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2085 /* Return segment holding given address */
2086 static msegmentptr segment_holding(mstate m, char* addr) {
2087   msegmentptr sp = &m->seg;
2088   for (;;) {
2089     if (addr >= sp->base && addr < sp->base + sp->size)
2090       return sp;
2091     if ((sp = sp->next) == 0)
2092       return 0;
2093   }
2094 }
2096 /* Return true if segment contains a segment link */
2097 static int has_segment_link(mstate m, msegmentptr ss) {
2098   msegmentptr sp = &m->seg;
2099   for (;;) {
2100     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2101       return 1;
2102     if ((sp = sp->next) == 0)
2103       return 0;
2104   }
2105 }
2107 #ifndef MORECORE_CANNOT_TRIM
2108 #define should_trim(M,s) ((s) > (M)->trim_check)
2109 #else /* MORECORE_CANNOT_TRIM */
2110 #define should_trim(M,s) (0)
2111 #endif /* MORECORE_CANNOT_TRIM */
2114 TOP_FOOT_SIZE is padding at the end of a segment, including space
2115 that may be needed to place segment records and fenceposts when new
2116 noncontiguous segments are added.
2118 #define TOP_FOOT_SIZE\
2119 (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
2122 /* ------------------------------- Hooks -------------------------------- */
2125 PREACTION should be defined to return 0 on success, and nonzero on
2126 failure. If you are not using locking, you can redefine these to do anything you like.
2132 /* Ensure locks are initialized */
2133 #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
2135 #define PREACTION(M) ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2136 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2137 #else /* USE_LOCKS */
2140 #define PREACTION(M) (0)
2141 #endif /* PREACTION */
2144 #define POSTACTION(M)
2145 #endif /* POSTACTION */
2147 #endif /* USE_LOCKS */
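/*
  Every public entry point wraps its body in this pair; as a sketch:

    if (!PREACTION(m)) {      // 0 on success: params initialized, lock held
      ...do the real work on m...
      POSTACTION(m);          // release the lock if one was taken
    }

  internal_mallinfo and internal_malloc_stats below follow exactly this shape.
*/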
2150 CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2151 USAGE_ERROR_ACTION is triggered on detected bad frees and
2152 reallocs. The argument p is an address that might have triggered the
2153 fault. It is ignored by the two predefined actions, but might be
2154 useful in custom actions that try to help diagnose errors.
2157 #if PROCEED_ON_ERROR
2159 /* A count of the number of corruption errors causing resets */
2160 int malloc_corruption_error_count;
2162 /* default corruption action */
2163 static void reset_on_error(mstate m);
2165 #define CORRUPTION_ERROR_ACTION(m) reset_on_error(m)
2166 #define USAGE_ERROR_ACTION(m, p)
2168 #else /* PROCEED_ON_ERROR */
2170 #ifndef CORRUPTION_ERROR_ACTION
2171 #define CORRUPTION_ERROR_ACTION(m) ABORT
2172 #endif /* CORRUPTION_ERROR_ACTION */
2174 #ifndef USAGE_ERROR_ACTION
2175 #define USAGE_ERROR_ACTION(m,p) ABORT
2176 #endif /* USAGE_ERROR_ACTION */
2178 #endif /* PROCEED_ON_ERROR */
2180 /* -------------------------- Debugging setup ---------------------------- */
2184 #define check_free_chunk(M,P)
2185 #define check_inuse_chunk(M,P)
2186 #define check_malloced_chunk(M,P,N)
2187 #define check_mmapped_chunk(M,P)
2188 #define check_malloc_state(M)
2189 #define check_top_chunk(M,P)
2192 #define check_free_chunk(M,P) do_check_free_chunk(M,P)
2193 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P)
2194 #define check_top_chunk(M,P) do_check_top_chunk(M,P)
2195 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2196 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P)
2197 #define check_malloc_state(M) do_check_malloc_state(M)
2199 static void do_check_any_chunk(mstate m, mchunkptr p);
2200 static void do_check_top_chunk(mstate m, mchunkptr p);
2201 static void do_check_mmapped_chunk(mstate m, mchunkptr p);
2202 static void do_check_inuse_chunk(mstate m, mchunkptr p);
2203 static void do_check_free_chunk(mstate m, mchunkptr p);
2204 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
2205 static void do_check_tree(mstate m, tchunkptr t);
2206 static void do_check_treebin(mstate m, bindex_t i);
2207 static void do_check_smallbin(mstate m, bindex_t i);
2208 static void do_check_malloc_state(mstate m);
2209 static int bin_find(mstate m, mchunkptr x);
2210 static size_t traverse_and_check(mstate m);
2213 /* ---------------------------- Indexing Bins ---------------------------- */
2215 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2216 #define small_index(s) ((s) >> SMALLBIN_SHIFT)
2217 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
2218 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE))
2220 /* addressing by index. See above about smallbin repositioning */
2221 #define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
2222 #define treebin_at(M,i) (&((M)->treebins[i]))
2224 /* assign tree index for size S to variable I */
2225 #if defined(__GNUC__) && defined(i386)
2226 #define compute_tree_index(S, I)\
2228 size_t X = S >> TREEBIN_SHIFT;\
2231 else if (X > 0xFFFF)\
2235 __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm" (X));\
2236 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2240 #define compute_tree_index(S, I)\
2242 size_t X = S >> TREEBIN_SHIFT;\
2245 else if (X > 0xFFFF)\
2248 unsigned int Y = (unsigned int)X;\
2249 unsigned int N = ((Y - 0x100) >> 16) & 8;\
2250 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2251 N += K;\
2252 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2253 K = 14 - N + ((Y <<= K) >> 15);\
2254 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2259 /* Bit representing maximum resolved size in a treebin at i */
2260 #define bit_for_tree_index(i) \
2261 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2263 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2264 #define leftshift_for_tree_index(i) \
2265 ((i == NTREEBINS-1)? 0 : \
2266 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2268 /* The size of the smallest chunk held in bin with index i */
2269 #define minsize_for_tree_index(i) \
2270 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \
2271 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
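/*
  For example, with TREEBIN_SHIFT == 8 the first few treebins cover:

    bin 0:  0x100 <= size < 0x180      bin 1:  0x180 <= size < 0x200
    bin 2:  0x200 <= size < 0x300      bin 3:  0x300 <= size < 0x400
    bin 4:  0x400 <= size < 0x600      bin 5:  0x600 <= size < 0x800

  so compute_tree_index(0x200, I) sets I to 2, and minsize_for_tree_index(2)
  == 0x200 -- two equally spaced bins per power of two, as described earlier.
*/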
2274 /* ------------------------ Operations on bin maps ----------------------- */
2276 /* bit corresponding to given index */
2277 #define idx2bit(i) ((binmap_t)(1) << (i))
2279 /* Mark/Clear bits with given index */
2280 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i))
2281 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i))
2282 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i))
2284 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i))
2285 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i))
2286 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i))
2288 /* index corresponding to given bit */
2290 #if defined(__GNUC__) && defined(i386)
2291 #define compute_bit2idx(X, I)\
2294 __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
2300 #define compute_bit2idx(X, I) I = ffs(X)-1
2302 #else /* USE_BUILTIN_FFS */
2303 #define compute_bit2idx(X, I)\
2305 unsigned int Y = X - 1;\
2306 unsigned int K = Y >> (16-4) & 16;\
2307 unsigned int N = K; Y >>= K;\
2308 N += K = Y >> (8-3) & 8; Y >>= K;\
2309 N += K = Y >> (4-2) & 4; Y >>= K;\
2310 N += K = Y >> (2-1) & 2; Y >>= K;\
2311 N += K = Y >> (1-0) & 1; Y >>= K;\
2312 I = (bindex_t)(N + Y);\
2314 #endif /* USE_BUILTIN_FFS */
2317 /* isolate the least set bit of a bitmap */
2318 #define least_bit(x) ((x) & -(x))
2320 /* mask with all bits to left of least bit of x on */
2321 #define left_bits(x) ((x<<1) | -(x<<1))
2323 /* mask with all bits to left of or equal to least bit of x on */
2324 #define same_or_left_bits(x) ((x) | -(x))
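/*
  A sketch (illustrative only) of how these masks drive bin selection: to find
  the first non-empty smallbin at index i or above, mask away the lower bins
  and take the least set bit of whatever remains:

    binmap_t candidates = m->smallmap & same_or_left_bits(idx2bit(i));
    if (candidates != 0) {
      binmap_t leastbit = least_bit(candidates);
      bindex_t j;
      compute_bit2idx(leastbit, j);
      // smallbin_at(m, j) is now the closest non-empty small bin >= i
    }

  No bin is ever examined unless its map bit says it is populated, which is
  what keeps the common search paths short and nearly branch-free.
*/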
2327 /* ----------------------- Runtime Check Support ------------------------- */
2330 For security, the main invariant is that malloc/free/etc never
2331 writes to a static address other than malloc_state, unless static
2332 malloc_state itself has been corrupted, which cannot occur via
2333 malloc (because of these checks). In essence this means that we
2334 believe all pointers, sizes, maps etc held in malloc_state, but
2335 check all of those linked or offsetted from other embedded data
2336 structures. These checks are interspersed with main code in a way
2337 that tends to minimize their run-time cost.
2339 When FOOTERS is defined, in addition to range checking, we also
2340 verify footer fields of inuse chunks, which can be used to guarantee
2341 that the mstate controlling malloc/free is intact. This is a
2342 streamlined version of the approach described by William Robertson
2343 et al in "Run-time Detection of Heap-based Overflows" LISA'03
2344 http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2345 of an inuse chunk holds the xor of its mstate and a random seed,
2346 that is checked upon calls to free() and realloc(). This is
2347 (probabilistically) unguessable from outside the program, but can be
2348 computed by any code successfully malloc'ing any chunk, so does not
2349 itself provide protection against code that has already broken
2350 security through some other means. Unlike Robertson et al, we
2351 always dynamically check addresses of all offset chunks (previous,
2352 next, etc). This turns out to be cheaper than relying on hashes.
2356 /* Check if address a is at least as high as any from MORECORE or MMAP */
2357 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
2358 /* Check if address of next chunk n is higher than base chunk p */
2359 #define ok_next(p, n) ((char*)(p) < (char*)(n))
2360 /* Check if p has its cinuse bit on */
2361 #define ok_cinuse(p) cinuse(p)
2362 /* Check if p has its pinuse bit on */
2363 #define ok_pinuse(p) pinuse(p)
2365 #else /* !INSECURE */
2366 #define ok_address(M, a) (1)
2367 #define ok_next(b, n) (1)
2368 #define ok_cinuse(p) (1)
2369 #define ok_pinuse(p) (1)
2370 #endif /* !INSECURE */
2372 #if (FOOTERS && !INSECURE)
2373 /* Check if (alleged) mstate m has expected magic field */
2374 #define ok_magic(M) ((M)->magic == mparams.magic)
2375 #else /* (FOOTERS && !INSECURE) */
2376 #define ok_magic(M) (1)
2377 #endif /* (FOOTERS && !INSECURE) */
2380 /* In gcc, use __builtin_expect to minimize impact of checks */
2382 #if defined(__GNUC__) && __GNUC__ >= 3
2383 #define RTCHECK(e) __builtin_expect(e, 1)
2385 #define RTCHECK(e) (e)
2387 #else /* !INSECURE */
2388 #define RTCHECK(e) (1)
2389 #endif /* !INSECURE */
2391 /* macros to set up inuse chunks with or without footers */
2395 #define mark_inuse_foot(M,p,s)
2397 /* Set cinuse bit and pinuse bit of next chunk */
2398 #define set_inuse(M,p,s)\
2399 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2400 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2402 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
2403 #define set_inuse_and_pinuse(M,p,s)\
2404 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2405 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
2407 /* Set size, cinuse and pinuse bit of this chunk */
2408 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2409 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
2413 /* Set foot of inuse chunk to be xor of mstate and seed */
2414 #define mark_inuse_foot(M,p,s)\
2415 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
2417 #define get_mstate_for(p)\
2418 ((mstate)(((mchunkptr)((char*)(p) +\
2419 (chunksize(p))))->prev_foot ^ mparams.magic))
2421 #define set_inuse(M,p,s)\
2422 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
2423 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
2424 mark_inuse_foot(M,p,s))
2426 #define set_inuse_and_pinuse(M,p,s)\
2427 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2428 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
2429 mark_inuse_foot(M,p,s))
2431 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
2432 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
2433 mark_inuse_foot(M, p, s))
2435 #endif /* !FOOTERS */
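/*
  To illustrate the FOOTERS scheme (a sketch, assuming FOOTERS and !INSECURE):
  mark_inuse_foot stores ((size_t)m ^ mparams.magic) just past each inuse
  chunk, so a later free can recover and validate the owning mstate before
  trusting anything else about the pointer:

    mchunkptr p = mem2chunk(mem);      // mem was returned by an earlier malloc
    mstate fm = get_mstate_for(p);     // undo the xor using mparams.magic
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);       // foreign or corrupted pointer
    }

  Code that cannot read mparams.magic cannot forge a footer that decodes to a
  valid mstate, which is the "probabilistically unguessable" property noted
  above.
*/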
2437 /* ---------------------------- setting mparams -------------------------- */
2439 /* Initialize mparams */
2440 static int init_mparams(void) {
2441 if (mparams.page_size == 0) {
2444 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
2445 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
2446 #if MORECORE_CONTIGUOUS
2447 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
2448 #else /* MORECORE_CONTIGUOUS */
2449 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
2450 #endif /* MORECORE_CONTIGUOUS */
2452 #if (FOOTERS && !INSECURE)
2456 unsigned char buf[sizeof(size_t)];
2457 /* Try to use /dev/urandom, else fall back on using time */
2458 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
2459 read(fd, buf, sizeof(buf)) == sizeof(buf)) {
2460 s = *((size_t *) buf);
2464 #endif /* USE_DEV_RANDOM */
2465 s = (size_t)(time(0) ^ (size_t)0x55555555U);
2467 s |= (size_t)8U; /* ensure nonzero */
2468 s &= ~(size_t)7U; /* improve chances of fault for bad values */
2471 #else /* (FOOTERS && !INSECURE) */
2472 s = (size_t)0x58585858U;
2473 #endif /* (FOOTERS && !INSECURE) */
2474 ACQUIRE_MAGIC_INIT_LOCK();
2475 if (mparams.magic == 0) {
2477 /* Set up lock for main malloc area */
2478 INITIAL_LOCK(&gm->mutex);
2479 gm->mflags = mparams.default_mflags;
2481 RELEASE_MAGIC_INIT_LOCK();
2484 mparams.page_size = malloc_getpagesize;
2485 mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
2486 DEFAULT_GRANULARITY : mparams.page_size);
2489 SYSTEM_INFO system_info;
2490 GetSystemInfo(&system_info);
2491 mparams.page_size = system_info.dwPageSize;
2492 mparams.granularity = system_info.dwAllocationGranularity;
2496 /* Sanity-check configuration:
2497 size_t must be unsigned and as wide as pointer type.
2498 ints must be at least 4 bytes.
2499 alignment must be at least 8.
2500 Alignment, min chunk size, and page size must all be powers of 2.
2502 if ((sizeof(size_t) != sizeof(char*)) ||
2503 (MAX_SIZE_T < MIN_CHUNK_SIZE) ||
2504 (sizeof(int) < 4) ||
2505 (MALLOC_ALIGNMENT < (size_t)8U) ||
2506 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
2507 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) ||
2508 ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
2509 ((mparams.page_size & (mparams.page_size-SIZE_T_ONE)) != 0))
2516 /* support for mallopt */
2517 static int change_mparam(int param_number, int value) {
2518 size_t val = (size_t)value;
2520 switch(param_number) {
2521 case M_TRIM_THRESHOLD:
2522 mparams.trim_threshold = val;
2525 if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
2526 mparams.granularity = val;
2531 case M_MMAP_THRESHOLD:
2532 mparams.mmap_threshold = val;
2541 /* ------------------------- Debugging Support --------------------------- */
2543 /* Check properties of any chunk, whether free, inuse, mmapped etc */
2544 static void do_check_any_chunk(mstate m, mchunkptr p) {
2545 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2546 assert(ok_address(m, p));
2549 /* Check properties of top chunk */
2550 static void do_check_top_chunk(mstate m, mchunkptr p) {
2551 msegmentptr sp = segment_holding(m, (char*)p);
2552 size_t sz = chunksize(p);
2554 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2555 assert(ok_address(m, p));
2556 assert(sz == m->topsize);
2558 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
2560 assert(!next_pinuse(p));
2563 /* Check properties of (inuse) mmapped chunks */
2564 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
2565 size_t sz = chunksize(p);
2566 size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
2567 assert(is_mmapped(p));
2568 assert(use_mmap(m));
2569 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
2570 assert(ok_address(m, p));
2571 assert(!is_small(sz));
2572 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
2573 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
2574 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
2577 /* Check properties of inuse chunks */
2578 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
2579 do_check_any_chunk(m, p);
2581 assert(next_pinuse(p));
2582 /* If not pinuse and not mmapped, previous chunk has OK offset */
2583 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
2585 do_check_mmapped_chunk(m, p);
2588 /* Check properties of free chunks */
2589 static void do_check_free_chunk(mstate m, mchunkptr p) {
2590 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2591 mchunkptr next = chunk_plus_offset(p, sz);
2592 do_check_any_chunk(m, p);
2594 assert(!next_pinuse(p));
2595 assert (!is_mmapped(p));
2596 if (p != m->dv && p != m->top) {
2597 if (sz >= MIN_CHUNK_SIZE) {
2598 assert((sz & CHUNK_ALIGN_MASK) == 0);
2599 assert(is_aligned(chunk2mem(p)));
2600 assert(next->prev_foot == sz);
2602 assert (next == m->top || cinuse(next));
2603 assert(p->fd->bk == p);
2604 assert(p->bk->fd == p);
2606 else /* markers are always of size SIZE_T_SIZE */
2607 assert(sz == SIZE_T_SIZE);
2611 /* Check properties of malloced chunks at the point they are malloced */
2612 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
2614 mchunkptr p = mem2chunk(mem);
2615 size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
2616 do_check_inuse_chunk(m, p);
2617 assert((sz & CHUNK_ALIGN_MASK) == 0);
2618 assert(sz >= MIN_CHUNK_SIZE);
2620 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
2621 assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
2625 /* Check a tree and its subtrees. */
2626 static void do_check_tree(mstate m, tchunkptr t) {
2629 bindex_t tindex = t->index;
2630 size_t tsize = chunksize(t);
2632 compute_tree_index(tsize, idx);
2633 assert(tindex == idx);
2634 assert(tsize >= MIN_LARGE_SIZE);
2635 assert(tsize >= minsize_for_tree_index(idx));
2636 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
2638 do { /* traverse through chain of same-sized nodes */
2639 do_check_any_chunk(m, ((mchunkptr)u));
2640 assert(u->index == tindex);
2641 assert(chunksize(u) == tsize);
2643 assert(!next_pinuse(u));
2644 assert(u->fd->bk == u);
2645 assert(u->bk->fd == u);
2646 if (u->parent == 0) {
2647 assert(u->child[0] == 0);
2648 assert(u->child[1] == 0);
2651 assert(head == 0); /* only one node on chain has parent */
2653 assert(u->parent != u);
2654 assert (u->parent->child[0] == u ||
2655 u->parent->child[1] == u ||
2656 *((tbinptr*)(u->parent)) == u);
2657 if (u->child[0] != 0) {
2658 assert(u->child[0]->parent == u);
2659 assert(u->child[0] != u);
2660 do_check_tree(m, u->child[0]);
2662 if (u->child[1] != 0) {
2663 assert(u->child[1]->parent == u);
2664 assert(u->child[1] != u);
2665 do_check_tree(m, u->child[1]);
2667 if (u->child[0] != 0 && u->child[1] != 0) {
2668 assert(chunksize(u->child[0]) < chunksize(u->child[1]));
2676 /* Check all the chunks in a treebin. */
2677 static void do_check_treebin(mstate m, bindex_t i) {
2678 tbinptr* tb = treebin_at(m, i);
2680 int empty = (m->treemap & (1U << i)) == 0;
2684 do_check_tree(m, t);
2687 /* Check all the chunks in a smallbin. */
2688 static void do_check_smallbin(mstate m, bindex_t i) {
2689 sbinptr b = smallbin_at(m, i);
2690 mchunkptr p = b->bk;
2691 unsigned int empty = (m->smallmap & (1U << i)) == 0;
2695 for (; p != b; p = p->bk) {
2696 size_t size = chunksize(p);
2698 /* each chunk claims to be free */
2699 do_check_free_chunk(m, p);
2700 /* chunk belongs in bin */
2701 assert(small_index(size) == i);
2702 assert(p->bk == b || chunksize(p->bk) == chunksize(p));
2703 /* chunk is followed by an inuse chunk */
2705 if (q->head != FENCEPOST_HEAD)
2706 do_check_inuse_chunk(m, q);
2711 /* Find x in a bin. Used in other check functions. */
2712 static int bin_find(mstate m, mchunkptr x) {
2713 size_t size = chunksize(x);
2714 if (is_small(size)) {
2715 bindex_t sidx = small_index(size);
2716 sbinptr b = smallbin_at(m, sidx);
2717 if (smallmap_is_marked(m, sidx)) {
2722 } while ((p = p->fd) != b);
2727 compute_tree_index(size, tidx);
2728 if (treemap_is_marked(m, tidx)) {
2729 tchunkptr t = *treebin_at(m, tidx);
2730 size_t sizebits = size << leftshift_for_tree_index(tidx);
2731 while (t != 0 && chunksize(t) != size) {
2732 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
2738 if (u == (tchunkptr)x)
2740 } while ((u = u->fd) != t);
2747 /* Traverse each chunk and check it; return total */
2748 static size_t traverse_and_check(mstate m) {
2750 if (is_initialized(m)) {
2751 msegmentptr s = &m->seg;
2752 sum += m->topsize + TOP_FOOT_SIZE;
2754 mchunkptr q = align_as_chunk(s->base);
2755 mchunkptr lastq = 0;
2757 while (segment_holds(s, q) &&
2758 q != m->top && q->head != FENCEPOST_HEAD) {
2759 sum += chunksize(q);
2761 assert(!bin_find(m, q));
2762 do_check_inuse_chunk(m, q);
2765 assert(q == m->dv || bin_find(m, q));
2766 assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
2767 do_check_free_chunk(m, q);
2778 /* Check all properties of malloc_state. */
2779 static void do_check_malloc_state(mstate m) {
2783 for (i = 0; i < NSMALLBINS; ++i)
2784 do_check_smallbin(m, i);
2785 for (i = 0; i < NTREEBINS; ++i)
2786 do_check_treebin(m, i);
2788 if (m->dvsize != 0) { /* check dv chunk */
2789 do_check_any_chunk(m, m->dv);
2790 assert(m->dvsize == chunksize(m->dv));
2791 assert(m->dvsize >= MIN_CHUNK_SIZE);
2792 assert(bin_find(m, m->dv) == 0);
2795 if (m->top != 0) { /* check top chunk */
2796 do_check_top_chunk(m, m->top);
2797 assert(m->topsize == chunksize(m->top));
2798 assert(m->topsize > 0);
2799 assert(bin_find(m, m->top) == 0);
2802 total = traverse_and_check(m);
2803 assert(total <= m->footprint);
2804 assert(m->footprint <= m->max_footprint);
2808 /* ----------------------------- statistics ------------------------------ */
2811 static struct mallinfo internal_mallinfo(mstate m) {
2812 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
2813 if (!PREACTION(m)) {
2814 check_malloc_state(m);
2815 if (is_initialized(m)) {
2816 size_t nfree = SIZE_T_ONE; /* top always free */
2817 size_t mfree = m->topsize + TOP_FOOT_SIZE;
2819 msegmentptr s = &m->seg;
2821 mchunkptr q = align_as_chunk(s->base);
2822 while (segment_holds(s, q) &&
2823 q != m->top && q->head != FENCEPOST_HEAD) {
2824 size_t sz = chunksize(q);
2837 nm.hblkhd = m->footprint - sum;
2838 nm.usmblks = m->max_footprint;
2839 nm.uordblks = m->footprint - mfree;
2840 nm.fordblks = mfree;
2841 nm.keepcost = m->topsize;
2848 #endif /* !NO_MALLINFO */
2851 static void internal_malloc_stats(mstate m) {
2852 if (!PREACTION(m)) {
2856 check_malloc_state(m);
2857 if (is_initialized(m)) {
2858 msegmentptr s = &m->seg;
2859 maxfp = m->max_footprint;
2861 used = fp - (m->topsize + TOP_FOOT_SIZE);
2864 mchunkptr q = align_as_chunk(s->base);
2865 while (segment_holds(s, q) &&
2866 q != m->top && q->head != FENCEPOST_HEAD) {
2868 used -= chunksize(q);
2875 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
2876 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp));
2877 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used));
2884 /* ----------------------- Operations on smallbins ----------------------- */
2887 Various forms of linking and unlinking are defined as macros. Even
2888 the ones for trees, which are very long but have very short typical
2889 paths. This is ugly but reduces reliance on inlining support of compilers.
2893 /* Link a free chunk into a smallbin */
2894 #define insert_small_chunk(M, P, S) {\
2895 bindex_t I = small_index(S);\
2896 mchunkptr B = smallbin_at(M, I);\
2898 assert(S >= MIN_CHUNK_SIZE);\
2899 if (!smallmap_is_marked(M, I))\
2900 mark_smallmap(M, I);\
2901 else if (RTCHECK(ok_address(M, B->fd)))\
2904 CORRUPTION_ERROR_ACTION(M);\
2912 /* Unlink a chunk from a smallbin */
2913 #define unlink_small_chunk(M, P, S) {\
2914 mchunkptr F = P->fd;\
2915 mchunkptr B = P->bk;\
2916 bindex_t I = small_index(S);\
2919 assert(chunksize(P) == small_index2size(I));\
2921 clear_smallmap(M, I);\
2922 else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
2923 (B == smallbin_at(M,I) || ok_address(M, B)))) {\
2928 CORRUPTION_ERROR_ACTION(M);\
2932 /* Unlink the first chunk from a smallbin */
2933 #define unlink_first_small_chunk(M, B, P, I) {\
2934 mchunkptr F = P->fd;\
2937 assert(chunksize(P) == small_index2size(I));\
2939 clear_smallmap(M, I);\
2940 else if (RTCHECK(ok_address(M, F))) {\
2945 CORRUPTION_ERROR_ACTION(M);\
2949 /* Replace dv node, binning the old one */
2950 /* Used only when dvsize known to be small */
2951 #define replace_dv(M, P, S) {\
2952 size_t DVS = M->dvsize;\
2954 mchunkptr DV = M->dv;\
2955 assert(is_small(DVS));\
2956 insert_small_chunk(M, DV, DVS);\
2962 /* ------------------------- Operations on trees ------------------------- */
2964 /* Insert chunk into tree */
2965 #define insert_large_chunk(M, X, S) {\
2968 compute_tree_index(S, I);\
2969 H = treebin_at(M, I);\
2971 X->child[0] = X->child[1] = 0;\
2972 if (!treemap_is_marked(M, I)) {\
2973 mark_treemap(M, I);\
2975 X->parent = (tchunkptr)H;\
2980 size_t K = S << leftshift_for_tree_index(I);\
2982 if (chunksize(T) != S) {\
2983 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
2987 else if (RTCHECK(ok_address(M, C))) {\
2994 CORRUPTION_ERROR_ACTION(M);\
2999 tchunkptr F = T->fd;\
3000 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3008 CORRUPTION_ERROR_ACTION(M);\
3019 1. If x is a chained node, unlink it from its same-sized fd/bk links
3020 and choose its bk node as its replacement.
3021 2. If x was the last node of its size, but not a leaf node, it must
3022 be replaced with a leaf node (not merely one with an open left or
3023 right), to make sure that lefts and rights of descendants
3024 correspond properly to bit masks. We use the rightmost descendant
3025 of x. We could use any other leaf, but this is easy to locate and
3026 tends to counteract removal of leftmosts elsewhere, and so keeps
3027 paths shorter than minimally guaranteed. This doesn't loop much
3028 because on average a node in a tree is near the bottom.
3029 3. If x is the base of a chain (i.e., has parent links) relink
3030 x's parent and children to x's replacement (or null if none).
3033 #define unlink_large_chunk(M, X) {\
3034 tchunkptr XP = X->parent;\
3037 tchunkptr F = X->fd;\
3039 if (RTCHECK(ok_address(M, F))) {\
3044 CORRUPTION_ERROR_ACTION(M);\
3049 if (((R = *(RP = &(X->child[1]))) != 0) ||\
3050 ((R = *(RP = &(X->child[0]))) != 0)) {\
3052 while ((*(CP = &(R->child[1])) != 0) ||\
3053 (*(CP = &(R->child[0])) != 0)) {\
3056 if (RTCHECK(ok_address(M, RP)))\
3059 CORRUPTION_ERROR_ACTION(M);\
3064 tbinptr* H = treebin_at(M, X->index);\
3066 if ((*H = R) == 0) \
3067 clear_treemap(M, X->index);\
3069 else if (RTCHECK(ok_address(M, XP))) {\
3070 if (XP->child[0] == X) \
3076 CORRUPTION_ERROR_ACTION(M);\
3078 if (RTCHECK(ok_address(M, R))) {\
3081 if ((C0 = X->child[0]) != 0) {\
3082 if (RTCHECK(ok_address(M, C0))) {\
3087 CORRUPTION_ERROR_ACTION(M);\
3089 if ((C1 = X->child[1]) != 0) {\
3090 if (RTCHECK(ok_address(M, C1))) {\
3095 CORRUPTION_ERROR_ACTION(M);\
3099 CORRUPTION_ERROR_ACTION(M);\
3104 /* Relays to large vs small bin operations */
3106 #define insert_chunk(M, P, S)\
3107 if (is_small(S)) insert_small_chunk(M, P, S)\
3108 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3110 #define unlink_chunk(M, P, S)\
3111 if (is_small(S)) unlink_small_chunk(M, P, S)\
3112 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3115 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3118 #define internal_malloc(m, b) mspace_malloc(m, b)
3119 #define internal_free(m, mem) mspace_free(m,mem);
3120 #else /* ONLY_MSPACES */
3122 #define internal_malloc(m, b)\
3123 (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3124 #define internal_free(m, mem)\
3125 if (m == gm) dlfree(mem); else mspace_free(m,mem);
3127 #define internal_malloc(m, b) dlmalloc(b)
3128 #define internal_free(m, mem) dlfree(mem)
3129 #endif /* MSPACES */
3130 #endif /* ONLY_MSPACES */
3132 /* ----------------------- Direct-mmapping chunks ----------------------- */
3135 Directly mmapped chunks are set up with an offset to the start of
3136 the mmapped region stored in the prev_foot field of the chunk. This
3137 allows reconstruction of the required argument to MUNMAP when freed,
3138 and also allows adjustment of the returned chunk to meet alignment
3139 requirements (especially in memalign). There is also enough space
3140 allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3141 the PINUSE bit so frees can be checked.
3144 /* Malloc using mmap */
3145 static void* mmap_alloc(mstate m, size_t nb) {
3146 size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3147 if (mmsize > nb) { /* Check for wrap around 0 */
3148 char* mm = (char*)(DIRECT_MMAP(mmsize));
3150 size_t offset = align_offset(chunk2mem(mm));
3151 size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3152 mchunkptr p = (mchunkptr)(mm + offset);
3153 p->prev_foot = offset | IS_MMAPPED_BIT;
3154 (p)->head = (psize|CINUSE_BIT);
3155 mark_inuse_foot(m, p, psize);
3156 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3157 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3159 if (mm < m->least_addr)
3161 if ((m->footprint += mmsize) > m->max_footprint)
3162 m->max_footprint = m->footprint;
3163 assert(is_aligned(chunk2mem(p)));
3164 check_mmapped_chunk(m, p);
3165 return chunk2mem(p);
3173 /* Realloc using mmap */
3174 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3175 size_t oldsize = chunksize(oldp);
3176 if (is_small(nb)) /* Can't shrink mmap regions below small size */
3178 /* Keep old chunk if big enough but not too big */
3179 if (oldsize >= nb + SIZE_T_SIZE &&
3180 (oldsize - nb) <= (mparams.granularity << 1))
3183 size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3184 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3185 size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
3187 char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3188 oldmmsize, newmmsize, 1);
3190 mchunkptr newp = (mchunkptr)(cp + offset);
3191 size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3192 newp->head = (psize|CINUSE_BIT);
3193 mark_inuse_foot(m, newp, psize);
3194 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3195 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3197 if (cp < m->least_addr)
3199 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3200 m->max_footprint = m->footprint;
3201 check_mmapped_chunk(m, newp);
3210 /* -------------------------- mspace management -------------------------- */
3212 /* Initialize top chunk and its size */
3213 static void init_top(mstate m, mchunkptr p, size_t psize) {
3214 /* Ensure alignment */
3215 size_t offset = align_offset(chunk2mem(p));
3216 p = (mchunkptr)((char*)p + offset);
3221 p->head = psize | PINUSE_BIT;
3222 /* set size of fake trailing chunk holding overhead space only once */
3223 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3224 m->trim_check = mparams.trim_threshold; /* reset on each update */
3227 /* Initialize bins for a new mstate that is otherwise zeroed out */
3228 static void init_bins(mstate m) {
3229 /* Establish circular links for smallbins */
3231 for (i = 0; i < NSMALLBINS; ++i) {
3232 sbinptr bin = smallbin_at(m,i);
3233 bin->fd = bin->bk = bin;
3237 #if PROCEED_ON_ERROR
3239 /* default corruption action */
3240 static void reset_on_error(mstate m) {
3242 ++malloc_corruption_error_count;
3243 /* Reinitialize fields to forget about all memory */
3244 m->smallbins = m->treebins = 0;
3245 m->dvsize = m->topsize = 0;
3250 for (i = 0; i < NTREEBINS; ++i)
3251 *treebin_at(m, i) = 0;
3254 #endif /* PROCEED_ON_ERROR */
3256 /* Allocate chunk and prepend remainder with chunk in successor base. */
3257 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3259 mchunkptr p = align_as_chunk(newbase);
3260 mchunkptr oldfirst = align_as_chunk(oldbase);
3261 size_t psize = (char*)oldfirst - (char*)p;
3262 mchunkptr q = chunk_plus_offset(p, nb);
3263 size_t qsize = psize - nb;
3264 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3266 assert((char*)oldfirst > (char*)q);
3267 assert(pinuse(oldfirst));
3268 assert(qsize >= MIN_CHUNK_SIZE);
3270 /* consolidate remainder with first chunk of old base */
3271 if (oldfirst == m->top) {
3272 size_t tsize = m->topsize += qsize;
3274 q->head = tsize | PINUSE_BIT;
3275 check_top_chunk(m, q);
3277 else if (oldfirst == m->dv) {
3278 size_t dsize = m->dvsize += qsize;
3280 set_size_and_pinuse_of_free_chunk(q, dsize);
3283 if (!cinuse(oldfirst)) {
3284 size_t nsize = chunksize(oldfirst);
3285 unlink_chunk(m, oldfirst, nsize);
3286 oldfirst = chunk_plus_offset(oldfirst, nsize);
3289 set_free_with_pinuse(q, qsize, oldfirst);
3290 insert_chunk(m, q, qsize);
3291 check_free_chunk(m, q);
3294 check_malloced_chunk(m, chunk2mem(p), nb);
3295 return chunk2mem(p);
3299 /* Add a segment to hold a new noncontiguous region */
3300 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3301 /* Determine locations and sizes of segment, fenceposts, old top */
3302 char* old_top = (char*)m->top;
3303 msegmentptr oldsp = segment_holding(m, old_top);
3304 char* old_end = oldsp->base + oldsp->size;
3305 size_t ssize = pad_request(sizeof(struct malloc_segment));
3306 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3307 size_t offset = align_offset(chunk2mem(rawsp));
3308 char* asp = rawsp + offset;
3309 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3310 mchunkptr sp = (mchunkptr)csp;
3311 msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3312 mchunkptr tnext = chunk_plus_offset(sp, ssize);
3313 mchunkptr p = tnext;
3316 /* reset top to new space */
3317 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3319 /* Set up segment record */
3320 assert(is_aligned(ss));
3321 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3322 *ss = m->seg; /* Push current record */
3323 m->seg.base = tbase;
3324 m->seg.size = tsize;
3325 m->seg.sflags = mmapped;
3328 /* Insert trailing fenceposts */
3330 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3331 p->head = FENCEPOST_HEAD;
3333 if ((char*)(&(nextp->head)) < old_end)
3338 assert(nfences >= 2);
3340 /* Insert the rest of old top into a bin as an ordinary free chunk */
3341 if (csp != old_top) {
3342 mchunkptr q = (mchunkptr)old_top;
3343 size_t psize = csp - old_top;
3344 mchunkptr tn = chunk_plus_offset(q, psize);
3345 set_free_with_pinuse(q, psize, tn);
3346 insert_chunk(m, q, psize);
3349 check_top_chunk(m, m->top);
3352 /* -------------------------- System allocation -------------------------- */
3354 /* Get memory from system using MORECORE or MMAP */
3355 static void* sys_alloc(mstate m, size_t nb) {
3356 char* tbase = CMFAIL;
3358 flag_t mmap_flag = 0;
3362 /* Directly map large chunks */
3363 if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3364 void* mem = mmap_alloc(m, nb);
3370 Try getting memory in any of three ways (in most-preferred to
3371 least-preferred order):
3372 1. A call to MORECORE that can normally contiguously extend memory.
3373 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3374 main space is mmapped or a previous contiguous call failed)
3375 2. A call to MMAP new space (disabled if not HAVE_MMAP).
3376 Note that under the default settings, if MORECORE is unable to
3377 fulfill a request, and HAVE_MMAP is true, then mmap is
3378 used as a noncontiguous system allocator. This is a useful backup
3379 strategy for systems with holes in address spaces -- in this case
3380 sbrk cannot contiguously expand the heap, but mmap may be able to map noncontiguous space.
3382 3. A call to MORECORE that cannot usually contiguously extend memory.
3383 (disabled if not HAVE_MORECORE)
3386 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3388 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3390 ACQUIRE_MORECORE_LOCK();
3392 if (ss == 0) { /* First time through or recovery */
3393 char* base = (char*)CALL_MORECORE(0);
3394 if (base != CMFAIL) {
3395 asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3396 /* Adjust to end on a page boundary */
3397 if (!is_page_aligned(base))
3398 asize += (page_align((size_t)base) - (size_t)base);
3399 /* Can't call MORECORE if size is negative when treated as signed */
3400 if (asize < HALF_MAX_SIZE_T &&
3401 (br = (char*)(CALL_MORECORE(asize))) == base) {
3408 /* Subtract out existing available top space from MORECORE request. */
3409 asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
3410 /* Use mem here only if it did contiguously extend old space */
3411 if (asize < HALF_MAX_SIZE_T &&
3412 (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3418 if (tbase == CMFAIL) { /* Cope with partial failure */
3419 if (br != CMFAIL) { /* Try to use/extend the space we did get */
3420 if (asize < HALF_MAX_SIZE_T &&
3421 asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
3422 size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
3423 if (esize < HALF_MAX_SIZE_T) {
3424 char* end = (char*)CALL_MORECORE(esize);
3427 else { /* Can't use; try to release */
3428 CALL_MORECORE(-asize);
3434 if (br != CMFAIL) { /* Use the space we did get */
3439 disable_contiguous(m); /* Don't try contiguous path in the future */
3442 RELEASE_MORECORE_LOCK();
3445 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */
3446 size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
3447 size_t rsize = granularity_align(req);
3448 if (rsize > nb) { /* Fail if wraps around zero */
3449 char* mp = (char*)(CALL_MMAP(rsize));
3453 mmap_flag = IS_MMAPPED_BIT;
3458 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
3459 size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3460 if (asize < HALF_MAX_SIZE_T) {
3463 ACQUIRE_MORECORE_LOCK();
3464 br = (char*)(CALL_MORECORE(asize));
3465 end = (char*)(CALL_MORECORE(0));
3466 RELEASE_MORECORE_LOCK();
3467 if (br != CMFAIL && end != CMFAIL && br < end) {
3468 size_t ssize = end - br;
3469 if (ssize > nb + TOP_FOOT_SIZE) {
3477 if (tbase != CMFAIL) {
3479 if ((m->footprint += tsize) > m->max_footprint)
3480 m->max_footprint = m->footprint;
3482 if (!is_initialized(m)) { /* first-time initialization */
3483 m->seg.base = m->least_addr = tbase;
3484 m->seg.size = tsize;
3485 m->seg.sflags = mmap_flag;
3486 m->magic = mparams.magic;
3489 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3491 /* Offset top by embedded malloc_state */
3492 mchunkptr mn = next_chunk(mem2chunk(m));
3493 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
3498 /* Try to merge with an existing segment */
3499 msegmentptr sp = &m->seg;
3500 while (sp != 0 && tbase != sp->base + sp->size)
3503 !is_extern_segment(sp) &&
3504 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
3505 segment_holds(sp, m->top)) { /* append */
3507 init_top(m, m->top, m->topsize + tsize);
3510 if (tbase < m->least_addr)
3511 m->least_addr = tbase;
3513 while (sp != 0 && sp->base != tbase + tsize)
3516 !is_extern_segment(sp) &&
3517 (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
3518 char* oldbase = sp->base;
3521 return prepend_alloc(m, tbase, oldbase, nb);
3524 add_segment(m, tbase, tsize, mmap_flag);
3528 if (nb < m->topsize) { /* Allocate from new or extended top space */
3529 size_t rsize = m->topsize -= nb;
3530 mchunkptr p = m->top;
3531 mchunkptr r = m->top = chunk_plus_offset(p, nb);
3532 r->head = rsize | PINUSE_BIT;
3533 set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3534 check_top_chunk(m, m->top);
3535 check_malloced_chunk(m, chunk2mem(p), nb);
3536 return chunk2mem(p);
3540 MALLOC_FAILURE_ACTION;
3544 /* ----------------------- system deallocation -------------------------- */
3546 /* Unmap and unlink any mmapped segments that don't contain used chunks */
3547 static size_t release_unused_segments(mstate m) {
3548 size_t released = 0;
3549 msegmentptr pred = &m->seg;
3550 msegmentptr sp = pred->next;
3552 char* base = sp->base;
3553 size_t size = sp->size;
3554 msegmentptr next = sp->next;
3555 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
3556 mchunkptr p = align_as_chunk(base);
3557 size_t psize = chunksize(p);
3558 /* Can unmap if first chunk holds entire segment and not pinned */
3559 if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
3560 tchunkptr tp = (tchunkptr)p;
3561 assert(segment_holds(sp, (char*)sp));
3567 unlink_large_chunk(m, tp);
3569 if (CALL_MUNMAP(base, size) == 0) {
3571 m->footprint -= size;
3572 /* unlink obsoleted record */
3576 else { /* back out if cannot unmap */
3577 insert_large_chunk(m, tp, psize);
3587 static int sys_trim(mstate m, size_t pad) {
3588 size_t released = 0;
3589 if (pad < MAX_REQUEST && is_initialized(m)) {
3590 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
3592 if (m->topsize > pad) {
3593 /* Shrink top space in granularity-size units, keeping at least one */
3594 size_t unit = mparams.granularity;
3595 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
3597 msegmentptr sp = segment_holding(m, (char*)m->top);
3599 if (!is_extern_segment(sp)) {
3600 if (is_mmapped_segment(sp)) {
3602 sp->size >= extra &&
3603 !has_segment_link(m, sp)) { /* can't shrink if pinned */
3604 size_t newsize = sp->size - extra;
3605 /* Prefer mremap, fall back to munmap */
3606 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
3607 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
3612 else if (HAVE_MORECORE) {
3613 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
3614 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
3615 ACQUIRE_MORECORE_LOCK();
3617 /* Make sure end of memory is where we last set it. */
3618 char* old_br = (char*)(CALL_MORECORE(0));
3619 if (old_br == sp->base + sp->size) {
3620 char* rel_br = (char*)(CALL_MORECORE(-extra));
3621 char* new_br = (char*)(CALL_MORECORE(0));
3622 if (rel_br != CMFAIL && new_br < old_br)
3623 released = old_br - new_br;
3626 RELEASE_MORECORE_LOCK();
3630 if (released != 0) {
3631 sp->size -= released;
3632 m->footprint -= released;
3633 init_top(m, m->top, m->topsize - released);
3634 check_top_chunk(m, m->top);
3638 /* Unmap any unused mmapped segments */
3640 released += release_unused_segments(m);
3642 /* On failure, disable autotrim to avoid repeated failed future calls */
3644 m->trim_check = MAX_SIZE_T;
3647 return (released != 0)? 1 : 0;
3650 /* ---------------------------- malloc support --------------------------- */
3652 /* allocate a large request from the best fitting chunk in a treebin */
3653 static void* tmalloc_large(mstate m, size_t nb) {
3655 size_t rsize = -nb; /* Unsigned negation */
3658 compute_tree_index(nb, idx);
3660 if ((t = *treebin_at(m, idx)) != 0) {
3661 /* Traverse tree for this bin looking for node with size == nb */
3662 size_t sizebits = nb << leftshift_for_tree_index(idx);
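/* sizebits is nb shifted so that, at each level of the descent below,
   its top bit selects the left or right child to follow. */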
3663 tchunkptr rst = 0; /* The deepest untaken right subtree */
3666 size_t trem = chunksize(t) - nb;
3669 if ((rsize = trem) == 0)
3673 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3674 if (rt != 0 && rt != t)
3677 t = rst; /* set t to least subtree holding sizes > nb */
3684 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3685 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3686 if (leftbits != 0) {
3688 binmap_t leastbit = least_bit(leftbits);
3689 compute_bit2idx(leastbit, i);
3690 t = *treebin_at(m, i);
3694 while (t != 0) { /* find smallest of tree or subtree */
3695 size_t trem = chunksize(t) - nb;
3700 t = leftmost_child(t);
3703 /* If dv is a better fit, return 0 so malloc will use it */
3704 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3705 if (RTCHECK(ok_address(m, v))) { /* split */
3706 mchunkptr r = chunk_plus_offset(v, nb);
3707 assert(chunksize(v) == rsize + nb);
3708 if (RTCHECK(ok_next(v, r))) {
3709 unlink_large_chunk(m, v);
3710 if (rsize < MIN_CHUNK_SIZE)
3711 set_inuse_and_pinuse(m, v, (rsize + nb));
3713 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3714 set_size_and_pinuse_of_free_chunk(r, rsize);
3715 insert_chunk(m, r, rsize);
3717 return chunk2mem(v);
3720 CORRUPTION_ERROR_ACTION(m);
3725 /* allocate a small request from the best fitting chunk in a treebin */
3726 static void* tmalloc_small(mstate m, size_t nb) {
3730 binmap_t leastbit = least_bit(m->treemap);
3731 compute_bit2idx(leastbit, i);
3733 v = t = *treebin_at(m, i);
3734 rsize = chunksize(t) - nb;
3736 while ((t = leftmost_child(t)) != 0) {
3737 size_t trem = chunksize(t) - nb;
3744 if (RTCHECK(ok_address(m, v))) {
3745 mchunkptr r = chunk_plus_offset(v, nb);
3746 assert(chunksize(v) == rsize + nb);
3747 if (RTCHECK(ok_next(v, r))) {
3748 unlink_large_chunk(m, v);
3749 if (rsize < MIN_CHUNK_SIZE)
3750 set_inuse_and_pinuse(m, v, (rsize + nb));
3752 set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3753 set_size_and_pinuse_of_free_chunk(r, rsize);
3754 replace_dv(m, r, rsize);
3756 return chunk2mem(v);
3760 CORRUPTION_ERROR_ACTION(m);
3764 /* --------------------------- realloc support --------------------------- */
3768 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3769 if (bytes >= MAX_REQUEST) {
3770 MALLOC_FAILURE_ACTION;
3773 if (!PREACTION(m)) {
3774 mchunkptr oldp = mem2chunk(oldmem);
3775 size_t oldsize = chunksize(oldp);
3776 mchunkptr next = chunk_plus_offset(oldp, oldsize);
3780 /* Try to either shrink or extend into top. Else malloc-copy-free */
3782 if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3783 ok_next(oldp, next) && ok_pinuse(next))) {
3784 size_t nb = request2size(bytes);
3785 if (is_mmapped(oldp))
3786 newp = mmap_resize(m, oldp, nb);
3787 else if (oldsize >= nb) { /* already big enough */
3788 size_t rsize = oldsize - nb;
3790 if (rsize >= MIN_CHUNK_SIZE) {
3791 mchunkptr remainder = chunk_plus_offset(newp, nb);
3792 set_inuse(m, newp, nb);
3793 set_inuse(m, remainder, rsize);
3794 extra = chunk2mem(remainder);
3797 else if (next == m->top && oldsize + m->topsize > nb) {
3798 /* Expand into top */
3799 size_t newsize = oldsize + m->topsize;
3800 size_t newtopsize = newsize - nb;
3801 mchunkptr newtop = chunk_plus_offset(oldp, nb);
3802 set_inuse(m, oldp, nb);
3803 newtop->head = newtopsize |PINUSE_BIT;
3805 m->topsize = newtopsize;
3810 USAGE_ERROR_ACTION(m, oldmem);
3819 internal_free(m, extra);
3821 check_inuse_chunk(m, newp);
3822 return chunk2mem(newp);
3825 void* newmem = internal_malloc(m, bytes);
3827 size_t oc = oldsize - overhead_for(oldp);
3828 memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3829 internal_free(m, oldmem);
3839 /* --------------------------- memalign support -------------------------- */
3841 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3842 if (alignment <= MALLOC_ALIGNMENT) /* Can just use malloc */
3843 return internal_malloc(m, bytes);
3844 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3845 alignment = MIN_CHUNK_SIZE;
3846 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3847 size_t a = MALLOC_ALIGNMENT << 1;
3848 while (a < alignment) a <<= 1;
3852 if (bytes >= MAX_REQUEST - alignment) {
3853 if (m != 0) { /* Test isn't needed but avoids compiler warning */
3854 MALLOC_FAILURE_ACTION;
3858 size_t nb = request2size(bytes);
3859 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3860 char* mem = (char*)internal_malloc(m, req);
3864 mchunkptr p = mem2chunk(mem);
3866 if (PREACTION(m)) return 0;
3867 if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3869 Find an aligned spot inside chunk. Since we need to give
3870 back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3871 the first calculation places us at a spot with less than
3872 MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3873 We've allocated enough total room so that this is always
3876 char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3880 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3882 mchunkptr newp = (mchunkptr)pos;
3883 size_t leadsize = pos - (char*)(p);
3884 size_t newsize = chunksize(p) - leadsize;
3886 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3887 newp->prev_foot = p->prev_foot + leadsize;
3888 newp->head = (newsize|CINUSE_BIT);
3890 else { /* Otherwise, give back leader, use the rest */
3891 set_inuse(m, newp, newsize);
3892 set_inuse(m, p, leadsize);
3893 leader = chunk2mem(p);
3898 /* Give back spare room at the end */
3899 if (!is_mmapped(p)) {
3900 size_t size = chunksize(p);
3901 if (size > nb + MIN_CHUNK_SIZE) {
3902 size_t remainder_size = size - nb;
3903 mchunkptr remainder = chunk_plus_offset(p, nb);
3904 set_inuse(m, p, nb);
3905 set_inuse(m, remainder, remainder_size);
3906 trailer = chunk2mem(remainder);
3910 assert (chunksize(p) >= nb);
3911 assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3912 check_inuse_chunk(m, p);
3915 internal_free(m, leader);
3918 internal_free(m, trailer);
3920 return chunk2mem(p);
3928 /* ------------------------ comalloc/coalloc support --------------------- */
3930 static void** ialloc(mstate m,
3936 This provides common support for independent_X routines, handling
3937 all of the combinations that can result.
3940 The opts arg has: bit 0 set if all elements are same size (using sizes[0])
3941 bit 1 set if elements should be zeroed
3944 size_t element_size; /* chunksize of each element, if all same */
3945 size_t contents_size; /* total size of elements */
3946 size_t array_size; /* request size of pointer array */
3947 void* mem; /* malloced aggregate space */
3948 mchunkptr p; /* corresponding chunk */
3949 size_t remainder_size; /* remaining bytes while splitting */
3950 void** marray; /* either "chunks" or malloced ptr array */
3951 mchunkptr array_chunk; /* chunk for malloced ptr array */
3952 flag_t was_enabled; /* to disable mmap */
3956 /* compute array length, if needed */
3958 if (n_elements == 0)
3959 return chunks; /* nothing to do */
3964 /* if empty req, must still return chunk representing empty array */
3965 if (n_elements == 0)
3966 return (void**)internal_malloc(m, 0);
3968 array_size = request2size(n_elements * (sizeof(void*)));
3971 /* compute total element size */
3972 if (opts & 0x1) { /* all-same-size */
3973 element_size = request2size(*sizes);
3974 contents_size = n_elements * element_size;
3976 else { /* add up all the sizes */
3979 for (i = 0; i != n_elements; ++i)
3980 contents_size += request2size(sizes[i]);
3983 size = contents_size + array_size;
3986 Allocate the aggregate chunk. First disable direct-mmapping so
3987 malloc won't use it, since we would not be able to later
3988 free/realloc space internal to a segregated mmap region.
3990 was_enabled = use_mmap(m);
3992 mem = internal_malloc(m, size - CHUNK_OVERHEAD);
3998 if (PREACTION(m)) return 0;
4000 remainder_size = chunksize(p);
4002 assert(!is_mmapped(p));
4004 if (opts & 0x2) { /* optionally clear the elements */
4005 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
4008 /* If not provided, allocate the pointer array as final part of chunk */
4010 size_t array_chunk_size;
4011 array_chunk = chunk_plus_offset(p, contents_size);
4012 array_chunk_size = remainder_size - contents_size;
4013 marray = (void**) (chunk2mem(array_chunk));
4014 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
4015 remainder_size = contents_size;
4018 /* split out elements */
4019 for (i = 0; ; ++i) {
4020 marray[i] = chunk2mem(p);
4021 if (i != n_elements-1) {
4022 if (element_size != 0)
4023 size = element_size;
4025 size = request2size(sizes[i]);
4026 remainder_size -= size;
4027 set_size_and_pinuse_of_inuse_chunk(m, p, size);
4028 p = chunk_plus_offset(p, size);
4030 else { /* the final element absorbs any overallocation slop */
4031 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
4037 if (marray != chunks) {
4038 /* final element must have exactly exhausted chunk */
4039 if (element_size != 0) {
4040 assert(remainder_size == element_size);
4043 assert(remainder_size == request2size(sizes[i]));
4045 check_inuse_chunk(m, mem2chunk(marray));
4047 for (i = 0; i != n_elements; ++i)
4048 check_inuse_chunk(m, mem2chunk(marray[i]));
4058 /* -------------------------- public routines ---------------------------- */
4062 void* dlmalloc(size_t bytes) {
4065 If a small request (< 256 bytes minus per-chunk overhead):
4066 1. If one exists, use a remainderless chunk in associated smallbin.
4067 (Remainderless means that there are too few excess bytes to
4068 represent as a chunk.)
4069 2. If it is big enough, use the dv chunk, which is normally the
4070 chunk adjacent to the one used for the most recent small request.
4071 3. If one exists, split the smallest available chunk in a bin,
4072 saving remainder in dv.
4073 4. If it is big enough, use the top chunk.
4074 5. If available, get memory from system and use it
4075 Otherwise, for a large request:
4076 1. Find the smallest available binned chunk that fits, and use it
4077 if it is better fitting than dv chunk, splitting if necessary.
4078 2. If better fitting than any binned chunk, use the dv chunk.
4079 3. If it is big enough, use the top chunk.
4080 4. If request size >= mmap threshold, try to directly mmap this chunk.
4081 5. If available, get memory from system and use it
4083 The ugly goto's here ensure that postaction occurs along all paths.
4086 if (!PREACTION(gm)) {
4089 if (bytes <= MAX_SMALL_REQUEST) {
4092 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4093 idx = small_index(nb);
4094 smallbits = gm->smallmap >> idx;
4096 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4098 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4099 b = smallbin_at(gm, idx);
4101 assert(chunksize(p) == small_index2size(idx));
4102 unlink_first_small_chunk(gm, b, p, idx);
4103 set_inuse_and_pinuse(gm, p, small_index2size(idx));
4105 check_malloced_chunk(gm, mem, nb);
4109 else if (nb > gm->dvsize) {
4110 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4114 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4115 binmap_t leastbit = least_bit(leftbits);
4116 compute_bit2idx(leastbit, i);
4117 b = smallbin_at(gm, i);
4119 assert(chunksize(p) == small_index2size(i));
4120 unlink_first_small_chunk(gm, b, p, i);
4121 rsize = small_index2size(i) - nb;
4122 /* Fit here cannot be remainderless if 4byte sizes */
4123 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4124 set_inuse_and_pinuse(gm, p, small_index2size(i));
4126 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4127 r = chunk_plus_offset(p, nb);
4128 set_size_and_pinuse_of_free_chunk(r, rsize);
4129 replace_dv(gm, r, rsize);
4132 check_malloced_chunk(gm, mem, nb);
4136 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4137 check_malloced_chunk(gm, mem, nb);
4142 else if (bytes >= MAX_REQUEST)
4143 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4145 nb = pad_request(bytes);
4146 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4147 check_malloced_chunk(gm, mem, nb);
4152 if (nb <= gm->dvsize) {
4153 size_t rsize = gm->dvsize - nb;
4154 mchunkptr p = gm->dv;
4155 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4156 mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4158 set_size_and_pinuse_of_free_chunk(r, rsize);
4159 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4161 else { /* exhaust dv */
4162 size_t dvs = gm->dvsize;
4165 set_inuse_and_pinuse(gm, p, dvs);
4168 check_malloced_chunk(gm, mem, nb);
4172 else if (nb < gm->topsize) { /* Split top */
4173 size_t rsize = gm->topsize -= nb;
4174 mchunkptr p = gm->top;
4175 mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4176 r->head = rsize | PINUSE_BIT;
4177 set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4179 check_top_chunk(gm, gm->top);
4180 check_malloced_chunk(gm, mem, nb);
4184 mem = sys_alloc(gm, nb);
4194 void dlfree(void* mem) {
4196 Consolidate freed chunks with preceding or succeeding bordering
4197 free chunks, if they exist, and then place in a bin. Intermixed
4198 with special cases for top, dv, mmapped chunks, and usage errors.
4202 mchunkptr p = mem2chunk(mem);
4204 mstate fm = get_mstate_for(p);
4205 if (!ok_magic(fm)) {
4206 USAGE_ERROR_ACTION(fm, p);
4211 #endif /* FOOTERS */
4212 if (!PREACTION(fm)) {
4213 check_inuse_chunk(fm, p);
4214 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4215 size_t psize = chunksize(p);
4216 mchunkptr next = chunk_plus_offset(p, psize);
4218 size_t prevsize = p->prev_foot;
4219 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4220 prevsize &= ~IS_MMAPPED_BIT;
4221 psize += prevsize + MMAP_FOOT_PAD;
4222 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4223 fm->footprint -= psize;
4227 mchunkptr prev = chunk_minus_offset(p, prevsize);
4230 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4232 unlink_chunk(fm, p, prevsize);
4234 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4236 set_free_with_pinuse(p, psize, next);
4245 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4246 if (!cinuse(next)) { /* consolidate forward */
4247 if (next == fm->top) {
4248 size_t tsize = fm->topsize += psize;
4250 p->head = tsize | PINUSE_BIT;
4255 if (should_trim(fm, tsize))
4259 else if (next == fm->dv) {
4260 size_t dsize = fm->dvsize += psize;
4262 set_size_and_pinuse_of_free_chunk(p, dsize);
4266 size_t nsize = chunksize(next);
4268 unlink_chunk(fm, next, nsize);
4269 set_size_and_pinuse_of_free_chunk(p, psize);
4277 set_free_with_pinuse(p, psize, next);
4278 insert_chunk(fm, p, psize);
4279 check_free_chunk(fm, p);
4284 USAGE_ERROR_ACTION(fm, p);
4291 #endif /* FOOTERS */
4296 void* dlcalloc(size_t n_elements, size_t elem_size) {
4299 if (n_elements != 0) {
4300 req = n_elements * elem_size;
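/* Overflow guard: when either factor exceeds 16 bits, verify the product
   by division; factors that small cannot wrap a size_t, so the division
   is skipped in the common case. */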
4301 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4302 (req / n_elements != elem_size))
4303 req = MAX_SIZE_T; /* force downstream failure on overflow */
4305 mem = dlmalloc(req);
4306 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4307 memset(mem, 0, req);
4311 void* dlrealloc(void* oldmem, size_t bytes) {
4313 return dlmalloc(bytes);
4314 #ifdef REALLOC_ZERO_BYTES_FREES
4319 #endif /* REALLOC_ZERO_BYTES_FREES */
4324 mstate m = get_mstate_for(mem2chunk(oldmem));
4326 USAGE_ERROR_ACTION(m, oldmem);
4329 #endif /* FOOTERS */
4330 return internal_realloc(m, oldmem, bytes);
4336 void* dlmemalign(size_t alignment, size_t bytes) {
4337 return internal_memalign(gm, alignment, bytes);
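/*
  Illustrative sketch (not part of the allocator): a typical call to the
  memalign entry point above, assuming it is compiled into this build.
  The alignment should be a power of two of at least MALLOC_ALIGNMENT
  (smaller or non-power-of-two values are adjusted internally); the
  result is released with ordinary dlfree.
*/
#if 0
static void example_memalign_use(void) {
  void* p = dlmemalign(64, 1000);   /* at least 1000 bytes, 64-byte aligned */
  if (p != 0)
    dlfree(p);
}
#endif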
4342 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
4344 size_t sz = elem_size; /* serves as 1-element array */
4345 return ialloc(gm, n_elements, &sz, 3, chunks);
4348 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
4350 return ialloc(gm, n_elements, sizes, 0, chunks);
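/*
  Illustrative sketch (not part of the allocator): one way a caller might
  use the two independent_X entry points above, assuming they are compiled
  into this build.  The struct and helper names are made up for the
  example; error handling is reduced to null checks.
*/
#if 0
struct example_node { struct example_node* next; int value; };

static struct example_node* example_build_pool(size_t n) {
  struct example_node** pool;
  struct example_node* first;
  size_t i;
  if (n == 0)
    return 0;
  /* n zeroed, adjacently allocated elements; the returned pointer array
     is itself malloced and is freed separately below. */
  pool = (struct example_node**) dlindependent_calloc(n, sizeof(struct example_node), 0);
  if (pool == 0)
    return 0;
  for (i = 0; i + 1 < n; ++i)
    pool[i]->next = pool[i + 1];     /* chain the elements into a list */
  first = pool[0];
  dlfree(pool);                      /* the elements stay individually freeable */
  return first;
}

static void example_comalloc_message(size_t body_len) {
  /* three differently sized pieces carved out of one underlying chunk */
  size_t sizes[3] = { 32, body_len, 16 };
  void*  chunks[3];
  if (dlindependent_comalloc(3, sizes, chunks) != 0) {
    /* use chunks[0..2], then free each one independently */
    dlfree(chunks[0]);
    dlfree(chunks[1]);
    dlfree(chunks[2]);
  }
}
#endif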
4353 void* dlvalloc(size_t bytes) {
4356 pagesz = mparams.page_size;
4357 return dlmemalign(pagesz, bytes);
4360 void* dlpvalloc(size_t bytes) {
4363 pagesz = mparams.page_size;
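/* round the request up to a whole number of pages before page-aligning the result */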
4364 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
4367 int dlmalloc_trim(size_t pad) {
4369 if (!PREACTION(gm)) {
4370 result = sys_trim(gm, pad);
4376 size_t dlmalloc_footprint(void) {
4377 return gm->footprint;
4380 size_t dlmalloc_max_footprint(void) {
4381 return gm->max_footprint;
4385 struct mallinfo dlmallinfo(void) {
4386 return internal_mallinfo(gm);
4388 #endif /* NO_MALLINFO */
4390 void dlmalloc_stats() {
4391 internal_malloc_stats(gm);
4394 size_t dlmalloc_usable_size(void* mem) {
4396 mchunkptr p = mem2chunk(mem);
4398 return chunksize(p) - overhead_for(p);
4403 int dlmallopt(int param_number, int value) {
4404 return change_mparam(param_number, value);
4409 #endif /* !ONLY_MSPACES */
4411 /* ----------------------------- user mspaces ---------------------------- */
4415 static mstate init_user_mstate(char* tbase, size_t tsize) {
4416 size_t msize = pad_request(sizeof(struct malloc_state));
4418 mchunkptr msp = align_as_chunk(tbase);
4419 mstate m = (mstate)(chunk2mem(msp));
4420 memset(m, 0, msize);
4421 INITIAL_LOCK(&m->mutex);
4422 msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
4423 m->seg.base = m->least_addr = tbase;
4424 m->seg.size = m->footprint = m->max_footprint = tsize;
4425 m->magic = mparams.magic;
4426 m->mflags = mparams.default_mflags;
4427 disable_contiguous(m);
4429 mn = next_chunk(mem2chunk(m));
4430 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
4431 check_top_chunk(m, m->top);
4435 mspace create_mspace(size_t capacity, int locked) {
4437 size_t msize = pad_request(sizeof(struct malloc_state));
4438 init_mparams(); /* Ensure pagesize etc initialized */
4440 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4441 size_t rs = ((capacity == 0)? mparams.granularity :
4442 (capacity + TOP_FOOT_SIZE + msize));
4443 size_t tsize = granularity_align(rs);
4444 char* tbase = (char*)(CALL_MMAP(tsize));
4445 if (tbase != CMFAIL) {
4446 m = init_user_mstate(tbase, tsize);
4447 m->seg.sflags = IS_MMAPPED_BIT;
4448 set_lock(m, locked);
4454 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
4456 size_t msize = pad_request(sizeof(struct malloc_state));
4457 init_mparams(); /* Ensure pagesize etc initialized */
4459 if (capacity > msize + TOP_FOOT_SIZE &&
4460 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
4461 m = init_user_mstate((char*)base, capacity);
4462 m->seg.sflags = EXTERN_BIT;
4463 set_lock(m, locked);
4468 size_t destroy_mspace(mspace msp) {
4470 mstate ms = (mstate)msp;
4472 msegmentptr sp = &ms->seg;
4474 char* base = sp->base;
4475 size_t size = sp->size;
4476 flag_t flag = sp->sflags;
4478 if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
4479 CALL_MUNMAP(base, size) == 0)
4484 USAGE_ERROR_ACTION(ms,ms);
4490 mspace versions of routines are near-clones of the global
4491 versions. This is not so nice but better than the alternatives.
4494 void* mspace_malloc(mspace msp, size_t bytes) {
4495 mstate ms = (mstate)msp;
4496 if (!ok_magic(ms)) {
4497 USAGE_ERROR_ACTION(ms,ms);
4500 if (!PREACTION(ms)) {
4503 if (bytes <= MAX_SMALL_REQUEST) {
4506 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4507 idx = small_index(nb);
4508 smallbits = ms->smallmap >> idx;
4510 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4512 idx += ~smallbits & 1; /* Uses next bin if idx empty */
4513 b = smallbin_at(ms, idx);
4515 assert(chunksize(p) == small_index2size(idx));
4516 unlink_first_small_chunk(ms, b, p, idx);
4517 set_inuse_and_pinuse(ms, p, small_index2size(idx));
4519 check_malloced_chunk(ms, mem, nb);
4523 else if (nb > ms->dvsize) {
4524 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4528 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4529 binmap_t leastbit = least_bit(leftbits);
4530 compute_bit2idx(leastbit, i);
4531 b = smallbin_at(ms, i);
4533 assert(chunksize(p) == small_index2size(i));
4534 unlink_first_small_chunk(ms, b, p, i);
4535 rsize = small_index2size(i) - nb;
4536 /* Fit here cannot be remainderless if 4byte sizes */
4537 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4538 set_inuse_and_pinuse(ms, p, small_index2size(i));
4540 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4541 r = chunk_plus_offset(p, nb);
4542 set_size_and_pinuse_of_free_chunk(r, rsize);
4543 replace_dv(ms, r, rsize);
4546 check_malloced_chunk(ms, mem, nb);
4550 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
4551 check_malloced_chunk(ms, mem, nb);
4556 else if (bytes >= MAX_REQUEST)
4557 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4559 nb = pad_request(bytes);
4560 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
4561 check_malloced_chunk(ms, mem, nb);
4566 if (nb <= ms->dvsize) {
4567 size_t rsize = ms->dvsize - nb;
4568 mchunkptr p = ms->dv;
4569 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4570 mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
4572 set_size_and_pinuse_of_free_chunk(r, rsize);
4573 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4575 else { /* exhaust dv */
4576 size_t dvs = ms->dvsize;
4579 set_inuse_and_pinuse(ms, p, dvs);
4582 check_malloced_chunk(ms, mem, nb);
4586 else if (nb < ms->topsize) { /* Split top */
4587 size_t rsize = ms->topsize -= nb;
4588 mchunkptr p = ms->top;
4589 mchunkptr r = ms->top = chunk_plus_offset(p, nb);
4590 r->head = rsize | PINUSE_BIT;
4591 set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4593 check_top_chunk(ms, ms->top);
4594 check_malloced_chunk(ms, mem, nb);
4598 mem = sys_alloc(ms, nb);
4608 void mspace_free(mspace msp, void* mem) {
4610 mchunkptr p = mem2chunk(mem);
4612 mstate fm = get_mstate_for(p);
4614 mstate fm = (mstate)msp;
4615 #endif /* FOOTERS */
4616 if (!ok_magic(fm)) {
4617 USAGE_ERROR_ACTION(fm, p);
4620 if (!PREACTION(fm)) {
4621 check_inuse_chunk(fm, p);
4622 if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4623 size_t psize = chunksize(p);
4624 mchunkptr next = chunk_plus_offset(p, psize);
4626 size_t prevsize = p->prev_foot;
4627 if ((prevsize & IS_MMAPPED_BIT) != 0) {
4628 prevsize &= ~IS_MMAPPED_BIT;
4629 psize += prevsize + MMAP_FOOT_PAD;
4630 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4631 fm->footprint -= psize;
4635 mchunkptr prev = chunk_minus_offset(p, prevsize);
4638 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4640 unlink_chunk(fm, p, prevsize);
4642 else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4644 set_free_with_pinuse(p, psize, next);
4653 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4654 if (!cinuse(next)) { /* consolidate forward */
4655 if (next == fm->top) {
4656 size_t tsize = fm->topsize += psize;
4658 p->head = tsize | PINUSE_BIT;
4663 if (should_trim(fm, tsize))
4667 else if (next == fm->dv) {
4668 size_t dsize = fm->dvsize += psize;
4670 set_size_and_pinuse_of_free_chunk(p, dsize);
4674 size_t nsize = chunksize(next);
4676 unlink_chunk(fm, next, nsize);
4677 set_size_and_pinuse_of_free_chunk(p, psize);
4685 set_free_with_pinuse(p, psize, next);
4686 insert_chunk(fm, p, psize);
4687 check_free_chunk(fm, p);
4692 USAGE_ERROR_ACTION(fm, p);
4699 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4702 mstate ms = (mstate)msp;
4703 if (!ok_magic(ms)) {
4704 USAGE_ERROR_ACTION(ms,ms);
4707 if (n_elements != 0) {
4708 req = n_elements * elem_size;
4709 if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4710 (req / n_elements != elem_size))
4711 req = MAX_SIZE_T; /* force downstream failure on overflow */
4713 mem = internal_malloc(ms, req);
4714 if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
4715 memset(mem, 0, req);
4719 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
4721 return mspace_malloc(msp, bytes);
4722 #ifdef REALLOC_ZERO_BYTES_FREES
4724 mspace_free(msp, oldmem);
4727 #endif /* REALLOC_ZERO_BYTES_FREES */
4730 mchunkptr p = mem2chunk(oldmem);
4731 mstate ms = get_mstate_for(p);
4733 mstate ms = (mstate)msp;
4734 #endif /* FOOTERS */
4735 if (!ok_magic(ms)) {
4736 USAGE_ERROR_ACTION(ms,ms);
4739 return internal_realloc(ms, oldmem, bytes);
4743 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
4744 mstate ms = (mstate)msp;
4745 if (!ok_magic(ms)) {
4746 USAGE_ERROR_ACTION(ms,ms);
4749 return internal_memalign(ms, alignment, bytes);
4752 void** mspace_independent_calloc(mspace msp, size_t n_elements,
4753 size_t elem_size, void* chunks[]) {
4754 size_t sz = elem_size; /* serves as 1-element array */
4755 mstate ms = (mstate)msp;
4756 if (!ok_magic(ms)) {
4757 USAGE_ERROR_ACTION(ms,ms);
4760 return ialloc(ms, n_elements, &sz, 3, chunks);
4763 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
4764 size_t sizes[], void* chunks[]) {
4765 mstate ms = (mstate)msp;
4766 if (!ok_magic(ms)) {
4767 USAGE_ERROR_ACTION(ms,ms);
4770 return ialloc(ms, n_elements, sizes, 0, chunks);
4773 int mspace_trim(mspace msp, size_t pad) {
4775 mstate ms = (mstate)msp;
4777 if (!PREACTION(ms)) {
4778 result = sys_trim(ms, pad);
4783 USAGE_ERROR_ACTION(ms,ms);
4788 void mspace_malloc_stats(mspace msp) {
4789 mstate ms = (mstate)msp;
4791 internal_malloc_stats(ms);
4794 USAGE_ERROR_ACTION(ms,ms);
4798 size_t mspace_footprint(mspace msp) {
4800 mstate ms = (mstate)msp;
4802 result = ms->footprint;
4804 USAGE_ERROR_ACTION(ms,ms);
4809 size_t mspace_max_footprint(mspace msp) {
4811 mstate ms = (mstate)msp;
4813 result = ms->max_footprint;
4815 USAGE_ERROR_ACTION(ms,ms);
4821 struct mallinfo mspace_mallinfo(mspace msp) {
4822 mstate ms = (mstate)msp;
4823 if (!ok_magic(ms)) {
4824 USAGE_ERROR_ACTION(ms,ms);
4826 return internal_mallinfo(ms);
4828 #endif /* NO_MALLINFO */
4830 int mspace_mallopt(int param_number, int value) {
4831 return change_mparam(param_number, value);
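/*
  Illustrative sketch (not part of the allocator): the typical lifetime of
  a private mspace created through the routines in this section, assuming
  MSPACES is enabled in the build.  Passing 0 as the capacity requests a
  default granularity-sized initial region; destroy_mspace returns all
  memory the space obtained from the system, so per-object mspace_free
  calls are only needed for objects released before the space itself.
*/
#if 0
static void example_mspace_use(void) {
  mspace msp = create_mspace(0, 0);        /* default capacity, no locking */
  if (msp != 0) {
    void* a = mspace_malloc(msp, 128);
    void* b = mspace_calloc(msp, 16, sizeof(double));
    if (a != 0)
      mspace_free(msp, a);                 /* explicit per-object release */
    (void)b;                               /* b is reclaimed with the space */
    destroy_mspace(msp);
  }
}
#endif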
4834 #endif /* MSPACES */
4836 /* -------------------- Alternative MORECORE functions ------------------- */
4839 Guidelines for creating a custom version of MORECORE:
4841 * For best performance, MORECORE should allocate in multiples of pagesize.
4842 * MORECORE may allocate more memory than requested. (Or even less,
4843 but this will usually result in a malloc failure.)
4844 * MORECORE must not allocate memory when given argument zero, but
4845 instead return one past the end address of memory from the previous nonzero call.
4847 * For best performance, consecutive calls to MORECORE with positive
4848 arguments should return increasing addresses, indicating that
4849 space has been contiguously extended.
4850 * Even though consecutive calls to MORECORE need not return contiguous
4851 addresses, it must be OK for malloc'ed chunks to span multiple
4852 regions in those cases where they do happen to be contiguous.
4853 * MORECORE need not handle negative arguments -- it may instead
4854 just return MFAIL in that case.
4855 Negative arguments are always multiples of pagesize. MORECORE
4856 must not misinterpret negative args as large positive unsigned
4857 args. You can suppress all such calls from even occurring by defining
4858 MORECORE_CANNOT_TRIM.
4860 As an example alternative MORECORE, here is a custom allocator
4861 kindly contributed for pre-OSX macOS. It uses virtually but not
4862 necessarily physically contiguous non-paged memory (locked in,
4863 present and won't get swapped out). You can use it by uncommenting
4864 this section, adding some #includes, and setting up the appropriate defines.
4867 #define MORECORE osMoreCore
4869 There is also a shutdown routine that should somehow be called for
4870 cleanup upon program exit.
4872 #define MAX_POOL_ENTRIES 100
4873 #define MINIMUM_MORECORE_SIZE (64 * 1024U)
4874 static int next_os_pool;
4875 void *our_os_pools[MAX_POOL_ENTRIES];
4877 void *osMoreCore(int size)
4880 static void *sbrk_top = 0;
4884 if (size < MINIMUM_MORECORE_SIZE)
4885 size = MINIMUM_MORECORE_SIZE;
4886 if (CurrentExecutionLevel() == kTaskLevel)
4887 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
4890 return (void *) MFAIL;
4892 // save ptrs so they can be freed during cleanup
4893 our_os_pools[next_os_pool] = ptr;
4895 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
4896 sbrk_top = (char *) ptr + size;
4901 // we don't currently support shrink behavior
4902 return (void *) MFAIL;
4910 // cleanup any allocated memory pools
4911 // called as last thing before shutting down driver
4913 void osCleanupMem(void)
4917 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
4920 PoolDeallocate(*ptr);
4928 /* -----------------------------------------------------------------------
4930 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee)
4931 * Add max_footprint functions
4932 * Ensure all appropriate literals are size_t
4933 * Fix conditional compilation problem for some #define settings
4934 * Avoid concatenating segments with the one provided
4935 in create_mspace_with_base
4936 * Rename some variables to avoid compiler shadowing warnings
4937 * Use explicit lock initialization.
4938 * Better handling of sbrk interference.
4939 * Simplify and fix segment insertion, trimming and mspace_destroy
4940 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
4941 * Thanks especially to Dennis Flanagan for help on these.
4943 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee)
4944 * Fix memalign brace error.
4946 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee)
4947 * Fix improper #endif nesting in C++
4948 * Add explicit casts needed for C++
4950 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee)
4951 * Use trees for large bins
4953 * Use segments to unify sbrk-based and mmap-based system allocation,
4954 removing need for emulation on most platforms without sbrk.
4955 * Default safety checks
4956 * Optional footer checks. Thanks to William Robertson for the idea.
4957 * Internal code refactoring
4958 * Incorporate suggestions and platform-specific changes.
4959 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
4960 Aaron Bachmann, Emery Berger, and others.
4961 * Speed up non-fastbin processing enough to remove fastbins.
4962 * Remove useless cfree() to avoid conflicts with other apps.
4963 * Remove internal memcpy, memset. Compilers handle builtins better.
4964 * Remove some options that no one ever used and rename others.
4966 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee)
4967 * Fix malloc_state bitmap array misdeclaration
4969 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee)
4970 * Allow tuning of FIRST_SORTED_BIN_SIZE
4971 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
4972 * Better detection and support for non-contiguousness of MORECORE.
4973 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
4974 * Bypass most of malloc if no frees. Thanks To Emery Berger.
4975 * Fix freeing of old top non-contiguous chunk in sysmalloc.
4976 * Raised default trim and map thresholds to 256K.
4977 * Fix mmap-related #defines. Thanks to Lubos Lunak.
4978 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
4979 * Branch-free bin calculation
4980 * Default trim and mmap thresholds now 256K.
4982 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee)
4983 * Introduce independent_comalloc and independent_calloc.
4984 Thanks to Michael Pachos for motivation and help.
4985 * Make optional .h file available
4986 * Allow > 2GB requests on 32bit systems.
4987 * new WIN32 sbrk, mmap, munmap, lock code from <Walter@GeNeSys-e.de>.
4988 Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
4990 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
4992 * memalign: check alignment arg
4993 * realloc: don't try to shift chunks backwards, since this
4994 leads to more fragmentation in some programs and doesn't
4995 seem to help in any others.
4996 * Collect all cases in malloc requiring system memory into sysmalloc
4997 * Use mmap as backup to sbrk
4998 * Place all internal state in malloc_state
4999 * Introduce fastbins (although similar to 2.5.1)
5000 * Many minor tunings and cosmetic improvements
5001 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
5002 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
5003 Thanks to Tony E. Bennett <tbennett@nvidia.com> and others.
5004 * Include errno.h to support default failure action.
5006 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee)
5007 * return null for negative arguments
5008 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
5009 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
5010 (e.g. WIN32 platforms)
5011 * Cleanup header file inclusion for WIN32 platforms
5012 * Cleanup code to avoid Microsoft Visual C++ compiler complaints
5013 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
5014 memory allocation routines
5015 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
5016 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
5017 usage of 'assert' in non-WIN32 code
5018 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
5020 * Always call 'fREe()' rather than 'free()'
5022 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee)
5023 * Fixed ordering problem with boundary-stamping
5025 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee)
5026 * Added pvalloc, as recommended by H.J. Liu
5027 * Added 64bit pointer support mainly from Wolfram Gloger
5028 * Added anonymously donated WIN32 sbrk emulation
5029 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
5030 * malloc_extend_top: fix mask error that caused wastage after
5032 * Add linux mremap support code from HJ Liu
5034 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee)
5035 * Integrated most documentation with the code.
5036 * Add support for mmap, with help from
5037 Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5038 * Use last_remainder in more cases.
5039 * Pack bins using idea from colin@nyx10.cs.du.edu
5040 * Use ordered bins instead of best-fit threshold
5041 * Eliminate block-local decls to simplify tracing and debugging.
5042 * Support another case of realloc via move into top
5043 * Fix error occurring when initial sbrk_base not word-aligned.
5044 * Rely on page size for units instead of SBRK_UNIT to
5045 avoid surprises about sbrk alignment conventions.
5046 * Add mallinfo, mallopt. Thanks to Raymond Nijssen
5047 (raymond@es.ele.tue.nl) for the suggestion.
5048 * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
5049 * More precautions for cases where other routines call sbrk,
5050 courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de).
5051 * Added macros etc., allowing use in linux libc from
5052 H.J. Lu (hjl@gnu.ai.mit.edu)
5053 * Inverted this history list
5055 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee)
5056 * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
5057 * Removed all preallocation code since under current scheme
5058 the work required to undo bad preallocations exceeds
5059 the work saved in good cases for most test programs.
5060 * No longer use return list or unconsolidated bins since
5061 no scheme using them consistently outperforms those that don't
5062 given above changes.
5063 * Use best fit for very large chunks to prevent some worst-cases.
5064 * Added some support for debugging
5066 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee)
5067 * Removed footers when chunks are in use. Thanks to
5068 Paul Wilson (wilson@cs.texas.edu) for the suggestion.
5070 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee)
5071 * Added malloc_trim, with help from Wolfram Gloger
5072 (wmglo@Dent.MED.Uni-Muenchen.DE).
5074 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g)
5076 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g)
5077 * realloc: try to expand in both directions
5078 * malloc: swap order of clean-bin strategy;
5079 * realloc: only conditionally expand backwards
5080 * Try not to scavenge used bins
5081 * Use bin counts as a guide to preallocation
5082 * Occasionally bin return list chunks in first scan
5083 * Add a few optimizations from colin@nyx10.cs.du.edu
5085 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g)
5086 * faster bin computation & slightly different binning
5087 * merged all consolidations to one part of malloc proper
5088 (eliminating old malloc_find_space & malloc_clean_bin)
5089 * Scan 2 returns chunks (not just 1)
5090 * Propagate failure in realloc if malloc returns 0
5091 * Add stuff to allow compilation on non-ANSI compilers
5092 from kpv@research.att.com
5094 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu)
5095 * removed potential for odd address access in prev_chunk
5096 * removed dependency on getpagesize.h
5097 * misc cosmetics and a bit more internal documentation
5098 * anticosmetics: mangled names in macros to evade debugger strangeness
5099 * tested on sparc, hp-700, dec-mips, rs6000
5100 with gcc & native cc (hp, dec only) allowing
5101 Detlefs & Zorn comparison study (in SIGPLAN Notices.)
5103 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
5104 * Based loosely on libg++-1.2X malloc. (It retains some of the overall
5105 structure of old version, but most details differ.)