Improve synchronization

- Fix bugs in kDos2Errno definition
- malloc() should now be thread safe
- Fix bug in rollup.com header generator
- Fix open(O_APPEND) on the New Technology
- Fix select() on the New Technology and test it
- Work towards refactoring i/o for thread safety
- Socket reads and writes on NT now poll for signals
- Work towards i/o completion ports on the New Technology
- Make read() and write() intermittently check for signals
- Blinkenlights keyboard i/o so much better on NT w/ poll()
- You can now poll() files and sockets at the same time on NT
- Fix bug in appendr() that manifests with dlmalloc footers off
Justine Tunney 2022-04-14 23:39:48 -07:00
parent 233144b19d
commit 933411ba99
266 changed files with 8761 additions and 4344 deletions


@@ -1,10 +0,0 @@
/ Since dlmalloc is public domain, we intend to keep it that way. To the
/ extent possible under law, Justine Tunney has waived all copyright and
/ related or neighboring rights to her /third_party/dlmalloc changes, as
/ it is written in the following disclaimers:
/ • unlicense.org
/ • creativecommons.org/publicdomain/zero/1.0/
.ident "\n
dlmalloc (Public Domain CC0)
Credit: Doug Lea <dl@cs.oswego.edu>"


@@ -1,10 +1,9 @@
This is a version (aka dlmalloc) of malloc/free/realloc written by
Doug Lea and released to the public domain, as explained at
http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
comments, complaints, performance data, etc to dl@cs.oswego.edu
-Version 2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
+* Version 2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
Note: There may be an updated version of this malloc obtainable at
ftp://gee.cs.oswego.edu/pub/misc/malloc.c
Check before installing!
@@ -41,7 +40,9 @@
(e.g. 2.7.2) supporting these.)
Alignment: 8 bytes (minimum)
Is set to 16 for NexGen32e.
This suffices for nearly all current machines and C compilers.
However, you can define MALLOC_ALIGNMENT to be wider than this
if necessary (up to 128 bytes), at the expense of using more space.
Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes)
8 or 16 bytes (if 8byte sizes)
@@ -98,7 +99,7 @@
If you don't like either of these options, you can define
CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
else. And if you are sure that your program using malloc has
-no errors or vulnerabilities, you can define TRUSTWORTHY to 1,
+no errors or vulnerabilities, you can define INSECURE to 1,
which might (or might not) provide a small performance improvement.
It is also possible to limit the maximum total allocatable
@@ -182,6 +183,371 @@
For a longer but out of date high-level description, see
http://gee.cs.oswego.edu/dl/html/malloc.html
----------------------- Chunk representations ------------------------
(The following includes lightly edited explanations by Colin Plumb.)
The malloc_chunk declaration below is misleading (but accurate and
necessary). It declares a "view" into memory allowing access to
necessary fields at known offsets from a given base.
Chunks of memory are maintained using a `boundary tag' method as
originally described by Knuth. (See the paper by Paul Wilson
ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
techniques.) Sizes of free chunks are stored both in the front of
each chunk and at the end. This makes consolidating fragmented
chunks into bigger chunks fast. The head fields also hold bits
representing whether chunks are free or in use.
Here are some pictures to make it clearer. They are "exploded" to
show that the state of a chunk can be thought of as extending from
the high 31 bits of the head field of its header through the
prev_foot and PINUSE_BIT bit of the following chunk header.
A chunk that's in use looks like:
chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Size of previous chunk (if P = 0) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
| Size of this chunk 1| +-+
mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+- -+
| |
+- -+
| :
+- size - sizeof(size_t) available payload bytes -+
: |
chunk-> +- -+
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
| Size of next chunk (may or may not be in use) | +-+
mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
And if it's free, it looks like this:
chunk-> +- -+
| User payload (must be in use, or we would have merged!) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
| Size of this chunk 0| +-+
mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Next pointer |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Prev pointer |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| :
+- size - sizeof(struct chunk) unused bytes -+
: |
chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Size of this chunk |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
| Size of next chunk (must be in use, or we would have merged)| +-+
mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| :
+- User payload -+
: |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|0|
+-+
Note that since we always merge adjacent free chunks, the chunks
adjacent to a free chunk must be in use.
Given a pointer to a chunk (which can be derived trivially from the
payload pointer) we can, in O(1) time, find out whether the adjacent
chunks are free, and if so, unlink them from the lists that they
are on and merge them with the current chunk.
Chunks always begin on even word boundaries, so the mem portion
(which is returned to the user) is also on an even word boundary, and
thus at least double-word aligned.
The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
chunk size (which is always a multiple of two words), is an in-use
bit for the *previous* chunk. If that bit is *clear*, then the
word before the current chunk size contains the previous chunk
size, and can be used to find the front of the previous chunk.
The very first chunk allocated always has this bit set, preventing
access to non-existent (or non-owned) memory. If pinuse is set for
any given chunk, then you CANNOT determine the size of the
previous chunk, and might even get a memory addressing fault when
trying to do so.
The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
the chunk size redundantly records whether the current chunk is
inuse (unless the chunk is mmapped). This redundancy enables usage
checks within free and realloc, and reduces indirection when freeing
and consolidating chunks.
Each freshly allocated chunk must have both cinuse and pinuse set.
That is, each allocated chunk borders either a previously allocated
and still in-use chunk, or the base of its memory arena. This is
ensured by making all allocations from the `lowest' part of any
found chunk. Further, no free chunk physically borders another one,
so each free chunk is known to be preceded and followed by either
inuse chunks or the ends of memory.
Note that the `foot' of the current chunk is actually represented
as the prev_foot of the NEXT chunk. This makes it easier to
deal with alignments etc but can be very confusing when trying
to extend or adapt this code.
The exceptions to all this are
1. The special chunk `top' is the top-most available chunk (i.e.,
the one bordering the end of available memory). It is treated
specially. Top is never included in any bin, is used only if
no other chunk is available, and is released back to the
system if it is very large (see M_TRIM_THRESHOLD). In effect,
the top chunk is treated as larger (and thus less well
fitting) than any other available chunk. The top chunk
doesn't update its trailing size field since there is no next
contiguous chunk that would have to index off it. However,
space is still allocated for it (TOP_FOOT_SIZE) to enable
separation or merging when space is extended.
2. Chunks allocated via mmap have both cinuse and pinuse bits
cleared in their head fields. Because they are allocated
one-by-one, each must carry its own prev_foot field, which is
also used to hold the offset this chunk has within its mmapped
region, which is needed to preserve alignment. Each mmapped
chunk is trailed by the first two fields of a fake next-chunk
for sake of usage checks.
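To make the "view" idea concrete, here is a minimal sketch of the
overlaid struct and the pointer conversions it implies. The names
mirror the MallocChunk/chunk2mem/mem2chunk usage visible elsewhere in
this diff; treat it as an illustration, not the vendored definitions
(which also reserve a third flag bit):

struct MallocChunk {
  size_t prev_foot;       /* size of previous chunk, valid iff !pinuse */
  size_t head;            /* chunk size | PINUSE_BIT | CINUSE_BIT      */
  struct MallocChunk *fd; /* these two words are user payload while    */
  struct MallocChunk *bk; /* in use, free-list links while free        */
};

#define PINUSE_BIT ((size_t)1) /* previous chunk is in use */
#define CINUSE_BIT ((size_t)2) /* this chunk is in use     */
#define INUSE_BITS (PINUSE_BIT | CINUSE_BIT)

/* what malloc returns sits two words past the start of the chunk */
#define chunk2mem(p) ((void *)((char *)(p) + 2 * sizeof(size_t)))
#define mem2chunk(m) ((struct MallocChunk *)((char *)(m) - 2 * sizeof(size_t)))

#define chunksize(p) ((p)->head & ~INUSE_BITS)
#define pinuse(p) ((p)->head & PINUSE_BIT)
#define cinuse(p) ((p)->head & CINUSE_BIT)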
---------------------- Overlaid data structures -----------------------
When chunks are not in use, they are treated as nodes of either
lists or trees.
"Small" chunks are stored in circular doubly-linked lists, and look
like this:
chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Size of previous chunk |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
`head:' | Size of chunk, in bytes |P|
mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Forward pointer to next chunk in list |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Back pointer to previous chunk in list |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Unused space (may be 0 bytes long) .
. .
. |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
`foot:' | Size of chunk, in bytes |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Larger chunks are kept in a form of bitwise digital trees (aka
tries) keyed on chunksizes. Because malloc_tree_chunks are only for
free chunks greater than 256 bytes, their size doesn't impose any
constraints on user chunk sizes. Each node looks like:
chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Size of previous chunk |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
`head:' | Size of chunk, in bytes |P|
mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Forward pointer to next chunk of same size |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Back pointer to previous chunk of same size |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Pointer to left child (child[0]) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Pointer to right child (child[1]) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Pointer to parent |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| bin index of this chunk |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Unused space .
. |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
`foot:' | Size of chunk, in bytes |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Each tree holding treenodes is a tree of unique chunk sizes. Chunks
of the same size are arranged in a circularly-linked list, with only
the oldest chunk (the next to be used, in our FIFO ordering)
actually in the tree. (Tree members are distinguished by a non-null
parent pointer.) If a chunk with the same size as an existing node
is inserted, it is linked off the existing node using pointers that
work in the same way as fd/bk pointers of small chunks.
Each tree contains a power of 2 sized range of chunk sizes (the
smallest is 0x100 <= x < 0x180), which is divided in half at each
tree level, with the chunks in the smaller half of the range (0x100
<= x < 0x140 for the top node) in the left subtree and the larger
half (0x140 <= x < 0x180) in the right subtree. This is, of course,
done by inspecting individual bits.
Using these rules, each node's left subtree contains all smaller
sizes than its right subtree. However, the node at the root of each
subtree has no particular ordering relationship to either. (The
dividing line between the subtree sizes is based on trie relation.)
If we remove the last chunk of a given size from the interior of the
tree, we need to replace it with a leaf node. The tree ordering
rules permit a node to be replaced by any leaf below it.
The smallest chunk in a tree (a common operation in a best-fit
allocator) can be found by walking a path to the leftmost leaf in
the tree. Unlike a usual binary tree, where we follow left child
pointers until we reach a null, here we follow the right child
pointer any time the left one is null, until we reach a leaf with
both child pointers null. The smallest chunk in the tree will be
somewhere along that path.
The worst case number of steps to add, find, or remove a node is
bounded by the number of bits differentiating chunks within
bins. Under current bin calculations, this ranges from 6 up to 21
(for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
is of course much better.
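The bit-per-level descent just described shows up later in this diff
inside bin_find(); here is a condensed sketch of that exact-size
lookup, assuming the tchunkptr, treebin_at, and
leftshift_for_tree_index helpers from the vendored header:

static tchunkptr tree_lookup(mstate m, bindex_t tidx, size_t size) {
  tchunkptr t = *treebin_at(m, tidx);
  /* pre-shift so the bit that splits this tree's range is topmost */
  size_t sizebits = size << leftshift_for_tree_index(tidx);
  while (t != 0 && chunksize(t) != size) {
    /* the most significant remaining bit picks left or right child */
    t = t->child[(sizebits >> (SIZE_T_BITSIZE - SIZE_T_ONE)) & 1];
    sizebits <<= 1; /* consume one bit per tree level */
  }
  return t; /* null if no chunk of exactly this size is binned */
}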
----------------------------- Segments --------------------------------
Each malloc space may include non-contiguous segments, held in a
list headed by an embedded malloc_segment record representing the
top-most space. Segments also include flags holding properties of
the space. Large chunks that are directly allocated by mmap are not
included in this list. They are instead independently created and
destroyed without otherwise keeping track of them.
Segment management mainly comes into play for spaces allocated by
MMAP. Any call to MMAP might or might not return memory that is
adjacent to an existing segment. MORECORE normally contiguously
extends the current space, so this space is almost always adjacent,
which is simpler and faster to deal with. (This is why MORECORE is
used preferentially to MMAP when both are available -- see
sys_alloc.) When allocating using MMAP, we don't use any of the
hinting mechanisms (inconsistently) supported in various
implementations of unix mmap, or distinguish reserving from
committing memory. Instead, we just ask for space, and exploit
contiguity when we get it. It is probably possible to do
better than this on some systems, but no general scheme seems
to be significantly better.
Management entails a simpler variant of the consolidation scheme
used for chunks to reduce fragmentation -- new adjacent memory is
normally prepended or appended to an existing segment. However,
there are limitations compared to chunk consolidation that mostly
reflect the fact that segment processing is relatively infrequent
(occurring only when getting memory from system) and that we
don't expect to have huge numbers of segments:
* Segments are not indexed, so traversal requires linear scans. (It
would be possible to index these, but is not worth the extra
overhead and complexity for most programs on most platforms.)
* New segments are only appended to old ones when holding top-most
memory; if they cannot be prepended to others, they are held in
different segments.
Except for the top-most segment of an mstate, each segment record
is kept at the tail of its segment. Segments are added by pushing
segment records onto the list headed by &mstate.seg for the
containing mstate.
Segment flags control allocation/merge/deallocation policies:
* If EXTERN_BIT set, then we did not allocate this segment,
and so should not try to deallocate or merge with others.
(This currently holds only for the initial segment passed
into create_mspace_with_base.)
* If USE_MMAP_BIT set, the segment may be merged with
other surrounding mmapped segments and trimmed/de-allocated
using munmap.
* If neither bit is set, then the segment was obtained using
MORECORE so can be merged with surrounding MORECORE'd segments
and deallocated/trimmed using MORECORE with negative arguments.
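A minimal sketch of how those flag tests look in code (names follow
standard dlmalloc's malloc_segment; the vendored header may differ
slightly):

#define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT)
#define is_extern_segment(S)  ((S)->sflags & EXTERN_BIT)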
---------------------------- malloc_state -----------------------------
A malloc_state holds all of the bookkeeping for a space.
The main fields are:
Top
The topmost chunk of the currently active segment. Its size is
cached in topsize. The actual size of topmost space is
topsize+TOP_FOOT_SIZE, which includes space reserved for adding
fenceposts and segment records if necessary when getting more
space from the system. The size at which to autotrim top is
cached from mparams in trim_check, except that it is disabled if
an autotrim fails.
Designated victim (dv)
This is the preferred chunk for servicing small requests that
don't have exact fits. It is normally the chunk split off most
recently to service another small request. Its size is cached in
dvsize. The link fields of this chunk are not maintained since it
is not kept in a bin.
SmallBins
An array of bin headers for free chunks. These bins hold chunks
with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
chunks of all the same size, spaced 8 bytes apart. To simplify
use in double-linked lists, each bin header acts as a malloc_chunk
pointing to the real first node, if it exists (else pointing to
itself). This avoids special-casing for headers. But to avoid
waste, we allocate only the fd/bk pointers of bins, and then use
repositioning tricks to treat these as the fields of a chunk.
TreeBins
Treebins are pointers to the roots of trees holding a range of
sizes. There are 2 equally spaced treebins for each power of two
from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
larger.
Bin maps
There is one bit map for small bins ("smallmap") and one for
treebins ("treemap). Each bin sets its bit when non-empty, and
clears the bit when empty. Bit operations are then used to avoid
bin-by-bin searching -- nearly all "search" is done without ever
looking at bins that won't be selected. The bit maps
conservatively use 32 bits per map word, even if on 64bit system.
For a good description of some of the bit-based techniques used
here, see Henry S. Warren Jr's book "Hacker's Delight" (and
supplement at http://hackersdelight.org/). Many of these are
intended to reduce the branchiness of paths through malloc etc, as
well as to reduce the number of memory locations read or written.
Segments
A list of segments headed by an embedded malloc_segment record
representing the initial space.
Address check support
The least_addr field is the least address ever obtained from
MORECORE or MMAP. Attempted frees and reallocs of any address less
than this are trapped (unless INSECURE is defined).
Magic tag
A cross-check field that should always hold the same value as mparams.magic.
Max allowed footprint
The maximum allowed bytes to allocate from system (zero means no limit)
Flags
Bits recording whether to use MMAP, locks, or contiguous MORECORE
Statistics
Each space keeps track of current and maximum system memory
obtained via MORECORE or MMAP.
Trim support
Fields holding the amount of unused topmost memory that should trigger
trimming, and a counter to force periodic scanning to release unused
non-topmost segments.
Locking
If USE_LOCKS is defined, the "mutex" lock is acquired and released
around every public call using this mspace.
Extension support
A void* pointer and a size_t field that can be used to help implement
extensions to this malloc.
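As an illustration of the bin-map technique described under "Bin
maps" above, here is a hypothetical helper (not from the vendored
sources) that locates the first non-empty small bin at least as large
as a request without touching the bins themselves:

/* smallmap has bit i set iff smallbin i is non-empty; mask off bins
   that are too small, then count trailing zeros to get the smallest
   eligible bin -- two instructions instead of a bin-by-bin scan.
   Assumes start < 32, since there are at most 32 small bins. */
static int first_nonempty_smallbin(unsigned smallmap, unsigned start) {
  unsigned eligible = smallmap & (~0u << start);
  if (!eligible) return -1; /* no small bin can satisfy this request */
  return __builtin_ctz(eligible);
}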
////////////////////////////////////////////////////////////////////////////////
* MSPACES
If MSPACES is defined, then in addition to malloc, free, etc.,
this file also defines mspace_malloc, mspace_free, etc. These
@@ -213,12 +579,12 @@
indicating its originating mspace, and frees are directed to their
originating spaces. Normally, this requires use of locks.
-───────────────────────── Compile-time options ───────────────────────────
+------------------------- Compile-time options ---------------------------
Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly casted. You can also
-use the symbolic values SIZE_MAX, SIZE_T_ONE, etc below.
+use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
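For example (illustrative values only), on a system with 32-bit int a
plain integer literal is evaluated in int, so a large threshold must
be cast before the arithmetic happens:

/* wrong: the product overflows int before any widening to size_t */
/* #define DEFAULT_MMAP_THRESHOLD (8 * 1024 * 1024 * 1024) */
/* right: the leading cast forces the whole product into size_t */
#define DEFAULT_MMAP_THRESHOLD ((size_t)8 * 1024 * 1024 * 1024)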
WIN32 default: defined if _WIN32 defined
Defining WIN32 sets up defaults for MS environment and compilers.
@@ -287,7 +653,7 @@ FOOTERS default: 0
information in the footers of allocated chunks. This adds
space and time overhead.
-TRUSTWORTHY default: 0
+INSECURE default: 0
If true, omit checks for usage errors and heap space overwrites.
USE_DL_PREFIX default: NOT defined
@@ -301,7 +667,7 @@ MALLOC_INSPECT_ALL default: NOT defined
functions is otherwise restricted, you probably do not want to
include them in secure implementations.
-MALLOC_ABORT default: defined as abort()
+ABORT default: defined as abort()
Defines how to abort on failed checks. On most systems, a failed
check cannot die with an "assert" or even print an informative
message, because the underlying print routines in turn call malloc,
@@ -389,6 +755,10 @@ HAVE_MMAP default: 1 (true)
able to unmap memory that may have been allocated using multiple calls
to MMAP, so long as they are adjacent.
HAVE_MREMAP default: 1 on linux, else 0
If true realloc() uses mremap() to re-allocate large blocks and
extend or shrink allocation spaces.
MMAP_CLEARS default: 1 except on WINCE.
True if mmap clears memory so calloc doesn't need to. This is true
for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
@@ -408,6 +778,10 @@ malloc_getpagesize default: derive from system includes, or 4096.
if WIN32, where page size is determined using getSystemInfo during
initialization.
USE_DEV_RANDOM default: 0 (i.e., not used)
Causes malloc to use /dev/random to initialize secure magic seed for
stamping footers. Otherwise, the current time is used.
NO_MALLINFO default: 0
If defined, don't compile "mallinfo". This can be a simple way
of dealing with mismatches between system declarations and
@@ -466,7 +840,7 @@ DEFAULT_TRIM_THRESHOLD default: 2MB
and released in ways that can reuse each other's storage, perhaps
mixed with phases where there are no such chunks at all. The trim
value must be greater than page size to have any useful effect. To
-disable trimming completely, you can set to SIZE_MAX. Note that the trick
+disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
some people use of mallocing a huge space and then freeing it at
program startup, in an attempt to reserve system memory, doesn't
have the intended effect under automatic trimming, since that memory
@@ -494,7 +868,7 @@ DEFAULT_MMAP_THRESHOLD default: 256K
nearly always outweigh disadvantages for "large" chunks, but the
value of "large" may vary across systems. The default is an
empirically derived value that works well in most systems. You can
-disable mmap by setting to SIZE_MAX.
+disable mmap by setting to MAX_SIZE_T.
MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
The number of consolidated frees between checks to release
@@ -507,13 +881,100 @@ MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
consolidation. The best value for this parameter is a compromise
between slowing down frees with relatively costly checks that
rarely trigger versus holding on to unused memory. To effectively
-disable, set to SIZE_MAX. This may lead to a very slight speed
+disable, set to MAX_SIZE_T. This may lead to a very slight speed
improvement at the expense of carrying around more memory.
────────────────────────────────────────────────────────────────────────────────
Guidelines for creating a custom version of MORECORE:
* For best performance, MORECORE should allocate in multiples of pagesize.
* MORECORE may allocate more memory than requested. (Or even less,
but this will usually result in a malloc failure.)
* MORECORE must not allocate memory when given argument zero, but
instead return one past the end address of memory from previous
nonzero call.
* For best performance, consecutive calls to MORECORE with positive
arguments should return increasing addresses, indicating that
space has been contiguously extended.
* Even though consecutive calls to MORECORE need not return contiguous
addresses, it must be OK for malloc'ed chunks to span multiple
regions in those cases where they do happen to be contiguous.
* MORECORE need not handle negative arguments -- it may instead
just return MFAIL when given negative arguments.
Negative arguments are always multiples of pagesize. MORECORE
must not misinterpret negative args as large positive unsigned
args. You can suppress all such calls from even occurring by defining
MORECORE_CANNOT_TRIM.
As an example alternative MORECORE, here is a custom allocator
kindly contributed for pre-OSX macOS. It uses virtually but not
necessarily physically contiguous non-paged memory (locked in,
present and won't get swapped out). You can use it by uncommenting
this section, adding some #includes, and setting up the appropriate
defines above:
#define MORECORE osMoreCore
There is also a shutdown routine that should somehow be called for
cleanup upon program exit.
#define MAX_POOL_ENTRIES 100
#define MINIMUM_MORECORE_SIZE (64 * 1024U)
static int next_os_pool;
void *our_os_pools[MAX_POOL_ENTRIES];
void *osMoreCore(int size)
{
void *ptr = 0;
static void *sbrk_top = 0;
if (size > 0)
{
if (size < MINIMUM_MORECORE_SIZE)
size = MINIMUM_MORECORE_SIZE;
if (CurrentExecutionLevel() == kTaskLevel)
ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
if (ptr == 0)
{
return (void *) MFAIL;
}
// save ptrs so they can be freed during cleanup
our_os_pools[next_os_pool] = ptr;
next_os_pool++;
ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
sbrk_top = (char *) ptr + size;
return ptr;
}
else if (size < 0)
{
// we don't currently support shrink behavior
return (void *) MFAIL;
}
else
{
return sbrk_top;
}
}
// cleanup any allocated memory pools
// called as last thing before shutting down driver
void osCleanupMem(void)
{
void **ptr;
for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
if (*ptr)
{
PoolDeallocate(*ptr);
*ptr = 0;
}
}
*/
/* -----------------------------------------------------------------------
History:
v2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea
* fix bad comparison in dlposix_memalign
* don't reuse adjusted asize in sys_alloc
@@ -521,7 +982,7 @@ History:
* reduce compiler warnings -- thanks to all who reported/suggested these
v2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee)
-* Always perform unlink checks unless TRUSTWORTHY
+* Always perform unlink checks unless INSECURE
* Add posix_memalign.
* Improve realloc to expand in more cases; expose realloc_in_place.
Thanks to Peter Buhr for the suggestion.
@@ -728,94 +1189,3 @@ History:
Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu)
* Based loosely on libg++-1.2X malloc. (It retains some of the overall
structure of old version, but most details differ.)


@@ -1,21 +1,13 @@
ORIGIN
DESCRIPTION
http://gee.cs.oswego.edu/
malloc-2.8.6
written by Doug Lea
LICENSE
http://creativecommons.org/publicdomain/zero/1.0/
LOCAL CHANGES
Numerous local changes were made while vendoring Doug Lea's original
dlmalloc sources. Those changes basically boil down to:
1. Fewer #ifdefs
2. More modules (so linker can do a better job)
3. Delete code we don't need (cf. Knight Capital)
4. Readability / stylistic consistency
Since we haven't made any genuine improvements to Doug Lea's legendary
allocator, we feel this folder faithfully presents his intended work, in
harmony with Cosmopolitan conventions.
The only deleted code we're sure has compelling merit is the mspace
functionality. If we ever need memory pools, they might be more
appropriately vendored under //third_party/dlmalloc_mspace.
- Introduce __oom_hook()
- Favor pause (rather than sched_yield) for spin locks


@@ -1,61 +0,0 @@
#include "libc/mem/mem.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"
/**
* Frees and clears (sets to NULL) each non-null pointer in the given
* array. This is twice as fast as freeing them one-by-one. If footers
* are used, pointers that have been allocated in different mspaces are
* not freed or cleared, and the count of all such pointers is returned.
* For large arrays of pointers with poor locality, it may be worthwhile
* to sort this array before calling bulk_free.
*/
size_t dlbulk_free(void *array[], size_t nelem) {
void **a, **b, *mem, **fence;
struct MallocChunk *p, *next;
size_t psize, newsize, unfreed;
/*
* Try to free all pointers in the given array. Note: this could be
* made faster, by delaying consolidation, at the price of disabling
* some user integrity checks. We still optimize some consolidations
* by combining adjacent chunks before freeing, which will occur often
* if allocated with ialloc or the array is sorted.
*/
unfreed = 0;
if (!PREACTION(g_dlmalloc)) {
fence = &(array[nelem]);
for (a = array; a != fence; ++a) {
mem = *a;
if (mem != 0) {
p = mem2chunk(AddressDeathAction(mem));
psize = chunksize(p);
#if FOOTERS
if (get_mstate_for(p) != g_dlmalloc) {
++unfreed;
continue;
}
#endif
check_inuse_chunk(g_dlmalloc, p);
*a = 0;
if (RTCHECK(ok_address(g_dlmalloc, p) && ok_inuse(p))) {
b = a + 1; /* try to merge with next chunk */
next = next_chunk(p);
if (b != fence && *b == chunk2mem(next)) {
newsize = chunksize(next) + psize;
set_inuse(g_dlmalloc, p, newsize);
*b = chunk2mem(p);
} else
dlmalloc_dispose_chunk(g_dlmalloc, p, psize);
} else {
CORRUPTION_ERROR_ACTION(g_dlmalloc);
break;
}
}
}
if (should_trim(g_dlmalloc, g_dlmalloc->topsize)) {
dlmalloc_sys_trim(g_dlmalloc, 0);
}
POSTACTION(g_dlmalloc);
}
return unfreed;
}
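A short usage sketch under the declarations above (dlmalloc and
dlbulk_free as defined in this tree):

#include "libc/mem/mem.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"

static void example_bulk_free(void) {
  void *ptrs[3];
  ptrs[0] = dlmalloc(32);
  ptrs[1] = dlmalloc(64);
  ptrs[2] = dlmalloc(128);
  /* frees every pointer in one pass and NULLs the slots; the return
     value counts pointers it could not free (foreign mspaces when
     FOOTERS is enabled), which are left intact in the array */
  size_t unfreed = dlbulk_free(ptrs, 3);
  (void)unfreed;
}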


@@ -1,13 +0,0 @@
#include "libc/str/str.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"
void *dlcalloc(size_t n_elements, size_t elem_size) {
void *mem;
size_t req;
if (__builtin_mul_overflow(n_elements, elem_size, &req)) req = -1;
mem = dlmalloc(req);
if (mem && calloc_must_clear(mem2chunk(mem))) {
bzero(mem, req);
}
return mem;
}


@@ -1,227 +0,0 @@
#include "libc/mem/mem.h"
#include "libc/str/str.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"
/*
Common support for independent_X routines, handling
all of the combinations that can result.
The opts arg has:
bit 0 set if all elements are same size (using sizes[0])
bit 1 set if elements should be zeroed
*/
static void **ialloc(mstate m, size_t n_elements, size_t *sizes, int opts,
void *chunks[]) {
size_t element_size; /* chunksize of each element, if all same */
size_t contents_size; /* total size of elements */
size_t array_size; /* request size of pointer array */
void *mem; /* malloced aggregate space */
mchunkptr p; /* corresponding chunk */
size_t remainder_size; /* remaining bytes while splitting */
void **marray; /* either "chunks" or malloced ptr array */
mchunkptr array_chunk; /* chunk for malloced ptr array */
flag_t was_enabled; /* to disable mmap */
size_t size;
size_t i;
ensure_initialization();
/* compute array length, if needed */
if (chunks != 0) {
if (n_elements == 0) return chunks; /* nothing to do */
marray = chunks;
array_size = 0;
} else {
/* if empty req, must still return chunk representing empty array */
if (n_elements == 0) return (void **)dlmalloc(0);
marray = 0;
array_size = request2size(n_elements * (sizeof(void *)));
}
/* compute total element size */
if (opts & 0x1) { /* all-same-size */
element_size = request2size(*sizes);
contents_size = n_elements * element_size;
} else { /* add up all the sizes */
element_size = 0;
contents_size = 0;
for (i = 0; i != n_elements; ++i) contents_size += request2size(sizes[i]);
}
size = contents_size + array_size;
/*
Allocate the aggregate chunk. First disable direct-mmapping so
malloc won't use it, since we would not be able to later
free/realloc space internal to a segregated mmap region.
*/
was_enabled = use_mmap(m);
disable_mmap(m);
mem = dlmalloc(size - CHUNK_OVERHEAD);
if (was_enabled) enable_mmap(m);
if (mem == 0) return 0;
if (PREACTION(m)) return 0;
p = mem2chunk(mem);
remainder_size = chunksize(p);
assert(!is_mmapped(p));
if (opts & 0x2) { /* optionally clear the elements */
bzero((size_t *)mem, remainder_size - SIZE_T_SIZE - array_size);
}
/* If not provided, allocate the pointer array as final part of chunk */
if (marray == 0) {
size_t array_chunk_size;
array_chunk = chunk_plus_offset(p, contents_size);
array_chunk_size = remainder_size - contents_size;
marray = AddressBirthAction((void **)(chunk2mem(array_chunk)));
set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
remainder_size = contents_size;
}
/* split out elements */
for (i = 0;; ++i) {
marray[i] = AddressBirthAction(chunk2mem(p));
if (i != n_elements - 1) {
if (element_size != 0)
size = element_size;
else
size = request2size(sizes[i]);
remainder_size -= size;
set_size_and_pinuse_of_inuse_chunk(m, p, size);
p = chunk_plus_offset(p, size);
} else { /* the final element absorbs any overallocation slop */
set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
break;
}
}
#ifdef DEBUG
if (marray != chunks) {
/* final element must have exactly exhausted chunk */
if (element_size != 0) {
assert(remainder_size == element_size);
} else {
assert(remainder_size == request2size(sizes[i]));
}
check_inuse_chunk(m, mem2chunk(marray));
}
for (i = 0; i != n_elements; ++i) {
check_inuse_chunk(m, mem2chunk(marray[i]));
}
#endif /* IsModeDbg() */
POSTACTION(m);
return marray;
}
/**
* independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
*
* independent_calloc is similar to calloc, but instead of returning a
* single cleared space, it returns an array of pointers to n_elements
* independent elements that can hold contents of size elem_size, each
* of which starts out cleared, and can be independently freed,
* realloc'ed etc. The elements are guaranteed to be adjacently
* allocated (this is not guaranteed to occur with multiple callocs or
* mallocs), which may also improve cache locality in some applications.
*
* The "chunks" argument is optional (i.e., may be null, which is
* probably the most typical usage). If it is null, the returned array
* is itself dynamically allocated and should also be freed when it is
* no longer needed. Otherwise, the chunks array must be of at least
* n_elements in length. It is filled in with the pointers to the
* chunks.
*
* In either case, independent_calloc returns this pointer array, or
* null if the allocation failed. If n_elements is zero and "chunks"
* is null, it returns a chunk representing an array with zero elements
* (which should be freed if not wanted).
*
* Each element must be freed when it is no longer needed. This can be
* done all at once using bulk_free.
*
* independent_calloc simplifies and speeds up implementations of many
* kinds of pools. It may also be useful when constructing large data
* structures that initially have a fixed number of fixed-sized nodes,
* but the number is not known at compile time, and some of the nodes
* may later need to be freed. For example:
*
* struct Node { int item; struct Node* next; };
* struct Node* build_list() {
* struct Node **pool;
* int n = read_number_of_nodes_needed();
* if (n <= 0) return 0;
* pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
* if (pool == 0) __die();
* // organize into a linked list...
* struct Node* first = pool[0];
* for (i = 0; i < n-1; ++i)
* pool[i]->next = pool[i+1];
* free(pool); // Can now free the array (or not, if it is needed later)
* return first;
* }
*/
void **dlindependent_calloc(size_t n_elements, size_t elem_size,
void *chunks[]) {
size_t sz = elem_size; /* serves as 1-element array */
return ialloc(g_dlmalloc, n_elements, &sz, 3, chunks);
}
/**
* independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
*
* independent_comalloc allocates, all at once, a set of n_elements
* chunks with sizes indicated in the "sizes" array. It returns an array
* of pointers to these elements, each of which can be independently
* freed, realloc'ed etc. The elements are guaranteed to be adjacently
* allocated (this is not guaranteed to occur with multiple callocs or
* mallocs), which may also improve cache locality in some applications.
*
* The "chunks" argument is optional (i.e., may be null). If it is null
* the returned array is itself dynamically allocated and should also
* be freed when it is no longer needed. Otherwise, the chunks array
* must be of at least n_elements in length. It is filled in with the
* pointers to the chunks.
*
* In either case, independent_comalloc returns this pointer array, or
* null if the allocation failed. If n_elements is zero and chunks is
* null, it returns a chunk representing an array with zero elements
* (which should be freed if not wanted).
*
* Each element must be freed when it is no longer needed. This can be
* done all at once using bulk_free.
*
* independent_comalloc differs from independent_calloc in that each
* element may have a different size, and also that it does not
* automatically clear elements.
*
* independent_comalloc can be used to speed up allocation in cases
* where several structs or objects must always be allocated at the
* same time. For example:
*
* struct Head { ... }
* struct Foot { ... }
* void send_message(char* msg) {
* int msglen = strlen(msg);
* size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
* void* chunks[3];
* if (independent_comalloc(3, sizes, chunks) == 0) __die();
* struct Head* head = (struct Head*)(chunks[0]);
* char* body = (char*)(chunks[1]);
* struct Foot* foot = (struct Foot*)(chunks[2]);
* // ...
* }
*
* In general though, independent_comalloc is worth using only for
* larger values of n_elements. For small values, you probably won't
* detect enough difference from series of malloc calls to bother.
*
* Overuse of independent_comalloc can increase overall memory usage,
* since it cannot reuse existing noncontiguous small chunks that might
* be available for some of the elements.
*/
void **dlindependent_comalloc(size_t n_elements, size_t sizes[],
void *chunks[]) {
return ialloc(g_dlmalloc, n_elements, sizes, 0, chunks);
}


@@ -1,247 +0,0 @@
#include "third_party/dlmalloc/dlmalloc.internal.h"
/* Check properties of any chunk, whether free, inuse, mmapped etc */
forceinline void do_check_any_chunk(mstate m, mchunkptr p) {
assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
assert(ok_address(m, p));
}
/* Check properties of top chunk */
void do_check_top_chunk(mstate m, mchunkptr p) {
msegmentptr sp = segment_holding(m, (char*)p);
size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
assert(sp != 0);
assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
assert(ok_address(m, p));
assert(sz == m->topsize);
assert(sz > 0);
assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
assert(pinuse(p));
assert(!pinuse(chunk_plus_offset(p, sz)));
}
/* Check properties of (inuse) mmapped chunks */
void do_check_mmapped_chunk(mstate m, mchunkptr p) {
size_t sz = chunksize(p);
size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
assert(is_mmapped(p));
assert(use_mmap(m));
assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
assert(ok_address(m, p));
assert(!is_small(sz));
assert((len & (g_mparams.page_size - SIZE_T_ONE)) == 0);
assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
assert(chunk_plus_offset(p, sz + SIZE_T_SIZE)->head == 0);
}
/* Check properties of inuse chunks */
void do_check_inuse_chunk(mstate m, mchunkptr p) {
do_check_any_chunk(m, p);
assert(is_inuse(p));
assert(next_pinuse(p));
/* If not pinuse and not mmapped, previous chunk has OK offset */
assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
if (is_mmapped(p)) do_check_mmapped_chunk(m, p);
}
/* Check properties of free chunks */
void do_check_free_chunk(mstate m, mchunkptr p) {
size_t sz = chunksize(p);
mchunkptr next = chunk_plus_offset(p, sz);
do_check_any_chunk(m, p);
assert(!is_inuse(p));
assert(!next_pinuse(p));
assert(!is_mmapped(p));
if (p != m->dv && p != m->top) {
if (sz >= MIN_CHUNK_SIZE) {
assert((sz & CHUNK_ALIGN_MASK) == 0);
assert(is_aligned(chunk2mem(p)));
assert(next->prev_foot == sz);
assert(pinuse(p));
assert(next == m->top || is_inuse(next));
assert(p->fd->bk == p);
assert(p->bk->fd == p);
} else /* markers are always of size SIZE_T_SIZE */
assert(sz == SIZE_T_SIZE);
}
}
/* Check properties of malloced chunks at the point they are malloced */
void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
if (mem != 0) {
mchunkptr p = mem2chunk(mem);
size_t sz = p->head & ~INUSE_BITS;
do_check_inuse_chunk(m, p);
assert((sz & CHUNK_ALIGN_MASK) == 0);
assert(sz >= MIN_CHUNK_SIZE);
assert(sz >= s);
/* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
}
}
/* Check a tree and its subtrees. */
static void do_check_tree(mstate m, tchunkptr t) {
tchunkptr head = 0;
tchunkptr u = t;
bindex_t tindex = t->index;
size_t tsize = chunksize(t);
bindex_t idx;
compute_tree_index(tsize, idx);
assert(tindex == idx);
assert(tsize >= MIN_LARGE_SIZE);
assert(tsize >= minsize_for_tree_index(idx));
assert((idx == NTREEBINS - 1) || (tsize < minsize_for_tree_index((idx + 1))));
do { /* traverse through chain of same-sized nodes */
do_check_any_chunk(m, ((mchunkptr)u));
assert(u->index == tindex);
assert(chunksize(u) == tsize);
assert(!is_inuse(u));
assert(!next_pinuse(u));
assert(u->fd->bk == u);
assert(u->bk->fd == u);
if (u->parent == 0) {
assert(u->child[0] == 0);
assert(u->child[1] == 0);
} else {
assert(head == 0); /* only one node on chain has parent */
head = u;
assert(u->parent != u);
assert(u->parent->child[0] == u || u->parent->child[1] == u ||
*((tbinptr*)(u->parent)) == u);
if (u->child[0] != 0) {
assert(u->child[0]->parent == u);
assert(u->child[0] != u);
do_check_tree(m, u->child[0]);
}
if (u->child[1] != 0) {
assert(u->child[1]->parent == u);
assert(u->child[1] != u);
do_check_tree(m, u->child[1]);
}
if (u->child[0] != 0 && u->child[1] != 0) {
assert(chunksize(u->child[0]) < chunksize(u->child[1]));
}
}
u = u->fd;
} while (u != t);
assert(head != 0);
}
/* Check all the chunks in a treebin. */
static void do_check_treebin(mstate m, bindex_t i) {
tbinptr* tb = treebin_at(m, i);
tchunkptr t = *tb;
int empty = (m->treemap & (1U << i)) == 0;
if (t == 0) assert(empty);
if (!empty) do_check_tree(m, t);
}
/* Check all the chunks in a smallbin. */
static void do_check_smallbin(mstate m, bindex_t i) {
sbinptr b = smallbin_at(m, i);
mchunkptr p = b->bk;
unsigned int empty = (m->smallmap & (1U << i)) == 0;
if (p == b) assert(empty);
if (!empty) {
for (; p != b; p = p->bk) {
size_t size = chunksize(p);
mchunkptr q;
/* each chunk claims to be free */
do_check_free_chunk(m, p);
/* chunk belongs in bin */
assert(small_index(size) == i);
assert(p->bk == b || chunksize(p->bk) == chunksize(p));
/* chunk is followed by an inuse chunk */
q = next_chunk(p);
if (q->head != FENCEPOST_HEAD) do_check_inuse_chunk(m, q);
}
}
}
/* Find x in a bin. Used in other check functions. */
static int bin_find(mstate m, mchunkptr x) {
size_t size = chunksize(x);
if (is_small(size)) {
bindex_t sidx = small_index(size);
sbinptr b = smallbin_at(m, sidx);
if (smallmap_is_marked(m, sidx)) {
mchunkptr p = b;
do {
if (p == x) return 1;
} while ((p = p->fd) != b);
}
} else {
bindex_t tidx;
compute_tree_index(size, tidx);
if (treemap_is_marked(m, tidx)) {
tchunkptr t = *treebin_at(m, tidx);
size_t sizebits = size << leftshift_for_tree_index(tidx);
while (t != 0 && chunksize(t) != size) {
t = t->child[(sizebits >> (SIZE_T_BITSIZE - SIZE_T_ONE)) & 1];
sizebits <<= 1;
}
if (t != 0) {
tchunkptr u = t;
do {
if (u == (tchunkptr)x) return 1;
} while ((u = u->fd) != t);
}
}
}
return 0;
}
/* Traverse each chunk and check it; return total */
static size_t traverse_and_check(mstate m) {
size_t sum = 0;
if (is_initialized(m)) {
msegmentptr s = &m->seg;
sum += m->topsize + TOP_FOOT_SIZE;
while (s != 0) {
mchunkptr q = align_as_chunk(s->base);
mchunkptr lastq = 0;
assert(pinuse(q));
while (segment_holds(s, q) && q != m->top && q->head != FENCEPOST_HEAD) {
sum += chunksize(q);
if (is_inuse(q)) {
assert(!bin_find(m, q));
do_check_inuse_chunk(m, q);
} else {
assert(q == m->dv || bin_find(m, q));
assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
do_check_free_chunk(m, q);
}
lastq = q;
q = next_chunk(q);
}
s = s->next;
}
}
return sum;
}
/* Check all properties of MallocState. */
void do_check_malloc_state(mstate m) {
bindex_t i;
size_t total;
/* check bins */
for (i = 0; i < NSMALLBINS; ++i) do_check_smallbin(m, i);
for (i = 0; i < NTREEBINS; ++i) do_check_treebin(m, i);
if (m->dvsize != 0) { /* check dv chunk */
do_check_any_chunk(m, m->dv);
assert(m->dvsize == chunksize(m->dv));
assert(m->dvsize >= MIN_CHUNK_SIZE);
assert(bin_find(m, m->dv) == 0);
}
if (m->top != 0) { /* check top chunk */
do_check_top_chunk(m, m->top);
/*assert(m->topsize == chunksize(m->top)); redundant */
assert(m->topsize > 0);
assert(bin_find(m, m->top) == 0);
}
total = traverse_and_check(m);
assert(total <= m->footprint);
assert(m->footprint <= m->max_footprint);
}

File diff suppressed because it is too large.

third_party/dlmalloc/dlmalloc.h vendored (new file, 510 lines)

@@ -0,0 +1,510 @@
#ifndef COSMOPOLITAN_THIRD_PARTY_DLMALLOC_DLMALLOC_H_
#define COSMOPOLITAN_THIRD_PARTY_DLMALLOC_DLMALLOC_H_
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
/*
malloc(size_t n)
Returns a pointer to a newly allocated chunk of at least n bytes, or
null if no space is available, in which case errno is set to ENOMEM
on ANSI C systems.
If n is zero, malloc returns a minimum-sized chunk. (The minimum
size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
systems.) Note that size_t is an unsigned type, so calls with
arguments that would be negative if signed are interpreted as
requests for huge amounts of space, which will often fail. The
maximum supported value of n differs across systems, but is in all
cases less than the maximum representable value of a size_t.
*/
void* dlmalloc(size_t);
/*
free(void* p)
Releases the chunk of memory pointed to by p, that had been previously
allocated using malloc or a related routine such as realloc.
It has no effect if p is null. If p was not malloced or already
freed, free(p) will by default cause the current program to abort.
*/
void dlfree(void*);
/*
calloc(size_t n_elements, size_t element_size);
Returns a pointer to n_elements * element_size bytes, with all locations
set to zero.
*/
void* dlcalloc(size_t, size_t);
/*
realloc(void* p, size_t n)
Returns a pointer to a chunk of size n that contains the same data
as does chunk p up to the minimum of (n, p's size) bytes, or null
if no space is available.
The returned pointer may or may not be the same as p. The algorithm
prefers extending p in most cases when possible, otherwise it
employs the equivalent of a malloc-copy-free sequence.
If p is null, realloc is equivalent to malloc.
If space is not available, realloc returns null, errno is set (if on
ANSI) and p is NOT freed.
if n is for fewer bytes than already held by p, the newly unused
space is lopped off and freed if possible. realloc with a size
argument of zero (re)allocates a minimum-sized chunk.
The old unix realloc convention of allowing the last-free'd chunk
to be used as an argument to realloc is not supported.
*/
void* dlrealloc(void*, size_t);
/*
realloc_in_place(void* p, size_t n)
Resizes the space allocated for p to size n, only if this can be
done without moving p (i.e., only if there is adjacent space
available if n is greater than p's current allocated size, or n is
less than or equal to p's size). This may be used instead of plain
realloc if an alternative allocation strategy is needed upon failure
to expand space; for example, reallocation of a buffer that must be
memory-aligned or cleared. You can use realloc_in_place to trigger
these alternatives only when needed.
Returns p if successful; otherwise null.
*/
void* dlrealloc_in_place(void*, size_t);
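A sketch of the fallback pattern the comment above describes, for a
buffer whose alignment must survive growth (dlmemalign and dlfree are
declared in this header; memcpy comes from libc):

static void *grow_aligned(void *p, size_t oldsize, size_t newsize,
                          size_t alignment) {
  void *q = dlrealloc_in_place(p, newsize);
  if (q) return q; /* resized without moving */
  q = dlmemalign(alignment, newsize); /* plain realloc could move and
                                         lose the alignment guarantee */
  if (!q) return 0;
  memcpy(q, p, oldsize);
  dlfree(p);
  return q;
}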
/*
memalign(size_t alignment, size_t n);
Returns a pointer to a newly allocated chunk of n bytes, aligned
in accord with the alignment argument.
The alignment argument should be a power of two. If the argument is
not a power of two, the nearest greater power is used.
8-byte alignment is guaranteed by normal malloc calls, so don't
bother calling memalign with an argument of 8 or less.
Overreliance on memalign is a sure way to fragment space.
*/
void* dlmemalign(size_t, size_t);
/*
int posix_memalign(void** pp, size_t alignment, size_t n);
Allocates a chunk of n bytes, aligned in accord with the alignment
argument. Differs from memalign only in that it (1) assigns the
allocated memory to *pp rather than returning it, (2) fails and
returns EINVAL if the alignment is not a power of two (3) fails and
returns ENOMEM if memory cannot be allocated.
*/
int dlposix_memalign(void**, size_t, size_t);
/*
valloc(size_t n);
Equivalent to memalign(pagesize, n), where pagesize is the page
size of the system. If the pagesize is unknown, 4096 is used.
*/
void* dlvalloc(size_t);
/*
mallopt(int parameter_number, int parameter_value)
Sets tunable parameters. The format is to provide a
(parameter-number, parameter-value) pair. mallopt then sets the
corresponding parameter to the argument value if it can (i.e., so
long as the value is meaningful), and returns 1 if successful else
0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
normally defined in malloc.h. None of these are used in this malloc,
so setting them has no effect. But this malloc also supports other
options in mallopt:
Symbol param # default allowed param values
M_TRIM_THRESHOLD -1 2*1024*1024 any (-1U disables trimming)
M_GRANULARITY -2 page size any power of 2 >= page size
M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
*/
int dlmallopt(int, int);
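For instance, using the parameter numbers tabulated above
(illustrative values; the M_* symbols come from the table):

static void tune_malloc(void) {
  /* raise the mmap threshold to 1 MiB (param -3) */
  dlmallopt(M_MMAP_THRESHOLD, 1024 * 1024);
  /* -1 disables trimming entirely (param -1) */
  dlmallopt(M_TRIM_THRESHOLD, -1);
}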
/*
malloc_footprint();
Returns the number of bytes obtained from the system. The total
number of bytes allocated by malloc, realloc etc., is less than this
value. Unlike mallinfo, this function returns only a precomputed
result, so can be called frequently to monitor memory consumption.
Even if locks are otherwise defined, this function does not use them,
so results might not be up to date.
*/
size_t dlmalloc_footprint(void);
/*
malloc_max_footprint();
Returns the maximum number of bytes obtained from the system. This
value will be greater than current footprint if deallocated space
has been reclaimed by the system. The peak number of bytes allocated
by malloc, realloc etc., is less than this value. Unlike mallinfo,
this function returns only a precomputed result, so can be called
frequently to monitor memory consumption. Even if locks are
otherwise defined, this function does not use them, so results might
not be up to date.
*/
size_t dlmalloc_max_footprint(void);
/*
malloc_footprint_limit();
Returns the number of bytes that the heap is allowed to obtain from
the system, returning the last value returned by
malloc_set_footprint_limit, or the maximum size_t value if
never set. The returned value reflects a permission. There is no
guarantee that this number of bytes can actually be obtained from
the system.
*/
size_t dlmalloc_footprint_limit(void);
/*
malloc_set_footprint_limit();
Sets the maximum number of bytes to obtain from the system, causing
failure returns from malloc and related functions upon attempts to
exceed this value. The argument value may be subject to page
rounding to an enforceable limit; this actual value is returned.
Using an argument of the maximum possible size_t effectively
disables checks. If the argument is less than or equal to the
current malloc_footprint, then all future allocations that require
additional system memory will fail. However, invocation cannot
retroactively deallocate existing used memory.
*/
size_t dlmalloc_set_footprint_limit(size_t bytes);
/*
malloc_inspect_all(void(*handler)(void *start,
void *end,
size_t used_bytes,
void* callback_arg),
void* arg);
Traverses the heap and calls the given handler for each managed
region, skipping all bytes that are (or may be) used for bookkeeping
purposes. Traversal does not include chunks that have been
directly memory mapped. Each reported region begins at the start
address, and continues up to but not including the end address. The
first used_bytes of the region contain allocated data. If
used_bytes is zero, the region is unallocated. The handler is
invoked with the given callback argument. If locks are defined, they
are held during the entire traversal. It is a bad idea to invoke
other malloc functions from within the handler.
For example, to count the number of in-use chunks with size greater
than 1000, you could write:
static int count = 0;
void count_chunks(void* start, void* end, size_t used, void* arg) {
if (used >= 1000) ++count;
}
then:
malloc_inspect_all(count_chunks, NULL);
malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
*/
void dlmalloc_inspect_all(void (*handler)(void*, void*, size_t, void*),
void* arg);
/*
mallinfo()
Returns (by copy) a struct containing various summary statistics:
arena: current total non-mmapped bytes allocated from system
ordblks: the number of free chunks
smblks: always zero.
hblks: current number of mmapped regions
hblkhd: total bytes held in mmapped regions
usmblks: the maximum total allocated space. This will be greater
than current total if trimming has occurred.
fsmblks: always zero
uordblks: current total allocated space (normal or mmapped)
fordblks: total free space
keepcost: the maximum number of bytes that could ideally be released
back to system via malloc_trim. ("ideally" means that
it ignores page restrictions etc.)
Because these fields are ints, but internal bookkeeping may
be kept as longs, the reported values may wrap around zero and
thus be inaccurate.
*/
struct mallinfo dlmallinfo(void);
/*
independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
independent_calloc is similar to calloc, but instead of returning a
single cleared space, it returns an array of pointers to n_elements
independent elements that can hold contents of size elem_size, each
of which starts out cleared, and can be independently freed,
realloc'ed etc. The elements are guaranteed to be adjacently
allocated (this is not guaranteed to occur with multiple callocs or
mallocs), which may also improve cache locality in some
applications.
The "chunks" argument is optional (i.e., may be null, which is
probably the most typical usage). If it is null, the returned array
is itself dynamically allocated and should also be freed when it is
no longer needed. Otherwise, the chunks array must be of at least
n_elements in length. It is filled in with the pointers to the
chunks.
In either case, independent_calloc returns this pointer array, or
null if the allocation failed. If n_elements is zero and "chunks"
is null, it returns a chunk representing an array with zero elements
(which should be freed if not wanted).
Each element must be freed when it is no longer needed. This can be
done all at once using bulk_free.
independent_calloc simplifies and speeds up implementations of many
kinds of pools. It may also be useful when constructing large data
structures that initially have a fixed number of fixed-sized nodes,
but the number is not known at compile time, and some of the nodes
may later need to be freed. For example:
struct Node { int item; struct Node* next; };
struct Node* build_list() {
struct Node** pool;
int n = read_number_of_nodes_needed();
if (n <= 0) return 0;
  pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
if (pool == 0) die();
// organize into a linked list...
struct Node* first = pool[0];
  for (int i = 0; i < n-1; ++i)
pool[i]->next = pool[i+1];
free(pool); // Can now free the array (or not, if it is needed later)
return first;
}
*/
void** dlindependent_calloc(size_t, size_t, void**);
/*
independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
independent_comalloc allocates, all at once, a set of n_elements
chunks with sizes indicated in the "sizes" array. It returns
an array of pointers to these elements, each of which can be
independently freed, realloc'ed etc. The elements are guaranteed to
be adjacently allocated (this is not guaranteed to occur with
multiple callocs or mallocs), which may also improve cache locality
in some applications.
The "chunks" argument is optional (i.e., may be null). If it is null
the returned array is itself dynamically allocated and should also
be freed when it is no longer needed. Otherwise, the chunks array
must be of at least n_elements in length. It is filled in with the
pointers to the chunks.
In either case, independent_comalloc returns this pointer array, or
null if the allocation failed. If n_elements is zero and chunks is
null, it returns a chunk representing an array with zero elements
(which should be freed if not wanted).
Each element must be freed when it is no longer needed. This can be
done all at once using bulk_free.
  independent_comalloc differs from independent_calloc in that each
element may have a different size, and also that it does not
automatically clear elements.
independent_comalloc can be used to speed up allocation in cases
where several structs or objects must always be allocated at the
same time. For example:
  struct Head { ... };
  struct Foot { ... };
void send_message(char* msg) {
int msglen = strlen(msg);
size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
void* chunks[3];
if (independent_comalloc(3, sizes, chunks) == 0)
die();
struct Head* head = (struct Head*)(chunks[0]);
char* body = (char*)(chunks[1]);
struct Foot* foot = (struct Foot*)(chunks[2]);
// ...
}
In general though, independent_comalloc is worth using only for
larger values of n_elements. For small values, you probably won't
detect enough difference from series of malloc calls to bother.
Overuse of independent_comalloc can increase overall memory usage,
since it cannot reuse existing noncontiguous small chunks that
might be available for some of the elements.
*/
void** dlindependent_comalloc(size_t, size_t*, void**);
/*
bulk_free(void* array[], size_t n_elements)
Frees and clears (sets to null) each non-null pointer in the given
array. This is likely to be faster than freeing them one-by-one.
If footers are used, pointers that have been allocated in different
mspaces are not freed or cleared, and the count of all such pointers
is returned. For large arrays of pointers with poor locality, it
may be worthwhile to sort this array before calling bulk_free.
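  For example, a sketch that releases a small batch in one call:
    void* batch[3];
    batch[0] = dlmalloc(16);
    batch[1] = dlmalloc(32);
    batch[2] = dlmalloc(64);
    dlbulk_free(batch, 3);  // frees all three; slots are set to null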
*/
size_t dlbulk_free(void**, size_t n_elements);
/*
pvalloc(size_t n);
Equivalent to valloc(minimum-page-that-holds(n)), that is,
round up n to nearest pagesize.
*/
void* dlpvalloc(size_t);
/*
malloc_trim(size_t pad);
If possible, gives memory back to the system (via negative arguments
to sbrk) if there is unused memory at the `high' end of the malloc
pool or in unused MMAP segments. You can call this after freeing
large blocks of memory to potentially reduce the system-level memory
requirements of a program. However, it cannot guarantee to reduce
memory. Under some allocation patterns, some large free blocks of
memory will be locked between two used chunks, so they cannot be
given back to the system.
The `pad' argument to malloc_trim represents the amount of free
trailing space to leave untrimmed. If this argument is zero, only
the minimum amount of memory to maintain internal data structures
will be left. Non-zero arguments can be supplied to maintain enough
trailing space to service future expected allocations without having
to re-obtain memory from the system.
Malloc_trim returns 1 if it actually released any memory, else 0.
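  For example, a sketch that hands memory back after a large workload,
  keeping roughly 1 MiB of slack for expected future allocations:
    void* big = dlmalloc(32 * 1024 * 1024);
    // ... use big ...
    dlfree(big);
    int released = dlmalloc_trim(1024 * 1024);  // 1 if memory released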
*/
int dlmalloc_trim(size_t);
/*
malloc_stats();
Prints on stderr the amount of space obtained from the system (both
via sbrk and mmap), the maximum amount (which may be more than
current if malloc_trim and/or munmap got called), and the current
number of bytes allocated via malloc (or realloc, etc) but not yet
freed. Note that this is the number of bytes allocated, not the
number requested. It will be larger than the number requested
because of alignment and bookkeeping overhead. Because it includes
alignment wastage as being in use, this figure may be greater than
zero even when no user-level chunks are allocated.
The reported current and maximum system memory can be inaccurate if
a program makes other calls to system memory allocation functions
(normally sbrk) outside of malloc.
malloc_stats prints only the most commonly interesting statistics.
More information can be obtained by calling mallinfo.
malloc_stats is not compiled if NO_MALLOC_STATS is defined.
*/
void dlmalloc_stats(void);
/*
malloc_usable_size(void* p);
Returns the number of bytes you can actually use in
an allocated chunk, which may be more than you requested (although
often not) due to alignment and minimum size constraints.
You can use this many bytes without worrying about
overwriting other allocated objects. This is not a particularly great
programming practice. malloc_usable_size can be more useful in
debugging and assertions, for example:
p = malloc(n);
assert(malloc_usable_size(p) >= 256);
*/
size_t dlmalloc_usable_size(const void*);
/*
mspace is an opaque type representing an independent
region of space that supports mspace_malloc, etc.
*/
typedef void* mspace;
/*
create_mspace creates and returns a new independent space with the
given initial capacity, or, if 0, the default granularity size. It
returns null if there is no system memory available to create the
space. If argument locked is non-zero, the space uses a separate
lock to control access. The capacity of the space will grow
dynamically as needed to service mspace_malloc requests. You can
control the sizes of incremental increases of this space by
compiling with a different DEFAULT_GRANULARITY or dynamically
setting with mallopt(M_GRANULARITY, value).
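  For example, a sketch of a short-lived arena, using the mspace
  functions declared below:
    mspace arena = create_mspace(0, 1);  // default size, own lock
    if (arena) {
      char* buf = mspace_malloc(arena, 4096);
      // ... use buf; individual frees are optional ...
      destroy_mspace(arena);  // releases everything at once
    }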
*/
mspace create_mspace(size_t capacity, int locked);
/*
destroy_mspace destroys the given space, and attempts to return all
of its memory back to the system, returning the total number of
bytes freed. After destruction, the results of access to all memory
used by the space become undefined.
*/
size_t destroy_mspace(mspace msp);
/*
create_mspace_with_base uses the memory supplied as the initial base
of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
space is used for bookkeeping, so the capacity must be at least this
large. (Otherwise 0 is returned.) When this initial space is
exhausted, additional memory will be obtained from the system.
Destroying this space will deallocate all additionally allocated
space (if possible) but not the initial base.
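  For example, a sketch that carves an mspace out of a static buffer
  (the capacity must exceed the bookkeeping overhead noted above):
    static char slab[64 * 1024];
    mspace m = create_mspace_with_base(slab, sizeof(slab), 0);
    void* p = m ? mspace_malloc(m, 256) : 0;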
*/
mspace create_mspace_with_base(void* base, size_t capacity, int locked);
/*
mspace_track_large_chunks controls whether requests for large chunks
are allocated in their own untracked mmapped regions, separate from
others in this mspace. By default large chunks are not tracked,
which reduces fragmentation. However, such chunks are not
necessarily released to the system upon destroy_mspace. Enabling
tracking by setting to true may increase fragmentation, but avoids
leakage when relying on destroy_mspace to release all memory
allocated using this space. The function returns the previous
setting.
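  For example, a sketch that opts an arena into tracking so that
  destroy_mspace reclaims even its large mmapped chunks:
    mspace m = create_mspace(0, 0);
    int old = mspace_track_large_chunks(m, 1);  // returns prior setting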
*/
int mspace_track_large_chunks(mspace msp, int enable);
/*
mspace_mallinfo behaves as mallinfo, but reports properties of
the given space.
*/
struct mallinfo mspace_mallinfo(mspace msp);
/*
An alias for mallopt.
*/
int mspace_mallopt(int, int);
/*
The following operate identically to their malloc counterparts
but operate only for the given mspace argument
*/
void* mspace_malloc(mspace msp, size_t bytes);
void mspace_free(mspace msp, void* mem);
void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
void* mspace_realloc(mspace msp, void* mem, size_t newsize);
void* mspace_realloc_in_place(mspace msp, void* mem, size_t newsize);
void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
void** mspace_independent_calloc(mspace msp, size_t n_elements,
size_t elem_size, void* chunks[]);
void** mspace_independent_comalloc(mspace msp, size_t n_elements,
size_t sizes[], void* chunks[]);
size_t mspace_bulk_free(mspace msp, void**, size_t n_elements);
size_t mspace_usable_size(const void* mem);
void mspace_malloc_stats(mspace msp);
int mspace_trim(mspace msp, size_t pad);
size_t mspace_footprint(mspace msp);
size_t mspace_max_footprint(mspace msp);
size_t mspace_footprint_limit(mspace msp);
size_t mspace_set_footprint_limit(mspace msp, size_t bytes);
void mspace_inspect_all(mspace msp,
void (*handler)(void*, void*, size_t, void*),
void* arg);
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* COSMOPOLITAN_THIRD_PARTY_DLMALLOC_DLMALLOC_H_ */

File diff suppressed because it is too large

View file

@@ -31,9 +31,11 @@ THIRD_PARTY_DLMALLOC_A_DIRECTDEPS = \
LIBC_NEXGEN32E \
LIBC_RUNTIME \
LIBC_STR \
LIBC_RAND \
LIBC_STUBS \
LIBC_SYSV \
LIBC_SYSV_CALLS
LIBC_SYSV_CALLS \
THIRD_PARTY_COMPILER_RT
THIRD_PARTY_DLMALLOC_A_DEPS := \
$(call uniq,$(foreach x,$(THIRD_PARTY_DLMALLOC_A_DIRECTDEPS),$($(x))))
@@ -50,13 +52,8 @@ $(THIRD_PARTY_DLMALLOC_A).pkg: \
$(THIRD_PARTY_DLMALLOC_A_OBJS): \
OVERRIDE_CFLAGS += \
$(NO_MAGIC) \
-fno-sanitize=address
ifneq ($(MODE),dbg)
$(THIRD_PARTY_DLMALLOC_A_OBJS): \
OVERRIDE_CFLAGS += \
-DNDEBUG
endif
-ffunction-sections \
-fdata-sections
THIRD_PARTY_DLMALLOC_LIBS = $(foreach x,$(THIRD_PARTY_DLMALLOC_ARTIFACTS),$($(x)))
THIRD_PARTY_DLMALLOC_SRCS = $(foreach x,$(THIRD_PARTY_DLMALLOC_ARTIFACTS),$($(x)_SRCS))

View file

@@ -1,48 +0,0 @@
#include "libc/mem/mem.h"
#include "libc/str/str.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"
/**
* Returns a struct reporting the amount of space obtained from the system (both
* via sbrk and mmap), the maximum amount (which may be more than
* current if malloc_trim and/or munmap got called), and the current
* number of bytes allocated via malloc (or realloc, etc) but not yet
* freed. Note that this is the number of bytes allocated, not the
* number requested. It will be larger than the number requested because
* of alignment and bookkeeping overhead. Because it includes alignment
* wastage as being in use, this figure may be greater than zero even
* when no user-level chunks are allocated.
*
* The reported current and maximum system memory can be inaccurate if a
* program makes other calls to system memory allocation functions
* (normally sbrk) outside of malloc.
*
* dlmalloc_stats reports only the most commonly interesting statistics.
* More information can be obtained by calling mallinfo.
*/
struct MallocStats dlmalloc_stats(mstate m) {
struct MallocChunk *q;
struct MallocStats res;
bzero(&res, sizeof(res));
ensure_initialization();
if (!PREACTION(m)) {
check_malloc_state(m);
if (is_initialized(m)) {
msegmentptr s = &m->seg;
res.maxfp = m->max_footprint;
res.fp = m->footprint;
res.used = res.fp - (m->topsize + TOP_FOOT_SIZE);
while (s != 0) {
q = align_as_chunk(s->base);
while (segment_holds(s, q) && q != m->top &&
q->head != FENCEPOST_HEAD) {
if (!is_inuse(q)) res.used -= chunksize(q);
q = next_chunk(q);
}
s = s->next;
}
}
POSTACTION(m); /* drop lock */
}
return res;
}

View file

@@ -1,110 +0,0 @@
#include "libc/errno.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"
/* Realloc using mmap */
mchunkptr dlmalloc_mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
size_t oldsize = chunksize(oldp);
if (is_small(nb)) return 0; /* Can't shrink mmap regions below small size */
/* Keep old chunk if big enough but not too big */
if (oldsize >= nb + SIZE_T_SIZE &&
(oldsize - nb) <= (g_mparams.granularity << 1)) {
return oldp;
} else {
size_t offset = oldp->prev_foot;
size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
int err = errno;
char *cp = mremap((char *)oldp - offset, oldmmsize, newmmsize, flags, 0);
errno = err;
if (cp != CMFAIL) {
mchunkptr newp = (mchunkptr)(cp + offset);
size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
newp->head = psize;
mark_inuse_foot(m, newp, psize);
chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
chunk_plus_offset(newp, psize + SIZE_T_SIZE)->head = 0;
if (cp < m->least_addr) m->least_addr = cp;
if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint) {
m->max_footprint = m->footprint;
}
check_mmapped_chunk(m, newp);
return newp;
}
}
return 0;
}
/* Try to realloc; only in-place unless can_move true */
mchunkptr dlmalloc_try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
int can_move) {
mchunkptr newp = 0;
size_t oldsize = chunksize(p);
mchunkptr next = chunk_plus_offset(p, oldsize);
if (RTCHECK(ok_address(m, p) && ok_inuse(p) && ok_next(p, next) &&
ok_pinuse(next))) {
if (!is_mmapped(p)) {
if (oldsize >= nb) { /* already big enough */
size_t rsize = oldsize - nb;
if (rsize >= MIN_CHUNK_SIZE) { /* split off remainder */
mchunkptr r = chunk_plus_offset(p, nb);
set_inuse(m, p, nb);
set_inuse(m, r, rsize);
dlmalloc_dispose_chunk(m, r, rsize);
}
newp = p;
} else if (next == m->top) { /* extend into top */
if (oldsize + m->topsize > nb) {
size_t newsize = oldsize + m->topsize;
size_t newtopsize = newsize - nb;
mchunkptr newtop = chunk_plus_offset(p, nb);
set_inuse(m, p, nb);
newtop->head = newtopsize | PINUSE_BIT;
m->top = newtop;
m->topsize = newtopsize;
newp = p;
}
} else if (next == m->dv) { /* extend into dv */
size_t dvs = m->dvsize;
if (oldsize + dvs >= nb) {
size_t dsize = oldsize + dvs - nb;
if (dsize >= MIN_CHUNK_SIZE) {
mchunkptr r = chunk_plus_offset(p, nb);
mchunkptr n = chunk_plus_offset(r, dsize);
set_inuse(m, p, nb);
set_size_and_pinuse_of_free_chunk(r, dsize);
clear_pinuse(n);
m->dvsize = dsize;
m->dv = r;
} else { /* exhaust dv */
size_t newsize = oldsize + dvs;
set_inuse(m, p, newsize);
m->dvsize = 0;
m->dv = 0;
}
newp = p;
}
} else if (!cinuse(next)) { /* extend into next free chunk */
size_t nextsize = chunksize(next);
if (oldsize + nextsize >= nb) {
size_t rsize = oldsize + nextsize - nb;
unlink_chunk(m, next, nextsize);
if (rsize < MIN_CHUNK_SIZE) {
size_t newsize = oldsize + nextsize;
set_inuse(m, p, newsize);
} else {
mchunkptr r = chunk_plus_offset(p, nb);
set_inuse(m, p, nb);
set_inuse(m, r, rsize);
dlmalloc_dispose_chunk(m, r, rsize);
}
newp = p;
}
}
} else {
newp = dlmalloc_mmap_resize(m, p, nb, can_move);
}
} else {
USAGE_ERROR_ACTION(m, chunk2mem(p));
}
return newp;
}

View file

@@ -1,47 +0,0 @@
#include "libc/bits/likely.h"
#include "libc/str/str.h"
#include "libc/sysv/errfuns.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"
void *dlrealloc(void *oldmem, size_t bytes) {
void *mem = 0;
size_t oc, nb;
struct MallocState *m;
struct MallocChunk *oldp, *newp;
if (oldmem) {
if (LIKELY(bytes < MAX_REQUEST)) {
if (bytes) {
nb = request2size(bytes);
oldp = mem2chunk(oldmem);
#if !FOOTERS
m = g_dlmalloc;
#else /* FOOTERS */
m = get_mstate_for(oldp);
if (UNLIKELY(!ok_magic(m))) {
USAGE_ERROR_ACTION(m, oldmem);
return 0;
}
#endif /* FOOTERS */
if (!PREACTION(m)) {
newp = dlmalloc_try_realloc_chunk(m, oldp, nb, 1);
POSTACTION(m);
if (newp) {
check_inuse_chunk(m, newp);
mem = chunk2mem(newp);
} else if ((mem = dlmalloc(bytes))) {
oc = chunksize(oldp) - overhead_for(oldp);
memcpy(mem, oldmem, (oc < bytes) ? oc : bytes);
dlfree(oldmem);
}
}
} else {
dlfree(oldmem);
}
} else {
enomem();
}
} else {
mem = dlmalloc(bytes);
}
return mem;
}

View file

@@ -1,33 +0,0 @@
#include "libc/mem/mem.h"
#include "libc/sysv/errfuns.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"
void *dlrealloc_in_place(void *oldmem, size_t bytes) {
void *mem = 0;
if (oldmem != 0) {
if (bytes >= MAX_REQUEST) {
enomem();
} else {
size_t nb = request2size(bytes);
mchunkptr oldp = mem2chunk(oldmem);
#if !FOOTERS
mstate m = g_dlmalloc;
#else /* FOOTERS */
mstate m = get_mstate_for(oldp);
if (!ok_magic(m)) {
USAGE_ERROR_ACTION(m, oldmem);
return 0;
}
#endif /* FOOTERS */
if (!PREACTION(m)) {
mchunkptr newp = dlmalloc_try_realloc_chunk(m, oldp, nb, 0);
POSTACTION(m);
if (newp == oldp) {
check_inuse_chunk(m, newp);
mem = oldmem;
}
}
}
}
return mem;
}

View file

@@ -1,69 +0,0 @@
#include "libc/mem/mem.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"
/**
* Returns (by copy) a struct containing various summary statistics:
*
* - arena: current total non-mmapped bytes allocated from system
*
* - ordblks: the number of free chunks
*
* - smblks: always zero.
*
* - hblks: current number of mmapped regions
*
* - hblkhd: total bytes held in mmapped regions
*
* - usmblks: the maximum total allocated space. This will be greater
* than current total if trimming has occurred.
*
* - fsmblks: always zero
*
* - uordblks: current total allocated space (normal or mmapped)
*
* - fordblks: total free space
*
* - keepcost: the maximum number of bytes that could ideally be
* released back to system via malloc_trim. ("ideally" means that it
* ignores page restrictions etc.)
*
* Because these fields are ints, but internal bookkeeping may
* be kept as longs, the reported values may wrap around zero and
* thus be inaccurate.
*/
struct mallinfo mallinfo(void) {
struct mallinfo nm = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
ensure_initialization();
if (!PREACTION(g_dlmalloc)) {
check_malloc_state(g_dlmalloc);
if (is_initialized(g_dlmalloc)) {
size_t nfree = SIZE_T_ONE; /* top always free */
size_t mfree = g_dlmalloc->topsize + TOP_FOOT_SIZE;
size_t sum = mfree;
msegmentptr s = &g_dlmalloc->seg;
while (s != 0) {
mchunkptr q = align_as_chunk(s->base);
while (segment_holds(s, q) && q != g_dlmalloc->top &&
q->head != FENCEPOST_HEAD) {
size_t sz = chunksize(q);
sum += sz;
if (!is_inuse(q)) {
mfree += sz;
++nfree;
}
q = next_chunk(q);
}
s = s->next;
}
nm.arena = sum;
nm.ordblks = nfree;
nm.hblkhd = g_dlmalloc->footprint - sum;
nm.usmblks = g_dlmalloc->max_footprint;
nm.uordblks = g_dlmalloc->footprint - mfree;
nm.fordblks = mfree;
nm.keepcost = g_dlmalloc->topsize;
}
POSTACTION(g_dlmalloc);
}
return nm;
}

View file

@@ -1,14 +0,0 @@
#include "libc/mem/mem.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"
/**
* Returns the number of bytes obtained from the system. The total
* number of bytes allocated by malloc, realloc etc., is less than this
* value. Unlike mallinfo, this function returns only a precomputed
* result, so can be called frequently to monitor memory consumption.
* Even if locks are otherwise defined, this function does not use them,
* so results might not be up to date.
*/
size_t malloc_footprint(void) {
return g_dlmalloc->footprint;
}

View file

@@ -1,15 +0,0 @@
#include "libc/limits.h"
#include "libc/mem/mem.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"
/**
* Returns the number of bytes that the heap is allowed to obtain from
* the system, returning the last value returned by
* malloc_set_footprint_limit, or the maximum size_t value if never set.
* The returned value reflects a permission. There is no guarantee that
* this number of bytes can actually be obtained from the system.
*/
size_t malloc_footprint_limit(void) {
size_t maf = g_dlmalloc->footprint_limit;
return maf == 0 ? SIZE_MAX : maf;
}

View file

@@ -1,72 +0,0 @@
#include "libc/mem/mem.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"
static void internal_inspect_all(mstate m,
void (*handler)(void *start, void *end,
size_t used_bytes,
void *callback_arg),
void *arg) {
if (is_initialized(m)) {
mchunkptr top = m->top;
msegmentptr s;
for (s = &m->seg; s != 0; s = s->next) {
mchunkptr q = align_as_chunk(s->base);
while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {
mchunkptr next = next_chunk(q);
size_t sz = chunksize(q);
size_t used;
void *start;
if (is_inuse(q)) {
used = sz - CHUNK_OVERHEAD; /* must not be mmapped */
start = chunk2mem(q);
} else {
used = 0;
if (is_small(sz)) { /* offset by possible bookkeeping */
start = (void *)((char *)q + sizeof(struct MallocChunk));
} else {
start = (void *)((char *)q + sizeof(struct MallocTreeChunk));
}
}
if (start < (void *)next) { /* skip if all space is bookkeeping */
handler(start, next, used, arg);
}
if (q == top) break;
q = next;
}
}
}
}
/**
* Traverses the heap and calls the given handler for each managed
* region, skipping all bytes that are (or may be) used for bookkeeping
* purposes. Traversal does not include chunks that have been
* directly memory mapped. Each reported region begins at the start
* address, and continues up to but not including the end address. The
* first used_bytes of the region contain allocated data. If
* used_bytes is zero, the region is unallocated. The handler is
* invoked with the given callback argument. If locks are defined, they
* are held during the entire traversal. It is a bad idea to invoke
* other malloc functions from within the handler.
*
* For example, to count the number of in-use chunks with size greater
* than 1000, you could write:
*
* static int count = 0;
* void count_chunks(void* start, void* end, size_t used, void* arg) {
* if (used >= 1000) ++count;
* }
*
* then,
*
* malloc_inspect_all(count_chunks, NULL);
*/
void malloc_inspect_all(void (*handler)(void *start, void *end,
size_t used_bytes, void *callback_arg),
void *arg) {
ensure_initialization();
if (!PREACTION(g_dlmalloc)) {
internal_inspect_all(g_dlmalloc, handler, arg);
POSTACTION(g_dlmalloc);
}
}

View file

@@ -1,16 +0,0 @@
#include "libc/mem/mem.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"
/**
* Returns the maximum number of bytes obtained from the system. This
* value will be greater than current footprint if deallocated space has
* been reclaimed by the system. The peak number of bytes allocated by
* malloc, realloc etc., is less than this value. Unlike mallinfo, this
* function returns only a precomputed result, so can be called
* frequently to monitor memory consumption. Even if locks are otherwise
* defined, this function does not use them, so results might not be up
* to date.
*/
size_t malloc_max_footprint(void) {
return g_dlmalloc->max_footprint;
}

View file

@@ -1,25 +0,0 @@
#include "libc/limits.h"
#include "libc/mem/mem.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"
/**
* Sets the maximum number of bytes to obtain from the system, causing
* failure returns from malloc and related functions upon attempts to
* exceed this value. The argument value may be subject to page rounding
* to an enforceable limit; this actual value is returned. Using an
* argument of the maximum possible size_t effectively disables checks.
* If the argument is less than or equal to the current
* malloc_footprint, then all future allocations that require additional
* system memory will fail. However, invocation cannot retroactively
* deallocate existing used memory.
*/
size_t malloc_set_footprint_limit(size_t bytes) {
size_t result; /* invert sense of 0 */
  if (bytes == 0) {
    result = granularity_align(1); /* Use minimal size */
  } else if (bytes == SIZE_MAX) {
result = 0; /* disable */
} else {
result = granularity_align(bytes);
}
return g_dlmalloc->footprint_limit = result;
}

View file

@@ -1,32 +0,0 @@
#include "libc/mem/mem.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"
/**
* If possible, gives memory back to the system (via negative arguments
* to sbrk) if there is unused memory at the `high` end of the malloc
* pool or in unused MMAP segments. You can call this after freeing
* large blocks of memory to potentially reduce the system-level memory
* requirements of a program. However, it cannot guarantee to reduce
* memory. Under some allocation patterns, some large free blocks of
* memory will be locked between two used chunks, so they cannot be
* given back to the system.
*
* The `pad` argument to malloc_trim represents the amount of free
* trailing space to leave untrimmed. If this argument is zero, only the
* minimum amount of memory to maintain internal data structures will be
* left. Non-zero arguments can be supplied to maintain enough trailing
* space to service future expected allocations without having to
* re-obtain memory from the system.
*
* @return 1 if it actually released any memory, else 0
*/
int dlmalloc_trim(size_t pad) {
/* asan runtime depends on this function */
int result = 0;
ensure_initialization();
if (!PREACTION(g_dlmalloc)) {
result = dlmalloc_sys_trim(g_dlmalloc, pad);
POSTACTION(g_dlmalloc);
}
return result;
}

View file

@@ -1,42 +0,0 @@
#include "libc/limits.h"
#include "libc/mem/mem.h"
#include "third_party/dlmalloc/dlmalloc.internal.h"
/**
* Sets memory allocation parameter.
*
* The format is to provide a (parameter-number, parameter-value) pair.
* mallopt then sets the corresponding parameter to the argument value
* if it can (i.e., so long as the value is meaningful), and returns 1
* if successful else 0. SVID/XPG/ANSI defines four standard param
* numbers for mallopt, normally defined in malloc.h. None of these are
used in this malloc, so setting them has no effect. But this malloc
* also supports other options in mallopt:
*
* Symbol param # default allowed param values
* M_TRIM_THRESHOLD -1 2*1024*1024 any (-1U disables trimming)
* M_GRANULARITY -2 page size any power of 2 >= page size
* M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support)
*/
bool32 mallopt(int param_number, int value) {
size_t val;
ensure_initialization();
val = (value == -1) ? SIZE_MAX : (size_t)value;
switch (param_number) {
case M_TRIM_THRESHOLD:
g_mparams.trim_threshold = val;
return true;
case M_GRANULARITY:
if (val >= g_mparams.page_size && ((val & (val - 1)) == 0)) {
g_mparams.granularity = val;
return true;
} else {
return false;
}
case M_MMAP_THRESHOLD:
g_mparams.mmap_threshold = val;
return true;
default:
return false;
}
}

View file

@@ -1,7 +1,7 @@
/*-*- mode:unix-assembly; indent-tabs-mode:t; tab-width:8; coding:utf-8 -*-│
vi: set et ft=asm ts=8 tw=8 fenc=utf-8 :vi
/*-*- mode:c;indent-tabs-mode:nil;c-basic-offset:2;tab-width:8;coding:utf-8 -*-│
vi: set net ft=c ts=2 sts=2 sw=2 fenc=utf-8 :vi
Copyright 2020 Justine Alexandra Roberts Tunney
Copyright 2022 Justine Alexandra Roberts Tunney
Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted, provided that the
@@ -16,13 +16,25 @@
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
*/
#include "libc/macros.internal.h"
#include "libc/bits/weaken.h"
#include "libc/calls/calls.h"
#include "libc/intrin/asan.internal.h"
#include "libc/intrin/asancodes.h"
#include "libc/intrin/kprintf.h"
#include "libc/runtime/runtime.h"
#include "libc/sysv/consts/map.h"
#include "libc/sysv/consts/prot.h"
// Sneak ahead ctor list b/c runtime weakly links malloc.
.init.start 800,_init_dlmalloc
push %rdi
push %rsi
call dlmalloc_init
pop %rsi
pop %rdi
.init.end 800,_init_dlmalloc,globl,hidden
/**
* Acquires more system memory for dlmalloc.
*/
void *dlmalloc_requires_more_vespene_gas(size_t size) {
char *p;
if ((p = mmap(0, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS,
-1, 0)) != MAP_FAILED) {
if (weaken(__asan_poison)) {
weaken(__asan_poison)((uintptr_t)p, size, kAsanHeapFree);
}
}
return p;
}

third_party/dlmalloc/vespene.internal.h
View file

@@ -0,0 +1,10 @@
#ifndef COSMOPOLITAN_THIRD_PARTY_DLMALLOC_VESPENE_INTERNAL_H_
#define COSMOPOLITAN_THIRD_PARTY_DLMALLOC_VESPENE_INTERNAL_H_
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
void *dlmalloc_requires_more_vespene_gas(size_t);
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* COSMOPOLITAN_THIRD_PARTY_DLMALLOC_VESPENE_INTERNAL_H_ */