cosmopolitan/libc/mem/realloc.c

/*-*- mode:c;indent-tabs-mode:nil;c-basic-offset:2;tab-width:8;coding:utf-8 -*-│
│ vi: set et ft=c ts=2 sts=2 sw=2 fenc=utf-8                               :vi │
╞══════════════════════════════════════════════════════════════════════════════╡
│ Copyright 2023 Justine Alexandra Roberts Tunney                              │
│                                                                              │
│ Permission to use, copy, modify, and/or distribute this software for         │
│ any purpose with or without fee is hereby granted, provided that the         │
│ above copyright notice and this permission notice appear in all copies.      │
│                                                                              │
│ THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL                │
│ WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED                │
│ WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE             │
│ AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL         │
│ DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR        │
│ PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER               │
│ TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR             │
│ PERFORMANCE OF THIS SOFTWARE.                                                │
╚─────────────────────────────────────────────────────────────────────────────*/
#include "libc/mem/mem.h"
#include "third_party/dlmalloc/dlmalloc.h"
__static_yoink("free");

/**
 * Allocates / resizes / frees memory.
 *
 * Returns a pointer to a chunk of size n that contains the same data as
 * does chunk p up to the minimum of (n, p's size) bytes, or null if no
 * space is available.
 *
 * If p is NULL, then realloc() is equivalent to malloc().
 *
 * If p is not NULL and n is 0, then realloc() shrinks the allocation to
 * zero bytes. The allocation isn't freed and still continues to be a
 * uniquely allocated piece of memory. However, it should be assumed
 * that zero bytes of it can be accessed, since that's enforced by
 * `MODE=asan`.
 *
 * The returned pointer may or may not be the same as p. The algorithm
 * prefers extending p in place when possible; otherwise it employs the
 * equivalent of a malloc-copy-free sequence.
 *
 * Please note that p is NOT free()'d should realloc() fail, thus:
 *
 *     if ((p2 = realloc(p, n2))) {
 *       p = p2;
 *       ...
 *     } else {
 *       ...
 *     }
 *
 * If n is for fewer bytes than already held by p, the newly unused
 * space is lopped off and freed if possible.
 *
 * The old UNIX realloc() convention of allowing the last-freed chunk to
 * be used as an argument to realloc() is not supported.
 *
 * @param p is address of current allocation or NULL
 * @param n is number of bytes needed
 * @return address of resized allocation, or NULL w/ errno set and
 *     without free(p) being called
 * @see dlrealloc()
 */
void *realloc(void *p, size_t n) {
  return dlrealloc(p, n);
}
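
/* A minimal caller-side sketch (hypothetical helper, not part of this API)
   showing the pattern documented above: keep the original pointer when
   realloc() fails so the caller can still use or free() it. */
static void *grow_buffer_example(void *p, size_t *capacity) {
  size_t n2 = *capacity ? *capacity * 2 : 16;  // double capacity, start at 16
  void *p2 = realloc(p, n2);
  if (!p2) return NULL;  // p is NOT freed on failure; caller still owns it
  *capacity = n2;
  return p2;
}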