Make malloc() go 200x faster

If pthread_create() is linked into the binary, then the cosmo runtime
will create an independent dlmalloc arena for each core. Whenever the
malloc() function is used, it will index `g_heaps[sched_getcpu() / 2]`
to find the arena with the greatest hyperthread / NUMA locality. This
may be configured via an environment variable: for example, if you say
`export COSMOPOLITAN_HEAP_COUNT=1` then you can restore the old ways.
Your process may be configured to have anywhere between 1 and 128 heaps.

We need this revision because it makes multithreaded C++ applications
faster. For example, an HTTP server I'm working on that makes extreme
use of the STL went from 16k to 2000k requests per second after this
change was made. To understand why, try out the malloc_test benchmark,
which calls malloc() + realloc() in a loop across many threads, and
sees a 250x improvement in process clock time and 200x on wall time.

The tradeoff is this adds ~25ns of latency to individual malloc calls
compared to MODE=tiny, once the cosmo runtime has transitioned into a
fully multi-threaded state. If you don't need malloc() to be scalable,
then cosmo provides many options for you. For starters, the heap count
variable above can be set to put the process back in single-heap mode.
You can go even faster still if you include tinymalloc.inc, like many
of the programs in tool/build/.. are already doing, since that'll also
shave tens of kb off your binary footprint. There's also MODE=tiny,
which is configured to use just one plain old dlmalloc arena by default.

Another tradeoff is that we need more memory now (except in MODE=tiny)
to track the provenance of memory allocations. This is so allocations
can be freely shared across threads, and because OSes can reschedule
code to different CPUs at any time.
Justine Tunney 2024-06-05 01:31:21 -07:00
parent 9906f299bb
commit 3609f65de3
GPG key ID: BE714B4575D6E328
60 changed files with 858 additions and 1064 deletions

@@ -95,13 +95,6 @@ static void* _PyObject_Realloc(void *ctx, void *ptr, size_t size);
 static inline void *
 _PyMem_RawMalloc(void *ctx, size_t size)
 {
-#ifdef __COSMOPOLITAN__
-#ifdef __SANITIZE_ADDRESS__
-    return __asan_memalign(16, size);
-#else
-    return dlmalloc(size);
-#endif
-#else
     /* PyMem_RawMalloc(0) means malloc(1). Some systems would return NULL
        for malloc(0), which would be treated as an error. Some platforms would
        return a pointer with no memory behind it, which would break pymalloc.
@@ -109,19 +102,11 @@ _PyMem_RawMalloc(void *ctx, size_t size)
     if (size == 0)
         size = 1;
     return malloc(size);
-#endif
 }
 
 static inline void *
 _PyMem_RawCalloc(void *ctx, size_t nelem, size_t elsize)
 {
-#ifdef __COSMOPOLITAN__
-#ifdef __SANITIZE_ADDRESS__
-    return __asan_calloc(nelem, elsize);
-#else
-    return dlcalloc(nelem, elsize);
-#endif
-#else
     /* PyMem_RawCalloc(0, 0) means calloc(1, 1). Some systems would return NULL
        for calloc(0, 0), which would be treated as an error. Some platforms
        would return a pointer with no memory behind it, which would break
@@ -131,7 +116,6 @@ _PyMem_RawCalloc(void *ctx, size_t nelem, size_t elsize)
         elsize = 1;
     }
     return calloc(nelem, elsize);
-#endif
 }
 
 static inline void *
@@ -139,29 +123,13 @@ _PyMem_RawRealloc(void *ctx, void *ptr, size_t size)
 {
     if (size == 0)
         size = 1;
-#ifdef __COSMOPOLITAN__
-#ifdef __SANITIZE_ADDRESS__
-    return __asan_realloc(ptr, size);
-#else
-    return dlrealloc(ptr, size);
-#endif
-#else
     return realloc(ptr, size);
-#endif
 }
 
 static inline void
 _PyMem_RawFree(void *ctx, void *ptr)
 {
-#ifdef __COSMOPOLITAN__
-#ifdef __SANITIZE_ADDRESS__
-    __asan_free(ptr);
-#else
-    dlfree(ptr);
-#endif
-#else
     free(ptr);
-#endif
 }
 
 #ifdef MS_WINDOWS