cosmopolitan/libc/runtime/stack.h
Justine Tunney ec480f5aa0
Make improvements
- Every unit test now passes on Apple Silicon. The final piece of this
  puzzle was porting our POSIX threads cancelation support, since that
  works differently on ARM64 XNU vs. AMD64. Our semaphore support on
  Apple Silicon is also superior now compared to AMD64, thanks to the
  grand central dispatch library which lets *NSYNC locks go faster.

- The Cosmopolitan runtime is now more stable, particularly on Windows.
  To do this, thread local storage is mandatory at all runtime levels,
  and the innermost packages of the C library are no longer built
  using ASAN. TLS is being bootstrapped with a 128-byte TIB during the
  process startup phase, and then later on the runtime re-allocates it
  either statically or dynamically to support code using _Thread_local.
  fork() and execve() now do a better job cooperating with threads. We
  can now check how much stack memory is left in the process or thread
  when functions like kprintf() or execve() call alloca(), so that we
  can raise ENOMEM, shrink a buffer, or just print a warning, as in the
  sketch below.
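
  For illustration, here is a minimal sketch of that pattern using the
  macros from libc/runtime/stack.h; it assumes _COSMO_SOURCE is defined
  and that the main module declares STATIC_STACK_ALIGN(GetStackSize()).
  The helper name and the headroom constant are made up:

      #include <errno.h>
      #include <string.h>
      #include "libc/runtime/stack.h"

      // hypothetical helper: refuse a large stack buffer when the
      // thread is running low on stack, rather than crashing later
      static int WithTempBuffer(const char *src, size_t len) {
        if (!HaveStackMemory(len + 4096)) {   // keep headroom above guard
          errno = ENOMEM;                     // or shrink len, or warn
          return -1;
        }
        char *buf = __builtin_alloca(len);
        CheckLargeStackAllocation(buf, len);  // fault in the new pages
        memcpy(buf, src, len);
        // ... use buf ...
        return 0;
      }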

- POSIX signal emulation is now implemented the same way kernels do it
  with pthread_kill() and raise(). Any thread can interrupt any other
  thread, regardless of what it's doing. If it's blocked on read/write
  then the killer thread will cancel its i/o operation so that EINTR can
  be returned in the target thread immediately. If it's doing a tight CPU
  bound operation, then that's also interrupted by the signal delivery.
  Signal delivery works now by suspending a thread and pushing context
  data structures onto its stack, and redirecting its execution to a
  trampoline function, which calls SetThreadContext(GetCurrentThread())
  when it's done.
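
  The WIN32 pattern behind that description looks roughly like the
  following sketch; it's x86-64 only, the trampoline name is invented,
  and the real runtime also copies the saved context and signal info
  onto the carved-out stack space:

      #include <windows.h>

      // illustrative trampoline: runs the signal handler, then restores
      // the saved CONTEXT via SetThreadContext(GetCurrentThread(), ...)
      void SignalTrampoline(void);

      static void InterruptThread(HANDLE hThread) {
        CONTEXT ctx;
        ctx.ContextFlags = CONTEXT_FULL;
        SuspendThread(hThread);               // freeze the target thread
        GetThreadContext(hThread, &ctx);      // snapshot its registers
        ctx.Rsp = ((ctx.Rsp - 256) & ~(DWORD64)15) - 8;  // x64 ABI align
        ctx.Rip = (DWORD64)SignalTrampoline;  // resume in the trampoline
        SetThreadContext(hThread, &ctx);
        ResumeThread(hThread);                // handler runs over there
      }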

- We're now doing a better job managing locks and handles. On NetBSD we
  now close semaphore file descriptors in forked children. Semaphores on
  Windows can now be canceled immediately, which means mutexes/condition
  variables will now go faster. Apple Silicon semaphores can be canceled
  too. We're now using Apple's pthread_yield() function. Apple _nocancel
  syscalls are now used on XNU when appropriate to ensure pthread_cancel
  requests aren't lost. The MbedTLS library has been updated to support
  POSIX thread cancelations. See tool/build/runitd.c for an example of
  how it can be used for production multi-threaded TLS servers. Handles
  on Windows now leak less often across processes. All i/o operations on
  Windows are now overlapped, which means file pointers can no longer be
  inherited across dup() and fork() for the time being.

- We now spawn a thread on Windows to deliver SIGCHLD and wakeup wait4()
  which means, for example, that posix_spawn() now goes 3x faster. POSIX
  spawn is also now more correct. Like Musl, it's now able to report the
  failure code of execve() via a pipe, although our approach favors
  using shared memory on systems that have a true vfork() function. The
  pipe trick is sketched below.
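
  For reference, the classic pipe technique looks roughly like this
  stripped-down sketch (error handling and waitpid() omitted; the
  function name is made up):

      #include <errno.h>
      #include <fcntl.h>
      #include <unistd.h>

      // returns 0 if execve() succeeded in the child, else its errno
      static int SpawnReportErrno(const char *path, char *const argv[],
                                  char *const envp[]) {
        int err = 0, pfd[2];
        pipe(pfd);
        fcntl(pfd[1], F_SETFD, FD_CLOEXEC);  // write end dies on exec
        pid_t pid = fork();
        if (!pid) {
          execve(path, argv, envp);  // on success the pipe just closes
          err = errno;
          write(pfd[1], &err, sizeof(err));
          _exit(127);
        }
        close(pfd[1]);
        if (read(pfd[0], &err, sizeof(err)) <= 0) err = 0;
        close(pfd[0]);
        return err;  // caller still reaps pid with wait4()/waitpid()
      }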

- We now spawn a thread to deliver SIGALRM to threads when setitimer()
  is used. This enables the most precise wakeups the OS makes possible.
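
  For context, this is the standard interface being emulated; a minimal
  POSIX setitimer() setup looks like:

      #include <signal.h>
      #include <sys/time.h>

      // deliver SIGALRM every 10ms until the timer is disarmed
      static void ArmTimer(void (*onalrm)(int)) {
        struct sigaction sa = {0};
        sa.sa_handler = onalrm;
        sigaction(SIGALRM, &sa, 0);
        struct itimerval it = {{0, 10000}, {0, 10000}};  // {interval, value}
        setitimer(ITIMER_REAL, &it, 0);
      }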

- The Cosmopolitan runtime now uses less memory. On NetBSD for example,
  it turned out the kernel would actually commit the PT_GNU_STACK size,
  which caused RSS to be 6mb for every process. Now it's down to ~4kb.
  On Apple Silicon, we shrink the mandatory upstream thread stack to the
  smallest size possible, to reduce the memory overhead of Cosmo threads.
  The examples directory has a program called greenbean which can spawn
  a web server on Linux with 10,000 worker threads and have the memory
  usage of the process be ~77mb. The 1024 byte overhead of POSIX-style
  thread-local storage is now optional; it won't be allocated until the
  pthread_setspecific/getspecific functions are called. On Windows, the
  threads that get spawned which are internal to the libc implementation
  use reserve rather than commit memory, which shaves a few hundred kb.

- sigaltstack() is now supported on Windows; however, it can't yet be
  used to handle stack overflows, since crash signals are still
  generated by WIN32. The crash handler will still switch to the alt
  stack, which is helpful in environments with tiny thread stacks.
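
  The interface is the standard POSIX one; setting up an alternate
  signal stack and a handler that opts into it looks roughly like:

      #include <signal.h>
      #include <stdlib.h>

      static void OnCrash(int sig) {
        // ... report the crash here ...
        _Exit(128 + sig);
      }

      static void SetupAltStack(void) {
        stack_t ss = {0};
        ss.ss_sp = malloc(SIGSTKSZ);  // dedicated signal stack memory
        ss.ss_size = SIGSTKSZ;
        sigaltstack(&ss, 0);
        struct sigaction sa = {0};
        sa.sa_handler = OnCrash;
        sa.sa_flags = SA_ONSTACK;     // run the handler on the alt stack
        sigaction(SIGSEGV, &sa, 0);
      }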

- Test binaries are now smaller. Many of the mandatory dependencies of
  the test runner have been removed. This ensures many programs can do a
  better job of only linking the thing they're testing. This caused the
  test binaries for LIBC_FMT, for example, to decrease from 200kb to 50kb.

- long double is no longer used in the implementation details of libc,
  except in the APIs that define it. The old code that used long double
  for time (instead of struct timespec) has now been thoroughly removed.

- ShowCrashReports() is now much tinier in MODE=tiny. Instead of doing
  backtraces itself, it'll just print a command you can run on the shell
  using our new `cosmoaddr2line` program to view the backtrace.

- Crash report signal handling now works in a much better way. Instead
  of terminating the process itself, it now relies on SA_RESETHAND so
  that the default SIG_DFL behavior can terminate the process if needed.
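
  The underlying POSIX mechanism, in a minimal sketch:

      #include <signal.h>

      // SA_RESETHAND reverts the disposition to SIG_DFL on first entry,
      // so re-raising from the handler lets the default action kill us
      static void OnSegv(int sig) {
        // ... print the crash report here ...
        raise(sig);
      }

      static void InstallCrashHandler(void) {
        struct sigaction sa = {0};
        sa.sa_handler = OnSegv;
        sa.sa_flags = SA_RESETHAND;  // one-shot handler
        sigaction(SIGSEGV, &sa, 0);
      }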

- Our pledge() functionality has now been fully ported to AARCH64 Linux.
2023-09-18 21:04:47 -07:00

#ifndef COSMOPOLITAN_LIBC_RUNTIME_STACK_H_
#define COSMOPOLITAN_LIBC_RUNTIME_STACK_H_
#if !(__ASSEMBLER__ + __LINKER__ + 0)
#ifdef _COSMO_SOURCE
/**
* Returns preferred size and alignment of thread stack.
*/
#define GetStackSize() 262144
/**
* Returns preferred stack guard size.
*
* This is the max cpu page size of supported architectures.
*/
#define GetGuardSize() 16384
/**
* Aligns APE main thread stack at startup.
*
* You need this in your main program module:
*
*     STATIC_STACK_ALIGN(GetStackSize());
*
* if you want to use GetStackAddr() and HaveStackMemory() safely on
* the main thread of your process. It causes crt.S to waste a tiny
* amount of memory to ensure those macros go extremely fast.
*/
#define STATIC_STACK_ALIGN(BYTES) \
  _STACK_SYMBOL("ape_stack_align", _STACK_STRINGIFY(BYTES) _STACK_EXTRA)
/**
* Makes program stack executable if declared, e.g.
*
*     STATIC_EXEC_STACK();
*     int main() {
*       char code[16] = {
*           0x55,                          // push %rbp
*           0xb8, 0x07, 0x00, 0x00, 0x00,  // mov  $7,%eax
*           0x5d,                          // pop  %rbp
*           0xc3,                          // ret
*       };
*       int (*func)(void) = (void *)code;
*       printf("result %d should be 7\n", func());
*     }
*/
#define STATIC_EXEC_STACK() _STACK_SYMBOL("ape_stack_pf", "7")
#define _STACK_STRINGIFY(ADDR) #ADDR
#define _STACK_SYMBOL(NAME, VALUE)       \
  __asm__(".equ\t" NAME "," VALUE "\n\t" \
          ".globl\t" NAME)
#ifdef __SANITIZE_ADDRESS__
#define _STACK_EXTRA "*2"
#else
#define _STACK_EXTRA ""
#endif
#if defined(__GNUC__) && defined(__ELF__) && !defined(__STRICT_ANSI__)
COSMOPOLITAN_C_START_
extern char ape_stack_prot[] __attribute__((__weak__));
extern char ape_stack_memsz[] __attribute__((__weak__));
extern char ape_stack_align[] __attribute__((__weak__));
/**
* Returns address of bottom of current stack.
*
* This always works on threads. If you want it to work on the main
* process too, then you'll need STATIC_STACK_ALIGN(GetStackSize())
* which will burn O(256kb) of memory to ensure thread invariants.
*/
#define GetStackAddr() ((GetStackPointer() - 1) & -GetStackSize())
#define GetStaticStackSize() ((uintptr_t)ape_stack_memsz)
/**
* Returns true if at least `n` bytes of stack are available.
*
* This always works on threads. If you want it to work on the main
* process too, then you'll need STATIC_STACK_ALIGN(GetStackSize())
* which will burn O(256kb) of memory to ensure thread invariants,
* which make this check exceedingly fast.
*/
#define HaveStackMemory(n) \
  (GetStackPointer() >= GetStackAddr() + GetGuardSize() + (n))
/**
* Extends stack memory by poking large allocations.
*
* This can be particularly useful depending on how your system
* implements guard pages. For example, Windows can make stacks
* that aren't fully committed, in which case there's only 4096
* bytes of grows-down guard pages made by portable executable.
* If you alloca() more memory than that, you should call this,
* since it'll not only ensure stack overflows are detected, it
* will also trigger the stack to grow down safely.
*/
__funline void CheckLargeStackAllocation(void *p, ssize_t n) {
  for (; n > 0; n -= 4096) {
    ((char *)p)[n - 1] = 0;
  }
}
void *NewCosmoStack(void) vallocesque;
int FreeCosmoStack(void *) libcesque;
/**
* Tunes stack size of main thread on Windows.
*
* On UNIX systems use `RLIMIT_STACK` to tune the main thread size.
*/
#define STATIC_STACK_SIZE(BYTES) \
  _STACK_SYMBOL("ape_stack_memsz", _STACK_STRINGIFY(BYTES) _STACK_EXTRA)
/**
* Tunes main thread stack address on Windows.
*/
#define STATIC_STACK_ADDR(ADDR) \
  _STACK_SYMBOL("ape_stack_vaddr", _STACK_STRINGIFY(ADDR))
#ifdef __x86_64__
/**
* Returns preferred bottom address of main thread stack.
*
* On UNIX systems we favor the system provided stack, so this only
* really applies to Windows. It's configurable at link time. It is
* needed because polyfilling fork requires that we know precisely
* where the stack memory begins and ends.
*/
#define GetStaticStackAddr(ADDEND)           \
  ({                                         \
    intptr_t vAddr;                          \
    __asm__(".weak\tape_stack_vaddr\n\t"     \
            "movabs\t%1+ape_stack_vaddr,%0"  \
            : "=r"(vAddr)                    \
            : "i"(ADDEND));                  \
    vAddr;                                   \
  })
#else
#define GetStaticStackAddr(ADDEND) (GetStackAddr() + ADDEND)
#endif
#define GetStackPointer()            \
  ({                                 \
    uintptr_t __sp;                  \
    __asm__(__mov_sp : "=r"(__sp));  \
    __sp;                            \
  })
#ifdef __x86_64__
#define __mov_sp "mov\t%%rsp,%0"
#elif defined(__aarch64__)
#define __mov_sp "mov\t%0,sp"
#endif
COSMOPOLITAN_C_END_
#endif /* GNU ELF */
#endif /* _COSMO_SOURCE */
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* COSMOPOLITAN_LIBC_RUNTIME_STACK_H_ */