Make improvements

- Every unit test now passes on Apple Silicon. The final piece of the
  puzzle was porting our POSIX thread cancellation support, since it
  works differently on ARM64 XNU than on AMD64. Our semaphore support
  on Apple Silicon is now also superior to AMD64, thanks to the Grand
  Central Dispatch library, which lets *NSYNC locks go faster.

- The Cosmopolitan runtime is now more stable, particularly on Windows.
  To accomplish this, thread-local storage is now mandatory at all
  runtime levels, and the innermost packages of the C library are no
  longer built using ASAN. TLS is bootstrapped with a 128-byte TIB
  during process startup, and later on the runtime re-allocates it
  either statically or dynamically to support code using _Thread_local.
  fork() and execve() now do a better job cooperating with threads.
  When functions like kprintf() and execve() call alloca(), we can now
  check how much stack memory is left in the process or thread, so that
  ENOMEM can be raised, a buffer size reduced, or a warning printed.
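
  The stack-headroom check can be sketched in portable C. This is only
  an illustration: stack_bytes_left() and can_alloca() are hypothetical
  names, not Cosmo's internal API, and pthread_getattr_np() is a
  Linux/glibc extension standing in for whatever the runtime actually
  queries.

```c
#define _GNU_SOURCE  // pthread_getattr_np() is a glibc/Linux extension
#include <errno.h>
#include <pthread.h>
#include <stddef.h>
#include <stdint.h>

// estimates bytes left between the stack pointer and the bottom of
// the current thread's stack
static size_t stack_bytes_left(void) {
  pthread_attr_t attr;
  void *stack_base;
  size_t stack_size;
  char probe;  // the address of a local approximates the stack pointer
  if (pthread_getattr_np(pthread_self(), &attr)) return 0;
  pthread_attr_getstack(&attr, &stack_base, &stack_size);
  pthread_attr_destroy(&attr);
  return (size_t)((uintptr_t)&probe - (uintptr_t)stack_base);
}

// hypothetical policy: refuse the alloca() and raise ENOMEM rather
// than overflowing when the request won't fit in what's left
static int can_alloca(size_t need) {
  if (stack_bytes_left() < need + 65536) {  // keep a safety margin
    errno = ENOMEM;
    return 0;
  }
  return 1;
}
```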

- POSIX signal emulation is now implemented the way kernels do it, with
  pthread_kill() and raise(). Any thread can interrupt any other
  thread, regardless of what it's doing. If it's blocked on read/write,
  then the killer thread will cancel its i/o operation so that EINTR
  can be returned in the marked thread immediately. If it's doing a
  tight CPU-bound operation, then that's also interrupted by signal
  delivery. Signal delivery now works by suspending a thread, pushing
  context data structures onto its stack, and redirecting its execution
  to a trampoline function, which calls
  SetThreadContext(GetCurrentThread()) when it's done.

- We're now doing a better job managing locks and handles. On NetBSD we
  now close semaphore file descriptors in forked children. Semaphores
  on Windows can now be canceled immediately, which means mutexes and
  condition variables will now go faster. Apple Silicon semaphores can
  be canceled too. We're now using Apple's pthread_yield() function.
  Apple _nocancel syscalls are now used on XNU when appropriate, to
  ensure pthread_cancel requests aren't lost. The MbedTLS library has
  been updated to support POSIX thread cancellation. See
  tool/build/runitd.c for an example of how it can be used for
  production multi-threaded TLS servers. Handles on Windows now leak
  less often across processes. All i/o operations on Windows are now
  overlapped, which means file pointers can no longer be inherited
  across dup() and fork() for the time being.

- We now spawn a thread on Windows to deliver SIGCHLD and wake up
  wait4(), which means, for example, that posix_spawn() now goes 3x
  faster. posix_spawn() is also now more correct. Like musl, it's able
  to report the failure code of execve() via a pipe, although our
  approach favors using shared memory to do that on systems that have a
  true vfork() function.
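
  The pipe approach is the classic close-on-exec trick: the write end
  vanishes automatically when execve() succeeds, so anything the parent
  reads from the pipe must be the child's errno. A minimal sketch (the
  helper name is invented; the real posix_spawn() is more involved):

```c
#define _GNU_SOURCE  // pipe2() is a Linux extension
#include <errno.h>
#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

// spawns path; returns 0 on successful exec, else the child's errno
int spawn_report_errno(const char *path, char *const argv[]) {
  int fds[2], err = 0;
  pid_t pid;
  if (pipe2(fds, O_CLOEXEC)) return errno;
  if ((pid = fork()) == 0) {
    close(fds[0]);
    execve(path, argv, 0);
    err = errno;                        // exec failed; smuggle errno out
    write(fds[1], &err, sizeof(err));   // write end still open: no exec
    _exit(127);
  }
  close(fds[1]);
  // a successful exec closes the child's write end, so read() sees EOF
  if (read(fds[0], &err, sizeof(err)) != sizeof(err)) err = 0;
  close(fds[0]);
  waitpid(pid, 0, 0);
  return err;
}
```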

- We now spawn a thread to deliver SIGALRM to threads when setitimer()
  is used. This enables the most precise wakeups the OS makes possible.

- The Cosmopolitan runtime now uses less memory. On NetBSD, for
  example, it turned out the kernel would actually commit the
  PT_GNU_STACK size, which caused RSS to be 6mb for every process. Now
  it's down to ~4kb. On Apple Silicon, we shrink the mandatory upstream
  thread size to the smallest possible size, to reduce the memory
  overhead of Cosmo threads. The examples directory has a program
  called greenbean which can spawn a web server on Linux with 10,000
  worker threads while keeping the memory usage of the process at
  ~77mb. The 1024-byte overhead of POSIX-style thread-local storage is
  now optional; it won't be allocated until the pthread_setspecific /
  pthread_getspecific functions are called. On Windows, the threads
  spawned internally by the libc implementation now reserve rather than
  commit memory, which shaves off a few hundred kb.
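
  The lazily-paid-for storage mentioned above is the standard pthread
  key API; nothing is allocated until a key actually gets used. A
  minimal round trip:

```c
#include <pthread.h>

static pthread_key_t key;

// stores a value under a freshly created key and reads it back
int tls_roundtrip(void) {
  static int value = 42;
  pthread_key_create(&key, 0);     // first use pays for the slot table
  pthread_setspecific(key, &value);
  int got = *(int *)pthread_getspecific(key);
  pthread_key_delete(key);
  return got;
}
```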

- sigaltstack() is now supported on Windows; however, it can't yet be
  used to handle stack overflows, since crash signals are still
  generated by WIN32. The crash handler will still switch to the alt
  stack, though, which is helpful in environments with tiny threads.

- Test binaries are now smaller. Many of the mandatory dependencies of
  the test runner have been removed. This ensures many programs can do
  a better job linking only the thing they're testing. This caused the
  test binaries for LIBC_FMT, for example, to decrease from 200kb to
  50kb.

- long double is no longer used in the implementation details of libc,
  except in the APIs that define it. The old code that used long double
  for time (instead of struct timespec) has now been thoroughly removed.

- ShowCrashReports() is now much tinier in MODE=tiny. Instead of doing
  backtraces itself, it'll just print a command you can run on the shell
  using our new `cosmoaddr2line` program to view the backtrace.

- Crash report signal handling now works in a much better way. Instead
  of terminating the process itself, it now relies on SA_RESETHAND so
  that the default SIG_DFL behavior can terminate the process if
  necessary.

- Our pledge() functionality has now been fully ported to AARCH64 Linux.
This commit is contained in:
Justine Tunney 2023-09-18 20:44:45 -07:00
parent c4eb838516
commit ec480f5aa0
No known key found for this signature in database
GPG key ID: BE714B4575D6E328
638 changed files with 7925 additions and 8282 deletions


@@ -15,9 +15,11 @@ ORIGIN
 LOCAL CHANGES
-- Time APIs were so good that they're now part of our libc
+- Time APIs were so good that they're now in libc
-- Double linked list API was so good that it's now part of our libc
+- Double linked list API was so good that it's now in libc
+- Ensure resources such as POSIX semaphores are released on fork.
 - Modified *NSYNC to allocate waiter objects on the stack. We need it
   because we use *NSYNC mutexes to implement POSIX mutexes, which are
@@ -27,10 +29,11 @@ LOCAL CHANGES
   it works well with Cosmopolitan's fat runtime portability. *NSYNC's
   unit test suite passes on all supported platforms. However the BSDs
   currently appear to be overutilizing CPU time compared with others.
+  This appears to be the fault of the OSes rather than *NSYNC / Cosmo
 - Support POSIX thread cancellation. APIs that wait on condition vars
   are now cancellation points. In PTHREAD_CANCEL_MASKED mode they may
   return ECANCELED. In PTHREAD_CANCEL_DEFERRED mode the POSIX threads
-  library will unwind the stack to unlock any locks and free waiters.
+  library will unwind the stack to re-acquire locks and free waiters.
   On the other hand the *NSYNC APIs for mutexes will now safely block
   thread cancellation, but you can still use *NSYNC notes to do that.


@@ -42,6 +42,7 @@
#include "libc/sysv/errfuns.h"
#include "libc/thread/freebsd.internal.h"
#include "libc/thread/posixthread.internal.h"
#include "libc/thread/thread.h"
#include "libc/thread/tls.h"
#include "third_party/nsync/atomic.h"
#include "third_party/nsync/common.internal.h"
@@ -132,7 +133,7 @@ static int nsync_futex_polyfill_ (atomic_int *w, int expect, struct timespec *ab
 }
 nanos = 100;
-maxnanos = __SIG_POLLING_INTERVAL_MS * 1000L * 1000;
+maxnanos = __SIG_LOCK_INTERVAL_MS * 1000L * 1000;
for (;;) {
if (atomic_load_explicit (w, memory_order_acquire) != expect) {
return 0;
@@ -163,8 +164,11 @@ static int nsync_futex_polyfill_ (atomic_int *w, int expect, struct timespec *ab
 return -ETIMEDOUT;
 }
-static int nsync_futex_wait_win32_ (atomic_int *w, int expect, char pshare, const struct timespec *timeout) {
+static int nsync_futex_wait_win32_ (atomic_int *w, int expect, char pshare,
+const struct timespec *timeout,
+struct PosixThread *pt) {
 int rc;
+bool32 ok;
 struct timespec deadline, interval, remain, wait, now;
 if (timeout) {
@@ -180,12 +184,15 @@ static int nsync_futex_wait_win32_ (atomic_int *w, int expect, char pshare, cons
 break;
 }
 remain = timespec_sub (deadline, now);
-interval = timespec_frommillis (__SIG_POLLING_INTERVAL_MS);
+interval = timespec_frommillis (__SIG_LOCK_INTERVAL_MS);
 wait = timespec_cmp (remain, interval) > 0 ? interval : remain;
 if (atomic_load_explicit (w, memory_order_acquire) != expect) {
 break;
 }
-if (WaitOnAddress (w, &expect, sizeof(int), timespec_tomillis (wait))) {
+if (pt) atomic_store_explicit (&pt->pt_futex, w, memory_order_release);
+ok = WaitOnAddress (w, &expect, sizeof(int), timespec_tomillis (wait));
+if (pt) atomic_store_explicit (&pt->pt_futex, 0, memory_order_release);
+if (ok) {
 break;
 } else {
 ASSERT (GetLastError () == ETIMEDOUT);
@@ -212,7 +219,8 @@ static struct timespec *nsync_futex_timeout_ (struct timespec *memory,
 int nsync_futex_wait_ (atomic_int *w, int expect, char pshare, const struct timespec *abstime) {
 int e, rc, op;
-struct PosixThread *pt = 0;
+struct CosmoTib *tib;
+struct PosixThread *pt;
 struct timespec tsmem, *timeout;
 cosmo_once (&nsync_futex_.once, nsync_futex_init_);
@@ -239,12 +247,15 @@ int nsync_futex_wait_ (atomic_int *w, int expect, char pshare, const struct time
 DescribeFutexOp (op), expect,
 DescribeTimespec (0, timeout));
+tib = __get_tls();
+pt = (struct PosixThread *)tib->tib_pthread;
 if (nsync_futex_.is_supported) {
 e = errno;
 if (IsWindows ()) {
 // Windows 8 futexes don't support multiple processes :(
 if (pshare) goto Polyfill;
-rc = nsync_futex_wait_win32_ (w, expect, pshare, timeout);
+rc = nsync_futex_wait_win32_ (w, expect, pshare, timeout, pt);
 } else if (IsFreebsd ()) {
 rc = sys_umtx_timedwait_uint (w, expect, pshare, timeout);
 } else {
@@ -255,9 +266,7 @@ int nsync_futex_wait_ (atomic_int *w, int expect, char pshare, const struct time
 // unfortunately OpenBSD futex() defines
 // its own ECANCELED condition, and that
 // overlaps with our system call wrapper
-if ((pt = (struct PosixThread *)__get_tls()->tib_pthread)) {
-pt->flags &= ~PT_OPENBSD_KLUDGE;
-}
+if (pt) pt->pt_flags &= ~PT_OPENBSD_KLUDGE;
}
rc = sys_futex_cp (w, op, expect, timeout, 0, FUTEX_WAIT_BITS_);
if (IsOpenbsd()) {
@@ -270,7 +279,7 @@ int nsync_futex_wait_ (atomic_int *w, int expect, char pshare, const struct time
 // because a SA_RESTART signal handler was
 // invoked, such as our SIGTHR callback.
 if (rc == -1 && errno == ECANCELED &&
-pt && (~pt->flags & PT_OPENBSD_KLUDGE)) {
+pt && (~pt->pt_flags & PT_OPENBSD_KLUDGE)) {
 errno = EINTR;
}
}
@@ -281,9 +290,9 @@ int nsync_futex_wait_ (atomic_int *w, int expect, char pshare, const struct time
 }
 } else {
 Polyfill:
-__get_tls()->tib_flags |= TIB_FLAG_TIME_CRITICAL;
+tib->tib_flags |= TIB_FLAG_TIME_CRITICAL;
 rc = nsync_futex_polyfill_ (w, expect, timeout);
-__get_tls()->tib_flags &= ~TIB_FLAG_TIME_CRITICAL;
+tib->tib_flags &= ~TIB_FLAG_TIME_CRITICAL;
}
Finished:
@@ -331,13 +340,13 @@ int nsync_futex_wake_ (atomic_int *w, int count, char pshare) {
 }
 } else {
 Polyfill:
-sched_yield ();
+pthread_yield ();
 rc = 0;
 }
-STRACE ("futex(%t [%d], %s, %d) → %s",
+STRACE ("futex(%t [%d], %s, %d) → %d woken",
 w, atomic_load_explicit (w, memory_order_relaxed),
-DescribeFutexOp (op), count, DescribeErrno (rc));
+DescribeFutexOp (op), count, rc);
 return rc;
 }


@@ -38,10 +38,17 @@ $(THIRD_PARTY_NSYNC_MEM_A).pkg: \
$(THIRD_PARTY_NSYNC_MEM_A_OBJS) \
$(foreach x,$(THIRD_PARTY_NSYNC_MEM_A_DIRECTDEPS),$($(x)_A).pkg)
$(THIRD_PARTY_NSYNC_MEM_A_OBJS): private \
CCFLAGS += \
-ffunction-sections \
-fdata-sections
# offer assurances about the stack safety of cosmo libc
$(THIRD_PARTY_NSYNC_MEM_A_OBJS): private COPTS += -Wframe-larger-than=4096 -Walloca-larger-than=4096
$(THIRD_PARTY_NSYNC_MEM_A_OBJS): private \
COPTS += \
-ffreestanding \
-fdata-sections \
-ffunction-sections \
-fno-sanitize=address \
-Wframe-larger-than=4096 \
-Walloca-larger-than=4096
THIRD_PARTY_NSYNC_MEM_LIBS = $(foreach x,$(THIRD_PARTY_NSYNC_MEM_ARTIFACTS),$($(x)))
THIRD_PARTY_NSYNC_MEM_SRCS = $(foreach x,$(THIRD_PARTY_NSYNC_MEM_ARTIFACTS),$($(x)_SRCS))


@@ -46,7 +46,10 @@ void nsync_mu_semaphore_destroy (nsync_semaphore *s) {
 }
 }
-/* Wait until the count of *s exceeds 0, and decrement it. */
+/* Wait until the count of *s exceeds 0, and decrement it. If POSIX cancellations
+   are currently disabled by the thread, then this function always succeeds. When
+   they're enabled in MASKED mode, this function may return ECANCELED. Otherwise,
+   cancellation will occur by unwinding cleanup handlers pushed to the stack. */
errno_t nsync_mu_semaphore_p (nsync_semaphore *s) {
errno_t err;
BEGIN_CANCELLATION_POINT;
@@ -61,9 +64,10 @@ errno_t nsync_mu_semaphore_p (nsync_semaphore *s) {
 return err;
 }
-/* Wait until one of:
-   the count of *s is non-zero, in which case decrement *s and return 0;
-   or abs_deadline expires, in which case return ETIMEDOUT. */
+/* Like nsync_mu_semaphore_p() this waits for the count of *s to exceed 0,
+   while additionally supporting a time parameter specifying at what point
+   in the future ETIMEDOUT should be returned, if neither cancellation, or
+   semaphore release happens. */
errno_t nsync_mu_semaphore_p_with_deadline (nsync_semaphore *s, nsync_time abs_deadline) {
errno_t err;
BEGIN_CANCELLATION_POINT;


@@ -49,7 +49,10 @@ void nsync_mu_semaphore_init_futex (nsync_semaphore *s) {
 f->i = 0;
 }
-/* Wait until the count of *s exceeds 0, and decrement it. */
+/* Wait until the count of *s exceeds 0, and decrement it. If POSIX cancellations
+   are currently disabled by the thread, then this function always succeeds. When
+   they're enabled in MASKED mode, this function may return ECANCELED. Otherwise,
+   cancellation will occur by unwinding cleanup handlers pushed to the stack. */
errno_t nsync_mu_semaphore_p_futex (nsync_semaphore *s) {
struct futex *f = (struct futex *) s;
int i;
@@ -74,9 +77,10 @@ errno_t nsync_mu_semaphore_p_futex (nsync_semaphore *s) {
 return result;
 }
-/* Wait until one of:
-   the count of *s is non-zero, in which case decrement *s and return 0;
-   or abs_deadline expires, in which case return ETIMEDOUT. */
+/* Like nsync_mu_semaphore_p() this waits for the count of *s to exceed 0,
+   while additionally supporting a time parameter specifying at what point
+   in the future ETIMEDOUT should be returned, if neither cancellation, or
+   semaphore release happens. */
errno_t nsync_mu_semaphore_p_with_deadline_futex (nsync_semaphore *s, nsync_time abs_deadline) {
struct futex *f = (struct futex *)s;
int i;


@@ -16,11 +16,15 @@
limitations under the License.
*/
#include "libc/assert.h"
#include "libc/calls/sig.internal.h"
#include "libc/errno.h"
#include "libc/intrin/strace.internal.h"
#include "libc/intrin/weaken.h"
#include "libc/runtime/syslib.internal.h"
#include "libc/str/str.h"
#include "libc/thread/posixthread.internal.h"
#include "libc/thread/thread.h"
#include "libc/thread/tls.h"
#include "third_party/nsync/atomic.h"
#include "third_party/nsync/atomic.internal.h"
#include "third_party/nsync/futex.internal.h"
@@ -36,33 +40,48 @@
 static dispatch_semaphore_t dispatch_semaphore_create(long count) {
 dispatch_semaphore_t ds;
-ds = __syslib->dispatch_semaphore_create (count);
+ds = __syslib->__dispatch_semaphore_create (count);
 STRACE ("dispatch_semaphore_create(%ld) → %#lx", count, ds);
 return (ds);
 }
 static void dispatch_release (dispatch_semaphore_t ds) {
-__syslib->dispatch_release (ds);
+__syslib->__dispatch_release (ds);
 STRACE ("dispatch_release(%#lx)", ds);
 }
 static long dispatch_semaphore_wait (dispatch_semaphore_t ds,
 dispatch_time_t dt) {
-long rc = __syslib->dispatch_semaphore_wait (ds, dt);
+long rc = __syslib->__dispatch_semaphore_wait (ds, dt);
 STRACE ("dispatch_semaphore_wait(%#lx, %ld) → %ld", ds, dt, rc);
 return (rc);
 }
 static long dispatch_semaphore_signal (dispatch_semaphore_t ds) {
-long rc = __syslib->dispatch_semaphore_signal (ds);
+long rc = __syslib->__dispatch_semaphore_signal (ds);
+(void)rc;
 STRACE ("dispatch_semaphore_signal(%#lx) → %ld", ds, rc);
 return (ds);
 }
 static dispatch_time_t dispatch_walltime (const struct timespec *base,
-int64_t offset) {
-return __syslib->dispatch_walltime (base, offset);
+int64_t offset) {
+return __syslib->__dispatch_walltime (base, offset);
 }
+static errno_t nsync_dispatch_semaphore_wait (nsync_semaphore *s,
+nsync_time abs_deadline) {
+errno_t result = 0;
+dispatch_time_t dt;
+if (nsync_time_cmp (abs_deadline, nsync_time_no_deadline) == 0) {
+dt = DISPATCH_TIME_FOREVER;
+} else {
+dt = dispatch_walltime (&abs_deadline, 0);
+}
+if (dispatch_semaphore_wait (*(dispatch_semaphore_t *)s, dt) != 0) {
+result = ETIMEDOUT;
+}
+return (result);
+}
/* Initialize *s; the initial value is 0. */
@@ -75,30 +94,47 @@ void nsync_mu_semaphore_destroy_gcd (nsync_semaphore *s) {
dispatch_release (*(dispatch_semaphore_t *)s);
}
/* Wait until the count of *s exceeds 0, and decrement it. */
/* Wait until the count of *s exceeds 0, and decrement it. If POSIX cancellations
are currently disabled by the thread, then this function always succeeds. When
they're enabled in MASKED mode, this function may return ECANCELED. Otherwise,
cancellation will occur by unwinding cleanup handlers pushed to the stack. */
errno_t nsync_mu_semaphore_p_gcd (nsync_semaphore *s) {
dispatch_semaphore_wait (*(dispatch_semaphore_t *)s,
DISPATCH_TIME_FOREVER);
return (0);
return nsync_mu_semaphore_p_with_deadline_gcd (s, nsync_time_no_deadline);
}
/* Wait until one of:
the count of *s is non-zero, in which case decrement *s and return 0;
or abs_deadline expires, in which case return ETIMEDOUT. */
/* Like nsync_mu_semaphore_p() this waits for the count of *s to exceed 0,
while additionally supporting a time parameter specifying at what point
in the future ETIMEDOUT should be returned, if neither cancellation, or
semaphore release happens. */
errno_t nsync_mu_semaphore_p_with_deadline_gcd (nsync_semaphore *s,
nsync_time abs_deadline) {
errno_t result = 0;
if (nsync_time_cmp (abs_deadline, nsync_time_no_deadline) == 0) {
dispatch_semaphore_wait (*(dispatch_semaphore_t *)s,
DISPATCH_TIME_FOREVER);
struct PosixThread *pt;
if (!__tls_enabled ||
!_weaken (pthread_testcancel_np) ||
!(pt = _pthread_self()) ||
(pt->pt_flags & PT_NOCANCEL)) {
nsync_dispatch_semaphore_wait (s, abs_deadline);
} else {
struct timespec ts;
bzero (&ts, sizeof (ts));
ts.tv_sec = NSYNC_TIME_SEC (abs_deadline);
ts.tv_nsec = NSYNC_TIME_NSEC (abs_deadline);
if (dispatch_semaphore_wait (*(dispatch_semaphore_t *)s,
dispatch_walltime (&abs_deadline, 0)) != 0) {
result = ETIMEDOUT;
struct timespec now, until, slice;
slice = timespec_frommillis (__SIG_LOCK_INTERVAL_MS);
for (;;) {
if (_weaken (pthread_testcancel_np) () == ECANCELED) {
result = ECANCELED;
break;
}
now = timespec_real();
if (timespec_cmp (now, abs_deadline) >= 0) {
result = ETIMEDOUT;
break;
}
until = timespec_add (now, slice);
if (timespec_cmp (until, abs_deadline) > 0) {
until = abs_deadline;
}
if (!nsync_dispatch_semaphore_wait (s, until)) {
break;
}
}
}
return (result);


@@ -17,15 +17,19 @@
PERFORMANCE OF THIS SOFTWARE.
*/
#include "libc/assert.h"
#include "libc/atomic.h"
#include "libc/calls/cp.internal.h"
#include "libc/calls/struct/timespec.internal.h"
#include "libc/calls/syscall-sysv.internal.h"
#include "libc/cosmo.h"
#include "libc/dce.h"
#include "libc/errno.h"
#include "libc/intrin/dll.h"
#include "libc/intrin/strace.internal.h"
#include "libc/str/str.h"
#include "libc/sysv/consts/f.h"
#include "libc/sysv/consts/fd.h"
#include "libc/thread/thread.h"
#include "third_party/nsync/mu_semaphore.h"
#include "third_party/nsync/time.h"
// clang-format off
@@ -35,21 +39,51 @@
*/
#define ASSERT(x) npassert(x)
#define SEM_CONTAINER(e) DLL_CONTAINER(struct sem, list, e)
struct sem {
int64_t id;
struct Dll list;
};
static struct {
atomic_uint once;
pthread_spinlock_t lock;
struct Dll *list;
} g_sems;
static nsync_semaphore *sem_big_enough_for_sem = (nsync_semaphore *) (uintptr_t)(1 /
(sizeof (struct sem) <= sizeof (*sem_big_enough_for_sem)));
static void nsync_mu_semaphore_sem_fork_child (void) {
struct Dll *e;
for (e = dll_first (g_sems.list); e; e = dll_next (g_sems.list, e)) {
sys_close (SEM_CONTAINER (e)->id);
}
g_sems.list = 0; /* list memory is on dead thread stacks */
(void) pthread_spin_init (&g_sems.lock, 0);
}
static void nsync_mu_semaphore_sem_init (void) {
pthread_atfork (0, 0, nsync_mu_semaphore_sem_fork_child);
}
/* Initialize *s; the initial value is 0. */
void nsync_mu_semaphore_init_sem (nsync_semaphore *s) {
int lol;
struct sem *f = (struct sem *) s;
f->id = 0;
ASSERT (!sys_sem_init (0, &f->id));
if ((lol = __sys_fcntl (f->id, F_DUPFD_CLOEXEC, 50)) >= 50) {
sys_close (f->id);
f->id = lol;
}
cosmo_once (&g_sems.once, nsync_mu_semaphore_sem_init);
pthread_spin_lock (&g_sems.lock);
dll_init (&f->list);
dll_make_first (&g_sems.list, &f->list);
pthread_spin_unlock (&g_sems.lock);
STRACE ("sem_init(0, [%ld]) → 0", f->id);
ASSERT (__sys_fcntl (f->id, F_SETFD, FD_CLOEXEC) == 0); // ouch
}
/* Releases system resources associated with *s. */
@@ -57,10 +91,16 @@ void nsync_mu_semaphore_destroy_sem (nsync_semaphore *s) {
 int rc;
 struct sem *f = (struct sem *) s;
 ASSERT (!(rc = sys_sem_destroy (f->id)));
-STRACE ("sem_destroy(%ld) → %d", rc);
+pthread_spin_lock (&g_sems.lock);
+dll_remove (&g_sems.list, &f->list);
+pthread_spin_unlock (&g_sems.lock);
+STRACE ("sem_destroy(%ld) → %d", f->id, rc);
 }
-/* Wait until the count of *s exceeds 0, and decrement it. */
+/* Wait until the count of *s exceeds 0, and decrement it. If POSIX cancellations
+   are currently disabled by the thread, then this function always succeeds. When
+   they're enabled in MASKED mode, this function may return ECANCELED. Otherwise,
+   cancellation will occur by unwinding cleanup handlers pushed to the stack. */
errno_t nsync_mu_semaphore_p_sem (nsync_semaphore *s) {
int e, rc;
errno_t result;
@@ -78,9 +118,10 @@ errno_t nsync_mu_semaphore_p_sem (nsync_semaphore *s) {
 return result;
 }
-/* Wait until one of:
-   the count of *s is non-zero, in which case decrement *s and return 0;
-   or abs_deadline expires, in which case return ETIMEDOUT. */
+/* Like nsync_mu_semaphore_p() this waits for the count of *s to exceed 0,
+   while additionally supporting a time parameter specifying at what point
+   in the future ETIMEDOUT should be returned, if neither cancellation, or
+   semaphore release happens. */
errno_t nsync_mu_semaphore_p_with_deadline_sem (nsync_semaphore *s, nsync_time abs_deadline) {
int e, rc;
errno_t result;


@@ -49,9 +49,13 @@ $(THIRD_PARTY_NSYNC_A).pkg: \
 $(foreach x,$(THIRD_PARTY_NSYNC_A_DIRECTDEPS),$($(x)_A).pkg)
 $(THIRD_PARTY_NSYNC_A_OBJS): private \
-CCFLAGS += \
--ffunction-sections \
--fdata-sections
+COPTS += \
+-ffreestanding \
+-fdata-sections \
+-ffunction-sections \
+-fno-sanitize=address \
+-Wframe-larger-than=4096 \
+-Walloca-larger-than=4096
# these assembly files are safe to build on aarch64
o/$(MODE)/third_party/nsync/compat.o: third_party/nsync/compat.S


@@ -17,6 +17,7 @@
 */
 #include "libc/calls/calls.h"
 #include "libc/str/str.h"
+#include "libc/thread/thread.h"
 #include "third_party/nsync/mu.h"
 #include "third_party/nsync/mu_wait.h"
#include "third_party/nsync/testing/closure.h"
@@ -137,7 +138,7 @@ static void test_starve_with_readers (testing t) {
 trylock_acquires++;
 nsync_mu_unlock (&sd.mu);
 }
-sched_yield ();
+pthread_yield ();
}
if (trylock_acquires != 0) {
TEST_ERROR (t, ("expected no acquisitions via nsync_mu_trylock(), got %d\n",
@@ -256,7 +257,7 @@ static void test_starve_with_writer (testing t) {
 trylock_acquires++;
 nsync_mu_unlock (&sd.mu);
 }
-sched_yield ();
+pthread_yield ();
}
if (trylock_acquires < expected_lo) {
TEST_ERROR (t, ("expected at least %d acquisitions via "
@@ -276,7 +277,7 @@ static void test_starve_with_writer (testing t) {
 rtrylock_acquires++;
 nsync_mu_runlock (&sd.mu);
 }
-sched_yield ();
+pthread_yield ();
}
if (rtrylock_acquires < expected_lo) {
TEST_ERROR (t, ("expected at least %d acquisitions via "


@@ -227,7 +227,7 @@ static void counting_loop_try_mu (test_data *td, int id) {
 int n = td->loop_count;
 for (i = 0; i != n; i++) {
 while (!nsync_mu_trylock (&td->mu)) {
-sched_yield ();
+pthread_yield ();
}
td->id = id;
td->i++;


@@ -1,56 +0,0 @@
/*-*- mode:c;indent-tabs-mode:t;c-basic-offset:8;tab-width:8;coding:utf-8 -*-│
vi: set et ft=c ts=8 tw=8 fenc=utf-8 :vi
Copyright 2016 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0 │
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include "third_party/nsync/atomic.h"
#include "third_party/nsync/atomic.internal.h"
#include "third_party/nsync/common.internal.h"
#include "libc/thread/thread.h"
asm(".ident\t\"\\n\\n\
*NSYNC (Apache 2.0)\\n\
Copyright 2016 Google, Inc.\\n\
https://github.com/google/nsync\"");
// clang-format off
static pthread_key_t waiter_key;
static nsync_atomic_uint32_ pt_once;
static void do_once (nsync_atomic_uint32_ *ponce, void (*dest) (void *)) {
uint32_t o = ATM_LOAD_ACQ (ponce);
if (o != 2) {
while (o == 0 && !ATM_CAS_ACQ (ponce, 0, 1)) {
o = ATM_LOAD (ponce);
}
if (o == 0) {
pthread_key_create (&waiter_key, dest);
ATM_STORE_REL (ponce, 2);
}
while (ATM_LOAD_ACQ (ponce) != 2) {
nsync_yield_ ();
}
}
}
void *nsync_per_thread_waiter_ (void (*dest) (void *)) {
do_once (&pt_once, dest);
return (pthread_getspecific (waiter_key));
}
void nsync_set_per_thread_waiter_ (void *v, void (*dest) (void *)) {
do_once (&pt_once, dest);
pthread_setspecific (waiter_key, v);
}


@@ -18,10 +18,11 @@
 */
 #include "libc/calls/calls.h"
 #include "libc/intrin/strace.internal.h"
+#include "libc/thread/thread.h"
 #include "third_party/nsync/common.internal.h"
 // clang-format off
 void nsync_yield_ (void) {
-sched_yield ();
+pthread_yield ();
 STRACE ("nsync_yield_()");
 }