Use *NSYNC for POSIX threads locking APIs

Condition variables, barriers, and r/w locks now work very well.
Justine Tunney 2022-09-11 11:02:07 -07:00
parent 3de35e196c
commit b5cb71ab84
No known key found for this signature in database
GPG key ID: BE714B4575D6E328
197 changed files with 3734 additions and 3817 deletions

third_party/nsync/LICENSE.txt (vendored, new file)

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

third_party/nsync/README.md (vendored, new file)

@@ -0,0 +1,21 @@
# *NSYNC
The `THIRD_PARTY_NSYNC` and `LIBC_THREAD` packages include source code
from *NSYNC. Here's the latest upstream synchronization point:
git@github.com:google/nsync
ac5489682760393fe21bd2a8e038b528442412a7 (1.25.0)
Author: Mike Burrows <m3b@google.com>
Date: Wed Jun 1 16:47:52 2022 -0700
NSYNC uses the Apache 2.0 license. We made the following local changes:
- Write custom `nsync_malloc_()` so `malloc()` can use *NSYNC.
- Rewrite `futex()` wrapper to support old Linux kernels and OpenBSD.
- Normalize sources to Cosmopolitan style conventions; *NSYNC upstream
supports dozens of compilers and operating systems at compile time.
Since Cosmo solves portability at runtime instead, most of the build
config toil has been removed to help make the *NSYNC source code
more readable and hackable.

third_party/nsync/atomic.h (vendored, new file)

@@ -0,0 +1,15 @@
#ifndef NSYNC_ATOMIC_H_
#define NSYNC_ATOMIC_H_
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
typedef uint32_t nsync_atomic_uint32_;
#define NSYNC_ATOMIC_UINT32_INIT_ 0
#define NSYNC_ATOMIC_UINT32_LOAD_(p) (*(p))
#define NSYNC_ATOMIC_UINT32_STORE_(p, v) (*(p) = (v))
#define NSYNC_ATOMIC_UINT32_PTR_(p) (p)
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_ATOMIC_H_ */

third_party/nsync/atomic.internal.h (vendored, new file)

@@ -0,0 +1,113 @@
#ifndef NSYNC_ATOMIC_INTERNAL_H_
#define NSYNC_ATOMIC_INTERNAL_H_
#include "libc/intrin/atomic.h"
#include "third_party/nsync/atomic.h"
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
/* Atomic operations on nsync_atomic_uint32_ quantities
CAS, load, and store.
Normally, these are used only on nsync_atomic_uint32_ values, but on
Linux they may be invoked on int values, because futexes operate on
int values. A compile-time check in the futex code ensures that both
int and nsync_atomic_uint32_ are 32 bits.
Memory barriers:
Operations with the suffixes _ACQ and _RELACQ ensure that the
operation appears to complete before other memory operations
subsequently performed by the same thread, as seen by other
threads. (In the case of ATM_CAS_ACQ, this applies only if
the operation returns a non-zero value.)
Operations with the suffixes _REL and _RELACQ ensure that the
operation appears to complete after other memory operations
previously performed by the same thread, as seen by other
threads. (In the case of ATM_CAS_REL, this applies only if
the operation returns a non-zero value.)
// Atomically,
// int ATM_CAS (nsync_atomic_uint32_ *p,
// uint32_t old_value, uint32_t new_value) {
// if (*p == old_value) {
// *p = new_value;
// return (some-non-zero-value);
// } else {
// return (0);
// }
// }
// *_ACQ, *_REL, *_RELACQ variants are available,
// with the barrier semantics described above.
int ATM_CAS (nsync_atomic_uint32_ *p, uint32_t old_value,
uint32_t new_value);
// Atomically,
// uint32_t ATM_LOAD (nsync_atomic_uint32_ *p) { return (*p); }
// A *_ACQ variant is available,
// with the barrier semantics described above.
uint32_t ATM_LOAD (nsync_atomic_uint32_ *p);
// Atomically,
// void ATM_STORE (nsync_atomic_uint32_ *p, uint32_t value) {
// *p = value;
// }
// A *_REL variant is available,
// with the barrier semantics described above.
void ATM_STORE (nsync_atomic_uint32_ *p, uint32_t value);
*/
static inline int atm_cas_nomb_u32_(nsync_atomic_uint32_ *p, uint32_t o,
uint32_t n) {
return atomic_compare_exchange_strong_explicit(NSYNC_ATOMIC_UINT32_PTR_(p),
&o, n, memory_order_relaxed,
memory_order_relaxed);
}
static inline int atm_cas_acq_u32_(nsync_atomic_uint32_ *p, uint32_t o,
uint32_t n) {
return atomic_compare_exchange_strong_explicit(NSYNC_ATOMIC_UINT32_PTR_(p),
&o, n, memory_order_acquire,
memory_order_relaxed);
}
static inline int atm_cas_rel_u32_(nsync_atomic_uint32_ *p, uint32_t o,
uint32_t n) {
return atomic_compare_exchange_strong_explicit(NSYNC_ATOMIC_UINT32_PTR_(p),
&o, n, memory_order_release,
memory_order_relaxed);
}
static inline int atm_cas_relacq_u32_(nsync_atomic_uint32_ *p, uint32_t o,
uint32_t n) {
return atomic_compare_exchange_strong_explicit(NSYNC_ATOMIC_UINT32_PTR_(p),
&o, n, memory_order_acq_rel,
memory_order_relaxed);
}
#define ATM_CAS_HELPER_(barrier, p, o, n) \
(atm_cas_##barrier##_u32_((p), (o), (n)))
#define ATM_CAS(p, o, n) ATM_CAS_HELPER_(nomb, (p), (o), (n))
#define ATM_CAS_ACQ(p, o, n) ATM_CAS_HELPER_(acq, (p), (o), (n))
#define ATM_CAS_REL(p, o, n) ATM_CAS_HELPER_(rel, (p), (o), (n))
#define ATM_CAS_RELACQ(p, o, n) ATM_CAS_HELPER_(relacq, (p), (o), (n))
/* Need a cast to remove "const" from some uses. */
#define ATM_LOAD(p) \
(atomic_load_explicit((nsync_atomic_uint32_ *)NSYNC_ATOMIC_UINT32_PTR_(p), \
memory_order_relaxed))
#define ATM_LOAD_ACQ(p) \
(atomic_load_explicit((nsync_atomic_uint32_ *)NSYNC_ATOMIC_UINT32_PTR_(p), \
memory_order_acquire))
#define ATM_STORE(p, v) \
(atomic_store_explicit(NSYNC_ATOMIC_UINT32_PTR_(p), (v), \
memory_order_relaxed))
#define ATM_STORE_REL(p, v) \
(atomic_store_explicit(NSYNC_ATOMIC_UINT32_PTR_(p), (v), \
memory_order_release))
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_ATOMIC_INTERNAL_H_ */

third_party/nsync/common.c (vendored, new file)

@@ -0,0 +1,245 @@
/*-*- mode:c;indent-tabs-mode:t;c-basic-offset:8;tab-width:8;coding:utf-8 -*-│
vi: set et ft=c ts=8 tw=8 fenc=utf-8 :vi
Copyright 2016 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0 │
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include "libc/mem/mem.h"
#include "libc/runtime/runtime.h"
#include "libc/thread/thread.h"
#include "third_party/nsync/atomic.h"
#include "third_party/nsync/atomic.internal.h"
#include "third_party/nsync/common.internal.h"
#include "third_party/nsync/dll.h"
#include "third_party/nsync/malloc.internal.h"
#include "third_party/nsync/mu_semaphore.h"
#include "third_party/nsync/wait_s.internal.h"
asm(".ident\t\"\\n\\n\
*NSYNC (Apache 2.0)\\n\
Copyright 2016 Google, Inc.\\n\
https://github.com/google/nsync\"");
// clang-format off
/* This package provides a mutex nsync_mu and a Mesa-style condition
* variable nsync_cv. */
/* Implementation notes
The implementations of nsync_mu and nsync_cv both use spinlocks to protect
their waiter queues. The spinlocks are implemented with atomic operations
and a delay loop found below. They could use pthread_mutex_t, but I wished
to have an implementation independent of pthread mutexes and condition
variables.
nsync_mu and nsync_cv use the same type of doubly-linked list of waiters
(see waiter.c). This allows waiters to be transferred from the cv queue to
the mu queue when a thread is logically woken from the cv but would
immediately go to sleep on the mu. See the wake_waiters() call.
In mu, the "designated waker" is a thread that was waiting on mu, has been
woken up, but as yet has neither acquired nor gone back to waiting. The
presence of such a thread is indicated by the MU_DESIG_WAKER bit in the mu
word. This bit allows the nsync_mu_unlock() code to avoid waking a second
waiter when there's already one that will wake the next thread when the time
comes. This speeds things up when the lock is heavily contended, and the
critical sections are small.
The weasel words "with high probability" in the specification of
nsync_mu_trylock() and nsync_mu_rtrylock() prevent clients from believing
that they can determine with certainty whether another thread has given up a
lock yet. This, together with the requirement that a thread that acquired a
mutex must release it (rather than it being released by another thread),
prohibits clients from using mu as a sort of semaphore. The intent is that
it be used only for traditional mutual exclusion, and that clients that need
a semaphore should use one. This leaves room for certain future
optimizations, and makes it easier to apply detection of potential races via
candidate lock-set algorithms, should that ever be desired.
The nsync_mu_wait_with_deadline() and nsync_cv_wait_with_deadline() calls use an
absolute rather than a relative timeout. This is less error prone, as
described in the comment on nsync_cv_wait_with_deadline(). Alas, relative
timeouts are seductive in trivial examples (such as tests). These are the
first things that people try, so they are likely to be requested. If enough
people complain we could give them that particular piece of rope.
Excessive evaluations of the same wait condition are avoided by maintaining
waiter.same_condition as a doubly-linked list of waiters with the same
non-NULL wait condition that are also adjacent in the waiter list. This does
well even with large numbers of threads if there is at most one
wait condition that can be false at any given time (such as in a
producer/consumer queue, which cannot be both empty and full
simultaneously). One could imagine a queueing mechanism that would
guarantee to evaluate each condition at most once per wakeup, but that would
be substantially more complex, and would still degrade if the number of
distinct wakeup conditions were high. So clients are advised to resort to
condition variables if they have many distinct wakeup conditions. */
/* Used in spinloops to delay resumption of the loop.
Usage:
unsigned attempts = 0;
while (try_something) {
attempts = nsync_spin_delay_ (attempts);
} */
unsigned nsync_spin_delay_ (unsigned attempts) {
if (attempts < 7) {
volatile int i;
for (i = 0; i != 1 << attempts; i++) {
}
attempts++;
} else {
nsync_yield_ ();
}
return (attempts);
}
/* Spin until (*w & test) == 0, then atomically perform *w = ((*w | set) &
~clear), perform an acquire barrier, and return the previous value of *w.
*/
uint32_t nsync_spin_test_and_set_ (nsync_atomic_uint32_ *w, uint32_t test,
uint32_t set, uint32_t clear) {
unsigned attempts = 0; /* CV_SPINLOCK retry count */
uint32_t old = ATM_LOAD (w);
while ((old & test) != 0 || !ATM_CAS_ACQ (w, old, (old | set) & ~clear)) {
attempts = nsync_spin_delay_ (attempts);
old = ATM_LOAD (w);
}
return (old);
}
/* ====================================================================================== */
struct nsync_waiter_s *nsync_dll_nsync_waiter_ (nsync_dll_element_ *e) {
struct nsync_waiter_s *nw = (struct nsync_waiter_s *) e->container;
ASSERT (nw->tag == NSYNC_WAITER_TAG);
ASSERT (e == &nw->q);
return (nw);
}
waiter *nsync_dll_waiter_ (nsync_dll_element_ *e) {
struct nsync_waiter_s *nw = DLL_NSYNC_WAITER (e);
waiter *w = CONTAINER (waiter, nw, nw);
ASSERT ((nw->flags & NSYNC_WAITER_FLAG_MUCV) != 0);
ASSERT (w->tag == WAITER_TAG);
ASSERT (e == &w->nw.q);
return (w);
}
waiter *nsync_dll_waiter_samecond_ (nsync_dll_element_ *e) {
waiter *w = (waiter *) e->container;
ASSERT (w->tag == WAITER_TAG);
ASSERT (e == &w->same_condition);
return (w);
}
/* -------------------------------- */
static nsync_dll_list_ free_waiters = NULL;
/* free_waiters points to a doubly-linked list of free waiter structs. */
static nsync_atomic_uint32_ free_waiters_mu; /* spinlock; protects free_waiters */
static _Thread_local waiter *waiter_for_thread;
static void waiter_destroy (void *v) {
waiter *w = (waiter *) v;
/* Reset waiter_for_thread in case another thread-local variable reuses
the waiter in its destructor while the waiter is taken by the other
thread from free_waiters. This can happen as the destruction order
of thread-local variables can be arbitrary on some platforms, e.g.
POSIX. */
waiter_for_thread = NULL;
IGNORE_RACES_START ();
ASSERT ((w->flags & (WAITER_RESERVED|WAITER_IN_USE)) == WAITER_RESERVED);
w->flags &= ~WAITER_RESERVED;
nsync_spin_test_and_set_ (&free_waiters_mu, 1, 1, 0);
free_waiters = nsync_dll_make_first_in_list_ (free_waiters, &w->nw.q);
ATM_STORE_REL (&free_waiters_mu, 0); /* release store */
IGNORE_RACES_END ();
}
/* Return a pointer to an unused waiter struct.
Ensures that the enclosed timer is stopped and its channel drained. */
waiter *nsync_waiter_new_ (void) {
nsync_dll_element_ *q;
waiter *tw;
waiter *w;
tw = waiter_for_thread;
w = tw;
if (w == NULL || (w->flags & (WAITER_RESERVED|WAITER_IN_USE)) != WAITER_RESERVED) {
w = NULL;
nsync_spin_test_and_set_ (&free_waiters_mu, 1, 1, 0);
q = nsync_dll_first_ (free_waiters);
if (q != NULL) { /* If free list is non-empty, dequeue an item. */
free_waiters = nsync_dll_remove_ (free_waiters, q);
w = DLL_WAITER (q);
}
ATM_STORE_REL (&free_waiters_mu, 0); /* release store */
if (w == NULL) { /* If free list was empty, allocate an item. */
w = (waiter *) nsync_malloc_ (sizeof (*w));
w->tag = WAITER_TAG;
w->nw.tag = NSYNC_WAITER_TAG;
nsync_mu_semaphore_init (&w->sem);
w->nw.sem = &w->sem;
nsync_dll_init_ (&w->nw.q, &w->nw);
NSYNC_ATOMIC_UINT32_STORE_ (&w->nw.waiting, 0);
w->nw.flags = NSYNC_WAITER_FLAG_MUCV;
ATM_STORE (&w->remove_count, 0);
nsync_dll_init_ (&w->same_condition, w);
w->flags = 0;
}
if (tw == NULL) {
w->flags |= WAITER_RESERVED;
nsync_set_per_thread_waiter_ (w, &waiter_destroy);
waiter_for_thread = w;
}
}
w->flags |= WAITER_IN_USE;
return (w);
}
/* Return an unused waiter struct *w to the free pool. */
void nsync_waiter_free_ (waiter *w) {
ASSERT ((w->flags & WAITER_IN_USE) != 0);
w->flags &= ~WAITER_IN_USE;
if ((w->flags & WAITER_RESERVED) == 0) {
nsync_spin_test_and_set_ (&free_waiters_mu, 1, 1, 0);
free_waiters = nsync_dll_make_first_in_list_ (free_waiters, &w->nw.q);
ATM_STORE_REL (&free_waiters_mu, 0); /* release store */
}
}
/* ====================================================================================== */
/* writer_type points to a lock_type that describes how to manipulate a mu for a writer. */
static lock_type Xwriter_type = {
MU_WZERO_TO_ACQUIRE,
MU_WADD_TO_ACQUIRE,
MU_WHELD_IF_NON_ZERO,
MU_WSET_WHEN_WAITING,
MU_WCLEAR_ON_ACQUIRE,
MU_WCLEAR_ON_UNCONTENDED_RELEASE
};
lock_type *nsync_writer_type_ = &Xwriter_type;
/* reader_type points to a lock_type that describes how to manipulate a mu for a reader. */
static lock_type Xreader_type = {
MU_RZERO_TO_ACQUIRE,
MU_RADD_TO_ACQUIRE,
MU_RHELD_IF_NON_ZERO,
MU_RSET_WHEN_WAITING,
MU_RCLEAR_ON_ACQUIRE,
MU_RCLEAR_ON_UNCONTENDED_RELEASE
};
lock_type *nsync_reader_type_ = &Xreader_type;

third_party/nsync/common.internal.h (vendored, new file)

@@ -0,0 +1,318 @@
#ifndef NSYNC_COMMON_H_
#define NSYNC_COMMON_H_
#include "third_party/nsync/atomic.h"
#include "third_party/nsync/atomic.internal.h"
#include "third_party/nsync/cv.h"
#include "third_party/nsync/dll.h"
#include "third_party/nsync/mu.h"
#include "third_party/nsync/mu_semaphore.h"
#include "third_party/nsync/note.h"
#include "third_party/nsync/time.h"
#include "third_party/nsync/wait_s.internal.h"
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
/* Annotations for race detectors. */
#if defined(__has_feature) && !defined(__SANITIZE_THREAD__)
#if __has_feature(thread_sanitizer) /* used by clang */
#define __SANITIZE_THREAD__ 1 /* GCC uses this; fake it in clang */
#endif
#endif
#if defined(__SANITIZE_THREAD__)
NSYNC_C_START_
void AnnotateIgnoreWritesBegin(const char *file, int line);
void AnnotateIgnoreWritesEnd(const char *file, int line);
void AnnotateIgnoreReadsBegin(const char *file, int line);
void AnnotateIgnoreReadsEnd(const char *file, int line);
NSYNC_C_END_
#define IGNORE_RACES_START() \
do { \
AnnotateIgnoreReadsBegin(__FILE__, __LINE__); \
AnnotateIgnoreWritesBegin(__FILE__, __LINE__); \
} while (0)
#define IGNORE_RACES_END() \
do { \
AnnotateIgnoreWritesEnd(__FILE__, __LINE__); \
AnnotateIgnoreReadsEnd(__FILE__, __LINE__); \
} while (0)
#else
#define IGNORE_RACES_START()
#define IGNORE_RACES_END()
#endif
#ifndef NSYNC_DEBUG
#define NSYNC_DEBUG 0
#endif
/* Yield the CPU. Platform specific. */
void nsync_yield_(void);
/* Retrieve the per-thread cache of the waiter object. Platform specific. */
void *nsync_per_thread_waiter_(void (*dest)(void *));
/* Set the per-thread cache of the waiter object. Platform specific. */
void nsync_set_per_thread_waiter_(void *v, void (*dest)(void *));
/* Used in spinloops to delay resumption of the loop.
Usage:
unsigned attempts = 0;
while (try_something) {
attempts = nsync_spin_delay_ (attempts);
} */
unsigned nsync_spin_delay_(unsigned attempts);
/* Spin until (*w & test) == 0, then atomically perform *w = ((*w | set) &
~clear), perform an acquire barrier, and return the previous value of *w.
*/
uint32_t nsync_spin_test_and_set_(nsync_atomic_uint32_ *w, uint32_t test,
uint32_t set, uint32_t clear);
/* Abort after printing the NUL-terminated string s[]. */
void nsync_panic_(const char *s);
/* ---------- */
#define MIN_(a_, b_) ((a_) < (b_) ? (a_) : (b_))
#define MAX_(a_, b_) ((a_) > (b_) ? (a_) : (b_))
/* ---------- */
/* Fields in nsync_mu.word.
- At least one of the MU_WLOCK or MU_RLOCK_FIELD fields must be zero.
- MU_WLOCK indicates that a write lock is held.
- MU_RLOCK_FIELD is a count of readers with read locks.
- MU_SPINLOCK represents a spinlock that must be held when manipulating the
waiter queue.
- MU_DESIG_WAKER indicates that a former waiter has been woken, but has
neither acquired the lock nor gone back to sleep. Legal to fail to set it;
illegal to set it when no such waiter exists.
- MU_WAITING indicates whether the waiter queue is non-empty.
The following bits should be zero if MU_WAITING is zero.
- MU_CONDITION indicates that some waiter may have an associated condition
(from nsync_mu_wait, etc.). Legal to set it when no such waiter exists,
but illegal to fail to set it with such a waiter.
- MU_WRITER_WAITING indicates that a reader that has not yet blocked
at least once should not acquire in order not to starve waiting writers.
It is set when a writer blocks or a reader is woken with a writer waiting.
It is reset when a writer acquires, but set again when that writer
releases if it wakes readers and there is a waiting writer.
- MU_LONG_WAIT indicates that a waiter has been woken many times but
repeatedly failed to acquire when competing for the lock. This is used
only to prevent long-term starvation by writers. The thread that sets it
clears it when it acquires.
- MU_ALL_FALSE indicates that a complete scan of the waiter list found no
waiters with true conditions, and the lock has not been acquired by a
writer since then. This allows a reader lock to be released without
testing conditions again. It is legal to fail to set this, but illegal
to set it inappropriately.
*/
#define MU_WLOCK ((uint32_t)(1 << 0)) /* writer lock is held. */
#define MU_SPINLOCK \
((uint32_t)(1 << 1)) /* spinlock is held (protects waiters). */
#define MU_WAITING ((uint32_t)(1 << 2)) /* waiter list is non-empty. */
#define MU_DESIG_WAKER \
((uint32_t)(1 << 3)) /* a former waiter awoke, and hasn't yet acquired or \
slept anew */
#define MU_CONDITION \
((uint32_t)(1 << 4)) /* the wait list contains some conditional waiters. */
#define MU_WRITER_WAITING ((uint32_t)(1 << 5)) /* there is a writer waiting */
#define MU_LONG_WAIT \
((uint32_t)(1 << 6)) /* the waiter at the head of the queue has been waiting \
a long time */
#define MU_ALL_FALSE \
((uint32_t)(1 << 7)) /* all waiter conditions are false \
*/
#define MU_RLOCK \
((uint32_t)( \
1 << 8)) /* low-order bit of reader count, which uses rest of word */
/* The constants below are derived from those above. */
#define MU_RLOCK_FIELD \
(~(uint32_t)(MU_RLOCK - 1)) /* mask of reader count field */
#define MU_ANY_LOCK (MU_WLOCK | MU_RLOCK_FIELD) /* mask for any lock held */
#define MU_WZERO_TO_ACQUIRE \
(MU_ANY_LOCK | MU_LONG_WAIT) /* bits to be zero to acquire write lock */
#define MU_WADD_TO_ACQUIRE (MU_WLOCK) /* add to acquire a write lock */
#define MU_WHELD_IF_NON_ZERO \
(MU_WLOCK) /* if any of these bits are set, write lock is held */
#define MU_WSET_WHEN_WAITING \
(MU_WAITING | MU_WRITER_WAITING) /* a writer is waiting */
#define MU_WCLEAR_ON_ACQUIRE \
(MU_WRITER_WAITING) /* clear MU_WRITER_WAITING when a writer acquires */
#define MU_WCLEAR_ON_UNCONTENDED_RELEASE \
(MU_ALL_FALSE) /* clear if a writer releases w/o waking */
/* bits to be zero to acquire read lock */
#define MU_RZERO_TO_ACQUIRE (MU_WLOCK | MU_WRITER_WAITING | MU_LONG_WAIT)
#define MU_RADD_TO_ACQUIRE (MU_RLOCK) /* add to acquire a read lock */
#define MU_RHELD_IF_NON_ZERO \
(MU_RLOCK_FIELD) /* if any of these bits are set, read lock is held */
#define MU_RSET_WHEN_WAITING \
(MU_WAITING) /* indicate that some thread is waiting */
#define MU_RCLEAR_ON_ACQUIRE \
((uint32_t)0) /* nothing to clear when a read acquires */
#define MU_RCLEAR_ON_UNCONTENDED_RELEASE \
((uint32_t)0) /* nothing to clear when a read releases */
/* A lock_type holds the values needed to manipulate a mu in some mode (read or
write). This allows some of the code to be generic, and parameterized by
the lock type. */
typedef struct lock_type_s {
uint32_t zero_to_acquire; /* bits that must be zero to acquire */
uint32_t add_to_acquire; /* constant to add to acquire */
uint32_t
held_if_non_zero; /* if any of these bits are set, the lock is held */
uint32_t set_when_waiting; /* set when thread waits */
uint32_t clear_on_acquire; /* clear when thread acquires */
uint32_t clear_on_uncontended_release; /* clear when thread releases without
waking */
} lock_type;
/* writer_type points to a lock_type that describes how to manipulate a mu for a
* writer. */
extern lock_type *nsync_writer_type_;
/* reader_type points to a lock_type that describes how to manipulate a mu for a
* reader. */
extern lock_type *nsync_reader_type_;
/* ---------- */
/* Bits in nsync_cv.word */
#define CV_SPINLOCK ((uint32_t)(1 << 0)) /* protects waiters */
#define CV_NON_EMPTY ((uint32_t)(1 << 1)) /* waiters list is non-empty */
/* ---------- */
/* Hold a pair of condition function and its argument. */
struct wait_condition_s {
int (*f)(const void *v);
const void *v;
int (*eq)(const void *a, const void *b);
};
/* Return whether wait conditions *a_ and *b_ are equal and non-null. */
#define WAIT_CONDITION_EQ(a_, b_) \
((a_)->f != NULL && (a_)->f == (b_)->f && \
((a_)->v == (b_)->v || \
((a_)->eq != NULL && (*(a_)->eq)((a_)->v, (b_)->v))))
/* If a waiter has waited this many times, it may set the MU_LONG_WAIT bit. */
#define LONG_WAIT_THRESHOLD 30
/* ---------- */
#define NOTIFIED_TIME(n_) \
(ATM_LOAD_ACQ(&(n_)->notified) != 0 ? nsync_time_zero \
: (n_)->expiry_time_valid ? (n_)->expiry_time \
: nsync_time_no_deadline)
/* A waiter represents a single waiter on a cv or a mu.
To wait:
Allocate a waiter struct *w with new_waiter(), set w.waiting=1, and
w.cv_mu=nil or to the associated mu if waiting on a condition variable, then
queue w.nsync_dll on some queue, and then wait using:
while (ATM_LOAD_ACQ (&w.waiting) != 0) { nsync_mu_semaphore_p (&w.sem); }
Return *w to the freepool by calling free_waiter (w).
To wakeup:
Remove *w from the relevant queue then:
ATM_STORE_REL (&w.waiting, 0);
nsync_mu_semaphore_v (&w.sem); */
typedef struct {
uint32_t tag; /* debug DLL_NSYNC_WAITER, DLL_WAITER, DLL_WAITER_SAMECOND */
nsync_semaphore sem; /* Thread waits on this semaphore. */
struct nsync_waiter_s nw; /* An embedded nsync_waiter_s. */
struct nsync_mu_s_ *cv_mu; /* pointer to nsync_mu associated with a cv wait */
lock_type
*l_type; /* Lock type of the mu, or nil if not associated with a mu. */
nsync_atomic_uint32_ remove_count; /* count of removals from queue */
struct wait_condition_s cond; /* A condition on which to acquire a mu. */
nsync_dll_element_ same_condition; /* Links neighbours in nw.q with same
non-nil condition. */
int flags; /* see WAITER_* bits below */
} waiter;
static const uint32_t WAITER_TAG = 0x0590239f;
static const uint32_t NSYNC_WAITER_TAG = 0x726d2ba9;
#define WAITER_RESERVED \
0x1 /* waiter reserved by a thread, even when not in use */
#define WAITER_IN_USE 0x2 /* waiter in use by a thread */
#define CONTAINER(t_, f_, p_) ((t_ *)(((char *)(p_)) - offsetof(t_, f_)))
#define ASSERT(x) \
do { \
if (!(x)) { \
*(volatile int *)0 = 0; \
} \
} while (0)
/* Return a pointer to the nsync_waiter_s containing nsync_dll_element_ *e. */
#define DLL_NSYNC_WAITER(e) \
(NSYNC_DEBUG ? nsync_dll_nsync_waiter_(e) \
: ((struct nsync_waiter_s *)((e)->container)))
struct nsync_waiter_s *nsync_dll_nsync_waiter_(nsync_dll_element_ *e);
/* Return a pointer to the waiter struct that *e is embedded in, where *e is an
* nw.q field. */
#define DLL_WAITER(e) \
(NSYNC_DEBUG ? nsync_dll_waiter_(e) \
: CONTAINER(waiter, nw, DLL_NSYNC_WAITER(e)))
waiter *nsync_dll_waiter_(nsync_dll_element_ *e);
/* Return a pointer to the waiter struct that *e is embedded in, where *e is a
same_condition field. */
#define DLL_WAITER_SAMECOND(e) \
(NSYNC_DEBUG ? nsync_dll_waiter_samecond_(e) : ((waiter *)((e)->container)))
waiter *nsync_dll_waiter_samecond_(nsync_dll_element_ *e);
/* Return a pointer to an unused waiter struct.
Ensures that the enclosed timer is stopped and its channel drained. */
waiter *nsync_waiter_new_(void);
/* Return an unused waiter struct *w to the free pool. */
void nsync_waiter_free_(waiter *w);
/* ---------- */
/* The internals of an nsync_note. See internal/note.c for details of locking
discipline. */
struct nsync_note_s_ {
nsync_dll_element_
parent_child_link; /* parent's children, under parent->note_mu */
int expiry_time_valid; /* whether expiry_time is valid; r/o after init */
nsync_time
expiry_time; /* expiry time, if expiry_time_valid != 0; r/o after init */
nsync_mu note_mu; /* protects fields below except "notified" */
nsync_cv no_children_cv; /* signalled when children becomes empty */
uint32_t disconnecting; /* non-zero => node is being disconnected */
nsync_atomic_uint32_ notified; /* non-zero if the note has been notified */
struct nsync_note_s_ *parent; /* points to parent, if any */
nsync_dll_element_ *children; /* list of children */
nsync_dll_element_ *waiters; /* list of waiters */
};
/* ---------- */
void nsync_mu_lock_slow_(nsync_mu *mu, waiter *w, uint32_t clear,
lock_type *l_type);
void nsync_mu_unlock_slow_(nsync_mu *mu, lock_type *l_type);
nsync_dll_list_ nsync_remove_from_mu_queue_(nsync_dll_list_ mu_queue,
nsync_dll_element_ *e);
void nsync_maybe_merge_conditions_(nsync_dll_element_ *p,
nsync_dll_element_ *n);
nsync_time nsync_note_notified_deadline_(nsync_note n);
int nsync_sem_wait_with_cancel_(waiter *w, nsync_time abs_deadline,
nsync_note cancel_note);
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_COMMON_H_ */

third_party/nsync/counter.h vendored Normal file
@@ -0,0 +1,43 @@
#ifndef NSYNC_COUNTER_H_
#define NSYNC_COUNTER_H_
#include "third_party/nsync/atomic.h"
#include "third_party/nsync/time.h"
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
struct nsync_dll_element_s_;
/* An nsync_counter represents an unsigned integer that can count up and down,
and wake waiters when zero. */
typedef struct nsync_counter_s_ *nsync_counter;
/* Return a freshly allocated nsync_counter with the specified value,
or NULL if an nsync_counter cannot be created.
Any non-NULL returned value should be passed to nsync_counter_free() when no
longer needed. */
nsync_counter nsync_counter_new(uint32_t value);
/* Free resources associated with c. Requires that c was allocated by
nsync_counter_new(), and no concurrent or future operations are applied to
c. */
void nsync_counter_free(nsync_counter c);
/* Add delta to c, and return its new value. It is a checkable runtime error
to decrement c below 0, or to increment c (i.e., apply a delta > 0) after a
waiter has waited. */
uint32_t nsync_counter_add(nsync_counter c, int32_t delta);
/* Return the current value of c. */
uint32_t nsync_counter_value(nsync_counter c);
/* Wait until c has value 0, or until abs_deadline, then return
the value of c. It is a checkable runtime error to increment c after
a waiter may have been woken due to the counter reaching zero.
If abs_deadline==nsync_time_no_deadline, the deadline
is far in the future. */
uint32_t nsync_counter_wait(nsync_counter c, nsync_time abs_deadline);
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_COUNTER_H_ */

third_party/nsync/cv.h vendored Normal file
@@ -0,0 +1,157 @@
#ifndef NSYNC_CV_H_
#define NSYNC_CV_H_
#include "third_party/nsync/mu.h"
#include "third_party/nsync/time.h"
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
#define NSYNC_CV_INIT \
{ NSYNC_ATOMIC_UINT32_INIT_, 0 }
struct nsync_dll_element_s_;
struct nsync_note_s_;
/* An nsync_cv is a condition variable in the style of Mesa, Java,
POSIX, and Go's sync.Cond. It allows a thread to wait for a condition
on state protected by a mutex, and to proceed with the mutex held and
the condition true.
See also nsync_mu_wait() and nsync_mu_wait_with_deadline(), which
implement conditional critical sections. In many cases, they are
easier to use than condition variables.
Usage
After making the desired predicate true, call:
nsync_cv_signal (&cv); // If at most one thread can make use
// of the predicate becoming true.
or
nsync_cv_broadcast (&cv); // If multiple threads can make use
// of the predicate becoming true.
To wait for a predicate with no deadline (assuming
nsync_cv_broadcast() or nsync_cv_signal() is called whenever the
predicate becomes true):
nsync_mu_lock (&mu);
while (!some_predicate_protected_by_mu) { // while-loop required
nsync_cv_wait (&cv, &mu);
}
// predicate is now true
nsync_mu_unlock (&mu);
To wait for a predicate with a deadline (assuming nsync_cv_broadcast() or
nsync_cv_signal() is called whenever the predicate becomes true):
nsync_mu_lock (&mu);
while (!some_predicate_protected_by_mu &&
nsync_cv_wait_with_deadline (&cv, &mu, abs_deadline,
cancel_note) == 0) {
}
if (some_predicate_protected_by_mu) { // predicate is true
} else {
// predicate is false, and deadline expired, or
// cancel_note was notified.
}
nsync_mu_unlock (&mu);
or, if the predicate is complex and you wish to write it just once
and inline, you could use the following instead of the while-loop
above:
nsync_mu_lock (&mu);
int pred_is_true = 0;
int outcome = 0;
while (!(pred_is_true = some_predicate_protected_by_mu) &&
outcome == 0) {
outcome = nsync_cv_wait_with_deadline (&cv, &mu, abs_deadline,
cancel_note);
}
if (pred_is_true) { // predicate is true
} else {
// predicate is false, and deadline expired, or
// cancel_note was notified.
}
nsync_mu_unlock (&mu);
As the examples show, Mesa-style condition variables require that
waits use a loop that tests the predicate anew after each wait. It
may be surprising that these are preferred over the precise wakeups
offered by the condition variables in Hoare monitors. Imprecise
wakeups make more efficient use of the critical section, because
threads can enter it while a woken thread is still emerging from the
scheduler, which may take thousands of cycles. Further, they make the
programme easier to read and debug by making the predicate explicit
locally at the wait, where the predicate is about to be assumed; the
reader does not have to infer the predicate by examining all the
places where wakeups may occur. */
typedef struct nsync_cv_s_ {
/* see bits below */
nsync_atomic_uint32_ word;
/* points to tail of list of waiters; under mu. */
struct nsync_dll_element_s_ *waiters;
} nsync_cv;
/* An nsync_cv should be zeroed to initialize, which can be accomplished
by initializing with static initializer NSYNC_CV_INIT, or by setting
the entire struct to 0, or using nsync_cv_init(). */
void nsync_cv_init(nsync_cv *cv);
/* Wake at least one thread if any are currently blocked on *cv. If the
chosen thread is a reader on an nsync_mu, wake all readers and, if
possible, a writer. */
void nsync_cv_signal(nsync_cv *cv);
/* Wake all threads currently blocked on *cv. */
void nsync_cv_broadcast(nsync_cv *cv);
/* Atomically release "mu" (which must be held on entry) and block the
caller on *cv. Wait until awakened by a call to nsync_cv_signal() or
nsync_cv_broadcast(), or a spurious wakeup; then reacquire "mu", and
return. Equivalent to a call to nsync_mu_wait_with_deadline() with
abs_deadline==nsync_time_no_deadline, and cancel_note==NULL. Callers
should use nsync_cv_wait() in a loop, as with all standard Mesa-style
condition variables. See examples above. */
void nsync_cv_wait(nsync_cv *cv, nsync_mu *mu);
/* Atomically release "mu" (which must be held on entry) and block the
calling thread on *cv. It then waits until awakened by a call to
nsync_cv_signal() or nsync_cv_broadcast() (or a spurious wakeup), or
by the time reaching abs_deadline, or by cancel_note being notified.
In all cases, it reacquires "mu", and returns the reason the call
returned (0, ETIMEDOUT, or ECANCELED). Use
abs_deadline==nsync_time_no_deadline for no deadline, and
cancel_note==NULL for no cancellation. wait_with_deadline() should be
used in a loop, as with all Mesa-style condition variables. See
examples above.
There are two reasons for using an absolute deadline, rather than a
relative timeout---these are why pthread_cond_timedwait() also uses
an absolute deadline. First, condition variable waits have to be used
in a loop; with an absolute time, the deadline does not have to be
recomputed on each iteration. Second, in most real programmes some
activity (such as an RPC to a server, or guaranteeing response time
in a UI) has a deadline imposed by the specification or the
caller/user; relative delays can shift arbitrarily with
scheduling delays, and so after multiple waits might extend beyond
the expected deadline. Relative delays tend to be more convenient
mostly in tests and trivial examples than they are in real
programmes. */
int nsync_cv_wait_with_deadline(nsync_cv *cv, nsync_mu *mu,
nsync_time abs_deadline,
struct nsync_note_s_ *cancel_note);
/* Like nsync_cv_wait_with_deadline(), but allow an arbitrary lock *v to be
used, given its (*lock)(mu) and (*unlock)(mu) routines. */
int nsync_cv_wait_with_deadline_generic(nsync_cv *cv, void *mu,
void (*lock)(void *),
void (*unlock)(void *),
nsync_time abs_deadline,
struct nsync_note_s_ *cancel_note);
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_CV_H_ */

third_party/nsync/debug.h vendored Normal file
@@ -0,0 +1,39 @@
#ifndef NSYNC_DEBUG_H_
#define NSYNC_DEBUG_H_
#include "third_party/nsync/cv.h"
#include "third_party/nsync/mu.h"
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
/* Debugging operations for mutexes and condition variables.
These operations should not be relied upon for normal functionality.
The implementation may be slow, output formats may change, and the
implementation is free to yield the empty string. */
/* Place in buf[0,..,n-1] a nul-terminated, human readable string
indicative of some of the internal state of the mutex or condition
variable, and return buf. If n>=4, buffer overflow is indicated by
placing the characters "..." at the end of the string.
The *_and_waiters() variants attempt to output the waiter lists in
addition to the basic state. These variants may acquire internal
locks and follow internal pointers. Thus, they are riskier if invoked
in an address space whose overall health is uncertain. */
char *nsync_mu_debug_state(nsync_mu *mu, char *buf, int n);
char *nsync_cv_debug_state(nsync_cv *cv, char *buf, int n);
char *nsync_mu_debug_state_and_waiters(nsync_mu *mu, char *buf, int n);
char *nsync_cv_debug_state_and_waiters(nsync_cv *cv, char *buf, int n);
/* Like nsync_*_debug_state_and_waiters(), but ignoring all locking and
safety considerations, and using an internal, possibly static buffer
that may be overwritten by subsequent or concurrent calls to these
routines. These variants should be used only from an interactive
debugger, when all other threads are stopped; the debugger is
expected to recover from errors. */
char *nsync_mu_debugger(nsync_mu *mu);
char *nsync_cv_debugger(nsync_cv *cv);
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_DEBUG_H_ */

third_party/nsync/dll.c vendored Normal file
@@ -0,0 +1,143 @@
/*-*- mode:c;indent-tabs-mode:t;c-basic-offset:8;tab-width:8;coding:utf-8 -*-│
vi: set et ft=c ts=8 tw=8 fenc=utf-8 :vi
Copyright 2016 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0 │
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include "third_party/nsync/dll.h"
asm(".ident\t\"\\n\\n\
*NSYNC (Apache 2.0)\\n\
Copyright 2016 Google, Inc.\\n\
https://github.com/google/nsync\"");
// clang-format off
/* Initialize *e. */
void nsync_dll_init_ (nsync_dll_element_ *e, void *container) {
e->next = e;
e->prev = e;
e->container = container;
}
/* Return whether list is empty. */
int nsync_dll_is_empty_ (nsync_dll_list_ list) {
return (list == NULL);
}
/* Remove *e from list, and return the new list. */
nsync_dll_list_ nsync_dll_remove_ (nsync_dll_list_ list, nsync_dll_element_ *e) {
if (list == e) { /* removing tail of list */
if (list->prev == list) {
list = NULL; /* removing only element of list */
} else {
list = list->prev;
}
}
e->next->prev = e->prev;
e->prev->next = e->next;
e->next = e;
e->prev = e;
return (list);
}
/* Cause element *n and its successors to come after element *p.
Requires n and p are non-NULL and do not point at elements of the same list.
Unlike the other operations in this API, this operation acts on
two circular lists of elements, rather than on a "head" location that points
to such a circular list.
If the two lists are p->p_2nd->p_mid->p_last->p and n->n_2nd->n_mid->n_last->n,
then after nsync_dll_splice_after_ (p, n), the p list would be:
p->n->n_2nd->n_mid->n_last->p_2nd->p_mid->p_last->p. */
void nsync_dll_splice_after_ (nsync_dll_element_ *p, nsync_dll_element_ *n) {
nsync_dll_element_ *p_2nd = p->next;
nsync_dll_element_ *n_last = n->prev;
p->next = n; /* n follows p */
n->prev = p;
n_last->next = p_2nd; /* remainder of p-list follows last of n-list */
p_2nd->prev = n_last;
}
/* Make element *e the first element of list, and return
the list. The resulting list will have *e as its first element, followed by
any elements in the same list as *e, followed by the elements that were
previously in list. Requires that *e not be in list. If e==NULL, list is
returned unchanged.
Suppose the e list is e->e_2nd->e_mid->e_last->e.
Recall that a head "list" points to the last element of its list.
If list is initially null, then the outcome is:
result = e_last->e->e_2nd->e_mid->e_last
If list is initially list->list_last->list_1st->list_mid->list_last,
then the outcome is:
result = list_last->e->e_2nd->e_mid->e_last->list_1st->list_mid->list_last
*/
nsync_dll_list_ nsync_dll_make_first_in_list_ (nsync_dll_list_ list, nsync_dll_element_ *e) {
if (e != NULL) {
if (list == NULL) {
list = e->prev; /*e->prev is e_last*/
} else {
nsync_dll_splice_after_ (list, e);
}
}
return (list);
}
/* Make element *e the last element of list, and return
the list. The resulting list will have *e as its last element, preceded by
any elements in the same list as *e, preceded by the elements that were
previously in list. Requires that *e not be in list. If e==NULL, list is
returned unchanged. */
nsync_dll_list_ nsync_dll_make_last_in_list_ (nsync_dll_list_ list, nsync_dll_element_ *e) {
if (e != NULL) {
nsync_dll_make_first_in_list_ (list, e->next);
list = e;
}
return (list);
}
/* Return a pointer to the first element of list, or NULL if list is empty. */
nsync_dll_element_ *nsync_dll_first_ (nsync_dll_list_ list) {
nsync_dll_element_ *first = NULL;
if (list != NULL) {
first = list->next;
}
return (first);
}
/* Return a pointer to the last element of list, or NULL if list is empty. */
nsync_dll_element_ *nsync_dll_last_ (nsync_dll_list_ list) {
return (list);
}
/* Return a pointer to the next element of list following *e,
or NULL if there is no such element. */
nsync_dll_element_ *nsync_dll_next_ (nsync_dll_list_ list, nsync_dll_element_ *e) {
nsync_dll_element_ *next = NULL;
if (e != list) {
next = e->next;
}
return (next);
}
/* Return a pointer to the element of list preceding *e,
   or NULL if there is no such element. */
nsync_dll_element_ *nsync_dll_prev_ (nsync_dll_list_ list, nsync_dll_element_ *e) {
nsync_dll_element_ *prev = NULL;
if (e != list->next) {
prev = e->prev;
}
return (prev);
}

third_party/nsync/dll.h vendored Normal file
@@ -0,0 +1,69 @@
#ifndef NSYNC_DLL_H_
#define NSYNC_DLL_H_
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
/* A nsync_dll_element_ represents an element of a doubly-linked list of
waiters. */
typedef struct nsync_dll_element_s_ {
struct nsync_dll_element_s_ *next;
struct nsync_dll_element_s_ *prev;
/* points to the struct this nsync_dll struct is embedded in. */
void *container;
} nsync_dll_element_;
/* A nsync_dll_list_ represents a list of nsync_dll_elements_. */
typedef nsync_dll_element_ *nsync_dll_list_; /* last elem of circular list; nil
=> empty; first is x.next. */
/* Initialize *e. */
void nsync_dll_init_(nsync_dll_element_ *e, void *container);
/* Return whether list is empty. */
int nsync_dll_is_empty_(nsync_dll_list_ list);
/* Remove *e from list, and return the new list. */
nsync_dll_list_ nsync_dll_remove_(nsync_dll_list_ list, nsync_dll_element_ *e);
/* Cause element *n and its successors to come after element *p.
Requires n and p are non-NULL and do not point at elements of the
same list. */
void nsync_dll_splice_after_(nsync_dll_element_ *p, nsync_dll_element_ *n);
/* Make element *e the first element of list, and return the list. The
resulting list will have *e as its first element, followed by any
elements in the same list as *e, followed by the elements that were
previously in list. Requires that *e not be in list. If e==NULL, list
is returned unchanged. */
nsync_dll_list_ nsync_dll_make_first_in_list_(nsync_dll_list_ list,
nsync_dll_element_ *e);
/* Make element *e the last element of list, and return the list. The
resulting list will have *e as its last element, preceded by any
elements in the same list as *e, preceded by the elements that were
previously in list. Requires that *e not be in list. If e==NULL, list
is returned unchanged. */
nsync_dll_list_ nsync_dll_make_last_in_list_(nsync_dll_list_ list,
nsync_dll_element_ *e);
/* Return a pointer to the first element of list, or NULL if list is
* empty. */
nsync_dll_element_ *nsync_dll_first_(nsync_dll_list_ list);
/* Return a pointer to the last element of list, or NULL if list is
* empty. */
nsync_dll_element_ *nsync_dll_last_(nsync_dll_list_ list);
/* Return a pointer to the next element of list following *e, or NULL if
there is no such element. */
nsync_dll_element_ *nsync_dll_next_(nsync_dll_list_ list,
nsync_dll_element_ *e);
/* Return a pointer to the element of list preceding *e, or
   NULL if there is no such element. */
nsync_dll_element_ *nsync_dll_prev_(nsync_dll_list_ list,
nsync_dll_element_ *e);
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_DLL_H_ */

third_party/nsync/futex.c vendored Normal file
@@ -0,0 +1,124 @@
/*-*- mode:c;indent-tabs-mode:t;c-basic-offset:8;tab-width:8;coding:utf-8 -*-│
vi: set et ft=c ts=8 tw=8 fenc=utf-8 :vi
Copyright 2022 Justine Alexandra Roberts Tunney
Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted, provided that the
above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE
AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL
DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
*/
#include "libc/calls/strace.internal.h"
#include "libc/calls/struct/timespec.internal.h"
#include "libc/dce.h"
#include "libc/errno.h"
#include "libc/intrin/describeflags.internal.h"
#include "libc/sysv/consts/futex.h"
#include "libc/thread/thread.h"
#include "third_party/nsync/common.internal.h"
#include "third_party/nsync/futex.internal.h"
// clang-format off
/* futex() polyfill w/ sched_yield() fallback */
#define FUTEX_WAIT_BITS_ FUTEX_BITSET_MATCH_ANY
int _futex (int *, int, int, const struct timespec *, int *, int);
static int FUTEX_WAIT_;
static int FUTEX_WAKE_;
static int FUTEX_PRIVATE_FLAG_;
static bool FUTEX_IS_SUPPORTED;
bool FUTEX_TIMEOUT_IS_ABSOLUTE;
__attribute__((__constructor__)) static void sync_futex_init_ (void) {
int x = 0;
FUTEX_WAKE_ = FUTEX_WAKE;
if (IsLinux () &&
_futex (&x, FUTEX_WAIT_BITSET, 1, 0, 0,
FUTEX_BITSET_MATCH_ANY) == -EAGAIN) {
FUTEX_WAIT_ = FUTEX_WAIT_BITSET;
FUTEX_TIMEOUT_IS_ABSOLUTE = true;
} else {
FUTEX_WAIT_ = FUTEX_WAIT;
}
if (IsOpenbsd () ||
(IsLinux () &&
!_futex (&x, FUTEX_WAKE_PRIVATE, 1, 0, 0, 0))) {
FUTEX_PRIVATE_FLAG_ = FUTEX_PRIVATE_FLAG;
}
// In our testing, we found that the monotonic clock on various
// popular systems (such as Linux, and some BSD variants) was no
// better behaved than the realtime clock, and routinely took
// large steps backwards, especially on multiprocessors. Given
// that "monotonic" doesn't seem to mean what it says,
// implementers of nsync_time might consider retaining the
// simplicity of a single epoch within an address space, by
// configuring any time synchronization mechanism (like ntp) to
// adjust for leap seconds by adjusting the rate, rather than
// with a backwards step.
if (IsLinux () &&
_futex (&x, FUTEX_WAIT_BITSET | FUTEX_CLOCK_REALTIME,
1, 0, 0, FUTEX_BITSET_MATCH_ANY) == -EAGAIN) {
FUTEX_WAIT_ |= FUTEX_CLOCK_REALTIME;
}
FUTEX_IS_SUPPORTED = IsLinux() || IsOpenbsd();
}
int nsync_futex_wait_ (int *p, int expect, char pshare, struct timespec *timeout) {
int rc, op;
if (FUTEX_IS_SUPPORTED) {
op = FUTEX_WAIT_;
if (pshare == PTHREAD_PROCESS_PRIVATE) {
op |= FUTEX_PRIVATE_FLAG_;
}
rc = _futex (p, op, expect, timeout, 0, FUTEX_WAIT_BITS_);
if (IsOpenbsd() && rc > 0) {
// [jart] openbsd does this without setting carry flag
rc = -rc;
}
STRACE("futex(%t, %s, %d, %s) → %s",
p, DescribeFutexOp(op), expect,
DescribeTimespec(0, timeout), DescribeFutexResult(rc));
} else {
nsync_yield_ ();
if (timeout) {
rc = -ETIMEDOUT;
} else {
rc = 0;
}
}
return rc;
}
int nsync_futex_wake_ (int *p, int count, char pshare) {
int rc, op;
int wake (void *, int, int) asm ("_futex");
if (FUTEX_IS_SUPPORTED) {
op = FUTEX_WAKE_;
if (pshare == PTHREAD_PROCESS_PRIVATE) {
op |= FUTEX_PRIVATE_FLAG_;
}
rc = wake (p, op, count);
STRACE("futex(%t, %s, %d) → %s", p,
DescribeFutexOp(op),
count, DescribeFutexResult(rc));
} else {
nsync_yield_ ();
rc = 0;
}
return rc;
}

third_party/nsync/futex.internal.h vendored Normal file
@@ -0,0 +1,15 @@
#ifndef NSYNC_FUTEX_INTERNAL_H_
#define NSYNC_FUTEX_INTERNAL_H_
#include "libc/calls/struct/timespec.h"
#include "libc/dce.h"
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
extern bool FUTEX_TIMEOUT_IS_ABSOLUTE;
int nsync_futex_wake_(int *, int, char);
int nsync_futex_wait_(int *, int, char, struct timespec *);
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_FUTEX_INTERNAL_H_ */

third_party/nsync/malloc.c vendored Normal file
@@ -0,0 +1,49 @@
/*-*- mode:c;indent-tabs-mode:t;c-basic-offset:8;tab-width:8;coding:utf-8 -*-│
vi: set et ft=c ts=8 tw=8 fenc=utf-8 :vi
Copyright 2022 Justine Alexandra Roberts Tunney
Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted, provided that the
above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE
AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL
DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
*/
#include "libc/atomic.h"
#include "libc/calls/extend.internal.h"
#include "libc/intrin/atomic.h"
#include "libc/macros.internal.h"
#include "third_party/nsync/common.internal.h"
#include "third_party/nsync/malloc.internal.h"
// clang-format off
static char *nsync_malloc_endptr_;
static size_t nsync_malloc_total_;
static atomic_char nsync_malloc_lock_;
/* nsync_malloc_() is a malloc-like routine used by mutex and condition
variable code to allocate waiter structs. This allows *NSYNC mutexes
to be used by malloc(), by providing another, simpler allocator here.
The intent is that the implicit NULL value here can be overridden by
a client declaration that uses an initializer. */
void *nsync_malloc_ (size_t size) {
char *start;
size_t offset;
size = ROUNDUP (size, __BIGGEST_ALIGNMENT__);
while (atomic_exchange (&nsync_malloc_lock_, 1)) nsync_yield_ ();
offset = nsync_malloc_total_;
nsync_malloc_total_ += size;
start = (char *) 0x6fc000040000;
if (!nsync_malloc_endptr_) nsync_malloc_endptr_ = start;
nsync_malloc_endptr_ = _extend (start, nsync_malloc_total_,
nsync_malloc_endptr_, 0x6fcfffff0000);
atomic_store_explicit (&nsync_malloc_lock_, 0, memory_order_relaxed);
return start + offset;
}

third_party/nsync/malloc.internal.h vendored Normal file
@@ -0,0 +1,10 @@
#ifndef NSYNC_MALLOC_INTERNAL_H_
#define NSYNC_MALLOC_INTERNAL_H_
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
void *nsync_malloc_(size_t);
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_MALLOC_INTERNAL_H_ */

third_party/nsync/mu.c vendored Normal file
@@ -0,0 +1,547 @@
/*-*- mode:c;indent-tabs-mode:t;c-basic-offset:8;tab-width:8;coding:utf-8 -*-│
vi: set et ft=c ts=8 tw=8 fenc=utf-8 :vi
Copyright 2016 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0 │
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include "libc/str/str.h"
#include "third_party/nsync/atomic.h"
#include "third_party/nsync/common.internal.h"
#include "third_party/nsync/dll.h"
#include "third_party/nsync/mu_semaphore.h"
#include "third_party/nsync/wait_s.internal.h"
asm(".ident\t\"\\n\\n\
*NSYNC (Apache 2.0)\\n\
Copyright 2016 Google, Inc.\\n\
https://github.com/google/nsync\"");
// clang-format off
/* Initialize *mu. */
void nsync_mu_init (nsync_mu *mu) {
memset ((void *) mu, 0, sizeof (*mu));
}
/* Release the mutex spinlock. */
static void mu_release_spinlock (nsync_mu *mu) {
uint32_t old_word = ATM_LOAD (&mu->word);
while (!ATM_CAS_REL (&mu->word, old_word, old_word & ~MU_SPINLOCK)) {
old_word = ATM_LOAD (&mu->word);
}
}
/* Lock *mu using the specified lock_type, waiting on *w if necessary.
"clear" should be zero if the thread has not previously slept on *mu, and
MU_DESIG_WAKER if it has; this represents bits that nsync_mu_lock_slow_() must clear when
it either acquires or sleeps on *mu. The caller owns *w on return; it is in a valid
state to be returned to the free pool. */
void nsync_mu_lock_slow_ (nsync_mu *mu, waiter *w, uint32_t clear, lock_type *l_type) {
uint32_t zero_to_acquire;
uint32_t wait_count;
uint32_t long_wait;
unsigned attempts = 0; /* attempt count; used for spinloop backoff */
w->cv_mu = NULL; /* not a cv wait */
w->cond.f = NULL; /* Not using a conditional critical section. */
w->cond.v = NULL;
w->cond.eq = NULL;
w->l_type = l_type;
zero_to_acquire = l_type->zero_to_acquire;
if (clear != 0) {
/* Only the constraints of mutual exclusion should stop a designated waker. */
zero_to_acquire &= ~(MU_WRITER_WAITING | MU_LONG_WAIT);
}
wait_count = 0; /* number of times we waited, and were woken. */
long_wait = 0; /* set to MU_LONG_WAIT when wait_count gets large */
for (;;) {
uint32_t old_word = ATM_LOAD (&mu->word);
if ((old_word & zero_to_acquire) == 0) {
/* lock can be acquired; try to acquire, possibly
clearing MU_DESIG_WAKER and MU_LONG_WAIT. */
if (ATM_CAS_ACQ (&mu->word, old_word,
(old_word+l_type->add_to_acquire) &
~(clear|long_wait|l_type->clear_on_acquire))) {
return;
}
} else if ((old_word&MU_SPINLOCK) == 0 &&
ATM_CAS_ACQ (&mu->word, old_word,
(old_word|MU_SPINLOCK|long_wait|
l_type->set_when_waiting) & ~(clear | MU_ALL_FALSE))) {
/* Spinlock is now held, and lock is held by someone
else; MU_WAITING has also been set; queue ourselves.
There's no need to adjust same_condition here,
because w.condition==NULL. */
ATM_STORE (&w->nw.waiting, 1);
if (wait_count == 0) {
/* first wait goes to end of queue */
mu->waiters = nsync_dll_make_last_in_list_ (mu->waiters,
&w->nw.q);
} else {
/* subsequent waits go to front of queue */
mu->waiters = nsync_dll_make_first_in_list_ (mu->waiters,
&w->nw.q);
}
/* Release spinlock. Cannot use a store here, because
the current thread does not hold the mutex. If
another thread were a designated waker, the mutex
holder could be concurrently unlocking, even though
we hold the spinlock. */
mu_release_spinlock (mu);
/* wait until awoken. */
while (ATM_LOAD_ACQ (&w->nw.waiting) != 0) { /* acquire load */
nsync_mu_semaphore_p (&w->sem);
}
wait_count++;
/* If the thread has been woken more than this many
times, and still not acquired, it sets the
MU_LONG_WAIT bit to prevent threads that have not
waited from acquiring. This is the starvation
avoidance mechanism. The number is fairly high so
that we continue to benefit from the throughput of
not having running threads wait unless absolutely
necessary. */
if (wait_count == LONG_WAIT_THRESHOLD) { /* repeatedly woken */
long_wait = MU_LONG_WAIT; /* force others to wait at least once */
}
attempts = 0;
clear = MU_DESIG_WAKER;
/* Threads that have been woken at least once don't care
about waiting writers or long waiters. */
zero_to_acquire &= ~(MU_WRITER_WAITING | MU_LONG_WAIT);
}
attempts = nsync_spin_delay_ (attempts);
}
}
/* Attempt to acquire *mu in writer mode without blocking, and return non-zero
iff successful. Return non-zero with high probability if *mu was free on
entry. */
int nsync_mu_trylock (nsync_mu *mu) {
int result;
IGNORE_RACES_START ();
if (ATM_CAS_ACQ (&mu->word, 0, MU_WADD_TO_ACQUIRE)) { /* acquire CAS */
result = 1;
} else {
uint32_t old_word = ATM_LOAD (&mu->word);
result = ((old_word & MU_WZERO_TO_ACQUIRE) == 0 &&
ATM_CAS_ACQ (&mu->word, old_word,
(old_word + MU_WADD_TO_ACQUIRE) & ~MU_WCLEAR_ON_ACQUIRE));
}
IGNORE_RACES_END ();
return (result);
}
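The zero-to-locked fast path used by nsync_mu_trylock() and nsync_mu_lock() can be sketched with C11 atomics. MU_WLOCK_BIT here is an invented stand-in; nsync packs more state into the word:

```c
#include <stdatomic.h>
#include <stdint.h>

#define MU_WLOCK_BIT 0x1u  /* hypothetical write-lock bit */

/* Fast-path trylock: succeed only if the word is exactly zero (free).
   Returns 1 on success, 0 if any state bit is set. */
static int word_trylock (_Atomic uint32_t *word) {
	uint32_t expected = 0;
	return atomic_compare_exchange_strong_explicit (
		word, &expected, MU_WLOCK_BIT,
		memory_order_acquire, memory_order_relaxed);
}
```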
/* Block until *mu is free and then acquire it in writer mode. */
void nsync_mu_lock (nsync_mu *mu) {
IGNORE_RACES_START ();
if (!ATM_CAS_ACQ (&mu->word, 0, MU_WADD_TO_ACQUIRE)) { /* acquire CAS */
uint32_t old_word = ATM_LOAD (&mu->word);
if ((old_word&MU_WZERO_TO_ACQUIRE) != 0 ||
!ATM_CAS_ACQ (&mu->word, old_word,
(old_word+MU_WADD_TO_ACQUIRE) & ~MU_WCLEAR_ON_ACQUIRE)) {
waiter *w = nsync_waiter_new_ ();
nsync_mu_lock_slow_ (mu, w, 0, nsync_writer_type_);
nsync_waiter_free_ (w);
}
}
IGNORE_RACES_END ();
}
/* Attempt to acquire *mu in reader mode without blocking, and return non-zero
iff successful. Return non-zero with high probability if *mu was free on
entry. It may fail to acquire if a writer is waiting, to avoid starvation.
*/
int nsync_mu_rtrylock (nsync_mu *mu) {
int result;
IGNORE_RACES_START ();
if (ATM_CAS_ACQ (&mu->word, 0, MU_RADD_TO_ACQUIRE)) { /* acquire CAS */
result = 1;
} else {
uint32_t old_word = ATM_LOAD (&mu->word);
result = ((old_word&MU_RZERO_TO_ACQUIRE) == 0 &&
ATM_CAS_ACQ (&mu->word, old_word,
(old_word+MU_RADD_TO_ACQUIRE) & ~MU_RCLEAR_ON_ACQUIRE));
}
IGNORE_RACES_END ();
return (result);
}
/* Block until *mu can be acquired in reader mode and then acquire it. */
void nsync_mu_rlock (nsync_mu *mu) {
IGNORE_RACES_START ();
if (!ATM_CAS_ACQ (&mu->word, 0, MU_RADD_TO_ACQUIRE)) { /* acquire CAS */
uint32_t old_word = ATM_LOAD (&mu->word);
if ((old_word&MU_RZERO_TO_ACQUIRE) != 0 ||
!ATM_CAS_ACQ (&mu->word, old_word,
(old_word+MU_RADD_TO_ACQUIRE) & ~MU_RCLEAR_ON_ACQUIRE)) {
waiter *w = nsync_waiter_new_ ();
nsync_mu_lock_slow_ (mu, w, 0, nsync_reader_type_);
nsync_waiter_free_ (w);
}
}
IGNORE_RACES_END ();
}
/* Invoke the condition associated with *p, which is an element of
a "waiter" list. */
static int condition_true (nsync_dll_element_ *p) {
return ((*DLL_WAITER (p)->cond.f) (DLL_WAITER (p)->cond.v));
}
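condition_true() simply invokes the predicate stored in the waiter. A self-contained miniature of that cond.f/cond.v pairing (the struct and names here are illustrative, not nsync's):

```c
/* A waiter's condition is a function pointer plus an opaque argument. */
struct mini_cond {
	int (*f) (const void *);  /* predicate */
	const void *v;            /* predicate argument */
};

/* Example predicate: true iff the pointed-to int is zero. */
static int is_zero (const void *p) {
	return *(const int *) p == 0;
}

/* Evaluate a waiter's condition, as condition_true() does above. */
static int mini_cond_true (const struct mini_cond *c) {
	return (*c->f) (c->v);
}
```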
/* If *p is an element of waiter_list (a list of "waiter" structs), return a
pointer to the next element of the list that has a different condition. */
static nsync_dll_element_ *skip_past_same_condition (
nsync_dll_list_ waiter_list, nsync_dll_element_ *p) {
nsync_dll_element_ *next;
nsync_dll_element_ *last_with_same_condition =
&DLL_WAITER_SAMECOND (DLL_WAITER (p)->same_condition.prev)->nw.q;
if (last_with_same_condition != p && last_with_same_condition != p->prev) {
/* First in set with same condition, so skip to end. */
next = nsync_dll_next_ (waiter_list, last_with_same_condition);
} else {
next = nsync_dll_next_ (waiter_list, p);
}
return (next);
}
/* Merge the same_condition lists of *p and *n if they have the same non-NULL
condition. */
void nsync_maybe_merge_conditions_ (nsync_dll_element_ *p, nsync_dll_element_ *n) {
if (p != NULL && n != NULL &&
WAIT_CONDITION_EQ (&DLL_WAITER (p)->cond, &DLL_WAITER (n)->cond)) {
nsync_dll_splice_after_ (&DLL_WAITER (p)->same_condition,
&DLL_WAITER (n)->same_condition);
}
}
/* Remove element *e from nsync_mu waiter queue mu_queue, fixing
up the same_condition list by merging the lists on either side if possible.
Also increment the waiter's remove_count. */
nsync_dll_list_ nsync_remove_from_mu_queue_ (nsync_dll_list_ mu_queue, nsync_dll_element_ *e) {
/* Record previous and next elements in the original queue. */
nsync_dll_element_ *prev = e->prev;
nsync_dll_element_ *next = e->next;
uint32_t old_value;
/* Remove. */
mu_queue = nsync_dll_remove_ (mu_queue, e);
do {
old_value = ATM_LOAD (&DLL_WAITER (e)->remove_count);
} while (!ATM_CAS (&DLL_WAITER (e)->remove_count, old_value, old_value+1));
if (!nsync_dll_is_empty_ (mu_queue)) {
/* Fix up same_condition. */
nsync_dll_element_ *e_same_condition = &DLL_WAITER (e)->same_condition;
if (e_same_condition->next != e_same_condition) {
/* *e is linked to a same_condition neighbour---just remove it. */
e_same_condition->next->prev = e_same_condition->prev;
e_same_condition->prev->next = e_same_condition->next;
e_same_condition->next = e_same_condition;
e_same_condition->prev = e_same_condition;
} else if (prev != nsync_dll_last_ (mu_queue)) {
/* Merge the new neighbours together if we can. */
nsync_maybe_merge_conditions_ (prev, next);
}
}
return (mu_queue);
}
/* Slow path for unlocking *mu: wake one or more waiters as appropriate.
Called with *mu held in mode l_type. */
void nsync_mu_unlock_slow_ (nsync_mu *mu, lock_type *l_type) {
unsigned attempts = 0; /* attempt count; used for backoff */
for (;;) {
uint32_t old_word = ATM_LOAD (&mu->word);
int testing_conditions = ((old_word & MU_CONDITION) != 0);
uint32_t early_release_mu = l_type->add_to_acquire;
uint32_t late_release_mu = 0;
if (testing_conditions) {
/* Convert to a writer lock, and release later.
- A writer lock is currently needed to test conditions
because exclusive access is needed to the list to
allow modification. The spinlock cannot be used
to achieve that, because an internal lock should not
be held when calling the external predicates.
- We must test conditions even though a reader region
cannot have made any new ones true because some
might have been true before the reader region started.
The MU_ALL_FALSE test below shortcuts the case where
the conditions are known all to be false. */
early_release_mu = l_type->add_to_acquire - MU_WLOCK;
late_release_mu = MU_WLOCK;
}
if ((old_word&MU_WAITING) == 0 || (old_word&MU_DESIG_WAKER) != 0 ||
(old_word & MU_RLOCK_FIELD) > MU_RLOCK ||
(old_word & (MU_RLOCK|MU_ALL_FALSE)) == (MU_RLOCK|MU_ALL_FALSE)) {
/* no one to wake, there's a designated waker waking
up, there are still readers, or it's a reader and all waiters
have false conditions */
if (ATM_CAS_REL (&mu->word, old_word,
(old_word - l_type->add_to_acquire) &
~l_type->clear_on_uncontended_release)) {
return;
}
} else if ((old_word&MU_SPINLOCK) == 0 &&
ATM_CAS_ACQ (&mu->word, old_word,
(old_word-early_release_mu)|MU_SPINLOCK|MU_DESIG_WAKER)) {
nsync_dll_list_ wake;
lock_type *wake_type;
uint32_t clear_on_release;
uint32_t set_on_release;
/* The spinlock is now held, and we've set the
designated wake flag, since we're likely to wake a
thread that will become that designated waker. If
there are conditions to check, the mutex itself is
still held. */
nsync_dll_element_ *p = NULL;
nsync_dll_element_ *next = NULL;
/* Swap the entire mu->waiters list into the local
"new_waiters" list. This gives us exclusive access
to the list, even if we unlock the spinlock, which
we may do if checking conditions. The loop below
will grab more new waiters that arrived while we
were checking conditions, and terminates only if no
new waiters arrive in one loop iteration. */
nsync_dll_list_ waiters = NULL;
nsync_dll_list_ new_waiters = mu->waiters;
mu->waiters = NULL;
/* Remove a waiter from the queue, if possible. */
wake = NULL; /* waiters to wake. */
wake_type = NULL; /* type of waiter(s) on wake, or NULL if wake is empty. */
clear_on_release = MU_SPINLOCK;
set_on_release = MU_ALL_FALSE;
while (!nsync_dll_is_empty_ (new_waiters)) { /* some new waiters to consider */
p = nsync_dll_first_ (new_waiters);
if (testing_conditions) {
/* Should we continue to test conditions? */
if (wake_type == nsync_writer_type_) {
/* No, because we're already waking a writer,
and need wake no others. */
testing_conditions = 0;
} else if (wake_type == NULL &&
DLL_WAITER (p)->l_type != nsync_reader_type_ &&
DLL_WAITER (p)->cond.f == NULL) {
/* No, because we've woken no one, but the
first waiter is a writer with no condition,
so we will certainly wake it, and need wake
no others. */
testing_conditions = 0;
}
}
/* If testing waiters' conditions, release the
spinlock while still holding the write lock.
This is so that the spinlock is not held
while the conditions are evaluated. */
if (testing_conditions) {
mu_release_spinlock (mu);
}
/* Process the new waiters picked up in this iteration of the
"while (!nsync_dll_is_empty_ (new_waiters))" loop,
and stop looking when we run out of waiters, or we find
a writer to wake up. */
while (p != NULL && wake_type != nsync_writer_type_) {
int p_has_condition;
next = nsync_dll_next_ (new_waiters, p);
p_has_condition = (DLL_WAITER (p)->cond.f != NULL);
if (p_has_condition && !testing_conditions) {
nsync_panic_ ("checking a waiter condition "
"while unlocked\n");
}
if (p_has_condition && !condition_true (p)) {
/* condition is false */
/* skip to the end of the same_condition group. */
next = skip_past_same_condition (new_waiters, p);
} else if (wake_type == NULL ||
DLL_WAITER (p)->l_type == nsync_reader_type_) {
/* Wake this thread. */
new_waiters = nsync_remove_from_mu_queue_ (
new_waiters, p);
wake = nsync_dll_make_last_in_list_ (wake, p);
wake_type = DLL_WAITER (p)->l_type;
} else {
/* Failing to wake a writer
that could acquire if it
were first. */
set_on_release |= MU_WRITER_WAITING;
set_on_release &= ~MU_ALL_FALSE;
}
p = next;
}
if (p != NULL) {
/* Didn't search to end of list, so can't be sure
all conditions are false. */
set_on_release &= ~MU_ALL_FALSE;
}
/* If testing waiters' conditions, reacquire the spinlock
released above. */
if (testing_conditions) {
nsync_spin_test_and_set_ (&mu->word, MU_SPINLOCK,
MU_SPINLOCK, 0);
}
/* add the new_waiters to the last of the waiters. */
nsync_maybe_merge_conditions_ (nsync_dll_last_ (waiters),
nsync_dll_first_ (new_waiters));
waiters = nsync_dll_make_last_in_list_ (waiters,
nsync_dll_last_ (new_waiters));
/* Pick up the next set of new waiters. */
new_waiters = mu->waiters;
mu->waiters = NULL;
}
/* Return the local waiter list to *mu. */
mu->waiters = waiters;
if (nsync_dll_is_empty_ (wake)) {
/* not waking a waiter => no designated waker */
clear_on_release |= MU_DESIG_WAKER;
}
if ((set_on_release & MU_ALL_FALSE) == 0) {
/* If not explicitly setting MU_ALL_FALSE, clear it. */
clear_on_release |= MU_ALL_FALSE;
}
if (nsync_dll_is_empty_ (mu->waiters)) {
/* no waiters left */
clear_on_release |= MU_WAITING | MU_WRITER_WAITING |
MU_CONDITION | MU_ALL_FALSE;
}
/* Release the spinlock, and possibly the lock if
late_release_mu is non-zero. Other bits are set or
cleared according to whether we woke any threads,
whether any waiters remain, and whether any of them
are writers. */
old_word = ATM_LOAD (&mu->word);
while (!ATM_CAS_REL (&mu->word, old_word,
((old_word-late_release_mu)|set_on_release) &
~clear_on_release)) { /* release CAS */
old_word = ATM_LOAD (&mu->word);
}
/* Wake the waiters. */
for (p = nsync_dll_first_ (wake); p != NULL; p = next) {
next = nsync_dll_next_ (wake, p);
wake = nsync_dll_remove_ (wake, p);
ATM_STORE_REL (&DLL_NSYNC_WAITER (p)->waiting, 0);
nsync_mu_semaphore_v (&DLL_WAITER (p)->sem);
}
return;
}
attempts = nsync_spin_delay_ (attempts);
}
}
/* Unlock *mu, which must be held in write mode, and wake waiters, if appropriate. */
void nsync_mu_unlock (nsync_mu *mu) {
IGNORE_RACES_START ();
/* C is not a garbage-collected language, so we cannot release until we
can be sure that we will not have to touch the mutex again to wake a
waiter. Another thread could acquire, decrement a reference count
and deallocate the mutex before the current thread touched the mutex
word again. */
if (!ATM_CAS_REL (&mu->word, MU_WLOCK, 0)) {
uint32_t old_word = ATM_LOAD (&mu->word);
/* Clear MU_ALL_FALSE because the critical section we're just
leaving may have made some conditions true. */
uint32_t new_word = (old_word - MU_WLOCK) & ~MU_ALL_FALSE;
/* Sanity check: mutex must be held in write mode, and there
must be no readers. */
if ((new_word & (MU_RLOCK_FIELD | MU_WLOCK)) != 0) {
if ((old_word & MU_RLOCK_FIELD) != 0) {
nsync_panic_ ("attempt to nsync_mu_unlock() an nsync_mu "
"held in read mode\n");
} else {
nsync_panic_ ("attempt to nsync_mu_unlock() an nsync_mu "
"not held in write mode\n");
}
} else if ((old_word & (MU_WAITING|MU_DESIG_WAKER)) == MU_WAITING ||
!ATM_CAS_REL (&mu->word, old_word, new_word)) {
/* There are waiters and no designated waker, or
our initial CAS attempt failed, so use the slow path. */
nsync_mu_unlock_slow_ (mu, nsync_writer_type_);
}
}
IGNORE_RACES_END ();
}
/* Unlock *mu, which must be held in read mode, and wake waiters, if appropriate. */
void nsync_mu_runlock (nsync_mu *mu) {
IGNORE_RACES_START ();
/* See comment in nsync_mu_unlock(). */
if (!ATM_CAS_REL (&mu->word, MU_RLOCK, 0)) {
uint32_t old_word = ATM_LOAD (&mu->word);
/* Sanity check: mutex must not be held in write mode and
reader count must not be 0. */
if (((old_word ^ MU_WLOCK) & (MU_WLOCK | MU_RLOCK_FIELD)) == 0) {
if ((old_word & MU_WLOCK) != 0) {
nsync_panic_ ("attempt to nsync_mu_runlock() an nsync_mu "
"held in write mode\n");
} else {
nsync_panic_ ("attempt to nsync_mu_runlock() an nsync_mu "
"not held in read mode\n");
}
} else if ((old_word & (MU_WAITING | MU_DESIG_WAKER)) == MU_WAITING &&
(old_word & (MU_RLOCK_FIELD|MU_ALL_FALSE)) == MU_RLOCK) {
/* There are waiters and no designated waker, the last
reader is unlocking, and not all waiters have a
false condition. So we must take the slow path to
attempt to wake a waiter. */
nsync_mu_unlock_slow_ (mu, nsync_reader_type_);
} else if (!ATM_CAS_REL (&mu->word, old_word, old_word - MU_RLOCK)) {
/* CAS attempt failed, so take slow path. */
nsync_mu_unlock_slow_ (mu, nsync_reader_type_);
}
}
IGNORE_RACES_END ();
}
/* Abort if *mu is not held in write mode. */
void nsync_mu_assert_held (const nsync_mu *mu) {
IGNORE_RACES_START ();
if ((ATM_LOAD (&mu->word) & MU_WHELD_IF_NON_ZERO) == 0) {
nsync_panic_ ("nsync_mu not held in write mode\n");
}
IGNORE_RACES_END ();
}
/* Abort if *mu is not held in read or write mode. */
void nsync_mu_rassert_held (const nsync_mu *mu) {
IGNORE_RACES_START ();
if ((ATM_LOAD (&mu->word) & MU_ANY_LOCK) == 0) {
nsync_panic_ ("nsync_mu not held in some mode\n");
}
IGNORE_RACES_END ();
}
/* Return whether *mu is held in read mode.
Requires that *mu is held in some mode. */
int nsync_mu_is_reader (const nsync_mu *mu) {
uint32_t word;
IGNORE_RACES_START ();
word = ATM_LOAD (&mu->word);
if ((word & MU_ANY_LOCK) == 0) {
nsync_panic_ ("nsync_mu not held in some mode\n");
}
IGNORE_RACES_END ();
return ((word & MU_WLOCK) == 0);
}
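The read-mode paths above acquire by adding MU_RADD_TO_ACQUIRE, packing a reader count into a field of the mutex word. A minimal sketch of that packed-count arithmetic, with invented field constants (nsync's real layout differs):

```c
#include <stdint.h>

#define RLOCK_ONE   (1u << 3)       /* hypothetical reader increment */
#define RLOCK_FIELD (0x1fffu << 3)  /* hypothetical reader-count mask */

/* Extract the reader count from a packed lock word. Each read
   acquisition adds RLOCK_ONE; each read release subtracts it. */
static uint32_t reader_count (uint32_t word) {
	return (word & RLOCK_FIELD) / RLOCK_ONE;
}
```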
103
third_party/nsync/mu.h vendored Normal file
@ -0,0 +1,103 @@
#ifndef NSYNC_MU_H_
#define NSYNC_MU_H_
#include "third_party/nsync/atomic.h"
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
struct nsync_dll_element_s_;
/* An nsync_mu is a lock. If initialized to zero, it's valid and unlocked.
An nsync_mu can be "free", held by a single thread (aka fiber,
goroutine) in "write" (exclusive) mode, or by many threads in "read"
(shared) mode. A thread that acquires it should eventually release
it. It is illegal to acquire an nsync_mu in one thread and release it
in another. It is illegal for a thread to reacquire an nsync_mu while
holding it (even a second share of a "read" lock).
Example usage:
static struct foo {
nsync_mu mu; // protects invariant a+b==0 on fields below.
int a;
int b;
} p = { NSYNC_MU_INIT, 0, 0 };
// ....
nsync_mu_lock (&p.mu);
// The current thread now has exclusive access to p.a and p.b;
// invariant assumed true.
p.a++;
p.b--; // restore invariant p.a+p.b==0 before releasing p.mu
nsync_mu_unlock (&p.mu);
Mutexes can be used with condition variables; see nsync_cv.h.
nsync_mu_wait() and nsync_mu_wait_with_deadline() can be used instead
of condition variables. See nsync_mu_wait.h for more details. Example
use of nsync_mu_wait() to wait for p.a==0, using definition above:
int a_is_zero (const void *condition_arg) {
return (((const struct foo *)condition_arg)->a == 0);
}
...
nsync_mu_lock (&p.mu);
nsync_mu_wait (&p.mu, &a_is_zero, &p, NULL);
// The current thread now has exclusive access to
// p.a and p.b, and p.a==0.
...
nsync_mu_unlock (&p.mu);
*/
typedef struct nsync_mu_s_ {
nsync_atomic_uint32_ word; /* internal use only */
struct nsync_dll_element_s_ *waiters; /* internal use only */
} nsync_mu;
/* An nsync_mu should be zeroed to initialize, which can be accomplished
by initializing with static initializer NSYNC_MU_INIT, or by setting
the entire structure to all zeroes, or using nsync_mu_init(). */
#define NSYNC_MU_INIT \
{ NSYNC_ATOMIC_UINT32_INIT_, 0 }
void nsync_mu_init(nsync_mu *mu);
/* Block until *mu is free and then acquire it in writer mode. Requires
that the calling thread not already hold *mu in any mode. */
void nsync_mu_lock(nsync_mu *mu);
/* Unlock *mu, which must have been acquired in write mode by the
calling thread, and wake waiters, if appropriate. */
void nsync_mu_unlock(nsync_mu *mu);
/* Attempt to acquire *mu in writer mode without blocking, and return
non-zero iff successful. Return non-zero with high probability if *mu
was free on entry. */
int nsync_mu_trylock(nsync_mu *mu);
/* Block until *mu can be acquired in reader mode and then acquire it.
Requires that the calling thread not already hold *mu in any mode. */
void nsync_mu_rlock(nsync_mu *mu);
/* Unlock *mu, which must have been acquired in read mode by the calling
thread, and wake waiters, if appropriate. */
void nsync_mu_runlock(nsync_mu *mu);
/* Attempt to acquire *mu in reader mode without blocking, and return
non-zero iff successful. Return non-zero with high probability if *mu
was free on entry. Perhaps fail to acquire if a writer is waiting, to
avoid starvation. */
int nsync_mu_rtrylock(nsync_mu *mu);
/* May abort if *mu is not held in write mode by the calling thread. */
void nsync_mu_assert_held(const nsync_mu *mu);
/* May abort if *mu is not held in read or write mode
by the calling thread. */
void nsync_mu_rassert_held(const nsync_mu *mu);
/* Return whether *mu is held in read mode.
Requires that the calling thread holds *mu in some mode. */
int nsync_mu_is_reader(const nsync_mu *mu);
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_MU_H_ */
56
third_party/nsync/mu_semaphore.c vendored Normal file
@ -0,0 +1,56 @@
/*-*- mode:c;indent-tabs-mode:t;c-basic-offset:8;tab-width:8;coding:utf-8 -*-│
vi: set et ft=c ts=8 tw=8 fenc=utf-8 :vi
Copyright 2022 Justine Alexandra Roberts Tunney
Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted, provided that the
above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE
AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL
DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
*/
#include "libc/dce.h"
#include "third_party/nsync/mu_semaphore.h"
#include "third_party/nsync/mu_semaphore.internal.h"
// clang-format off
/* Initialize *s; the initial value is 0. */
void nsync_mu_semaphore_init (nsync_semaphore *s) {
if (!IsWindows ())
nsync_mu_semaphore_init_futex (s);
else
nsync_mu_semaphore_init_win32 (s);
}
/* Wait until the count of *s exceeds 0, and decrement it. */
void nsync_mu_semaphore_p (nsync_semaphore *s) {
if (!IsWindows ())
nsync_mu_semaphore_p_futex (s);
else
nsync_mu_semaphore_p_win32 (s);
}
/* Wait until one of:
the count of *s is non-zero, in which case decrement *s and return 0;
or abs_deadline expires, in which case return ETIMEDOUT. */
int nsync_mu_semaphore_p_with_deadline (nsync_semaphore *s, nsync_time abs_deadline) {
if (!IsWindows ())
return nsync_mu_semaphore_p_with_deadline_futex (s, abs_deadline);
else
return nsync_mu_semaphore_p_with_deadline_win32 (s, abs_deadline);
}
/* Ensure that the count of *s is at least 1. */
void nsync_mu_semaphore_v (nsync_semaphore *s) {
if (!IsWindows ())
nsync_mu_semaphore_v_futex (s);
else
nsync_mu_semaphore_v_win32 (s);
}
28
third_party/nsync/mu_semaphore.h vendored Normal file
@ -0,0 +1,28 @@
#ifndef NSYNC_SEM_H_
#define NSYNC_SEM_H_
#include "third_party/nsync/time.h"
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
typedef struct nsync_semaphore_s_ {
void *sem_space[32]; /* space used by implementation */
} nsync_semaphore;
/* Initialize *s; the initial value is 0. */
void nsync_mu_semaphore_init(nsync_semaphore *s);
/* Wait until the count of *s exceeds 0, and decrement it. */
void nsync_mu_semaphore_p(nsync_semaphore *s);
/* Wait until one of: the count of *s is non-zero, in which case
decrement *s and return 0; or abs_deadline expires, in which case
return ETIMEDOUT. */
int nsync_mu_semaphore_p_with_deadline(nsync_semaphore *s,
nsync_time abs_deadline);
/* Ensure that the count of *s is at least 1. */
void nsync_mu_semaphore_v(nsync_semaphore *s);
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_SEM_H_ */
20
third_party/nsync/mu_semaphore.internal.h vendored Normal file
@ -0,0 +1,20 @@
#ifndef NSYNC_MU_SEMAPHORE_INTERNAL_H_
#define NSYNC_MU_SEMAPHORE_INTERNAL_H_
#include "third_party/nsync/mu_semaphore.h"
#include "third_party/nsync/time.h"
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
void nsync_mu_semaphore_init_futex(nsync_semaphore *);
void nsync_mu_semaphore_p_futex(nsync_semaphore *);
int nsync_mu_semaphore_p_with_deadline_futex(nsync_semaphore *, nsync_time);
void nsync_mu_semaphore_v_futex(nsync_semaphore *);
void nsync_mu_semaphore_init_win32(nsync_semaphore *);
void nsync_mu_semaphore_p_win32(nsync_semaphore *);
int nsync_mu_semaphore_p_with_deadline_win32(nsync_semaphore *, nsync_time);
void nsync_mu_semaphore_v_win32(nsync_semaphore *);
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_MU_SEMAPHORE_INTERNAL_H_ */
124
third_party/nsync/mu_semaphore_futex.c vendored Normal file
@ -0,0 +1,124 @@
/*-*- mode:c;indent-tabs-mode:t;c-basic-offset:8;tab-width:8;coding:utf-8 -*-│
vi: set et ft=c ts=8 tw=8 fenc=utf-8 :vi
Copyright 2016 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0 │
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include "libc/errno.h"
#include "libc/str/str.h"
#include "libc/thread/thread.h"
#include "third_party/nsync/atomic.h"
#include "third_party/nsync/atomic.internal.h"
#include "third_party/nsync/futex.internal.h"
#include "third_party/nsync/mu_semaphore.h"
#include "third_party/nsync/mu_semaphore.internal.h"
asm(".ident\t\"\\n\\n\
*NSYNC (Apache 2.0)\\n\
Copyright 2016 Google, Inc.\\n\
https://github.com/google/nsync\"");
// clang-format off
/* Check that atomic operations on nsync_atomic_uint32_ can be applied to int. */
static const int assert_int_size = 1 /
(sizeof (assert_int_size) == sizeof (uint32_t) &&
sizeof (nsync_atomic_uint32_) == sizeof (uint32_t));
#define ASSERT(x) do { if (!(x)) { *(volatile int *)0 = 0; } } while (0)
struct futex {
int i; /* lo half=count; hi half=waiter count */
};
static nsync_semaphore *sem_big_enough_for_futex = (nsync_semaphore *) (uintptr_t)(1 /
(sizeof (struct futex) <= sizeof (*sem_big_enough_for_futex)));
/* Initialize *s; the initial value is 0. */
void nsync_mu_semaphore_init_futex (nsync_semaphore *s) {
struct futex *f = (struct futex *) s;
f->i = 0;
}
/* Wait until the count of *s exceeds 0, and decrement it. */
void nsync_mu_semaphore_p_futex (nsync_semaphore *s) {
struct futex *f = (struct futex *) s;
int i;
do {
i = ATM_LOAD ((nsync_atomic_uint32_ *) &f->i);
if (i == 0) {
int futex_result = nsync_futex_wait_ (&f->i, i, PTHREAD_PROCESS_PRIVATE, NULL);
ASSERT (futex_result == 0 ||
futex_result == -EINTR ||
futex_result == -EWOULDBLOCK);
}
} while (i == 0 || !ATM_CAS_ACQ ((nsync_atomic_uint32_ *) &f->i, i, i-1));
}
/* Wait until one of:
the count of *s is non-zero, in which case decrement *s and return 0;
or abs_deadline expires, in which case return ETIMEDOUT. */
int nsync_mu_semaphore_p_with_deadline_futex (nsync_semaphore *s, nsync_time abs_deadline) {
struct futex *f = (struct futex *)s;
int i;
int result = 0;
do {
i = ATM_LOAD ((nsync_atomic_uint32_ *) &f->i);
if (i == 0) {
int futex_result;
struct timespec ts_buf;
const struct timespec *ts = NULL;
if (nsync_time_cmp (abs_deadline, nsync_time_no_deadline) != 0) {
memset (&ts_buf, 0, sizeof (ts_buf));
if (FUTEX_TIMEOUT_IS_ABSOLUTE) {
ts_buf.tv_sec = NSYNC_TIME_SEC (abs_deadline);
ts_buf.tv_nsec = NSYNC_TIME_NSEC (abs_deadline);
} else {
nsync_time now;
now = nsync_time_now ();
if (nsync_time_cmp (now, abs_deadline) > 0) {
ts_buf.tv_sec = 0;
ts_buf.tv_nsec = 0;
} else {
nsync_time rel_deadline;
rel_deadline = nsync_time_sub (abs_deadline, now);
ts_buf.tv_sec = NSYNC_TIME_SEC (rel_deadline);
ts_buf.tv_nsec = NSYNC_TIME_NSEC (rel_deadline);
}
}
ts = &ts_buf;
}
futex_result = nsync_futex_wait_ (&f->i, i, PTHREAD_PROCESS_PRIVATE, ts);
ASSERT (futex_result == 0 ||
futex_result == -EINTR ||
futex_result == -ETIMEDOUT ||
futex_result == -EWOULDBLOCK);
/* Some systems don't wait as long as they are told. */
if (futex_result == -ETIMEDOUT &&
nsync_time_cmp (abs_deadline, nsync_time_now ()) <= 0) {
result = ETIMEDOUT;
}
}
} while (result == 0 && (i == 0 || !ATM_CAS_ACQ ((nsync_atomic_uint32_ *) &f->i, i, i - 1)));
return (result);
}
/* Ensure that the count of *s is at least 1. */
void nsync_mu_semaphore_v_futex (nsync_semaphore *s) {
struct futex *f = (struct futex *) s;
uint32_t old_value;
do {
old_value = ATM_LOAD ((nsync_atomic_uint32_ *) &f->i);
} while (!ATM_CAS_REL ((nsync_atomic_uint32_ *) &f->i, old_value, old_value+1));
ASSERT (nsync_futex_wake_ (&f->i, 1, PTHREAD_PROCESS_PRIVATE) >= 0);
}
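Stripped of the kernel futex wait and wake calls, the P and V operations above reduce to CAS loops on a counter. This busy-wait sketch (names are ours) keeps only that counting logic:

```c
#include <stdatomic.h>
#include <stdint.h>

/* V: increment the count with a release CAS loop. */
static void sem_v (_Atomic uint32_t *count) {
	uint32_t old = atomic_load (count);
	while (!atomic_compare_exchange_weak_explicit (
		       count, &old, old + 1,
		       memory_order_release, memory_order_relaxed)) {
		/* old refreshed by failed CAS; retry. */
	}
}

/* Try-P: decrement the count if non-zero. Returns 1 if decremented,
   0 if the count was zero (a real P would futex-wait at that point). */
static int sem_try_p (_Atomic uint32_t *count) {
	uint32_t old = atomic_load (count);
	while (old != 0) {
		if (atomic_compare_exchange_weak_explicit (
			    count, &old, old - 1,
			    memory_order_acquire, memory_order_relaxed)) {
			return 1;
		}
	}
	return 0;
}
```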
84
third_party/nsync/mu_semaphore_win32.c vendored Normal file
@ -0,0 +1,84 @@
/*-*- mode:c;indent-tabs-mode:t;c-basic-offset:8;tab-width:8;coding:utf-8 -*-│
vi: set et ft=c ts=8 tw=8 fenc=utf-8 :vi
Copyright 2016 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0 │
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include "libc/errno.h"
#include "libc/nt/enum/wait.h"
#include "libc/nt/synchronization.h"
#include "libc/runtime/runtime.h"
#include "third_party/nsync/mu_semaphore.h"
#include "third_party/nsync/mu_semaphore.internal.h"
#include "third_party/nsync/time.h"
asm(".ident\t\"\\n\\n\
*NSYNC (Apache 2.0)\\n\
Copyright 2016 Google, Inc.\\n\
https://github.com/google/nsync\"");
// clang-format off
/* Initialize *s; the initial value is 0. */
void nsync_mu_semaphore_init_win32 (nsync_semaphore *s) {
int64_t *h = (int64_t *) s;
*h = CreateSemaphore(NULL, 0, 1, NULL);
if (!*h) notpossible;
}
/* Wait until the count of *s exceeds 0, and decrement it. */
void nsync_mu_semaphore_p_win32 (nsync_semaphore *s) {
int64_t *h = (int64_t *) s;
WaitForSingleObject(*h, -1u);
}
/* Wait until one of:
the count of *s is non-zero, in which case decrement *s and return 0;
or abs_deadline expires, in which case return ETIMEDOUT. */
int nsync_mu_semaphore_p_with_deadline_win32 (nsync_semaphore *s, nsync_time abs_deadline) {
int64_t *h = (int64_t *) s;
int result;
if (nsync_time_cmp (abs_deadline, nsync_time_no_deadline) == 0) {
result = WaitForSingleObject(*h, -1u);
} else {
nsync_time now;
now = nsync_time_now ();
do {
if (nsync_time_cmp (abs_deadline, now) <= 0) {
result = WaitForSingleObject (*h, 0);
} else {
nsync_time delay;
delay = nsync_time_sub (abs_deadline, now);
if (NSYNC_TIME_SEC (delay) > 1000*1000) {
result = WaitForSingleObject (*h, 1000*1000);
} else {
result = WaitForSingleObject (*h,
(unsigned) (NSYNC_TIME_SEC (delay) * 1000 +
(NSYNC_TIME_NSEC (delay) + 999999) / (1000 * 1000)));
}
}
if (result == kNtWaitTimeout) {
now = nsync_time_now ();
}
} while (result == kNtWaitTimeout && /* Windows generates early wakeups. */
nsync_time_cmp (abs_deadline, now) > 0);
}
return (result == kNtWaitTimeout ? ETIMEDOUT : 0);
}
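The millisecond conversion passed to WaitForSingleObject above rounds nanoseconds up, so a nonzero delay never truncates to a zero-length wait. A standalone sketch of that arithmetic (the helper name is ours):

```c
#include <stdint.h>

/* Convert a seconds+nanoseconds delay to whole milliseconds,
   rounding the nanosecond part up rather than truncating. */
static unsigned delay_to_ms (int64_t sec, int64_t nsec) {
	return (unsigned) (sec * 1000 + (nsec + 999999) / (1000 * 1000));
}
```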
/* Ensure that the count of *s is at least 1. */
void nsync_mu_semaphore_v_win32 (nsync_semaphore *s) {
int64_t *h = (int64_t *) s;
ReleaseSemaphore(*h, 1, NULL);
}
118
third_party/nsync/mu_wait.h vendored Normal file
@ -0,0 +1,118 @@
#ifndef NSYNC_MU_WAIT_H_
#define NSYNC_MU_WAIT_H_
#include "third_party/nsync/mu.h"
#include "third_party/nsync/time.h"
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
/* nsync_mu_wait() and nsync_mu_wait_with_deadline() can be used instead
of condition variables. In many straightforward situations they are
of equivalent performance and are somewhat easier to use, because
unlike condition variables, they do not require that the waits be
placed in a loop, and they do not require explicit wakeup calls.
Example:
Definitions:
static nsync_mu mu = NSYNC_MU_INIT;
static int i = 0; // protected by mu
// Condition for use with nsync_mu_wait().
static int int_is_zero (const void *v) {
return (*(const int *)v == 0);
}
Waiter:
nsync_mu_lock (&mu);
// Wait until i is zero.
nsync_mu_wait (&mu, &int_is_zero, &i, NULL);
// i is known to be zero here.
// ...
nsync_mu_unlock (&mu);
Thread potentially making i zero:
nsync_mu_lock (&mu);
i--;
// No need to signal that i may have become zero. The unlock call
// below will evaluate waiters' conditions to decide which to wake.
nsync_mu_unlock (&mu);
It is legal to use conditional critical sections and condition
variables on the same mutex.
--------------
The implementation benefits from determining whether waiters are
waiting for the same condition; it may then evaluate a condition once
on behalf of several waiters. Two waiters have equal condition if
their "condition" pointers are equal, and either:
- their "condition_arg" pointers are equal, or
- "condition_arg_eq" is non-null and (*condition_arg_eq)
(condition_arg0, condition_arg1) returns non-zero.
*condition_arg_eq will not be invoked unless the "condition" pointers
are equal, and the "condition_arg" pointers are unequal.
If many waiters wait for distinct conditions simultaneously,
condition variables may be faster.
*/
struct nsync_note_s_; /* forward declaration for an nsync_note */
/* Return when (*condition) (condition_arg) is true. Perhaps unlock and
relock *mu while blocked waiting for the condition to become true.
nsync_mu_wait() is equivalent to nsync_mu_wait_with_deadline() with
abs_deadline==nsync_time_no_deadline, and cancel_note==NULL.
Requires that *mu be held on entry. See nsync_mu_wait_with_deadline()
for more details on *condition and *condition_arg_eq. */
void nsync_mu_wait(nsync_mu *mu, int (*condition)(const void *condition_arg),
const void *condition_arg,
int (*condition_arg_eq)(const void *a, const void *b));
/* Return when at least one of: (*condition) (condition_arg) is true,
the deadline expires, or *cancel_note is notified. Perhaps unlock and
relock *mu while blocked waiting for one of these events, but always
return with *mu held. Return 0 iff the (*condition) (condition_arg)
is true on return, and otherwise either ETIMEDOUT or ECANCELED,
depending on why the call returned early. Callers should use
abs_deadline==nsync_time_no_deadline for no deadline, and
cancel_note==NULL for no cancellation.
Requires that *mu be held on entry.
The implementation may call *condition from any thread using the
mutex, and while holding *mu in either read or write mode; it
guarantees that any thread calling *condition will hold *mu in some
mode. Requires that (*condition) (condition_arg) neither modify state
protected by *mu, nor return a value dependent on state not protected
by *mu. To depend on time, use the abs_deadline parameter.
(Conventional use of condition variables has the same restrictions
on the conditions tested by the while-loop.) If non-null,
condition_arg_eq should return whether two condition_arg calls with
the same "condition" pointer are considered equivalent; it should
have no side-effects. */
int nsync_mu_wait_with_deadline(
nsync_mu *mu, int (*condition)(const void *condition_arg),
const void *condition_arg,
int (*condition_arg_eq)(const void *a, const void *b),
nsync_time abs_deadline, struct nsync_note_s_ *cancel_note);
/* Unlock *mu, which must be held in write mode, and wake waiters, if
appropriate. Unlike nsync_mu_unlock(), this call is not required to
wake nsync_mu_wait/nsync_mu_wait_with_deadline calls on conditions
that were false before this thread acquired the lock. This call
should be used only at the end of critical sections for which:
- nsync_mu_wait and/or nsync_mu_wait_with_deadline are in use on the same
mutex,
- this critical section cannot make the condition true for any of those
nsync_mu_wait/nsync_mu_wait_with_deadline waits, and
- performance is significantly improved by using this call. */
void nsync_mu_unlock_without_wakeup(nsync_mu *mu);
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_MU_WAIT_H_ */

51
third_party/nsync/note.h vendored Normal file

@@ -0,0 +1,51 @@
#ifndef NSYNC_NOTE_H_
#define NSYNC_NOTE_H_
#include "third_party/nsync/time.h"
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
/* An nsync_note represents a single bit that can transition from 0 to 1
at most once. When 1, the note is said to be notified. There are
operations to wait for the transition, which can be triggered either
by an explicit call or by timer expiry. Notes can have parent notes; a
note becomes notified if its parent becomes notified. */
typedef struct nsync_note_s_ *nsync_note;
/* Return a freshly allocated nsync_note, or NULL if an nsync_note
cannot be created.
If parent!=NULL, the allocated nsync_note's parent will be parent.
The newly allocated note will be automatically notified at
abs_deadline, and is notified at initialization if
abs_deadline==nsync_zero_time.
nsync_notes should be passed to nsync_note_free() when no longer needed. */
nsync_note nsync_note_new(nsync_note parent, nsync_time abs_deadline);
/* Free resources associated with n. Requires that n was allocated by
nsync_note_new(), and no concurrent or future operations are applied
to n directly.
It is legal to call nsync_note_free() on a note even if it has a
parent or children that are in use; if n has both a parent and
children, n's parent adopts its children. */
void nsync_note_free(nsync_note n);
/* Notify n and all its descendants. */
void nsync_note_notify(nsync_note n);
/* Return whether n has been notified. */
int nsync_note_is_notified(nsync_note n);
/* Wait until n has been notified or abs_deadline is reached, and return
whether n has been notified. If abs_deadline==nsync_time_no_deadline,
the deadline is far in the future. */
int nsync_note_wait(nsync_note n, nsync_time abs_deadline);
/* Return the expiry time associated with n. This is the minimum of the
abs_deadline passed on creation and that of any of its ancestors. */
nsync_time nsync_note_expiry(nsync_note n);
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_NOTE_H_ */

58
third_party/nsync/nsync.mk vendored Normal file

@@ -0,0 +1,58 @@
#-*-mode:makefile-gmake;indent-tabs-mode:t;tab-width:8;coding:utf-8-*-┐
#───vi: set et ft=make ts=8 tw=8 fenc=utf-8 :vi───────────────────────┘
PKGS += THIRD_PARTY_NSYNC
THIRD_PARTY_NSYNC_SRCS = $(THIRD_PARTY_NSYNC_A_SRCS)
THIRD_PARTY_NSYNC_HDRS = $(THIRD_PARTY_NSYNC_A_HDRS)
THIRD_PARTY_NSYNC_ARTIFACTS += THIRD_PARTY_NSYNC_A
THIRD_PARTY_NSYNC = $(THIRD_PARTY_NSYNC_A_DEPS) $(THIRD_PARTY_NSYNC_A)
THIRD_PARTY_NSYNC_A = o/$(MODE)/third_party/nsync/nsync.a
THIRD_PARTY_NSYNC_A_FILES := $(wildcard third_party/nsync/*)
THIRD_PARTY_NSYNC_A_HDRS = $(filter %.h,$(THIRD_PARTY_NSYNC_A_FILES))
THIRD_PARTY_NSYNC_A_SRCS = $(filter %.c,$(THIRD_PARTY_NSYNC_A_FILES))
THIRD_PARTY_NSYNC_A_OBJS = \
$(THIRD_PARTY_NSYNC_A_SRCS:%.c=o/$(MODE)/%.o)
THIRD_PARTY_NSYNC_A_DIRECTDEPS = \
LIBC_CALLS \
LIBC_INTRIN \
LIBC_NEXGEN32E \
LIBC_NT_KERNEL32 \
LIBC_STR \
LIBC_STUBS \
LIBC_SYSV \
LIBC_SYSV_CALLS
THIRD_PARTY_NSYNC_A_DEPS := \
$(call uniq,$(foreach x,$(THIRD_PARTY_NSYNC_A_DIRECTDEPS),$($(x))))
THIRD_PARTY_NSYNC_A_CHECKS = \
$(THIRD_PARTY_NSYNC_A).pkg \
$(THIRD_PARTY_NSYNC_A_HDRS:%=o/$(MODE)/%.ok)
$(THIRD_PARTY_NSYNC_A): \
third_party/nsync/ \
$(THIRD_PARTY_NSYNC_A).pkg \
$(THIRD_PARTY_NSYNC_A_OBJS)
$(THIRD_PARTY_NSYNC_A).pkg: \
$(THIRD_PARTY_NSYNC_A_OBJS) \
$(foreach x,$(THIRD_PARTY_NSYNC_A_DIRECTDEPS),$($(x)_A).pkg)
$(THIRD_PARTY_NSYNC_A_OBJS): private \
OVERRIDE_CCFLAGS += \
-ffunction-sections \
-fdata-sections
THIRD_PARTY_NSYNC_LIBS = $(foreach x,$(THIRD_PARTY_NSYNC_ARTIFACTS),$($(x)))
THIRD_PARTY_NSYNC_SRCS = $(foreach x,$(THIRD_PARTY_NSYNC_ARTIFACTS),$($(x)_SRCS))
THIRD_PARTY_NSYNC_CHECKS = $(foreach x,$(THIRD_PARTY_NSYNC_ARTIFACTS),$($(x)_CHECKS))
THIRD_PARTY_NSYNC_OBJS = $(foreach x,$(THIRD_PARTY_NSYNC_ARTIFACTS),$($(x)_OBJS))
$(THIRD_PARTY_NSYNC_OBJS): third_party/nsync/nsync.mk
.PHONY: o/$(MODE)/third_party/nsync
o/$(MODE)/third_party/nsync: $(THIRD_PARTY_NSYNC_CHECKS)

37
third_party/nsync/once.h vendored Normal file

@@ -0,0 +1,37 @@
#ifndef NSYNC_ONCE_H_
#define NSYNC_ONCE_H_
#include "third_party/nsync/atomic.h"
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
/* An nsync_once allows a function to be called exactly once, when first
referenced. */
typedef nsync_atomic_uint32_ nsync_once;
/* An initializer for nsync_once; it is guaranteed to be all zeroes. */
#define NSYNC_ONCE_INIT NSYNC_ATOMIC_UINT32_INIT_
/* The first time nsync_run_once() or nsync_run_once_arg() is applied to
*once, the supplied function is run (with argument, in the case of
nsync_run_once_arg()). Other callers will wait until the run of the
function is complete, and then return without running the function
again. */
void nsync_run_once(nsync_once *once, void (*f)(void));
void nsync_run_once_arg(nsync_once *once, void (*farg)(void *arg), void *arg);
/* Same as nsync_run_once()/nsync_run_once_arg() but uses a spinloop.
Can be used on the same nsync_once as
nsync_run_once/nsync_run_once_arg().
These *_spin variants should be used only in contexts where normal
blocking is disallowed, such as within user-space schedulers, when
the runtime is not fully initialized, etc. They provide no
significant performance benefit, and they should be avoided in normal
code. */
void nsync_run_once_spin(nsync_once *once, void (*f)(void));
void nsync_run_once_arg_spin(nsync_once *once, void (*farg)(void *arg),
void *arg);
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_ONCE_H_ */

30
third_party/nsync/panic.c vendored Normal file

@@ -0,0 +1,30 @@
/*-*- mode:c;indent-tabs-mode:t;c-basic-offset:8;tab-width:8;coding:utf-8 -*-│
vi: set et ft=c ts=8 tw=8 fenc=utf-8 :vi
Copyright 2022 Justine Alexandra Roberts Tunney
Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted, provided that the
above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE
AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL
DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
*/
#include "libc/calls/calls.h"
#include "third_party/nsync/common.internal.h"
// clang-format off
/* Aborts after printing the nul-terminated string s[]. */
void nsync_panic_ (const char *s) {
size_t n = 0;
while (s[n]) ++n;
write (2, "panic: ", 7);
write (2, s, n);
notpossible;
}

54
third_party/nsync/time.c vendored Normal file

@@ -0,0 +1,54 @@
/*-*- mode:c;indent-tabs-mode:t;c-basic-offset:8;tab-width:8;coding:utf-8 -*-│
vi: set et ft=c ts=8 tw=8 fenc=utf-8 :vi
Copyright 2016 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include "libc/calls/struct/timespec.h"
#include "libc/str/str.h"
#include "libc/sysv/consts/clock.h"
#include "third_party/nsync/time.h"
asm(".ident\t\"\\n\\n\
*NSYNC (Apache 2.0)\\n\
Copyright 2016 Google, Inc.\\n\
https://github.com/google/nsync\"");
// clang-format off
#define NSYNC_NS_IN_S_ (1000 * 1000 * 1000)
/* Return the maximum t, assuming it's an integral
type, and the representation is not too strange. */
#define MAX_INT_TYPE(t) (((t)~(t)0) > 1? /*is t unsigned?*/ \
(t)~(t)0 : /*unsigned*/ \
(t) ((((uintmax_t)1) << (sizeof (t) * CHAR_BIT - 1)) - 1)) /*signed*/
const nsync_time nsync_time_no_deadline =
NSYNC_TIME_STATIC_INIT (MAX_INT_TYPE (int64_t), NSYNC_NS_IN_S_ - 1);
const nsync_time nsync_time_zero = NSYNC_TIME_STATIC_INIT (0, 0);
nsync_time nsync_time_sleep (nsync_time delay) {
struct timespec ts;
struct timespec remain;
memset (&ts, 0, sizeof (ts));
ts.tv_sec = NSYNC_TIME_SEC (delay);
ts.tv_nsec = NSYNC_TIME_NSEC (delay);
if (nanosleep (&ts, &remain) == 0) {
/* nanosleep() is not required to fill in "remain"
if it returns 0. */
memset (&remain, 0, sizeof (remain));
}
return (remain);
}

53
third_party/nsync/time.h vendored Normal file

@@ -0,0 +1,53 @@
#ifndef NSYNC_TIME_H_
#define NSYNC_TIME_H_
#include "libc/calls/struct/timespec.h"
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
#define NSYNC_TIME_SEC(t) ((t).tv_sec)
#define NSYNC_TIME_NSEC(t) ((t).tv_nsec)
#define NSYNC_TIME_STATIC_INIT(t, ns) \
{ (t), (ns) }
/* The type nsync_time represents the interval elapsed between two
moments in time. Often the first such moment is an address-space-wide
epoch, such as the Unix epoch, but clients should not rely on the
epoch in one address space being the same as that in another.
Intervals relative to the epoch are known as absolute times. */
typedef struct timespec nsync_time;
/* A deadline infinitely far in the future. */
extern const nsync_time nsync_time_no_deadline;
/* The zero delay, or an expired deadline. */
extern const nsync_time nsync_time_zero;
/* Return the current time since the epoch. */
#define nsync_time_now() _timespec_real()
/* Sleep for the specified delay. Returns the unslept time which may be
non-zero if the call was interrupted. */
nsync_time nsync_time_sleep(nsync_time delay);
/* Return a+b */
#define nsync_time_add(a, b) _timespec_add(a, b)
/* Return a-b */
#define nsync_time_sub(a, b) _timespec_sub(a, b)
/* Return +ve, 0, or -ve according to whether a>b, a==b, or a<b. */
#define nsync_time_cmp(a, b) _timespec_cmp(a, b)
/* Return the specified number of milliseconds as a time. */
#define nsync_time_ms(a) _timespec_frommillis(a)
/* Return the specified number of microseconds as a time. */
#define nsync_time_us(a) _timespec_frommicros(a)
/* Return an nsync_time constructed from second and nanosecond
components */
#define nsync_time_s_ns(s, ns) ((nsync_time){(int64_t)(s), (unsigned)(ns)})
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_TIME_H_ */

26
third_party/nsync/wait_s.internal.h vendored Normal file

@@ -0,0 +1,26 @@
#ifndef COSMOPOLITAN_LIBC_THREAD_WAIT_INTERNAL_H_
#define COSMOPOLITAN_LIBC_THREAD_WAIT_INTERNAL_H_
#include "third_party/nsync/atomic.h"
#include "third_party/nsync/dll.h"
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
/* Implementations of "struct nsync_waitable_s" must provide functions
in struct nsync_waitable_funcs_s (see public/nsync_wait.h). When
nsync_wait_n() waits on a client's object, those functions are called
with v pointing to the client's object and nw pointing to a struct
nsync_waiter_s. */
struct nsync_waiter_s {
uint32_t tag; /* used for debugging */
nsync_dll_element_ q; /* used to link children of parent */
nsync_atomic_uint32_ waiting; /* non-zero <=> the waiter is waiting */
struct nsync_semaphore_s_ *sem; /* *sem will be Ved when waiter is woken */
uint32_t flags; /* see below */
};
/* set if waiter is embedded in Mu/CV's internal structures */
#define NSYNC_WAITER_FLAG_MUCV 0x1
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* COSMOPOLITAN_LIBC_THREAD_WAIT_INTERNAL_H_ */

138
third_party/nsync/waiter.h vendored Normal file

@@ -0,0 +1,138 @@
#ifndef NSYNC_WAITER_H_
#define NSYNC_WAITER_H_
#include "third_party/nsync/time.h"
#if !(__ASSEMBLER__ + __LINKER__ + 0)
COSMOPOLITAN_C_START_
/* nsync_wait_n() allows the client to wait on multiple objects
(condition variables, nsync_notes, nsync_counters, etc.) until at
least one of them becomes ready, or a deadline expires.
It can be thought of as rather like Unix's select() or poll(), except
that the objects being waited for are synchronization data structures,
rather than file descriptors.
The client can construct new objects that can be waited for by
implementing three routines.
Examples:
To wait on two nsync_notes n0, n1, and an nsync_counter c0, with a
deadline of abs_deadline:
// Form an array of struct nsync_waitable_s, identifying the
// objects and the corresponding descriptors. (Static
// initialization syntax is used for brevity.)
static struct nsync_waitable_s w[] = {
{ &n0, &nsync_note_waitable_funcs },
{ &n1, &nsync_note_waitable_funcs },
{ &c0, &nsync_counter_waitable_funcs }
};
static struct nsync_waitable_s *pw[] = { &w[0], &w[1], &w[2] };
int n = sizeof (w) / sizeof (w[0]);
// Wait. The mu, lock, and unlock arguments are NULL because no
// condition variables are involved.
int i = nsync_wait_n (NULL, NULL, NULL, abs_deadline, n, pw);
if (i == n) {
// timeout
} else {
// w[i].v became ready.
}
To wait on multiple condition variables, the mu/lock/unlock
parameters are used. Imagine cv0 and cv1 are signalled when
predicates pred0() (under lock mu0) and pred1() (under lock mu1)
become true respectively. Assume that mu0 is acquired before mu1.
static void lock2 (void *v) { // lock two mutexes in order
nsync_mu **mu = (nsync_mu **) v;
nsync_mu_lock (mu[0]);
nsync_mu_lock (mu[1]);
}
static void unlock2 (void *v) { // unlock two mutexes.
nsync_mu **mu = (nsync_mu **) v;
nsync_mu_unlock (mu[1]);
nsync_mu_unlock (mu[0]);
}
// Describe the condition variables and the locks.
static struct nsync_waitable_s w[] = {
{ &cv0, &nsync_cv_waitable_funcs },
{ &cv1, &nsync_cv_waitable_funcs }
};
static struct nsync_waitable_s *pw[] = { &w[0], &w[1] };
nsync_mu *lock_list[] = { &mu0, &mu1 };
int n = sizeof (w) / sizeof (w[0]);
lock2 (lock_list);
while (!pred0 () && !pred1 ()) {
// Wait for one of the condition variables to be signalled,
// with no timeout.
nsync_wait_n (lock_list, &lock2, &unlock2,
nsync_time_no_deadline, n, pw);
}
if (pred0 ()) { ... }
if (pred1 ()) { ... }
unlock2 (lock_list);
*/
/* forward declaration of struct that contains type dependent wait
operations */
struct nsync_waitable_funcs_s;
/* Clients wait on objects by forming an array of struct
nsync_waitable_s. Each element points to one object and its
type-dependent functions. */
struct nsync_waitable_s {
/* pointer to object */
void *v;
/* pointer to type-dependent functions. Use
&nsync_note_waitable_funcs for an nsync_note,
&nsync_counter_waitable_funcs for an nsync_counter,
&nsync_cv_waitable_funcs for an nsync_cv. */
const struct nsync_waitable_funcs_s *funcs;
};
/* Wait until at least one of *waitable[0,..,count-1] has been
notified, or abs_deadline is reached. Return the index of the
notified element of waitable[], or count if no such element exists.
If mu!=NULL, (*unlock)(mu) is called after the thread is queued on
the various waiters, and (*lock)(mu) is called before return;
mu/lock/unlock are used to acquire and release the relevant locks
when waiting on condition variables. */
int nsync_wait_n(void *mu, void (*lock)(void *), void (*unlock)(void *),
nsync_time abs_deadline, int count,
struct nsync_waitable_s *waitable[]);
/* A "struct nsync_waitable_s" implementation must implement these
functions. Clients should ignore the internals. */
struct nsync_waiter_s;
struct nsync_waitable_funcs_s {
/* Return the time when *v will be ready (max time if unknown), or 0
if it is already ready. The parameter nw may be passed as NULL, in
which case the result should indicate whether the thread would
block if it were to wait on *v. All calls with the same *v must
report the same result until the object becomes ready, from which
point calls must report 0. */
nsync_time (*ready_time)(void *v, struct nsync_waiter_s *nw);
/* If *v is ready, return zero; otherwise enqueue *nw on *v and return
non-zero. */
int (*enqueue)(void *v, struct nsync_waiter_s *nw);
/* If nw has been previously dequeued, return zero; otherwise dequeue
*nw from *v and return non-zero. */
int (*dequeue)(void *v, struct nsync_waiter_s *nw);
};
/* The "struct nsync_waitable_s" for nsync_note, nsync_counter, and nsync_cv. */
extern const struct nsync_waitable_funcs_s nsync_note_waitable_funcs;
extern const struct nsync_waitable_funcs_s nsync_counter_waitable_funcs;
extern const struct nsync_waitable_funcs_s nsync_cv_waitable_funcs;
COSMOPOLITAN_C_END_
#endif /* !(__ASSEMBLER__ + __LINKER__ + 0) */
#endif /* NSYNC_WAITER_H_ */

56
third_party/nsync/waiter_per_thread.c vendored Normal file

@@ -0,0 +1,56 @@
/*-*- mode:c;indent-tabs-mode:t;c-basic-offset:8;tab-width:8;coding:utf-8 -*-│
vi: set et ft=c ts=8 tw=8 fenc=utf-8 :vi
Copyright 2016 Google Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include "third_party/nsync/atomic.h"
#include "third_party/nsync/atomic.internal.h"
#include "third_party/nsync/common.internal.h"
#include "libc/thread/thread.h"
asm(".ident\t\"\\n\\n\
*NSYNC (Apache 2.0)\\n\
Copyright 2016 Google, Inc.\\n\
https://github.com/google/nsync\"");
// clang-format off
static pthread_key_t waiter_key;
static nsync_atomic_uint32_ pt_once;
static void do_once (nsync_atomic_uint32_ *ponce, void (*dest) (void *)) {
uint32_t o = ATM_LOAD_ACQ (ponce);
if (o != 2) {
while (o == 0 && !ATM_CAS_ACQ (ponce, 0, 1)) {
o = ATM_LOAD (ponce);
}
if (o == 0) {
pthread_key_create (&waiter_key, dest);
ATM_STORE_REL (ponce, 2);
}
while (ATM_LOAD_ACQ (ponce) != 2) {
nsync_yield_ ();
}
}
}
void *nsync_per_thread_waiter_ (void (*dest) (void *)) {
do_once (&pt_once, dest);
return (pthread_getspecific (waiter_key));
}
void nsync_set_per_thread_waiter_ (void *v, void (*dest) (void *)) {
do_once (&pt_once, dest);
pthread_setspecific (waiter_key, v);
}

27
third_party/nsync/yield.c vendored Normal file

@@ -0,0 +1,27 @@
/*-*- mode:c;indent-tabs-mode:t;c-basic-offset:8;tab-width:8;coding:utf-8 -*-│
vi: set et ft=c ts=8 tw=8 fenc=utf-8 :vi
Copyright 2022 Justine Alexandra Roberts Tunney
Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted, provided that the
above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE
AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL
DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
*/
#include "libc/calls/calls.h"
#include "libc/calls/strace.internal.h"
#include "third_party/nsync/common.internal.h"
// clang-format off
void nsync_yield_ (void) {
sched_yield ();
STRACE ("nsync_yield_()");
}