/*-*- mode:c;indent-tabs-mode:nil;c-basic-offset:2;tab-width:8;coding:utf-8 -*-│
│vi: set noet ft=c ts=2 sts=2 sw=2 fenc=utf-8                               :vi│
╞══════════════════════════════════════════════════════════════════════════════╡
│ Copyright 2020 Justine Alexandra Roberts Tunney                              │
│                                                                              │
│ Permission to use, copy, modify, and/or distribute this software for         │
│ any purpose with or without fee is hereby granted, provided that the         │
│ above copyright notice and this permission notice appear in all copies.      │
│                                                                              │
│ THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL                │
│ WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED                │
│ WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE             │
│ AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL         │
│ DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR        │
│ PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER               │
│ TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR             │
│ PERFORMANCE OF THIS SOFTWARE.                                                │
╚─────────────────────────────────────────────────────────────────────────────*/
#include "libc/assert.h"
#include "libc/calls/calls.h"
#include "libc/calls/internal.h"
#include "libc/calls/state.internal.h"
#include "libc/calls/struct/sigset.h"
#include "libc/calls/struct/sigset.internal.h"
#include "libc/calls/syscall-sysv.internal.h"
#include "libc/calls/syscall_support-sysv.internal.h"
#include "libc/dce.h"
#include "libc/errno.h"
#include "libc/intrin/asan.internal.h"
#include "libc/intrin/atomic.h"
#include "libc/intrin/cmpxchg.h"
#include "libc/intrin/directmap.internal.h"
#include "libc/intrin/extend.internal.h"
#include "libc/intrin/likely.h"
#include "libc/intrin/strace.internal.h"
#include "libc/intrin/weaken.h"
#include "libc/limits.h"
#include "libc/runtime/internal.h"
#include "libc/runtime/memtrack.internal.h"
#include "libc/runtime/zipos.internal.h"
#include "libc/str/str.h"
#include "libc/sysv/consts/f.h"
#include "libc/sysv/consts/fd.h"
#include "libc/sysv/consts/map.h"
#include "libc/sysv/consts/o.h"
#include "libc/sysv/consts/prot.h"
#include "libc/sysv/consts/s.h"
#include "libc/sysv/consts/sig.h"
#include "libc/sysv/errfuns.h"
#include "libc/thread/thread.h"
#include "libc/thread/tls.h"
#include "libc/zip.internal.h"

#define MAX_REFS SSIZE_MAX

static char *__zipos_mapend;
static size_t __zipos_maptotal;
static pthread_mutex_t __zipos_lock_obj;
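
// this lock guards the zipos handle freelist and arena growth. fork
// safety comes from the pthread_atfork() registration in __zipos_ctor
// at the bottom of this file: the lock is held across fork() in the
// parent and reinitialized, rather than unlocked, in the child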
static void __zipos_wipe(void) {
  pthread_mutex_init(&__zipos_lock_obj, 0);
}

static void __zipos_lock(void) {
  pthread_mutex_lock(&__zipos_lock_obj);
}

static void __zipos_unlock(void) {
  pthread_mutex_unlock(&__zipos_lock_obj);
}
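
// bump-allocates handle memory out of the dedicated memtrack interval
// [kMemtrackZiposStart, kMemtrackZiposStart + kMemtrackZiposSize),
// growing the mapping with _extend() as needed. callers hold the
// zipos lock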
static void *__zipos_mmap_space(size_t mapsize) {
  char *start;
  size_t offset;
  unassert(mapsize);
  offset = __zipos_maptotal;
  __zipos_maptotal += mapsize;
  start = (char *)kMemtrackZiposStart;
  if (!__zipos_mapend) __zipos_mapend = start;
  __zipos_mapend = _extend(start, __zipos_maptotal, __zipos_mapend, MAP_PRIVATE,
                           kMemtrackZiposStart + kMemtrackZiposSize);
  return start + offset;
}
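
// acquires an additional reference to a handle. per the classic arc
// discipline, the increment may be memory_order_relaxed: a new
// reference can only be formed from an existing one, and passing that
// existing reference between threads already provided the required
// synchronization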
struct ZiposHandle *__zipos_keep(struct ZiposHandle *h) {
  int refs = atomic_fetch_add_explicit(&h->refs, 1, memory_order_relaxed);
  unassert(!VERY_UNLIKELY(refs > MAX_REFS));
  return h;
}
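
// releases a reference, returning true if it was the last one. the
// release decrement orders all prior accesses before any recycling,
// and only the final dropper pays for the acquire fence. note refs is
// zero-based: fetch_sub() observing 0 means no other references remain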
static bool __zipos_drop(struct ZiposHandle *h) {
  if (!atomic_fetch_sub_explicit(&h->refs, 1, memory_order_release)) {
    atomic_thread_fence(memory_order_acquire);
    return true;
  }
  return false;
}
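
// drops the caller's reference and, if it was the last one, poisons
// the payload under asan and pushes the handle onto its zipos freelist
// so __zipos_alloc() can recycle the mapping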
void __zipos_free(struct ZiposHandle *h) {
  if (!__zipos_drop(h)) {
    return;
  }
  if (IsAsan()) {
    __asan_poison((char *)h + sizeof(struct ZiposHandle),
                  h->mapsize - sizeof(struct ZiposHandle), kAsanHeapFree);
  }
  __zipos_lock();
  do h->next = h->zipos->freelist;
  while (!_cmpxchg(&h->zipos->freelist, h->next, h));
  __zipos_unlock();
}
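
// obtains a handle whose page-rounded mapping can hold size bytes,
// preferring a first-fit match from the freelist before bump-allocating
// fresh arena space. the returned handle starts with refs at zero,
// i.e. the caller holds the sole reference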
static struct ZiposHandle *__zipos_alloc(struct Zipos *zipos, size_t size) {
  size_t mapsize;
  struct ZiposHandle *h, **ph;
  __zipos_lock();
  mapsize = sizeof(struct ZiposHandle) + size;
  mapsize = ROUNDUP(mapsize, 4096);
StartOver:
  ph = &zipos->freelist;
  while ((h = *ph)) {
    if (h->mapsize >= mapsize) {
      if (!_cmpxchg(ph, h, h->next)) goto StartOver;
      atomic_store_explicit(&h->refs, 0, memory_order_relaxed);
      break;
    }
    ph = &h->next;
  }
  if (!h) {
    h = __zipos_mmap_space(mapsize);
  }
  __zipos_unlock();
  if (IsAsan()) {
    __asan_unpoison((char *)h, sizeof(struct ZiposHandle) + size);
    __asan_poison((char *)h + sizeof(struct ZiposHandle) + size,
                  mapsize - (sizeof(struct ZiposHandle) + size),
                  kAsanHeapOverrun);
  }
  if (h) {
    h->size = size;
    h->zipos = zipos;
    h->mapsize = mapsize;
  }
  return h;
}
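
// obtains a real kernel file descriptor at or above minfd by duping
// stderr, so zipos fds occupy genuine descriptor numbers. EINVAL means
// the host kernel predates F_DUPFD_CLOEXEC, in which case we fall back
// to plain F_DUPFD and pass O_CLOEXEC to __fixupnewfd()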
static int __zipos_mkfd(int minfd) {
  int fd, e = errno;
  if ((fd = __sys_fcntl(2, F_DUPFD_CLOEXEC, minfd)) != -1) {
    return fd;
  } else if (errno == EINVAL) {
    errno = e;
    return __fixupnewfd(__sys_fcntl(2, F_DUPFD, minfd), O_CLOEXEC);
  } else {
    return fd;
  }
}
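
// publishes a handle into the global descriptor table. the caller
// holds the fds lock, which this function releases. the cas bumps
// g_fds.f to fd + 1 only if it still equals fd, so the next-free-fd
// hint never moves backwards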
static int __zipos_setfd(int fd, struct ZiposHandle *h, unsigned flags) {
  int want = fd;
  atomic_compare_exchange_strong_explicit(
      &g_fds.f, &want, fd + 1, memory_order_release, memory_order_relaxed);
  g_fds.p[fd].kind = kFdZip;
  g_fds.p[fd].handle = (intptr_t)h;
  g_fds.p[fd].flags = flags | O_CLOEXEC;
  __fds_unlock();
  return fd;
}
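
// materializes a zip asset as an open file descriptor. synthetic
// directories get a copy of their path string; stored (uncompressed)
// entries point straight into the zip mapping; deflated entries are
// inflated into freshly allocated handle memory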
static int __zipos_load(struct Zipos *zipos, size_t cf, int flags,
                        struct ZiposUri *name) {
  size_t lf;
  size_t size;
  int fd, minfd;
  struct ZiposHandle *h;
  if (cf == ZIPOS_SYNTHETIC_DIRECTORY) {
    size = name->len;
    if (!(h = __zipos_alloc(zipos, size + 1))) return -1;
    if (size) memcpy(h->data, name->path, size);
    h->data[size] = 0;
    h->mem = h->data;
  } else {
    lf = GetZipCfileOffset(zipos->map + cf);
    npassert(ZIP_LFILE_MAGIC(zipos->map + lf) == kZipLfileHdrMagic);
    size = GetZipLfileUncompressedSize(zipos->map + lf);
    switch (ZIP_LFILE_COMPRESSIONMETHOD(zipos->map + lf)) {
      case kZipCompressionNone:
        if (!(h = __zipos_alloc(zipos, 0))) return -1;
        h->mem = ZIP_LFILE_CONTENT(zipos->map + lf);
        break;
      case kZipCompressionDeflate:
        if (!(h = __zipos_alloc(zipos, size))) return -1;
        if (!__inflate(h->data, size, ZIP_LFILE_CONTENT(zipos->map + lf),
                       GetZipLfileCompressedSize(zipos->map + lf))) {
          h->mem = h->data;
        } else {
          h->mem = 0;
          eio();
        }
        break;
      default:
        return eio();
    }
  }
  atomic_store_explicit(&h->pos, 0, memory_order_relaxed);
  h->cfile = cf;
  unassert(size < SIZE_MAX);
  h->size = size;
  if (h->mem) {
    minfd = 3;
    __fds_lock();
  TryAgain:
    if (IsWindows() || IsMetal()) {
      if ((fd = __reservefd_unlocked(-1)) != -1) {
        return __zipos_setfd(fd, h, flags);
      }
    } else if ((fd = __zipos_mkfd(minfd)) != -1) {
      if (__ensurefds_unlocked(fd) != -1) {
        if (g_fds.p[fd].kind) {
          sys_close(fd);
          minfd = fd + 1;
          goto TryAgain;
        }
        return __zipos_setfd(fd, h, flags);
      }
      sys_close(fd);
    }
    __fds_unlock();
  }
  __zipos_free(h);
  return -1;
}
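
// keeps zipos reference counts coherent across dup2()/dup3(): the
// descriptor that got clobbered releases its handle, and when the
// source descriptor is a zipos fd, its handle gains a reference and
// its table entry is copied over to the new descriptor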
void __zipos_postdup(int oldfd, int newfd) {
  if (oldfd == newfd) {
    return;
  }
  if (__isfdkind(newfd, kFdZip)) {
    __zipos_free((struct ZiposHandle *)(intptr_t)g_fds.p[newfd].handle);
    if (!__isfdkind(oldfd, kFdZip)) {
      __fds_lock();
      bzero(g_fds.p + newfd, sizeof(*g_fds.p));
      __fds_unlock();
    }
  }
  if (__isfdkind(oldfd, kFdZip)) {
    __zipos_keep((struct ZiposHandle *)(intptr_t)g_fds.p[oldfd].handle);
    __fds_lock();
    __ensurefds_unlocked(newfd);
    g_fds.p[newfd] = g_fds.p[oldfd];
    __fds_unlock();
  }
}
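
// a sketch of typical usage (the path is illustrative): assets stored
// in the binary's zip archive are reached with ordinary i/o calls on
// the /zip/ prefix, which open() routes to __zipos_open(), e.g.
//
//     int fd = open("/zip/usr/share/data.txt", O_RDONLY);
//     char buf[512];
//     ssize_t got = read(fd, buf, sizeof(buf));  // decompressed bytes
//     close(fd);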

/**
 * Loads compressed file from αcτµαlly pδrταblε εxεcµταblε object store.
 *
 * @param name is a parsed URI, which may be obtained via __zipos_parseuri()
 * @asyncsignalsafe
 */
int __zipos_open(struct ZiposUri *name, int flags) {

  // check if this thread is cancelled
  int rc;
  if (_weaken(pthread_testcancel_np) &&
      (rc = _weaken(pthread_testcancel_np)())) {
    errno = rc;
    return -1;
  }

  // validate api usage
  if ((flags & O_CREAT) ||  //
      (flags & O_TRUNC) ||  //
      (flags & O_ACCMODE) != O_RDONLY) {
    return erofs();
  }

  // get the zipos global singleton
  struct Zipos *zipos;
  if (!(zipos = __zipos_get())) {
    return enoexec();
  }

  // most open() calls come from language runtimes path-searching for
  // assets, and the majority of them will return ENOENT or ENOTDIR.
  // the two sigprocmask() calls below are extremely costly, so, since
  // zipos is a read-only filesystem, we do the lookup first and skip
  // them whenever it fails
  ssize_t cf;
  if ((cf = __zipos_find(zipos, name)) == -1) {
    return -1;
  }
  if (flags & O_EXCL) {
    return eexist();
  }
  if (cf != ZIPOS_SYNTHETIC_DIRECTORY) {
    int mode = GetZipCfileMode(zipos->map + cf);
    if ((flags & O_DIRECTORY) && !S_ISDIR(mode)) {
      return enotdir();
    }
    if (!(mode & 0444)) {
      return eacces();
    }
  }

  // now do the heavy lifting
  BLOCK_SIGNALS;
  rc = __zipos_load(zipos, cf, flags, name);
  ALLOW_SIGNALS;
  return rc;
}
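
// initializes the zipos lock at startup and registers the fork hooks
// described above: lock in the parent before fork, unlock in the
// parent afterwards, and reinitialize in the child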
__attribute__((__constructor__)) static void __zipos_ctor(void) {
  __zipos_wipe();
  pthread_atfork(__zipos_lock, __zipos_unlock, __zipos_wipe);
}