cosmopolitan/libc/runtime/zipos-read.c

/*-*- mode:c;indent-tabs-mode:nil;c-basic-offset:2;tab-width:8;coding:utf-8 -*-│
vi: set et ft=c ts=2 sts=2 sw=2 fenc=utf-8 :vi
Copyright 2020 Justine Alexandra Roberts Tunney
Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted, provided that the
above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL
WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE
AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL
DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR
PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
*/
#include "libc/assert.h"
#include "libc/calls/struct/iovec.h"
#include "libc/intrin/atomic.h"
#include "libc/intrin/likely.h"
#include "libc/limits.h"
#include "libc/runtime/zipos.internal.h"
#include "libc/stdio/sysparam.h"
#include "libc/str/str.h"
#include "libc/sysv/consts/s.h"
#include "libc/sysv/errfuns.h"
#include "libc/thread/tls.h"
#include "libc/zip.h"
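
/* NOTE: h->pos doubles as a lock (see #973): a stateful read takes
 * ownership of the shared file position by compare-and-swapping the
 * SIZE_MAX sentinel into h->pos, so only one read is in flight at a
 * time, and the release store of the final position hands it back.
 * This is a somewhat stronger guarantee than POSIX requires: reads
 * and seeks may be arbitrarily ordered between threads, but each one
 * happens in full and leaves the fd in a consistent state. Seeks just
 * update pos in one go, rerunning if it changed in the meantime. */
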
static ssize_t __zipos_read_impl(struct ZiposHandle *h, const struct iovec *iov,
                                 size_t iovlen, ssize_t opt_offset) {
  int i;
  int64_t b, x, y, start_pos;
  // reading a directory, synthetic or real, is an error
  if (h->cfile == ZIPOS_SYNTHETIC_DIRECTORY ||
      S_ISDIR(GetZipCfileMode(h->zipos->map + h->cfile)))
    return eisdir();
  if (opt_offset == -1) {
  Restart:
    // lock the shared position: spin until we can swap the SIZE_MAX
    // sentinel into pos, so only one read is in flight at a time
    start_pos = atomic_load_explicit(&h->pos, memory_order_relaxed);
    do {
      if (UNLIKELY(start_pos == SIZE_MAX))
        goto Restart;  // another read currently holds the position
    } while (!LIKELY(atomic_compare_exchange_weak_explicit(
        &h->pos, &start_pos, SIZE_MAX, memory_order_acquire,
        memory_order_relaxed)));
    x = y = start_pos;
  } else {
    x = y = opt_offset;
  }
  // scatter bytes from the mapped zip store into the caller's
  // buffers, clamping each copy so it never runs past h->size
  for (i = 0; i < iovlen && y < h->size; ++i, y += b) {
    b = MIN(iov[i].iov_len, h->size - y);
    if (b)
      memcpy(iov[i].iov_base, h->mem + y, b);
  }
  if (opt_offset == -1) {
    unassert(y != SIZE_MAX);
    // publishing the final position also releases the pos lock
    atomic_store_explicit(&h->pos, y, memory_order_release);
  }
  return y - x;
}
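
/* Example interleaving of two stateful reads (illustrative), assuming
 * the shared position starts at 100 and thread A copies 50 bytes:
 *
 *   A: CAS pos 100 → SIZE_MAX succeeds (acquire; A owns the read)
 *   B: loads SIZE_MAX and spins at Restart
 *   A: copies 50 bytes, then stores pos = 150 (release)
 *   B: CAS pos 150 → SIZE_MAX succeeds; B reads from offset 150
 *
 * Concurrent reads thus serialize rather than tearing each other. */
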
/**
 * Reads data from zip store object.
 *
 * @param h is the zip store handle
 * @param iov is a scatter array of output buffers
 * @param iovlen is the number of entries in iov
 * @param opt_offset is a pread()-style absolute offset, or -1 to read
 *     from (and advance) the handle's shared file position
 * @return [1..size] bytes on success, 0 on EOF, or -1 w/ errno; with
 *     exception of size==0, in which case return zero means no error
 * @asyncsignalsafe
 */
ssize_t __zipos_read(struct ZiposHandle *h, const struct iovec *iov,
                     size_t iovlen, ssize_t opt_offset) {
  unassert(opt_offset >= 0 || opt_offset == -1);
  return __zipos_read_impl(h, iov, iovlen, opt_offset);
}
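
/* Usage sketch (illustrative; this helper is zipos-internal). Assuming
 * a struct ZiposHandle *h obtained from the zipos layer, -1 requests a
 * stateful read that advances the shared position, while a nonnegative
 * offset gives pread()-style semantics:
 *
 *   char hdr[16], body[496];
 *   struct iovec iov[2] = {{hdr, sizeof(hdr)}, {body, sizeof(body)}};
 *   ssize_t rc = __zipos_read(h, iov, 2, -1);  // reads at h->pos
 *   ssize_t rc2 = __zipos_read(h, iov, 2, 0);  // reads from offset 0
 */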