Commit graph

19 commits

Author SHA1 Message Date
Justine Tunney
96f979dfc5
Rename makefiles to BUILD.mk
This way they appear at the top of directory listings.
2023-11-28 11:21:08 -08:00
Justine Tunney
791f79fcb3
Make improvements
- We now serialize the file descriptor table when spawning / executing
  processes on Windows. This means you can now inherit more stuff than
  just standard i/o. It's needed by bash, which duplicates the console
  to file descriptor #255. We also now do a better job serializing the
  environment variables, so you're less likely to encounter E2BIG when
  using your bash shell. We also no longer coerce environ to uppercase.

- execve() on Windows now remotely controls its parent process to make
  it spawn a replacement for itself. That way it can terminate
  immediately once the spawn succeeds, without having to linger around
  as a shell process for the child's lifetime just to proxy the exit
  code. When the process worker thread running in the parent sees the
  child die, it's given a handle to the new child, which replaces the
  old one in the process table.

- execve() and posix_spawn() on Windows now provide CreateProcess with
  an explicit handle list (see the sketch after this list). This lets
  us remove handle locks, which enables better fork/spawn concurrency
  with seriously correct thread safety. Other codebases like Go use
  the same technique. On the other hand, fork() still favors the
  conventional WIN32 inheritance approach, which can be a little
  messy, but is *controlled* by guaranteeing perfectly clean slates at
  both the spawning and execution boundaries.

- sigset_t is now 64 bits. Having it be 128 bits was a mistake, since
  there's no reason to use that many and it's only supported by
  FreeBSD. By using the system word size, signal mask manipulation on
  Windows goes very fast. Furthermore, @asyncsignalsafe functions have
  been rewritten on Windows to take advantage of signal masking, now
  that it's much more pleasant to use.

- All the overlapped i/o code on Windows has been rewritten for pretty
  good signal and cancelation safety. We're now able to ensure overlap
  data structures are cleaned up so long as you don't longjmp() out of
  a signal handler that interrupted an i/o operation. Latencies are
  also improved thanks to the removal of lots of "busy wait" code.
  Waits should be optimal for everything except poll(), which shall be
  the last and final demon we slay in the win32 i/o horror show.

- getrusage() on Windows is now able to report RUSAGE_CHILDREN as well
  as RUSAGE_SELF, thanks to aggregation in the process manager thread.
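
For reference, the explicit handle list mentioned above is the standard
Win32 technique built on PROC_THREAD_ATTRIBUTE_HANDLE_LIST. A minimal
sketch, with error handling elided; this is not cosmo's actual spawn
code:

    /* Sketch: only the handles named in the attribute list (each of
     * which must also have HANDLE_FLAG_INHERIT set) are inherited by
     * the child, instead of every inheritable handle in the process. */
    #include <windows.h>

    BOOL SpawnWithHandles(wchar_t *cmdline, HANDLE *handles, DWORD count) {
      SIZE_T size = 0;
      InitializeProcThreadAttributeList(NULL, 1, 0, &size);  /* query size */
      LPPROC_THREAD_ATTRIBUTE_LIST attrs =
          (LPPROC_THREAD_ATTRIBUTE_LIST)HeapAlloc(GetProcessHeap(), 0, size);
      InitializeProcThreadAttributeList(attrs, 1, 0, &size);
      UpdateProcThreadAttribute(attrs, 0, PROC_THREAD_ATTRIBUTE_HANDLE_LIST,
                                handles, count * sizeof(HANDLE), NULL, NULL);
      STARTUPINFOEXW si = {0};
      si.StartupInfo.cb = sizeof(si);
      si.lpAttributeList = attrs;
      PROCESS_INFORMATION pi;
      BOOL ok = CreateProcessW(NULL, cmdline, NULL, NULL, TRUE,
                               EXTENDED_STARTUPINFO_PRESENT, NULL, NULL,
                               &si.StartupInfo, &pi);
      DeleteProcThreadAttributeList(attrs);
      HeapFree(GetProcessHeap(), 0, attrs);
      if (ok) { CloseHandle(pi.hThread); CloseHandle(pi.hProcess); }
      return ok;
    }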
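
The getrusage() addition is exercised through the ordinary POSIX
interface; a minimal sketch of a caller (on POSIX systems
RUSAGE_CHILDREN only covers children that have been waited for):

    /* Sketch: RUSAGE_CHILDREN reports the aggregate resource usage of
     * terminated, waited-for child processes. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
      struct rusage self, kids;
      getrusage(RUSAGE_SELF, &self);
      getrusage(RUSAGE_CHILDREN, &kids);
      printf("self user time: %ld.%06lds\n",
             (long)self.ru_utime.tv_sec, (long)self.ru_utime.tv_usec);
      printf("children user time: %ld.%06lds\n",
             (long)kids.ru_utime.tv_sec, (long)kids.ru_utime.tv_usec);
      return 0;
    }
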
2023-10-08 08:59:53 -07:00
Justine Tunney
e11fa30791
Move zipos into runtime package
This way complex runtime features (e.g. ftrace, symbol tables) can
always yoink zipos support. This is important now that apelink.com
automates embedding symbol tables for multiple cpus.
2023-08-11 23:14:02 -07:00
Justine Tunney
d7c79f43ef
Clean up more code
- Found some bugs in LLVM compiler-rt library
- The useless LIBC_STUBS package is now deleted
- Improve the overflow checking story even further
- Get chibicc tests working in MODE=dbg mode again
- The libc/isystem/ headers now have correctly named guards
2023-06-18 01:00:05 -07:00
Justine Tunney
4d629fd424
Fix stack abuse in llama.cc
This change also incorporates improvements for MODE=asan. It's been
confirmed that o/asan/third_party/ggml/llama.com will work.

Fixes #829
2023-06-08 07:12:26 -07:00
Justine Tunney
daf4454a06
Validate privileged code relationships
- Work towards improving non-optimized build support
- Introduce MODE=zero which is -O0 without ASAN/UBSAN
- Use system GCC when ~/.cosmo.mk has USE_SYSTEM_TOOLCHAIN=1
- Have package.com check .privileged code doesn't call non-privileged
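
A rough sketch of the property that last check enforces, assuming
cosmo's convention of placing such code in a .privileged section; the
function names here are hypothetical:

    /* Sketch: functions in the .privileged section should only call
     * other .privileged functions; package.com now flags violations
     * during the build. */
    #define PRIVILEGED __attribute__((__section__(".privileged")))

    void OrdinaryHelper(void);            /* lives in .text */
    PRIVILEGED void PrivilegedHelper(void) {}

    PRIVILEGED void PrivilegedEntry(void) {
      PrivilegedHelper();                 /* ok: privileged -> privileged */
      /* OrdinaryHelper(); */             /* would be flagged by the check */
    }
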
2023-06-08 04:38:06 -07:00
Justine Tunney
eb40cb371d
Get --ftrace working on aarch64
This change implements a new approach to function call logging that's
based on the GCC flag -fpatchable-function-entry. Read the commentary
in build/config.mk to learn how it works.
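
As background, -fpatchable-function-entry pads every function entry
with NOPs that a tracer can later overwrite with a call to its hook;
GCC also exposes the same thing as a per-function attribute. A hedged
sketch with illustrative NOP counts (the real values are configured in
build/config.mk):

    /* Sketch: patchable_function_entry(N, M) emits N NOP instructions,
     * M of them before the function's entry symbol. A runtime tracer
     * such as --ftrace can rewrite that padding into a call to its
     * logging hook. The 2,0 values are illustrative, not what cosmo
     * actually configures. */
    __attribute__((patchable_function_entry(2, 0)))
    int add(int x, int y) {
      return x + y;
    }
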
2023-06-05 23:35:31 -07:00
Justine Tunney
9cc3e37263
Upgrade to Cosmopolitan GCC 11.2.0 for aarch64 2023-06-05 02:07:28 -07:00
Justine Tunney
8fdb31681a
Introduce support for GGJT v3 file format
llama.com can now load weights that use the new file format, which was
introduced a few weeks ago. Note that, unlike llama.cpp, we will keep
support for old file formats in our tool, so you don't need to convert
your weights when the upstream project makes breaking changes. Please
note that using GGJT v3 does make AVX2 inference go 5% faster for me.
2023-06-03 15:46:21 -07:00
Justine Tunney
e7eb0b3070
Make more ML improvements
- Fix UX issues with llama.com
- Do housekeeping on libm code
- Add more vectorization to GGML
- Get GGJT quantizer programs working well
- Have the quantizer keep the output layer as f16c
- Prefetching improves performance by 15% if you use fewer threads
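
The prefetching in the last bullet is the usual software prefetch
pattern; a generic sketch (not the actual GGML kernel), assuming a
simple dot product loop:

    /* Sketch: ask the CPU to start loading data we'll need a few
     * iterations ahead, so memory latency overlaps with the current
     * arithmetic. The prefetch distance of 16 is illustrative. */
    float dot(const float *a, const float *b, int n) {
      float sum = 0.0f;
      for (int i = 0; i < n; i++) {
        if (i + 16 < n) {
          __builtin_prefetch(&a[i + 16], 0, 1);  /* read, low locality */
          __builtin_prefetch(&b[i + 16], 0, 1);
        }
        sum += a[i] * b[i];
      }
      return sum;
    }
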
2023-05-16 08:07:23 -07:00
Justine Tunney
210187cf77
Perform some code cleanup 2023-05-15 16:32:10 -07:00
Justine Tunney
5a4cf9560f
Add support for new GGJT v2 quantizers
This change makes quantized models (e.g. q4_0) go 10% faster on Macs;
however, it doesn't offer much improvement for Intel PC hardware.

This change syncs llama.cpp 699b1ad7fe6f7b9e41d3cb41e61a8cc3ea5fc6b5,
which recently made a breaking change to nearly all its file formats
without any migration path. Since that'll break hundreds upon hundreds
of models on websites like HuggingFace, llama.com will support both
file formats, because llama.com will never ever break the GGJT file
format.
2023-05-13 08:08:32 -07:00
Justine Tunney
80c174d494
Clean up llama.com anti/stop/reverse-prompt code
Example use case for JSON completion:

    $ m=opt
    $ make -j16 m=$m o/$m/third_party/ggml/llama.com
    $ o/$m/third_party/ggml/llama.com -m llama.bin -p '{"key": "life", "val": ' -r '}'
    42}

This provides better control. More sophisticated facilities for
controlling text generation will be provided soon enough.
2023-05-12 08:20:58 -07:00
Justine Tunney
1f6f9e6701
Remove division from matrix multiplication
This change reduces llama.com CPU cycles systemically by 2.5%,
according to the Linux kernel's `perf stat -Bddd` utility.
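
For context, the classic form of this optimization is to hoist the
divide out of the hot loop and multiply by the reciprocal instead; a
generic sketch (not the actual ggml change):

    /* Sketch: one divide per element versus one divide total. Floating
     * point division costs far more cycles than multiplication on most
     * CPUs; results may differ in the last bit of the mantissa. */
    void scale_div(float *x, int n, float d) {
      for (int i = 0; i < n; i++) x[i] = x[i] / d;
    }

    void scale_mul(float *x, int n, float d) {
      const float id = 1.0f / d;          /* compute reciprocal once */
      for (int i = 0; i < n; i++) x[i] = x[i] * id;
    }
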
2023-05-10 21:19:54 -07:00
Justine Tunney
4c093155a3
Get llama.com building as an aarch64 native binary 2023-05-10 04:20:47 -07:00
Justine Tunney
d04430f4ef
Get LIBC_MEM and LIBC_STDIO building with aarch64 2023-05-10 04:20:47 -07:00
Justine Tunney
3dac9f8999
Use Companion AI in llama.com by default 2023-04-30 23:08:15 -07:00
Justine Tunney
b31ba86ace
Introduce prompt caching so prompts load instantly
This change also introduces an ephemeral status line in non-verbose mode
to display a load percentage status when slow operations are happening.
2023-04-28 16:15:26 -07:00
Justine Tunney
e8b43903b2
Import llama.cpp
https://github.com/ggerganov/llama.cpp
0b2da20538d01926b77ea237dd1c930c4d20b686
See third_party/ggml/README.cosmo for changes
2023-04-27 14:37:14 -07:00