Commit graph

13 commits

Author SHA1 Message Date
Justine Tunney
f7ae50462a
Make improvements
- Fix unused local variable errors
- Remove yoinks from sigaction() header
- Add nox87 and aarch64 to GitHub Actions
- Fix cosmocc -fportcosmo in linking mode
- It's now possible to build with `make m=llvm o/llvm/libc`
2023-07-10 04:35:14 -07:00
Justine Tunney
fe044e22cc
Switch public headers to getopt_long() entirely
Cosmopolitan's getopt() is now redefined as __getopt().
2023-07-02 19:57:43 -07:00
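
Since the public headers now route option parsing through getopt_long(), here is a
minimal sketch of the standard getopt_long() calling pattern for context. This is the
ordinary POSIX/GNU interface, not code from the Cosmopolitan tree, and the option
names are made up for illustration.

    // Generic getopt_long() usage sketch; option names are hypothetical.
    #include <getopt.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char *argv[]) {
      static const struct option kLongOpts[] = {
          {"verbose", no_argument, 0, 'v'},
          {"output", required_argument, 0, 'o'},
          {0, 0, 0, 0},
      };
      int opt;
      while ((opt = getopt_long(argc, argv, "vo:", kLongOpts, 0)) != -1) {
        switch (opt) {
          case 'v':
            puts("verbose mode");
            break;
          case 'o':
            printf("output file: %s\n", optarg);
            break;
          default:
            exit(1);
        }
      }
      return 0;
    }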
Justine Tunney
0409096658
Get us closer to building busybox
This change undefines __linux__ and adds APIs like clock_settime(). The
gosh darned getopt_long() API has been reintroduced, thanks to OpenBSD.
2023-06-18 04:13:45 -07:00
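
For context, clock_settime() is the standard POSIX interface sketched below. This is a
generic example of the portable call, not code from this change; the timestamp is an
arbitrary example value, and setting CLOCK_REALTIME normally requires root privileges.

    // Ordinary POSIX clock_settime() usage; needs CAP_SYS_TIME / root on Linux.
    #include <stdio.h>
    #include <time.h>

    int main(void) {
      struct timespec ts = {.tv_sec = 1687000000, .tv_nsec = 0};  // example value
      if (clock_settime(CLOCK_REALTIME, &ts) == -1) {
        perror("clock_settime");
        return 1;
      }
      return 0;
    }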
Justine Tunney
d7c79f43ef
Clean up more code
- Found some bugs in the LLVM compiler-rt library
- The useless LIBC_STUBS package is now deleted
- Improve the overflow checking story even further
- Get chibicc tests working in MODE=dbg mode again
- The libc/isystem/ headers now have correctly named guards
2023-06-18 01:00:05 -07:00
Justine Tunney
e6b7c16a53
Make changes needed for new demo
2023-06-15 23:22:49 -07:00
Justine Tunney
8fdb31681a
Introduce support for GGJT v3 file format
llama.com can now load weights that use the new file format, which was
introduced a few weeks ago. Unlike llama.cpp, we will keep support for
old file formats in our tool, so you don't need to convert your weights
when the upstream project makes breaking changes. That said, GGJT v3
does make AVX2 inference about 5% faster for me.
2023-06-03 15:46:21 -07:00
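
As a rough illustration of how a loader can keep accepting older containers, the
sketch below branches on the leading magic and version words of the weights file. The
magic constants reflect my reading of llama.cpp's header layout, and the control flow
is a placeholder, not the actual llama.com loader.

    // Hypothetical sketch: dispatch on the container's magic/version words so
    // older weight files keep loading. Constants follow my understanding of
    // llama.cpp's header layout; the branch bodies are placeholders.
    #include <stdint.h>
    #include <stdio.h>

    #define MAGIC_GGML 0x67676d6c  // "ggml": unversioned legacy format
    #define MAGIC_GGMF 0x67676d66  // "ggmf": versioned, no tensor alignment
    #define MAGIC_GGJT 0x67676a74  // "ggjt": versioned, mmap-friendly

    static int load_weights(FILE *f) {
      uint32_t magic, version = 0;
      if (fread(&magic, sizeof(magic), 1, f) != 1) return -1;
      if (magic != MAGIC_GGML &&
          fread(&version, sizeof(version), 1, f) != 1) return -1;
      if (magic == MAGIC_GGJT && version == 3) {
        // handle the new quantization layouts introduced by GGJT v3
      } else {
        // fall back to the older loaders instead of rejecting the file
      }
      return 0;
    }

    int main(int argc, char *argv[]) {
      FILE *f = argc > 1 ? fopen(argv[1], "rb") : NULL;
      if (!f) return 1;
      int rc = load_weights(f);
      fclose(f);
      return rc ? 1 : 0;
    }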
Justine Tunney
e7eb0b3070
Make more ML improvements
- Fix UX issues with llama.com
- Do housekeeping on libm code
- Add more vectorization to GGML
- Get GGJT quantizer programs working well
- Have the quantizer keep the output layer as f16c
- Prefetching improves performance by 15% if you use fewer threads (see the sketch below)
2023-05-16 08:07:23 -07:00
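
The prefetching bullet refers to software prefetch. The sketch below shows the generic
pattern using GCC/Clang's __builtin_prefetch with a made-up prefetch distance; it is an
illustration of the technique, not the actual GGML kernel code.

    // Generic software-prefetch pattern: fetch a cache line a fixed distance
    // ahead of the element currently being summed. Distance (16) is arbitrary.
    #include <stddef.h>

    float sum_with_prefetch(const float *x, size_t n) {
      float sum = 0;
      for (size_t i = 0; i < n; ++i) {
        if (i + 16 < n) __builtin_prefetch(&x[i + 16], 0, 3);
        sum += x[i];
      }
      return sum;
    }

    int main(void) {
      float v[256];
      for (int i = 0; i < 256; ++i) v[i] = i;
      return sum_with_prefetch(v, 256) > 0 ? 0 : 1;
    }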
Justine Tunney
cc1732bc42
Make AARCH64 harder, better, faster, stronger
- Perform some housekeeping on scalar math function code
- Import ARM's Optimized Routines for SIMD string processing
- Upgrade to latest Chromium zlib and enable more SIMD optimizations
2023-05-15 02:15:34 -07:00
Ariel Núñez
91791e9f38
Started removing features from RedPajama to make it easier for beginners to understand (#817)
2023-05-14 09:16:22 -07:00
Justine Tunney
89d1fad7ee
Enable crash reports for radpajama executables
2023-05-13 21:16:03 -07:00
Justine Tunney
296ee3ec58
Make some other fixes to radpajama build config
2023-05-13 21:09:28 -07:00
Justine Tunney
282dd8e7b7
Get radpajama to build
    make -j8 o//third_party/radpajama/radpajama.com
    make -j8 o//third_party/radpajama/radpajama-chat.com

This change gets the radpajama.mk config working. This package depends
on THIRD_PARTY_GGML but it's configured to call ggjt_v1(), so that the
library will provide the old quantizers. The ggml_quantize_chunk() API
will now dispatch to older quantizers based on the configured version.
2023-05-13 20:44:36 -07:00
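
To illustrate the dispatch described above: a global format version is toggled by
ggjt_v1(), and the quantize entry point routes to the old quantizers when it is set.
Everything below apart from the ggjt_v1() name is a made-up placeholder standing in
for the real GGML symbols and signatures, not code from the tree.

    // Illustrative sketch of version-based quantizer dispatch. quantize_chunk()
    // stands in for ggml_quantize_chunk(); the v1/v3 helpers are placeholders.
    #include <stddef.h>
    #include <stdio.h>

    static int g_ggjt_version = 3;              // default: newest format

    void ggjt_v1(void) { g_ggjt_version = 1; }  // radpajama.mk opts into v1

    static size_t quantize_v1(const float *src, void *dst, size_t n) {
      (void)src, (void)dst;                     // placeholder for legacy layout
      return n / 2;
    }

    static size_t quantize_v3(const float *src, void *dst, size_t n) {
      (void)src, (void)dst;                     // placeholder for current layout
      return n / 2;
    }

    size_t quantize_chunk(const float *src, void *dst, size_t n) {
      return g_ggjt_version == 1 ? quantize_v1(src, dst, n)
                                 : quantize_v3(src, dst, n);
    }

    int main(void) {
      float x[8] = {0};
      char out[16];
      ggjt_v1();                                // select the old quantizers
      printf("%zu bytes\n", quantize_chunk(x, out, sizeof(x) / sizeof(*x)));
    }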
Ariel Núñez
b3e3359d22
Import radpajama (a redpajama.cpp fork) (#814)
This is the relevant commit: bfa6466199

Model download link:
https://huggingface.co/ceonlabs/radpajama/tree/main
2023-05-11 07:12:08 -07:00