This change makes Cosmopolitan less opinionated about compiler warnings. The
whole repository had to be cleaned up so that it builds in -Werror -Wall mode.
This lets us benefit from things like strict const checking (see the example
after the list below), and some actual bugs may have been caught along the way.
- Fix unused local variable errors
- Remove yoinks from sigaction() header
- Add nox87 and aarch64 to GitHub Actions
- Fix cosmocc -fportcosmo in linking mode
- It's now possible to build with `make m=llvm o/llvm/libc`
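
For a concrete sense of what strict const checking buys us, here is a minimal
hypothetical example (not code from this repo): under -Werror -Wall, a
const-correctness slip stops being a warning and fails the build.

    /* Hypothetical sketch, not code from the repository. */
    #include <stdio.h>

    static void PrintBanner(const char *msg) {  /* const-correct callee */
      printf("== %s ==\n", msg);
    }

    int main(void) {
      const char *name = "cosmopolitan";  /* points at a string literal */
      /* If PrintBanner took a plain `char *`, passing `name` here would
         trigger -Wdiscarded-qualifiers, which -Werror turns into a hard
         error, so the writable-pointer-to-literal bug is caught at
         compile time instead of at runtime. */
      PrintBanner(name);
      return 0;
    }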
llama.com can now load weights that use the new ggjt v3 file format, which was
introduced a few weeks ago. Note that, unlike llama.cpp, we will keep support
for old file formats in our tool, so you don't need to convert your weights
whenever the upstream project makes breaking changes. Using ggjt v3 does make
AVX2 inference go about 5% faster for me.
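
As a rough sketch of how a loader can keep accepting older formats
(illustrative only, not llama.com's actual code: the per-version reader
functions are hypothetical, and the magic constant is an assumption based on
the ggjt convention), the file's magic and version are read first and the
matching reader is selected instead of rejecting old files:

    #include <stdint.h>
    #include <stdio.h>

    #define GGJT_MAGIC 0x67676a74u  /* 'ggjt' (assumed constant) */

    /* hypothetical per-version readers */
    int LoadGgjtV1(FILE *);
    int LoadGgjtV2(FILE *);
    int LoadGgjtV3(FILE *);

    int LoadWeights(FILE *f) {
      uint32_t magic, version;
      if (fread(&magic, 4, 1, f) != 1) return -1;
      if (fread(&version, 4, 1, f) != 1) return -1;
      if (magic != GGJT_MAGIC) return -1;
      switch (version) {
        case 1: return LoadGgjtV1(f);  /* old formats keep working */
        case 2: return LoadGgjtV2(f);
        case 3: return LoadGgjtV3(f);  /* new ggjt v3 format */
        default: return -1;            /* unknown future version */
      }
    }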
This change gets the radpajama.mk config working, so these targets now build:

    make -j8 o//third_party/radpajama/radpajama.com
    make -j8 o//third_party/radpajama/radpajama-chat.com

The package depends on THIRD_PARTY_GGML, but it's configured to call
ggjt_v1() so that the library will provide the old quantizers. The
ggml_quantize_chunk() API will now dispatch to older quantizers based on the
configured version.
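
Here is a hedged sketch of that dispatch scheme: ggjt_v1() and
ggml_quantize_chunk() are the names mentioned above, but the global variable,
the per-version helpers, and the simplified signature are illustrative
stand-ins rather than the library's actual internals.

    #include <stddef.h>
    #include <stdint.h>

    static int g_ggjt_version = 3;              /* newest format by default */

    void ggjt_v1(void) { g_ggjt_version = 1; }  /* radpajama.mk selects this */

    /* hypothetical per-version quantizers */
    size_t quantize_chunk_v1(int type, const float *src, void *dst,
                             int start, int n, int64_t *hist);
    size_t quantize_chunk_v3(int type, const float *src, void *dst,
                             int start, int n, int64_t *hist);

    size_t ggml_quantize_chunk(int type, const float *src, void *dst,
                               int start, int n, int64_t *hist) {
      if (g_ggjt_version == 1) {  /* old quantizers for ggjt v1 weights */
        return quantize_chunk_v1(type, src, dst, start, n, hist);
      }
      return quantize_chunk_v3(type, src, dst, start, n, hist);
    }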