Mirror of https://github.com/jart/cosmopolitan.git
Add support for new GGJT v2 quantizers
This change makes quantized models (e.g. q4_0) go 10% faster on Macs, but offers little improvement on Intel PC hardware. It syncs llama.cpp 699b1ad7fe6f7b9e41d3cb41e61a8cc3ea5fc6b5, which recently made a breaking change to nearly all of its file formats without providing any migration path. Since that breaks hundreds upon hundreds of models hosted on websites like HuggingFace, llama.com will support both file formats, because llama.com will never break the GGJT file format.
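Supporting both formats at load time implies sniffing the file header before choosing a dequantization path. Below is a minimal sketch of that idea in C, assuming the GGJT layout of a 32-bit magic word ('ggjt', 0x67676a74) followed by a 32-bit version word; the function and enum names are illustrative, not llama.com's actual symbols.

    #include <stdint.h>
    #include <stdio.h>

    /* 'ggjt' magic read as a little-endian uint32 (as used by llama.cpp). */
    #define GGJT_MAGIC 0x67676a74u

    /* Hypothetical result codes for this sketch. */
    enum ggjt_version { GGJT_UNKNOWN = 0, GGJT_V1 = 1, GGJT_V2 = 2 };

    /* Reads the magic and version words from the start of a model file so
       the caller can pick the matching quantizer block layout. */
    static enum ggjt_version sniff_ggjt_version(FILE *f) {
      uint32_t magic, version;
      if (fread(&magic, sizeof(magic), 1, f) != 1) return GGJT_UNKNOWN;
      if (magic != GGJT_MAGIC) return GGJT_UNKNOWN;
      if (fread(&version, sizeof(version), 1, f) != 1) return GGJT_UNKNOWN;
      switch (version) {
        case 1: return GGJT_V1;  /* original quantizer block layouts */
        case 2: return GGJT_V2;  /* new upstream quantizer block layouts */
        default: return GGJT_UNKNOWN;
      }
    }

    int main(int argc, char **argv) {
      if (argc < 2) return 1;
      FILE *f = fopen(argv[1], "rb");
      if (!f) return 1;
      enum ggjt_version v = sniff_ggjt_version(f);
      fclose(f);
      printf("GGJT version: %d\n", (int)v);
      return 0;
    }

With a dispatch like this, v1 files keep working with the old block layouts while v2 files use the new ones, which is the compatibility guarantee the commit message describes.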
parent ba49e86e20
commit 5a4cf9560f
24 changed files with 4074 additions and 1805 deletions
@@ -47,6 +47,8 @@
    "__AES__"
    "__AVX__"
    "__AVX2__"
    "__AVX512F__"
    "__AVXVNNI__"
    "__ABM__"
    "__BMI__"
    "__BMI2__"