ggml : add new Q4_2 quantization (ARM only) (#1046)

* ggml : Q4_2 ARM

* ggml : add ggml_is_quantized()

* llama : update llama_type_name() with Q4_2 entry

* ggml : speed-up q4_2

- 4 threads: ~100ms -> ~90ms
- 8 threads:  ~55ms -> ~50ms

* ggml : optimize q4_2 using vmlaq_n_f32 + vmulq_n_f32
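The speed-up comes from summing the integer nibble products first and applying the combined block scale with a single scalar multiply-accumulate, which is what `vmlaq_n_f32`/`vmulq_n_f32` express on NEON. A plain-C sketch of that pattern, assuming 16 4-bit values per block with one scale each (the struct and function names here are illustrative, not the ggml ones):

```c
#include <stdint.h>

#define QK 16  // values per block (assumption for this sketch)

typedef struct {
    float   d;        // per-block scale (ggml stores this as fp16; float for brevity)
    uint8_t qs[QK/2]; // two 4-bit quants packed per byte
} block_q4;

// Dot product of two quantized blocks. The integer products are summed
// first, and the combined scale d0*d1 is applied once per block -- the
// scalar analogue of accumulating with vmlaq_n_f32(acc, v, d0*d1).
static float block_dot(const block_q4 *a, const block_q4 *b) {
    int sumi = 0;
    for (int i = 0; i < QK/2; i++) {
        const int a0 = (a->qs[i] & 0x0F) - 8, a1 = (a->qs[i] >> 4) - 8;
        const int b0 = (b->qs[i] & 0x0F) - 8, b1 = (b->qs[i] >> 4) - 8;
        sumi += a0*b0 + a1*b1;
    }
    return (float)sumi * a->d * b->d;  // single scale multiply per block
}
```

Hoisting the scale out of the inner loop removes one vector multiply per element pair, which matches the reported ~10% gain.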
Georgi Gerganov 2023-04-18 23:54:57 +03:00 committed by GitHub
parent 50a8a2af97
commit 77a73403ca
5 changed files with 287 additions and 11 deletions

@@ -72,6 +72,7 @@ extern "C" {
 LLAMA_FTYPE_MOSTLY_Q4_0 = 2, // except 1d tensors
 LLAMA_FTYPE_MOSTLY_Q4_1 = 3, // except 1d tensors
 LLAMA_FTYPE_MOSTLY_Q4_1_SOME_F16 = 4, // tok_embeddings.weight and output.weight are F16
+LLAMA_FTYPE_MOSTLY_Q4_2 = 5, // except 1d tensors
 };
 LLAMA_API struct llama_context_params llama_context_default_params();