llama : cleanup unused mmq flags (#5772)

* cleanup unused --no-mul-mat-q, -nommq, -mmq, --mul-mat-q, mul_mat_q

* remove mul_mat_q from compare-llama-bench and usage

* update llama-bench

---------

Co-authored-by: slaren <slarengh@gmail.com>
Pierrick Hymbert 2024-03-01 12:39:06 +01:00 committed by GitHub
parent 9600d59e01
commit 3ab8b3a92e
9 changed files with 10 additions and 56 deletions


@@ -255,7 +255,6 @@ extern "C" {
         enum ggml_type type_v; // data type for V cache
         // Keep the booleans together to avoid misalignment during copy-by-value.
-        bool mul_mat_q;   // if true, use experimental mul_mat_q kernels (DEPRECATED - always true)
         bool logits_all;  // the llama_eval() call computes all logits, not just the last one (DEPRECATED - set llama_batch.logits instead)
         bool embedding;   // embedding mode only
         bool offload_kqv; // whether to offload the KQV ops (including the KV cache) to GPU
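For downstream callers, the practical effect is that any assignment to the removed field simply goes away; the mul_mat_q kernels are used unconditionally. A minimal sketch of the caller-side change, assuming the llama.cpp C API of this period (llama_context_default_params() and the offload_kqv field shown in the hunk above; other details are illustrative):

    #include "llama.h"

    int main(void) {
        struct llama_context_params params = llama_context_default_params();
        // Before this commit, callers could write:
        //     params.mul_mat_q = true;   // deprecated, was already always true
        // After this commit the field no longer exists, so the assignment
        // is deleted; no replacement is needed.
        params.offload_kqv = true;        // the remaining booleans are unchanged
        (void)params;
        return 0;
    }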