cuda : improve text-generation and batched decoding performance (#3776)

* cuda : prints wip

* cuda : new cublas gemm branch for multi-batch quantized src0

* cuda : add F32 sgemm branch

* cuda : fine-tune >= VOLTA params + use MMQ only for small batches

* cuda : remove duplicated cuBLAS GEMM code

* cuda : add CUDA_USE_TENSOR_CORES and GGML_CUDA_FORCE_MMQ macros

* build : add compile option to force use of MMQ kernels
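The last three bullets describe the new dispatch policy: quantized mul_mat uses the MMQ kernels only for small batches, larger batches fall back to a cuBLAS GEMM branch (dequantize + GEMM) that can use tensor cores on >= Volta GPUs, and the new build option can force MMQ regardless. Below is a minimal C++ sketch of that decision, assuming a hypothetical should_use_mmq() helper and an illustrative MMQ_MAX_BATCH_SIZE cutoff of 32; the actual logic lives in ggml-cuda.cu and may differ in detail.

#include <cstdint>

// Illustrative cutoff (assumption): batches up to this size stay on the quantized MMQ kernels.
#define MMQ_MAX_BATCH_SIZE 32

// Hypothetical helper: choose between the MMQ kernels and the cuBLAS GEMM branch
// for a mul_mat with quantized src0, given the batch size (src1 columns) and the
// device compute capability.
static bool should_use_mmq(bool src0_is_quantized, int64_t n_batch, int compute_capability) {
    if (!src0_is_quantized) {
        return false; // F32/F16 src0 goes straight to the cuBLAS sgemm/gemm branch
    }
#if defined(GGML_CUDA_FORCE_MMQ)
    (void) n_batch; (void) compute_capability;
    return true;      // build-time override: always use the MMQ kernels
#else
#if defined(CUDA_USE_TENSOR_CORES)
    const bool tensor_cores = compute_capability >= 700; // >= Volta
#else
    const bool tensor_cores = false;
#endif
    // Tensor-core GPUs switch to dequantize + cuBLAS GEMM once the batch grows;
    // single-token and small-batch decoding stays on the MMQ kernels.
    return !(tensor_cores && n_batch > MMQ_MAX_BATCH_SIZE);
#endif
}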
Georgi Gerganov 2023-10-27 17:01:23 +03:00 committed by GitHub
parent 34b2a5e1ee
commit 2f9ec7e271
5 changed files with 125 additions and 19 deletions

@@ -5959,8 +5959,6 @@ static int llama_decode_internal(
}
}
ggml_cuda_set_mul_mat_q(cparams.mul_mat_q);
// HACK: ggml-alloc may change the tensor backend when reusing a parent, so force output to be on the CPU here if needed
if (!lctx.embedding.empty()) {
embeddings->backend = GGML_BACKEND_CPU;