cuda : improve text-generation and batched decoding performance (#3776)
* cuda : prints wip
* cuda : new cublas gemm branch for multi-batch quantized src0
* cuda : add F32 sgemm branch
* cuda : fine-tune >= VOLTA params + use MMQ only for small batches
* cuda : remove duplicated cuBLAS GEMM code
* cuda : add CUDA_USE_TENSOR_CORES and GGML_CUDA_FORCE_MMQ macros
* build : add compile option to force use of MMQ kernels
parent 34b2a5e1ee
commit 2f9ec7e271
5 changed files with 125 additions and 19 deletions
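
The bullets above boil down to one behavioral change: whether the custom quantized mat-mul (MMQ) kernels or dequantization plus cuBLAS GEMM is used is now decided per operation from the batch size and the GPU architecture, rather than by a context-wide flag. The sketch below illustrates such a dispatch heuristic; the helper name, the cutoff constant, and the capability check are illustrative assumptions, not the literal code in ggml-cuda.cu.

```cpp
// Sketch of a per-op MMQ vs. cuBLAS decision. The helper `should_use_mmq`
// and the cutoff MMQ_NB_COLS_CUTOFF are hypothetical names/values.
#include <cstdint>

#define MMQ_NB_COLS_CUTOFF 32   // assumed "small batch" threshold (number of src1 columns)

static bool should_use_mmq(int cc, int64_t ne11, bool src0_quantized) {
    if (!src0_quantized) {
        return false;                      // F16/F32 src0: always take the cuBLAS GEMM branch
    }
#ifdef GGML_CUDA_FORCE_MMQ
    return true;                           // build-time override: always use the MMQ kernels
#else
    if (cc < 700) {
        return true;                       // pre-Volta: no FP16 tensor cores, MMQ preferred
    }
    return ne11 <= MMQ_NB_COLS_CUTOFF;     // small batches -> MMQ, large batches -> cuBLAS
#endif
}
```

For single-token text generation the number of src1 columns is 1, so the MMQ path is kept; prompt processing and batched decoding with many columns take the cuBLAS branch, which on >= Volta GPUs can use FP16 tensor cores.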
@@ -5959,8 +5959,6 @@ static int llama_decode_internal(
         }
     }
 
-    ggml_cuda_set_mul_mat_q(cparams.mul_mat_q);
-
     // HACK: ggml-alloc may change the tensor backend when reusing a parent, so force output to be on the CPU here if needed
     if (!lctx.embedding.empty()) {
         embeddings->backend = GGML_BACKEND_CPU;
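
The hunk above drops the context-wide ggml_cuda_set_mul_mat_q(cparams.mul_mat_q) call from llama_decode_internal, since the MMQ-versus-cuBLAS choice is now made per mat-mul inside the CUDA backend. The two macros named in the commit message could then be wired roughly as below; treat this as a sketch of the intent, and note that the build option name LLAMA_CUDA_FORCE_MMQ is an assumption about how the define gets set.

```cpp
// Sketch: unless the build forces the MMQ kernels (e.g. via a hypothetical
// -DLLAMA_CUDA_FORCE_MMQ=ON option that defines GGML_CUDA_FORCE_MMQ),
// allow the cuBLAS/tensor-core path; MMQ remains the default for small batches.
#if !defined(GGML_CUDA_FORCE_MMQ)
#define CUDA_USE_TENSOR_CORES
#endif

// Consumption side (illustrative): even with CUDA_USE_TENSOR_CORES defined,
// only GPUs with FP16 tensor cores actually take the cuBLAS branch.
static bool use_tensor_cores(int compute_capability) {
#ifdef CUDA_USE_TENSOR_CORES
    return compute_capability >= 700;   // >= Volta
#else
    return false;
#endif
}
```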