Scale buf_size linearly with n_ctx

This appears to solve https://github.com/ggerganov/llama.cpp/issues/153, where the error "ggml_new_tensor_impl: not enough space in the context's memory pool" is thrown in interactive mode.

At least the out-of-memory error comes from the `ctx0` used here, although I am not familiar enough with the code base to tell whether this is indeed the root cause.
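
Below is a minimal, self-contained sketch of the idea (not the actual llama.cpp code): `hparams_t` here is a stand-in for the real hyperparameter struct, and the 1 MiB-per-context-token factor simply mirrors the patch; whether that heuristic is tight is exactly what the removed TODO questioned.

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for llama.cpp's hyperparameter struct (illustrative only). */
    struct hparams_t { int n_ctx; };

    int main(void) {
        struct hparams_t hparams = { 2048 };

        /* Old approach: fixed 512 MiB pool; tensor allocations from ctx0
         * can exhaust it at large contexts in interactive mode. */
        size_t fixed_size  = 512u*1024*1024;

        /* Patched approach: grow the pool linearly with the context size.
         * The (size_t) cast keeps the multiply out of 32-bit int range. */
        size_t scaled_size = (size_t)hparams.n_ctx*1024*1024;

        printf("fixed: %zu MiB, scaled: %zu MiB\n",
               fixed_size/(1024*1024), scaled_size/(1024*1024));

        void *buf = malloc(scaled_size);
        if (!buf) { perror("malloc"); return 1; }
        /* ... build the ggml context from buf ... */
        free(buf);
        return 0;
    }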
Author: hx507
Date: 2023-03-17 05:11:49 +08:00 (committed by GitHub)
Parent: 721311070e
Commit: 7b8858415e

@@ -549,9 +549,7 @@ bool llama_eval(
     const int d_key = n_embd/n_head;
 
-    // TODO: check if this size scales with n_ctx linearly and remove constant. somehow I feel it wasn't the case
-    // static size_t buf_size = hparams.n_ctx*1024*1024;
-    static size_t buf_size = 512u*1024*1024;
+    static size_t buf_size = (size_t)hparams.n_ctx*1024*1024;
     static void * buf = malloc(buf_size);
     if (mem_per_token > 0 && mem_per_token*N > buf_size) {
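
One detail worth noting (my reading, not stated in the commit message): the surviving line casts to `(size_t)` before multiplying, while the removed commented-out variant did not. With a 32-bit `int` `n_ctx`, `n_ctx*1024*1024` already overflows signed arithmetic at `n_ctx = 2048` (2^31 > INT_MAX), so the cast is what keeps the expression well-defined:

    #include <stdio.h>

    int main(void) {
        int n_ctx = 2048;

        /* Without the cast, the multiply is done in (signed) int and
         * overflows for n_ctx >= 2048 when int is 32 bits -- undefined
         * behavior in C:
         *     size_t bad = n_ctx*1024*1024;
         */

        /* With the cast, the arithmetic is carried out in size_t: */
        size_t good = (size_t)n_ctx*1024*1024;  /* 2147483648 */

        printf("%zu\n", good);
        return 0;
    }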