gguf : fix resource leaks (#6061)

There are several places where a gguf context is allocated. A call to gguf_free
is missing in some error paths. Also, on Linux, llama-bench was missing an
fclose.
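
The fix follows a simple pattern: a context returned by gguf_init_from_file must be released with gguf_free on every exit path, including early returns taken on error. Below is a minimal sketch of that pattern in plain C; the loader name, the key name, and the validation step are illustrative only, while gguf_init_from_file, gguf_free, gguf_find_key, and ggml_free are the real ggml API.

#include <stdbool.h>
#include <stddef.h>
#include "ggml.h" // the gguf API is declared here in this tree

// Illustrative only: shows the cleanup pattern the commit enforces.
static bool load_example_checkpoint(const char * filename) {
    struct ggml_context * meta_ctx = NULL;
    struct gguf_init_params params = {
        /*.no_alloc =*/ false,
        /*.ctx      =*/ &meta_ctx,
    };

    struct gguf_context * fctx = gguf_init_from_file(filename, params);
    if (fctx == NULL) {
        return false; // nothing was allocated, so nothing to free
    }

    // hypothetical validation step; a missing key is an error path
    if (gguf_find_key(fctx, "example.required_key") < 0) {
        gguf_free(fctx);     // without this call the context leaks
        ggml_free(meta_ctx);
        return false;
    }

    // ... read metadata / tensors here ...

    gguf_free(fctx);         // the success path frees the context as well
    ggml_free(meta_ctx);
    return true;
}

The llama-bench part of the change is the same idea applied to a FILE handle: a stream opened with fopen needs a matching fclose before the function returns.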
Author: Steve Grubb
Date: 2024-03-14 14:29:32 -04:00
Committed by: GitHub
Parent: 727107707a
Commit: 6e0438da3c
4 changed files with 7 additions and 0 deletions


@@ -711,6 +711,7 @@ static bool load_checkpoint_file(const char * filename, struct my_llama_model *
     load_checkpoint_gguf(fctx, f_ggml_ctx, model, train);
+    gguf_free(fctx);
     return true;
 }