Fix memory bug in grammar parser (#7194)

The llama.cpp grammar parser had a bug where a string missing its
closing quotation mark would cause the parser to crash. Anyone running
a server on a public endpoint is advised to upgrade. To reproduce this
bug:

    ./llamafile -m foo.gguf -p bar --grammar 'root::="'
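
The crash presumably comes from the parser scanning past the end of the
grammar text when the closing quote never arrives, which would be the
memory error the commit title refers to. As a rough sketch of the kind
of end-of-input guard that prevents this (illustrative code only, not
the actual llama.cpp parser; parse_quoted and the other names here are
made up):

    #include <cstdio>
    #include <stdexcept>

    // Illustrative only: scan a double-quoted literal and fail loudly when
    // the input ends before the closing quote, instead of reading past the
    // end of the buffer.
    static const char * parse_quoted(const char * pos) {
        if (*pos != '"') {
            throw std::runtime_error("expected opening quote");
        }
        ++pos;
        while (*pos != '"') {
            if (*pos == '\0') {
                throw std::runtime_error("unterminated string in grammar");
            }
            ++pos;
        }
        return pos + 1;  // position just past the closing quote
    }

    int main() {
        const char * grammar = "\"";  // same shape as the crashing input
        try {
            parse_quoted(grammar);
        } catch (const std::runtime_error & e) {
            fprintf(stderr, "grammar parse error: %s\n", e.what());
            return 1;  // clean failure instead of a crash
        }
    }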

Credit for discovering and reporting this issue goes to Eclypsium
Security Researcher Richard Johnson <Richard.johnson@eclypsium.com>.
Justine Tunney 2024-05-10 07:01:08 -04:00 committed by GitHub
parent f89fe2732c
commit 4e3880978f
4 changed files with 21 additions and 5 deletions


@@ -523,6 +523,10 @@ int main(int argc, char ** argv) {
     }
     struct llama_sampling_context * ctx_sampling = llama_sampling_init(sparams);
+    if (!ctx_sampling) {
+        fprintf(stderr, "%s: failed to initialize sampling subsystem\n", __func__);
+        exit(1);
+    }
     while ((n_remain != 0 && !is_antiprompt) || params.interactive) {
         // predict
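
The added null check presumably pairs with the parser-side change: once
llama_sampling_init can report failure (for example when the grammar
does not parse), main() now prints an error and exits cleanly instead of
continuing with a null sampling context.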