lookup : add prompt lookup decoding example (#4484)
* initial commit, going through initializations

* main loop finished, starting to debug

* BUG: generates gibberish/repeating tokens after a while

* kv_cache management

* Added colors to distinguish drafted tokens (--color). Updated README

* lookup : fix token positions in the draft batch

* lookup : use n_draft from CLI params

* lookup : final touches

---------

Co-authored-by: Leon Ericsson <leon.ericsson@icloud.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
parent ba66175132
commit 7082d24cec

7 changed files with 256 additions and 2 deletions
examples/lookup/README.md (new file, 13 additions)
@@ -0,0 +1,13 @@
# llama.cpp/examples/lookup

Demonstration of Prompt Lookup Decoding

https://github.com/apoorvumang/prompt-lookup-decoding
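
For instance, the demo can be run against a local model roughly like this (a hedged sketch: the `lookup` binary name follows the usual llama.cpp example naming, and the model path and prompt are placeholders; `--draft` sets `n_draft`, and `--color` marks the drafted tokens, per the commit message):

```bash
./lookup -m models/7B/ggml-model.gguf --color \
    -p "// Quicksort implementation in C++:" -n 256 --draft 8
```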

The key parameters for lookup decoding are `ngram_min`, `ngram_max` and `n_draft`. The first two set the minimum and maximum length of the n-grams that are searched for in the prompt, and `n_draft` specifies how many subsequent tokens to draft once a match is found. A sketch of the matching logic follows.
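
The following is a minimal C++ sketch of that search, not the code from the example itself; the function name `draft_from_prompt` and the backwards (most recent match first) scan order are illustrative assumptions. It prefers the longest matching n-gram and copies the tokens that followed the match as the draft:

```cpp
#include <cstdint>
#include <vector>

typedef int32_t llama_token; // mirrors the typedef in llama.h

// Search the tokens seen so far for the longest recent n-gram that matches
// the tail of the sequence; return the tokens that followed that earlier
// occurrence as the draft.
static std::vector<llama_token> draft_from_prompt(
        const std::vector<llama_token> & tokens, // prompt + generated tokens
        int ngram_min, int ngram_max, int n_draft) {
    const int n = (int) tokens.size();

    // prefer longer n-grams: a longer match is stronger evidence
    for (int ngram = ngram_max; ngram >= ngram_min; --ngram) {
        if (n < ngram + 1) {
            continue;
        }

        // the n-gram at the very end of the sequence
        const llama_token * tail = tokens.data() + n - ngram;

        // scan backwards so the most recent occurrence wins
        for (int i = n - ngram - 1; i >= 0; --i) {
            bool match = true;
            for (int j = 0; j < ngram; ++j) {
                if (tokens[i + j] != tail[j]) {
                    match = false;
                    break;
                }
            }
            if (!match) {
                continue;
            }

            // draft up to n_draft tokens that followed the match
            std::vector<llama_token> draft;
            for (int k = i + ngram; k < n && (int) draft.size() < n_draft; ++k) {
                draft.push_back(tokens[k]);
            }
            if (!draft.empty()) {
                return draft;
            }
        }
    }

    return {}; // no match found: fall back to normal decoding
}
```

The drafted tokens are then checked by the target model in a single batch, as in speculative decoding generally: tokens from the first mismatch onward are discarded, which keeps the output identical to plain decoding while saving one full forward pass per accepted token.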

More info:

https://github.com/ggerganov/llama.cpp/pull/4484
https://github.com/ggerganov/llama.cpp/issues/4226