Implement non-greedy tokenizer that tries to maximize token lengths (#242)

* Implement non-greedy tokenizer that tries to maximize token lengths

* Insert a single space in front of the prompt

- this matches the original llama tokenizer behavior

---------

Co-authored-by: Jakub Horak <jakub.horak@ibawizard.net>
Author: thement
Date:   2023-03-17 21:05:58 +01:00
Committed-by: GitHub (GPG key ID: 4AEE18F83AFDEB23; no known key found for this signature in database)
Parent: 4f54609110
Commit: c9f670a177
2 changed files with 45 additions and 27 deletions

@@ -845,6 +845,8 @@ int main(int argc, char ** argv) {
     std::vector<float> logits;
+    // Add a space in front of the first character to match OG llama tokenizer behavior
+    params.prompt.insert(0, 1, ' ');
     // tokenize the prompt
     std::vector<gpt_vocab::id> embd_inp = ::llama_tokenize(vocab, params.prompt, true);