Implement non-greedy tokenizer that tries to maximize token lengths (#242)
* Implement non-greedy tokenizer that tries to maximize token lengths
* Insert a single space in front of the prompt, to match the original llama tokenizer behavior

Co-authored-by: Jakub Horak <jakub.horak@ibawizard.net>
parent: 4f54609110
commit: c9f670a177
2 changed files with 45 additions and 27 deletions
main.cpp (+2)
```diff
@@ -845,6 +845,8 @@ int main(int argc, char ** argv) {
     std::vector<float> logits;
 
+    // Add a space in front of the first character to match OG llama tokenizer behavior
+    params.prompt.insert(0, 1, ' ');
     // tokenize the prompt
     std::vector<gpt_vocab::id> embd_inp = ::llama_tokenize(vocab, params.prompt, true);
 
```
|
Loading…
Add table
Add a link
Reference in a new issue