# llama.cpp/examples/lookahead
Demonstration of lookahead decoding technique:
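For reference, a minimal invocation sketch (an assumption, not taken from this README: it presumes the example builds to a `lookahead` binary and accepts the standard llama.cpp common CLI flags; the model path and prompt are placeholders):

```sh
# Run the lookahead decoding demo against a local GGUF model.
# Paths, prompt, and layer count below are placeholders — adjust for your setup.
./lookahead \
    -m ./models/llama-2-7b/ggml-model-q4_0.gguf \
    -p "// Quicksort implementation in C++:" \
    -n 256 \
    -ngl 99
```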