llama : add support for Tekken pre-tokenizer (#8579)
* llama : Added support for Tekken pre-tokenizer (#8577)

  Removed unneeded `vocab.tokenizer_clean_spaces` assignment

* llama : fix order of pre-tokenizers

* Tekken pre-tokenizer no longer uses clean_up_tokenization_spaces

* Updated chkhsh for Tekken tokenizer

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
This commit is contained in:
parent
69b9945b44
commit
940362224d
4 changed files with 18 additions and 0 deletions
@@ -593,6 +593,9 @@ class Model:
         if chkhsh == "b53802fb28e26d645c3a310b34bfe07da813026ec7c7716883404d5e0f8b1901":
             # ref: https://huggingface.co/core42/jais-13b
             res = "jais"
+        if chkhsh == "63b97e4253352e6f357cc59ea5b583e3a680eaeaf2632188c2b952de2588485e":
+            # ref: https://huggingface.co/mistralai/Mistral-Nemo-Base-2407
+            res = "tekken"

         if res is None:
             logger.warning("\n")
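The `chkhsh` values compared above are checksums over the token IDs a tokenizer produces for a fixed probe text, so two models whose pre-tokenizers split text identically map to the same `res` identifier. A minimal sketch of how such a checksum can be computed (the function name and the sample token IDs are illustrative, not part of the commit):

```python
import hashlib

def pre_tokenizer_checksum(token_ids: list[int]) -> str:
    """Return a SHA-256 hex digest of a tokenizer's output on a probe string.

    The hash is taken over the stringified list of token IDs, so any change
    in how the pre-tokenizer splits the probe text changes the digest.
    """
    return hashlib.sha256(str(token_ids).encode()).hexdigest()

# Hypothetical token IDs, standing in for tokenizer.encode(probe_text).
digest = pre_tokenizer_checksum([1, 2, 3])
print(digest)  # a 64-character hex string
```

In the converter, a new tokenizer that produces an unrecognized digest falls through to the `res is None` warning, prompting the developer to register the model's hash, as this commit does for Tekken.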