BERT tokenizer fixes (#6498)

Key changes:
* BERT conversion: fix abuse of LlamaHfVocab, do not set BOS or EOS
* Nomic Embed conversion: pad vocab instead of slicing embedding tensor
* llama_tokenize: handle added special tokens like HF does
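The third change above, parsing added special tokens the way Hugging Face does, amounts to splitting the raw text on registered special-token strings before ordinary tokenization, so those substrings map directly to their ids instead of being tokenized as plain text. The following is a toy sketch of that splitting step, not llama.cpp's actual implementation; the `special_token` struct, function name, and token ids are invented for illustration.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Hypothetical registry entry for an added special token such as "[SEP]".
struct special_token {
    std::string text;
    int         id;
};

// Split `text` into (fragment, id) pieces. id == -1 marks ordinary text
// that still needs regular tokenization; other ids are emitted directly.
static std::vector<std::pair<std::string, int>> split_on_special(
        const std::string & text,
        const std::vector<special_token> & specials) {
    std::vector<std::pair<std::string, int>> out;
    std::size_t pos = 0;
    while (pos < text.size()) {
        // find the earliest occurrence of any special token at or after pos
        std::size_t best = std::string::npos;
        const special_token * hit = nullptr;
        for (const auto & st : specials) {
            std::size_t p = text.find(st.text, pos);
            if (p != std::string::npos && p < best) {
                best = p;
                hit  = &st;
            }
        }
        if (!hit) {
            // no more special tokens: the rest is ordinary text
            out.push_back({text.substr(pos), -1});
            break;
        }
        if (best > pos) {
            out.push_back({text.substr(pos, best - pos), -1});
        }
        out.push_back({hit->text, hit->id});
        pos = best + hit->text.size();
    }
    return out;
}
```

With `parse_special == false`, the tokenizer would skip this step entirely and treat the whole string as ordinary text, which is the safer default for untrusted user input.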
Jared Van Bortel 2024-04-09 13:44:08 -04:00 committed by GitHub
parent c4a3a4ff47
commit 1b67731e18
GPG key ID: B5690EEEBB952194
20 changed files with 221 additions and 194 deletions
@@ -223,14 +223,14 @@ void llama_batch_add(
 std::vector<llama_token> llama_tokenize(
   const struct llama_context * ctx,
   const std::string & text,
-                  bool   add_bos,
-                  bool   special = false);
+                  bool   add_special,
+                  bool   parse_special = false);

 std::vector<llama_token> llama_tokenize(
   const struct llama_model * model,
   const std::string & text,
-                  bool   add_bos,
-                  bool   special = false);
+                  bool   add_special,
+                  bool   parse_special = false);

 // tokenizes a token into a piece
 // should work similar to Python's `tokenizer.id_to_piece`
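The comment in the context lines above refers to an id-to-piece lookup: mapping a token id back to its surface string via the vocabulary. A minimal sketch of that idea, with an invented vocab table and function name rather than llama.cpp's actual `llama_token_to_piece`:

```cpp
#include <string>
#include <unordered_map>

// Toy id-to-piece lookup: return the surface string for a token id,
// or an empty string for unknown ids. The vocab contents here are
// invented for the example.
static std::string id_to_piece(
        const std::unordered_map<int, std::string> & vocab, int id) {
    auto it = vocab.find(id);
    return it != vocab.end() ? it->second : "";
}
```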