llama : support batched embeddings (#5466)

* batched embedding: pool outputs by sequence id; updated embedding example (a sketch of the pooling idea follows this message)

* bring back non-causal attention

* embd : minor improvements

* llama : minor

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
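
As a rough illustration of the first change above, here is a minimal sketch (NumPy, not the actual llama.cpp C++ implementation) of what "pool outputs by sequence id" means: a decoded batch may interleave tokens from several sequences, and the per-token embeddings are reduced to one vector per sequence. Mean pooling is assumed here; the function name and shapes are illustrative.

import numpy as np

def pool_by_seq_id(token_embd: np.ndarray, seq_ids: np.ndarray) -> dict:
    # token_embd: (n_tokens, n_embd) per-token outputs of one decoded batch
    # seq_ids:    (n_tokens,) sequence id assigned to each token of the batch
    pooled = {}
    for sid in np.unique(seq_ids):
        # mean-pool all token embeddings belonging to this sequence
        pooled[int(sid)] = token_embd[seq_ids == sid].mean(axis=0)
    return pooled

# Batch mixing two sequences: tokens 0, 1, 3 belong to seq 0; tokens 2, 4 to seq 1.
embd = np.random.rand(5, 4).astype(np.float32)
print(pool_by_seq_id(embd, np.array([0, 0, 1, 0, 1])))
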
commit 03bf161eb6 (parent ad014bba97)
Author: Douglas Hanley (committed by GitHub)
Date: 2024-02-13 06:06:58 -06:00
6 changed files with 163 additions and 54 deletions

@@ -1648,6 +1648,7 @@ class BertModel(Model):
         self.gguf_writer.add_head_count(self.hparams["num_attention_heads"])
         self.gguf_writer.add_layer_norm_eps(self.hparams["layer_norm_eps"])
         self.gguf_writer.add_causal_attention(False)
+        self.gguf_writer.add_pooling_layer(True)
         self.gguf_writer.add_file_type(self.ftype)
 
     def set_vocab(self):
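
The hunk above shows the new GGUF metadata flag written by the converter; add_pooling_layer is the gguf-py writer method introduced by this commit. A hypothetical standalone use of the writer (the file path and "bert" architecture string are illustrative, not taken from the diff):

from gguf import GGUFWriter

writer = GGUFWriter("bert.gguf", "bert")   # illustrative path and arch
writer.add_causal_attention(False)         # encoder models attend bidirectionally
writer.add_pooling_layer(True)             # flag added by this commit
writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.close()

With this flag in the model file, the loader can build the pooling step into the compute graph and return one pooled embedding per sequence rather than raw per-token embeddings.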