llama : support all OpenELM models

* llama : add variable GQA and variable FFN sizes

Some metadata keys can now also be arrays, to support setting their value per-layer for models like OpenELM.
This commit is contained in:
parent
51b2577dd4
commit
c8cdb48d10
5 changed files with 247 additions and 188 deletions
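The per-layer metadata idea from the commit message can be sketched as follows. This is a hypothetical helper, not llama.cpp code: it assumes a metadata value (e.g. an attention head count) may arrive either as a scalar applied uniformly or as an array with one entry per layer.

```python
# Hypothetical sketch: normalize a scalar-or-array metadata value
# (e.g. "attention.head_count") to one entry per layer, as needed
# for variable-GQA models like OpenELM.
def per_layer(value, n_layer):
    """Expand a scalar or per-layer list to exactly n_layer entries."""
    if isinstance(value, list):
        if len(value) != n_layer:
            raise ValueError("per-layer array length does not match n_layer")
        return list(value)
    return [value] * n_layer

# Uniform model: every layer gets the same head count.
print(per_layer(32, 4))              # [32, 32, 32, 32]
# Variable-GQA model: head count differs per layer.
print(per_layer([12, 16, 20, 24], 4))  # [12, 16, 20, 24]
```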
@@ -267,7 +267,6 @@ class TensorNameMap:
        "encoder.layers.{bid}.mlp.fc11",          # nomic-bert
        "model.layers.{bid}.mlp.c_fc",            # starcoder2
        "encoder.layer.{bid}.mlp.gated_layers_v", # jina-bert-v2
        "transformer.layers.{bid}.ffn.proj_1",    # openelm
        "model.layers.{bid}.residual_mlp.w3",     # arctic
    ),
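The hunk above is from a TensorNameMap-style table: each standard tensor kind lists several candidate name templates, one per source architecture, with a `{bid}` (block id) placeholder. A minimal sketch of how such a table can be matched against a checkpoint tensor name (`map_name` and `FFN_UP_CANDIDATES` are hypothetical, assumed for illustration):

```python
# Hypothetical sketch: candidate name templates for one tensor kind,
# with {bid} standing in for the layer/block index.
FFN_UP_CANDIDATES = (
    "model.layers.{bid}.mlp.c_fc",          # starcoder2
    "transformer.layers.{bid}.ffn.proj_1",  # openelm
)

def map_name(name, bid):
    """Return True if `name` matches any candidate template for block `bid`."""
    return any(name == tpl.format(bid=bid) for tpl in FFN_UP_CANDIDATES)

print(map_name("transformer.layers.3.ffn.proj_1", 3))  # True
print(map_name("transformer.layers.3.ffn.proj_2", 3))  # False
```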