fix convert error for autoawq-quantized gemma model

Quantizing a Gemma model with AutoAWQ adds an lm_head.weight tensor to model-00001-of-00002.safetensors. As a result, convert-hf-to-gguf.py fails because it cannot map lm_head.weight. Skipping this tensor during loading prevents the error.
This commit is contained in:
Zheng.Deng 2024-04-15 23:44:00 +08:00
parent 132f55795e
commit 2082353020


@ -2262,6 +2262,11 @@ class GemmaModel(Model):
tensor_map = gguf.get_tensor_name_map(self.model_arch, block_count)
for name, data_torch in self.get_tensors():
# lm_head is not used in llama.cpp, but AutoAWQ includes this tensor in the model.
# To prevent errors, skip loading lm_head.weight.
if "lm_head.weight" in name:
continue
old_dtype = data_torch.dtype
# convert any unsupported data types to float32
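The skip in the diff above can be sketched in isolation as follows. This is a minimal, hedged illustration of the filtering idea, not the actual converter: the tensor dict and its values are made up for demonstration, while the name check mirrors the committed code.

```python
# Illustrative stand-in for the (name, tensor) pairs yielded by
# self.get_tensors(); the names and values here are hypothetical.
tensors = {
    "model.embed_tokens.weight": [0.1, 0.2],
    "lm_head.weight": [0.3, 0.4],  # extra tensor written by AutoAWQ
    "model.layers.0.self_attn.q_proj.weight": [0.5],
}

kept = {}
for name, data in tensors.items():
    # Same check as the commit: drop lm_head.weight so the GGUF
    # tensor-name mapping never has to resolve it.
    if "lm_head.weight" in name:
        continue
    kept[name] = data

print(sorted(kept))
```

Because Gemma ties lm_head to the embedding weights, dropping the checkpoint copy loses no information for llama.cpp.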