readme: add convert-hf-to-gguf.py in example
I'm using a Hugging Face model and found out that it needs a different conversion script. I'm making this change so people can save time instead of having to dig through the docs or open a new issue.
This commit is contained in:
parent
29eee40474
commit
eaff11ca55
1 changed file with 3 additions and 0 deletions
@@ -698,6 +698,9 @@ python convert.py models/mymodel/ --vocab-type bpe
 # update the gguf filetype to current version if older version is now unsupported
 ./quantize ./models/mymodel/ggml-model-Q4_K_M.gguf ./models/mymodel/ggml-model-Q4_K_M-v2.gguf COPY
+
+# convert the huggingface model to ggml FP16 format
+python3 convert-hf-to-gguf.py models/mymodel/
 ```

 ### Run the quantized model