Misc: Make the conversion script executable
This commit is contained in:
parent c80e2a8f2a
commit 81c9c9e8a6
2 changed files with 3 additions and 1 deletion
@@ -142,7 +142,7 @@ ls ./models
 python3 -m pip install torch numpy sentencepiece
 
 # convert the 7B model to ggml FP16 format
-python3 convert-pth-to-ggml.py models/7B/ 1
+./convert-pth-to-ggml.py models/7B/ 1
 
 # quantize the model to 4-bits
 ./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin 2
2  convert-pth-to-ggml.py  Normal file → Executable file
@@ -1,3 +1,5 @@
+#!/usr/bin/env python
+
 # Convert a LLaMA model checkpoint to a ggml compatible file
 #
 # Load the model using Torch
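The hunk above adds a shebang line, and the file mode flips from normal to executable; together these are what let the README invoke the script as `./convert-pth-to-ggml.py` instead of prefixing it with `python3`. A minimal sketch of the same mechanism, using a hypothetical `demo-script.py` (not part of this commit; the demo shebang uses `python3` for portability, while the commit's script uses `python`):

```shell
# Create a small Python script with a shebang, as the commit does for
# convert-pth-to-ggml.py.
cat > demo-script.py <<'EOF'
#!/usr/bin/env python3
print("ok")
EOF

# The "Normal file → Executable file" mode change corresponds to chmod +x;
# git records this as a change from mode 100644 to 100755.
chmod +x demo-script.py

# With both the shebang and the executable bit set, the script runs directly,
# mirroring "./convert-pth-to-ggml.py models/7B/ 1" in the updated README.
./demo-script.py
```

Without the executable bit the direct invocation fails with "Permission denied", and without the shebang the kernel does not know which interpreter to hand the file to, so both halves of the commit are needed.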