vbatts / llama.cpp
llama.cpp / gguf-py / scripts (at commit a5735e4426)
Latest commit: eb57fee51f by Galunid, "gguf-py : Add tokenizer.ggml.pre to gguf-new-metadata.py (#7627)", 2024-05-30 02:10:40 +02:00
File                     Last commit                                                        Date
__init__.py              convert : support models with multiple chat templates (#6588)      2024-04-18 14:49:01 +03:00
gguf-convert-endian.py   convert.py : add python logging instead of print() (#6511)         2024-05-03 22:36:41 +03:00
gguf-dump.py             convert-hf : save memory with lazy evaluation (#7075)              2024-05-08 18:16:38 -04:00
gguf-new-metadata.py     gguf-py : Add tokenizer.ggml.pre to gguf-new-metadata.py (#7627)   2024-05-30 02:10:40 +02:00
gguf-set-metadata.py     convert.py : add python logging instead of print() (#6511)         2024-05-03 22:36:41 +03:00