vbatts/llama.cpp
gguf-py/gguf (at commit c0956b09ba)
Latest commit: c1386c936e by pmysl, "gguf-py : add IQ1_M to GGML_QUANT_SIZES" (#6761), 2024-04-21 15:49:30 +03:00
File               Latest commit                                                                         Date
__init__.py        gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)             2023-11-11 08:04:50 +03:00
constants.py       gguf-py : add IQ1_M to GGML_QUANT_SIZES (#6761)                                       2024-04-21 15:49:30 +03:00
gguf.py            gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)             2023-11-11 08:04:50 +03:00
gguf_reader.py     gguf : add support for I64 and F64 arrays (#6062)                                     2024-03-15 10:46:51 +02:00
gguf_writer.py     convert : support models with multiple chat templates (#6588)                         2024-04-18 14:49:01 +03:00
py.typed           convert : various script cleanups/fixes + merges and special token handling (#2842)  2023-08-30 11:25:50 +03:00
tensor_mapping.py  llama : add qwen2moe (#6074)                                                          2024-04-16 18:40:48 +03:00
vocab.py           convert : support models with multiple chat templates (#6588)                         2024-04-18 14:49:01 +03:00
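
For orientation, the listing above is the gguf-py Python package. The commit on __init__.py and gguf.py (#3981) added support for reading existing GGUF files via the GGUFReader in gguf_reader.py. Below is a minimal sketch of that usage; the attribute names (fields, tensors, tensor_type) are assumptions from memory and have not been verified against revision c0956b09ba.

    # Sketch: inspect an existing GGUF file with gguf-py's GGUFReader.
    # Attribute names below are assumed, not verified against this revision.
    from gguf import GGUFReader

    reader = GGUFReader("model.gguf")  # hypothetical input path

    # Key/value metadata stored in the file header.
    for name, field in reader.fields.items():
        print(name, field.types)

    # Tensor metadata; tensor data is memory-mapped rather than copied.
    for tensor in reader.tensors:
        print(tensor.name, tensor.tensor_type, tensor.shape)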
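The writing side lives in gguf_writer.py. As a rough sketch only: the constructor signature and method names below are assumptions based on the module listing, not checked against this exact revision.

    # Sketch: write a small GGUF file with gguf-py's GGUFWriter.
    # Constructor arguments and method names are assumed, not verified.
    import numpy as np
    from gguf import GGUFWriter

    writer = GGUFWriter("out.gguf", "llama")  # hypothetical path and arch

    writer.add_name("example-model")  # general.name metadata key
    writer.add_tensor("weight", np.zeros((4, 4), dtype=np.float32))

    # Header, key/value metadata, and tensor data are written in order.
    writer.write_header_to_file()
    writer.write_kv_data_to_file()
    writer.write_tensors_to_file()
    writer.close()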