llama.cpp/gguf-py/gguf
Latest commit: 58b515cad6 "convert-hf : add --outtype auto-f16"
by Francis Couture-Harpin, 2024-05-09 15:27:44 -04:00

This option exists for model quantizers who want an initial GGUF with the
most fidelity to the original model while still using a 16-bit float type
instead of 32-bit floats.
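The idea behind an "auto" 16-bit output type can be sketched as follows. This is a minimal illustration, not llama.cpp's actual selection logic (which lives in the conversion script); the function name `choose_outtype` and the fallback rule are assumptions for the example: keep F16 when every value survives the narrowing cast without overflowing to infinity, and fall back to a wider type otherwise.

```python
import numpy as np

def choose_outtype(tensor: np.ndarray) -> str:
    """Hypothetical sketch: pick the smallest float type that keeps fidelity.

    Assumed rule (not llama.cpp's exact behavior): use F16 when no finite
    value overflows to inf after the cast; otherwise fall back to F32.
    """
    as_f16 = tensor.astype(np.float16)
    # A finite f32 value that becomes inf in f16 has lost fidelity.
    if np.array_equal(np.isfinite(as_f16), np.isfinite(tensor)):
        return "F16"
    return "F32"

print(choose_outtype(np.array([0.5, -1.25, 3.0], dtype=np.float32)))  # small values fit in f16
print(choose_outtype(np.array([1e30, 2.0], dtype=np.float32)))        # 1e30 overflows f16
```

In practice a per-tensor choice like this lets most weights stay in a compact 16-bit format while only the tensors that need extra range pay the cost of a wider type.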
File               Last commit                                                                          Date
__init__.py        convert-hf : support bfloat16 conversion                                             2024-05-08 23:14:25 -04:00
constants.py       convert-hf : add --outtype auto-f16                                                  2024-05-09 15:27:44 -04:00
gguf.py            gguf-py: Refactor and allow reading/modifying existing GGUF files (#3981)            2023-11-11 08:04:50 +03:00
gguf_reader.py     convert-hf : save memory with lazy evaluation (#7075)                                2024-05-08 18:16:38 -04:00
gguf_writer.py     convert-hf : get bit-exact same output as ./quantize                                 2024-05-09 12:29:31 -04:00
lazy.py            gguf-py : flake8 fixes                                                               2024-05-08 23:21:13 -04:00
py.typed           convert : various script cleanups/fixes + merges and special token handling (#2842)  2023-08-30 11:25:50 +03:00
tensor_mapping.py  llama : add phi3 support (#6852)                                                     2024-04-24 10:00:37 +03:00
vocab.py           convert-hf : save memory with lazy evaluation (#7075)                                2024-05-08 18:16:38 -04:00