One reason for this to exist is that model quantizers often want an initial GGUF that is as faithful as possible to the original model while still using a 16-bit float type instead of 32-bit floats.
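To illustrate why a 16-bit type can still preserve fidelity, the sketch below shows the usual intuition behind bfloat16: it is simply the top 16 bits of an IEEE float32, so it keeps the full 8-bit exponent (and thus float32's dynamic range) at the cost of mantissa precision, whereas float16 overflows large values. This is an illustrative example, not code from this package; the helper names are hypothetical, and real converters round to nearest rather than truncating as done here for simplicity.

```python
import numpy as np

def f32_to_bf16(x: np.ndarray) -> np.ndarray:
    """Truncate float32 values to bfloat16, returned as uint16 bit patterns.

    Simplification: this truncates (rounds toward zero); production
    converters use round-to-nearest-even.
    """
    bits = x.astype(np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)

def bf16_to_f32(b: np.ndarray) -> np.ndarray:
    """Expand bfloat16 bit patterns back to float32 (lower 16 bits zeroed)."""
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([3.14159, 6.5e4, 1e20], dtype=np.float32)
print(bf16_to_f32(f32_to_bf16(x)))
# 1e20 round-trips with reduced precision; converting it to float16
# instead would overflow to infinity, since float16's max is ~65504.
```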
The package consists of the following modules:

* `__init__.py`
* `constants.py`
* `gguf.py`
* `gguf_reader.py`
* `gguf_writer.py`
* `lazy.py`
* `py.typed`
* `tensor_mapping.py`
* `vocab.py`