Update README.md
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
This commit is contained in:
parent 5c65037280
commit 76af67c853
1 changed file with 2 additions and 3 deletions
@@ -712,9 +712,8 @@ Building the program with BLAS support may lead to some performance improvements

 ### Prepare and Quantize

-Note: You can use the [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space on Hugging Face to quantise your model weights without any setup too.
-It is synced from `llama.cpp` main every 6 hours.
+> [!NOTE]
+> You can use the [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space on Hugging Face to quantise your model weights without any setup too. It is synced from `llama.cpp` main every 6 hours.

 To obtain the official LLaMA 2 weights please see the <a href="#obtaining-and-using-the-facebook-llama-2-model">Obtaining and using the Facebook LLaMA 2 model</a> section. There is also a large selection of pre-quantized `gguf` models available on Hugging Face.
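For context, the note above offers GGUF-my-repo as a zero-setup alternative to quantising locally. The local flow it replaces looks roughly like the sketch below; the exact script and binary names have changed across `llama.cpp` versions, and the model paths here are hypothetical placeholders, not paths from this commit.

```shell
# Convert Hugging Face weights to a GGUF file (script name varies by llama.cpp version)
python convert_hf_to_gguf.py ./models/my-model --outfile ./models/my-model-f16.gguf

# Quantize the converted model, e.g. to Q4_K_M, a common size/quality trade-off
./llama-quantize ./models/my-model-f16.gguf ./models/my-model-Q4_K_M.gguf Q4_K_M
```

GGUF-my-repo runs essentially this pipeline server-side, which is why it being synced from `llama.cpp` main every 6 hours matters: quantisation formats track the upstream tools.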