From 3775d0debbdb5e697de6dd3765352dd4dcdc2558 Mon Sep 17 00:00:00 2001
From: Vaibhav Srivastav
Date: Tue, 14 May 2024 23:01:11 +0200
Subject: [PATCH] chore: add references to the quantisation space.

---
 README.md                   | 4 ++++
 examples/quantize/README.md | 4 +++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 9f2f8df64..de7e9aaeb 100644
--- a/README.md
+++ b/README.md
@@ -712,6 +712,10 @@ Building the program with BLAS support may lead to some performance improvements
 
 ### Prepare and Quantize
 
+Note: You can also use the [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space on Hugging Face to quantise your model weights without any setup.
+
+It is synced with `llama.cpp` main every 6 hours.
+
 To obtain the official LLaMA 2 weights please see the Obtaining and using the Facebook LLaMA 2 model section. There is also a large selection of pre-quantized `gguf` models available on Hugging Face.
 
 Note: `convert.py` does not support LLaMA 3, you can use `convert-hf-to-gguf.py` with LLaMA 3 downloaded from Hugging Face.

diff --git a/examples/quantize/README.md b/examples/quantize/README.md
index 8a10365c0..53f7e0bb1 100644
--- a/examples/quantize/README.md
+++ b/examples/quantize/README.md
@@ -1,6 +1,8 @@
 # quantize
 
-TODO
+You can also use the [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space on Hugging Face to build your own quants without any setup.
+
+Note: It is synced with `llama.cpp` main every 6 hours.
 
 ## Llama 2 7B
 