diff --git a/README.md b/README.md
index 3354d0989..c07516549 100644
--- a/README.md
+++ b/README.md
@@ -712,9 +712,8 @@ Building the program with BLAS support may lead to some performance improvements
 ### Prepare and Quantize
 
-Note: You can use the [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space on Hugging Face to quantise your model weights without any setup too.
-
-It is synced from `llama.cpp` main every 6 hours.
+> [!NOTE]
+> You can use the [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space on Hugging Face to quantise your model weights without any setup too. It is synced from `llama.cpp` main every 6 hours.
 
 To obtain the official LLaMA 2 weights please see the Obtaining and using the Facebook LLaMA 2 model section. There is also a large selection of pre-quantized `gguf` models available on Hugging Face.