diff --git a/README.md b/README.md
index de7e9aaeb..3354d0989 100644
--- a/README.md
+++ b/README.md
@@ -714,7 +714,7 @@ Building the program with BLAS support may lead to some performance improvements
 Note: You can use the [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space on Hugging Face to quantise your model weights without any setup too.
-It is synced to `llama.cpp` main every 6 hours.
+It is synced from `llama.cpp` main every 6 hours.
 To obtain the official LLaMA 2 weights please see the Obtaining and using the Facebook LLaMA 2 model section. There is also a large selection of pre-quantized `gguf` models available on Hugging Face.