From ad945e2c416fd3fc49d4a350b386a91d0f959dbc Mon Sep 17 00:00:00 2001
From: Concedo <39025047+LostRuins@users.noreply.github.com>
Date: Thu, 29 Jun 2023 22:13:39 +0800
Subject: [PATCH] make instructions clearer

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 22615b1fc..8147cdfcb 100644
--- a/README.md
+++ b/README.md
@@ -7,7 +7,7 @@ What does it mean? You get llama.cpp with a fancy UI, persistent stories, editin
 ![Preview](media/preview.png)
 
 ## Usage
-- [Download the latest release here](https://github.com/LostRuins/koboldcpp/releases/latest) or clone the repo.
+- **[Download the latest .exe release here](https://github.com/LostRuins/koboldcpp/releases/latest)** or clone the git repo.
 - Windows binaries are provided in the form of **koboldcpp.exe**, which is a pyinstaller wrapper for a few **.dll** files and **koboldcpp.py**. If you feel concerned, you may prefer to rebuild it yourself with the provided makefiles and scripts.
 - Weights are not included, you can use the official llama.cpp `quantize.exe` to generate them from your official weight files (or download them from other places).
 - To run, execute **koboldcpp.exe** or drag and drop your quantized `ggml_model.bin` file onto the .exe, and then connect with Kobold or Kobold Lite. If you're not on windows, then run the script **KoboldCpp.py** after compiling the libraries.