diff --git a/README.md b/README.md
index d8bd0facf..17dfa815b 100644
--- a/README.md
+++ b/README.md
@@ -40,7 +40,7 @@ What does it mean? You get llama.cpp with a fancy UI, persistent stories, editin
 - If you want you can also link your own install of OpenBLAS manually with `make LLAMA_OPENBLAS=1`
 - Alternatively, if you want you can also link your own install of CLBlast manually with `make LLAMA_CLBLAST=1`, for this you will need to obtain and link OpenCL and CLBlast libraries.
   - For a full featured build, do `make LLAMA_OPENBLAS=1 LLAMA_CLBLAST=1`
-  - For Arch Linux: Install `cblas` and `openblas`.
+  - For Arch Linux: Install `cblas`, `openblas`, and `clblast`.
   - For Debian: Install `libclblast-dev` and `libopenblas-dev`.
 - After all binaries are built, you can run the python script with the command `koboldcpp.py [ggml_model.bin] [port]`
 
@@ -66,4 +66,4 @@ What does it mean? You get llama.cpp with a fancy UI, persistent stories, editin
 - GPT-J (All versions including legacy f16, newer format + quantized, pyg.cpp, new pygmalion, janeway etc.) Supports OpenBLAS acceleration only for newer format.
 - RWKV (f16 GGMF format), unaccelerated due to RNN properties.
 - GPT-NeoX / Pythia
-- Basically every single current and historical GGML format that has ever existed should be supported, except for bloomz.cpp due to lack of demand.
\ No newline at end of file
+- Basically every single current and historical GGML format that has ever existed should be supported, except for bloomz.cpp due to lack of demand.
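
For reference, the full sequence implied by the updated instructions looks roughly like the sketch below on a Linux shell. The package names and `make` flags come straight from the README text; `ggml_model.bin` and port `5001` are placeholder values for illustration, not defaults confirmed by this diff.

```sh
# Dependencies for a full featured build (package names from the README above).
sudo pacman -S cblas openblas clblast               # Arch Linux
# sudo apt install libclblast-dev libopenblas-dev   # Debian equivalent

# Build with both OpenBLAS and CLBlast acceleration enabled.
make LLAMA_OPENBLAS=1 LLAMA_CLBLAST=1

# Run the python script: koboldcpp.py [ggml_model.bin] [port]
python koboldcpp.py ggml_model.bin 5001
```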