Update README.md (#120)

Disty0 2023-04-26 12:33:34 +03:00 committed by GitHub
parent 0aa3d839fb
commit 27bc29128e


@@ -40,7 +40,7 @@ What does it mean? You get llama.cpp with a fancy UI, persistent stories, editin
- If you want, you can link your own install of OpenBLAS manually with `make LLAMA_OPENBLAS=1`
- Alternatively, you can link your own install of CLBlast manually with `make LLAMA_CLBLAST=1`; for this you will need to obtain and link the OpenCL and CLBlast libraries.
- For a full featured build, do `make LLAMA_OPENBLAS=1 LLAMA_CLBLAST=1`
- For Arch Linux: Install `cblas` and `openblas`.
- For Arch Linux: Install `cblas`, `openblas` and `clblast`.
- For Debian: Install `libclblast-dev` and `libopenblas-dev`.
- After all binaries are built, you can run the python script with the command `koboldcpp.py [ggml_model.bin] [port]`
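The build-and-run steps above can be sketched as a short shell session; the model filename and port below are placeholder values, and the full-featured build assumes the OpenBLAS and CLBlast libraries listed above are already installed:

```shell
# Full-featured build with both OpenBLAS and CLBlast acceleration
# (assumes the required libraries are installed, e.g. via the distro packages above)
make LLAMA_OPENBLAS=1 LLAMA_CLBLAST=1

# Run the Python script with a GGML model on a chosen port
# (ggml_model.bin and 5001 are placeholder values)
python koboldcpp.py ggml_model.bin 5001
```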
@@ -66,4 +66,4 @@ What does it mean? You get llama.cpp with a fancy UI, persistent stories, editin
- GPT-J (all versions, including legacy f16, newer format + quantized, pyg.cpp, new pygmalion, janeway etc.). OpenBLAS acceleration is supported only for the newer format.
- RWKV (f16 GGMF format), unaccelerated due to RNN properties.
- GPT-NeoX / Pythia
- Basically every current and historical GGML format should be supported, except for bloomz.cpp due to lack of demand.