Update README.md (#120)
This commit is contained in:
parent 0aa3d839fb
commit 27bc29128e
1 changed file with 2 additions and 2 deletions
@@ -40,7 +40,7 @@ What does it mean? You get llama.cpp with a fancy UI, persistent stories, editin
 - If you want you can also link your own install of OpenBLAS manually with `make LLAMA_OPENBLAS=1`
 - Alternatively, if you want you can also link your own install of CLBlast manually with `make LLAMA_CLBLAST=1`, for this you will need to obtain and link OpenCL and CLBlast libraries.
 - For a full featured build, do `make LLAMA_OPENBLAS=1 LLAMA_CLBLAST=1`
-- For Arch Linux: Install `cblas` and `openblas`.
+- For Arch Linux: Install `cblas` `openblas` and `clblast`.
 - For Debian: Install `libclblast-dev` and `libopenblas-dev`.
 - After all binaries are built, you can run the python script with the command `koboldcpp.py [ggml_model.bin] [port]`
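Read together, the build and run bullets in the hunk above amount to roughly the following shell session. This is a sketch, not part of the commit: it assumes the repository is already cloned and the BLAS/CLBlast libraries are installed, and the model filename and port number are placeholder values.

```shell
# Full featured build: link both OpenBLAS and CLBlast, as described in the
# README lines above (requires e.g. cblas/openblas/clblast on Arch Linux,
# or libopenblas-dev/libclblast-dev on Debian).
make LLAMA_OPENBLAS=1 LLAMA_CLBLAST=1

# After all binaries are built, run the UI via the Python script:
#   koboldcpp.py [ggml_model.bin] [port]
# "ggml_model.bin" and 5001 below are placeholders.
python koboldcpp.py ggml_model.bin 5001
```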
@@ -66,4 +66,4 @@ What does it mean? You get llama.cpp with a fancy UI, persistent stories, editin
 - GPT-J (All versions including legacy f16, newer format + quantized, pyg.cpp, new pygmalion, janeway etc.) Supports OpenBLAS acceleration only for newer format.
 - RWKV (f16 GGMF format), unaccelerated due to RNN properties.
 - GPT-NeoX / Pythia
-- Basically every single current and historical GGML format that has ever existed should be supported, except for bloomz.cpp due to lack of demand.
+- Basically every single current and historical GGML format that has ever existed should be supported, except for bloomz.cpp due to lack of demand.