Merge branch 'concedo' of https://github.com/LostRuins/llamacpp-for-kobold into concedo
commit ea5d01002f
1 changed file with 1 addition and 0 deletions
@@ -23,6 +23,7 @@ What does it mean? You get llama.cpp with a fancy UI, persistent stories, editin
 - If you want you can also link your own install of OpenBLAS manually with `make LLAMA_OPENBLAS=1`
 - Alternatively, if you want you can also link your own install of CLBlast manually with `make LLAMA_CLBLAST=1`, for this you will need to obtain and link OpenCL and CLBlast libraries.
 - For Arch Linux: Install `cblas` and `openblas`. In the makefile, find the `ifdef LLAMA_OPENBLAS` conditional and add `-lcblas` to `LDFLAGS`.
+- For Debian: Install `libclblast-dev` and `libopenblas-dev`.
 - After all binaries are built, you can run the python script with the command `koboldcpp.py [ggml_model.bin] [port]`
 
 ## Considerations
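The options this hunk documents can be strung together as a setup sketch. This is illustrative only, assuming a Debian-like system as the added line targets; the package names and `make` flags come from the diff, while the model filename and port are placeholders, not real files:

```shell
# Sketch only: install the BLAS/CLBlast dev packages named in the added line
# (placeholder model/port below; substitute your own).
sudo apt install libopenblas-dev libclblast-dev

# Link your own OpenBLAS at build time, per the README line above:
make LLAMA_OPENBLAS=1
# Alternative from the diff context (needs OpenCL + CLBlast libraries linked):
# make LLAMA_CLBLAST=1

# After all binaries are built, run the python script as `koboldcpp.py [ggml_model.bin] [port]`:
python koboldcpp.py ggml_model.bin 5001
```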