Merge pull request #27 from ariez-xyz/patch-1
add more precise instructions for arch
commit 5dd610032e
1 changed file with 2 additions and 1 deletion
@@ -22,6 +22,7 @@ What does it mean? You get llama.cpp with a fancy UI, persistent stories, editin
## OSX and Linux
- You will have to compile your binaries from source. A makefile is provided; simply run `make`
- If you want, you can also link your own install of OpenBLAS manually with `make LLAMA_OPENBLAS=1`
- For Arch Linux: Install `cblas` and `openblas`. In the makefile, find the `ifdef LLAMA_OPENBLAS` conditional and add `-lcblas` to `LDFLAGS`.
- After all binaries are built, you can run the python script with the command `koboldcpp.py [ggml_model.bin] [port]` (see the sketch below)
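
Putting the steps above together, a typical build-and-run session on OSX or Linux might look like the following sketch; the model filename and port are placeholders, and the OpenBLAS variant is optional:

```sh
# Plain build from source using the provided makefile
make

# Optional: link your own OpenBLAS install instead
# (on Arch, first add -lcblas to LDFLAGS inside the ifdef LLAMA_OPENBLAS block)
make LLAMA_OPENBLAS=1

# Launch the python script with a GGML model and a port of your choice
# (ggml_model.bin and 5001 are placeholders, not defaults)
python koboldcpp.py ggml_model.bin 5001
```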
## Considerations
@@ -39,4 +40,4 @@ What does it mean? You get llama.cpp with a fancy UI, persistent stories, editin
## Notes
- Generation delay scales linearly with original prompt length. See [this discussion](https://github.com/ggerganov/llama.cpp/discussions/229). If OpenBLAS is enabled, prompt ingestion becomes about 2-3x faster. This is automatic on Windows, but will require linking on OSX and Linux.
- I have heard of someone claiming a false positive reported by their antivirus. The exe is a simple pyinstaller bundle that includes the necessary python scripts and dlls to run. If this still concerns you, you might wish to rebuild everything from source code using the makefile, and you can rebuild the exe yourself with pyinstaller using `make_pyinstaller.bat` (see the sketch below).
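
If the bundled exe worries you, the sketch below shows one way to rebuild things yourself; note that the `pip install pyinstaller` step is an assumption rather than part of the upstream instructions, and the exe repackaging step is meant to be run on Windows:

```sh
# Rebuild the native binaries from source using the provided makefile
make

# Repackage the exe yourself (Windows; assumes pyinstaller is installed,
# e.g. via pip as below)
pip install pyinstaller
make_pyinstaller.bat
```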