Tidy Android Instructions README.md

Remove CLBlast instructions (outdated); add OpenBLAS instructions.
Jeximo 2024-04-30 20:33:45 -03:00 committed by GitHub
parent a8f9b07631
commit b115ad432e


@@ -977,49 +977,33 @@ Here is a demo of an interactive session running on Pixel 5 phone:
https://user-images.githubusercontent.com/271616/225014776-1d567049-ad71-4ef2-b050-55b0b3b9274c.mp4
-#### Building the Project using Termux (F-Droid)
-Termux from F-Droid offers an alternative route to execute the project on an Android device. This method empowers you to construct the project right from within the terminal, negating the requirement for a rooted device or SD Card.
+#### Build on Android using Termux (F-Droid)
+Termux from F-Droid is an alternative way to run `llama.cpp` on an Android device (*no root required*).
-Outlined below are the directives for installing the project using OpenBLAS and CLBlast. This combination is specifically designed to deliver peak performance on recent devices that feature a GPU.
+Below are instructions to install `llama.cpp`, covering both CPU and OpenBLAS inference.
-If you opt to utilize OpenBLAS, you'll need to install the corresponding package.
```
+apt update && apt upgrade -y
apt install libopenblas
```
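As an aside, `pkg` — Termux's wrapper around `apt` — can be used for the same install:
```
pkg update && pkg upgrade -y
pkg install libopenblas
```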
-Subsequently, if you decide to incorporate CLBlast, you'll first need to install the requisite OpenCL packages:
+Due to permission limitations in the Android API, move your model into the `~/` home directory for best performance:
```
-apt install ocl-icd opencl-headers opencl-clhpp clinfo
+cd storage/downloads
+mv model.gguf ~/
```
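If `storage/downloads` does not exist in Termux's home directory yet, shared storage has likely not been granted. The stock Termux setup command (assuming the model was downloaded into shared storage via an Android browser) prompts for the permission and creates the `~/storage` symlinks, including `~/storage/downloads`:
```
termux-setup-storage
```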
-In order to compile CLBlast, you'll need to first clone the respective Git repository, which can be found at this URL: https://github.com/CNugteren/CLBlast. Alongside this, clone this repository into your home directory. Once this is done, navigate to the CLBlast folder and execute the commands detailed below:
+Build & run `llama.cpp`:
```
-cmake .
-make
-cp libclblast.so* $PREFIX/lib
-cp ./include/clblast.h ../llama.cpp
+cd $HOME
+git clone https://github.com/ggerganov/llama.cpp
+cd llama.cpp
+cmake -B build -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS && cd build && cmake --build . --config Release
+cd bin
+./main -m ~/model.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:"
```
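Beyond the one-shot prompt above, `main` also supports interactive chat-style sessions. A usage sketch (flags `-i`, `-r`, `--color`, and `-n` are documented in `./main --help`; `model.gguf` is a placeholder name):
```
./main -m ~/model.gguf -n 256 --color -i -r "User:" -p "User: Hi\nAI: Hello, how can I help?\nUser:"
```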
-Following the previous steps, navigate to the LlamaCpp directory. To compile it with OpenBLAS and CLBlast, execute the command provided below:
-```
-cp /data/data/com.termux/files/usr/include/openblas/cblas.h .
-cp /data/data/com.termux/files/usr/include/openblas/openblas_config.h .
-make LLAMA_CLBLAST=1 # (sometimes you need to run this command twice)
-```
-Upon completion of the aforementioned steps, you will have successfully compiled the project. To run it using CLBlast, a slight adjustment is required: a command must be issued to direct the operations towards your device's physical GPU, rather than the virtual one. The necessary command is detailed below:
-```
-export GGML_OPENCL_PLATFORM=0
-export GGML_OPENCL_DEVICE=0
-export LD_LIBRARY_PATH=/vendor/lib64:$LD_LIBRARY_PATH
-```
-(Note: some Android devices, like the Zenfone 8, need the following command instead: `export LD_LIBRARY_PATH=/system/vendor/lib64:$LD_LIBRARY_PATH`. Source: https://www.reddit.com/r/termux/comments/kc3ynp/opencl_working_in_termux_more_in_comments/ )
-For easy and swift re-execution, consider documenting this final part in a .sh script file. This will enable you to rerun the process with minimal hassle.
-Place your desired model into the `~/llama.cpp/models/` directory and execute the `./main (...)` script.
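For reference, a minimal sketch of the `.sh` script suggested in the removed text, assuming the (now-removed) CLBlast build and a hypothetical model name; the shebang is Termux's stock bash location:
```
#!/data/data/com.termux/files/usr/bin/bash
# run-llama.sh — hypothetical helper; adjust the model path and library dir to your device
export GGML_OPENCL_PLATFORM=0
export GGML_OPENCL_DEVICE=0
export LD_LIBRARY_PATH=/vendor/lib64:$LD_LIBRARY_PATH  # use /system/vendor/lib64 on e.g. the Zenfone 8
cd ~/llama.cpp
./main -m models/model.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:"
```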
### Docker
#### Prerequisites