fix: readme
parent 28a2e6e7d4
commit 34a9ef6c6c

1 changed file with 2 additions and 2 deletions
@@ -214,7 +214,7 @@ cd llama.cpp

 ### Build

-In order to build llama.cpp you have three different options.
+In order to build llama.cpp you have different options.

 - Using `make`:
   - On Linux or MacOS:
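For context, the build options referenced in this hunk come down to a plain `make` in the repository root on Linux and macOS, or an out-of-tree CMake build. The commands below are a minimal sketch, assuming a Unix-like toolchain; exact targets and flags can differ between llama.cpp versions.

```bash
# Minimal sketch of the two common build paths (details vary by llama.cpp version).

# Option 1: make (Linux / macOS)
cd llama.cpp
make

# Option 2: CMake
mkdir build
cd build
cmake ..
cmake --build . --config Release
```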
@@ -320,7 +320,7 @@ mpirun -hostfile hostfile -n 3 ./main -m ./models/7B/ggml-model-q4_0.gguf -n 128

 ### BLAS Build

-Building the program with BLAS support may lead to some performance improvements in prompt processing using batch sizes higher than 32 (the default is 512). BLAS doesn't affect the normal generation performance. There are currently three different implementations of it:
+Building the program with BLAS support may lead to some performance improvements in prompt processing using batch sizes higher than 32 (the default is 512). BLAS doesn't affect the normal generation performance. There are currently various different implementations of it:

 - #### Accelerate Framework:
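For reference, a BLAS-enabled build is selected with a build-time flag. The example below uses OpenBLAS and is only a sketch: the flag names shown (`LLAMA_OPENBLAS`, `LLAMA_BLAS`, `LLAMA_BLAS_VENDOR`) are the ones used around this point in the project's history and may differ in other releases.

```bash
# Sketch: build with OpenBLAS so prompt processing can use BLAS kernels
# (flag names vary by llama.cpp version).

# make-based build
make LLAMA_OPENBLAS=1

# CMake-based build
mkdir build
cd build
cmake .. -DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS
cmake --build . --config Release
```

As the changed line notes, BLAS only helps prompt processing with larger batch sizes; token generation speed is unaffected.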