doc: update n-gpu-layers to show correct GPU usage
parent 5e498be648
commit 7298e97947
1 changed file with 1 addition and 1 deletion
@@ -70,7 +70,7 @@ You can consume the endpoints with Postman or NodeJS with axios library. You can
 docker run -p 8080:8080 -v /path/to/models:/models ggerganov/llama.cpp:server -m models/7B/ggml-model.gguf -c 512 --host 0.0.0.0 --port 8080

 # or, with CUDA:

-docker run -p 8080:8080 -v /path/to/models:/models --gpus all ggerganov/llama.cpp:server -m models/7B/ggml-model.gguf -c 512 --host 0.0.0.0 --port 8080 --n-gpu-layers 1
+docker run -p 8080:8080 -v /path/to/models:/models --gpus all ggerganov/llama.cpp:server -m models/7B/ggml-model.gguf -c 512 --host 0.0.0.0 --port 8080 --n-gpu-layers 99

 ```

 ## Testing with CURL