Update llama-run README.md

For consistency

Signed-off-by: Eric Curtin <ecurtin@redhat.com>
This commit is contained in:
Eric Curtin 2025-01-24 09:16:25 +00:00
parent 564804b79b
commit 224d6e9da9

@@ -3,11 +3,10 @@
The purpose of this example is to demonstrate a minimal usage of llama.cpp for running models.
```bash
llama-run granite-code
llama-run granite3-moe
```
```bash
llama-run -h
Description:
Runs a llm
@@ -17,7 +16,7 @@ Usage:
Options:
-c, --context-size <value>
Context size (default: 2048)
- -n, --ngl <value>
+ -n, -ngl, --ngl <value>
Number of GPU layers (default: 0)
--temp <value>
Temperature (default: 0.8)
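
For context, the updated options can be combined in a single invocation. A minimal sketch, assuming a local `llama-run` binary is on the PATH; the model name, layer count, and prompt below are illustrative values, not part of this commit:

```bash
# Illustrative: run a model with a larger context window and GPU offload.
# granite3-moe, the layer count, and the prompt are example values.
llama-run --context-size 4096 --ngl 99 --temp 0.8 granite3-moe "Write a haiku"
```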