Merge branch 'master' into gg/llama-kv-cache
ggml-ci
commit 972f91c7d7
61 changed files with 8505 additions and 3580 deletions
@@ -37,7 +37,7 @@ Once downloaded, place your model in the models folder in llama.cpp.
 
 ##### Infinite text from a starting prompt (you can use `Ctrl-C` to stop it):
 
 ```bash
-./llama-cli -m models\gemma-1.1-7b-it.Q4_K_M.gguf --ignore-eos -n -1
+./llama-cli -m models/gemma-1.1-7b-it.Q4_K_M.gguf --ignore-eos -n -1
 ```
 
 ### Windows:
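For context, a minimal sketch of running the corrected (forward-slash) command from this hunk on Linux/macOS; the starting prompt string is an illustrative assumption and not part of the original docs, and `Ctrl-C` stops generation as the README heading notes.

```bash
# Uses the corrected path from the + side of the hunk.
# -n -1 keeps generating until interrupted; --ignore-eos continues past end-of-sequence tokens.
# The prompt passed with -p is a hypothetical example.
./llama-cli -m models/gemma-1.1-7b-it.Q4_K_M.gguf --ignore-eos -n -1 -p "Once upon a time"
```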