server : --n-predict option document and cap to max value (#5549)
* server: document --n-predict
* server: ensure client request cannot override n_predict if set
* server: fix print usage LF in new --n-predict option
Parent: 66c1968f7a
Commit: 36376abe05
2 changed files with 15 additions and 1 deletion
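The second bullet of the commit message describes the capping behaviour: when the server is started with `--n-predict`, a client-supplied `n_predict` should not be able to exceed it. Below is a minimal sketch of that clamping logic, not the actual server.cpp change; the function and variable names (`apply_n_predict_cap`, `request_n_predict`, `server_n_predict`) are illustrative.

```cpp
// Minimal sketch of the capping behaviour described in the commit message.
// Illustrative only: these names are not taken from server.cpp.
#include <cstdio>

// -1 means "no limit" on either side, matching the documented default.
int apply_n_predict_cap(int request_n_predict, int server_n_predict) {
    if (server_n_predict >= 0 &&
        (request_n_predict < 0 || request_n_predict > server_n_predict)) {
        fprintf(stderr, "n_predict capped from %d to server maximum %d\n",
                request_n_predict, server_n_predict);
        return server_n_predict;
    }
    return request_n_predict;
}

int main() {
    // Server started with --n-predict 128; client asks for 512 tokens.
    printf("effective n_predict: %d\n", apply_n_predict_cap(512, 128)); // 128
    // Server started without a cap (-1); client request passes through.
    printf("effective n_predict: %d\n", apply_n_predict_cap(512, -1));  // 512
    return 0;
}
```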
@@ -39,6 +39,7 @@ see https://github.com/ggerganov/llama.cpp/issues/1437
 - `--mmproj MMPROJ_FILE`: Path to a multimodal projector file for LLaVA.
 - `--grp-attn-n`: Set the group attention factor to extend context size through self-extend(default: 1=disabled), used together with group attention width `--grp-attn-w`
 - `--grp-attn-w`: Set the group attention width to extend context size through self-extend(default: 512), used together with group attention factor `--grp-attn-n`
+- `-n, --n-predict`: Set the maximum tokens to predict (default: -1)

 ## Build
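For the `-n, --n-predict` option documented in the diff above, the default of -1 effectively disables the limit, while a non-negative value stops generation once that many tokens have been produced. The following is a minimal, self-contained sketch of such a loop under that assumption; the token production is a stand-in, not the llama.cpp sampling API.

```cpp
// Minimal sketch: stop once n_predict tokens have been generated.
// -1 disables the limit, matching the documented default.
// The "token" production below is a stand-in, not the llama.cpp API.
#include <cstdio>
#include <vector>

int main() {
    const int n_predict = 8;                    // value that would come from -n / --n-predict
    std::vector<int> generated;

    for (int tok = 0; tok < 1000; ++tok) {      // 1000 acts as an end-of-stream stand-in
        if (n_predict >= 0 && (int) generated.size() >= n_predict) {
            break;                              // prediction limit reached
        }
        generated.push_back(tok);               // stand-in for sampling one token
    }

    printf("generated %zu tokens\n", generated.size());
    return 0;
}
```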