Add docs for llama_chat_apply_template (#5645)
* add docs for llama_chat_apply_template
* fix typo
parent 7fe4678b02
commit 7c8bcc11dc
2 changed files with 2 additions and 1 deletion
@@ -41,6 +41,7 @@ see https://github.com/ggerganov/llama.cpp/issues/1437
- `--grp-attn-w`: Set the group attention width to extend context size through self-extend (default: 512), used together with the group attention factor `--grp-attn-n`
- `-n, --n-predict`: Set the maximum tokens to predict (default: -1)
- `--slots-endpoint-disable`: Disable the slots state monitoring endpoint. Slot state may contain user data, prompts included.
- `--chat-template JINJA_TEMPLATE`: Set a custom Jinja chat template. This parameter accepts a string, not a file name (default: template taken from the model's metadata). We only support [some pre-defined templates](https://github.com/ggerganov/llama.cpp/wiki/Templates-supported-by-llama_chat_apply_template)
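The flags above can be combined in a single server invocation. A hedged sketch (the model path, context size, and template name `chatml` are placeholder choices, not values from this diff):

```shell
# Hypothetical example: pick a pre-defined template name from the wiki page
# linked above (e.g. chatml); --chat-template takes the name or the raw
# Jinja string itself, never a file path.
./server -m models/7B/ggml-model.gguf --chat-template chatml

# Hypothetical self-extend example: the group attention factor and width
# work together to stretch the effective context window.
./server -m models/7B/ggml-model.gguf -c 8192 --grp-attn-n 4 --grp-attn-w 512
```

If the model's GGUF metadata already embeds a suitable chat template, `--chat-template` can simply be omitted.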
## Build