add clarification for llama_chat_apply_template
commit 6012ad651f
parent 9c4422fbe9
1 changed file with 1 addition and 0 deletions
llama.h (+1)
@@ -706,6 +706,7 @@ extern "C" {
     /// Apply chat template and maybe tokenize it. Inspired by hf apply_chat_template() on python.
     /// Both "model" and "custom_template" are optional, but at least one is required. "custom_template" has higher precedence than "model"
     /// NOTE: This function only supports some known jinja templates. It is not a jinja parser.
+    /// @param custom_template A Jinja template to use for this conversion. If this is nullptr, the model's default chat template will be used instead.
     /// @param msg Pointer to a list of multiple llama_chat_message
     /// @param add_ass Whether to end the prompt with the token(s) that indicate the start of an assistant message.
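For context, a minimal usage sketch of the documented behavior follows. It assumes the llama_chat_apply_template() signature from llama.h of this period (model pointer, template string, llama_chat_message array, message count, add_ass flag, output buffer and its length) and an already-loaded llama_model; the buffer-resizing pattern is illustrative and not part of this diff.

// Usage sketch (assumed signature; verify against your llama.h): format a
// two-message chat with the model's built-in template and return the prompt.
#include <string>
#include <vector>
#include "llama.h"

static std::string format_chat(const llama_model * model) {
    // llama_chat_message holds { role, content }
    std::vector<llama_chat_message> chat = {
        { "system", "You are a helpful assistant." },
        { "user",   "Hello!" },
    };

    // custom_template == nullptr -> fall back to the model's default chat template;
    // add_ass == true -> end the prompt with the assistant-start token(s).
    std::vector<char> buf(1024);
    int32_t n = llama_chat_apply_template(model, /*custom_template=*/nullptr,
                                          chat.data(), chat.size(),
                                          /*add_ass=*/true,
                                          buf.data(), (int32_t) buf.size());
    if (n > (int32_t) buf.size()) {
        // the formatted prompt did not fit: grow the buffer and re-apply
        buf.resize(n);
        n = llama_chat_apply_template(model, nullptr, chat.data(), chat.size(),
                                      true, buf.data(), (int32_t) buf.size());
    }
    return n > 0 ? std::string(buf.data(), n) : std::string();
}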