add clarification for llama_chat_apply_template

ngxson 2024-02-17 16:45:31 +01:00
parent 9c4422fbe9
commit 6012ad651f


@ -706,6 +706,7 @@ extern "C" {
/// Apply chat template and maybe tokenize it. Inspired by hf apply_chat_template() on python.
/// Both "model" and "custom_template" are optional, but at least one is required. "custom_template" has higher precedence than "model"
/// NOTE: This function only supports some known Jinja templates. It is not a Jinja parser.
/// @param custom_template A Jinja template to use for this conversion. If this is nullptr, the model's default chat template will be used instead.
/// @param msg Pointer to an array of llama_chat_message
/// @param add_ass Whether to end the prompt with the token(s) that indicate the start of an assistant message.