ChatON:LoadJSon:ChatTemplates: revPrompt, system-user flags

WIP:NOTE:

Initial pass at converting from the json driven flow to the
ChatTemplatesGroupKV related flow is done. Needs to be tested.

An optional helper was added to load ChatTemplates from a
specified json file.
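As a rough illustration, the json file the helper consumes could look something like the sketch below. The field names are only inferred from the key constants visible in the diff (K_ASSISTANT/K_END, K_REVERSE_PROMPT, the K_SYSTEMUSER_* flags), and the "llama3" group and its values are placeholders, not the actual file shipped with the program.

```json
{
    "llama3": {
        "assistant": { "begin": "", "prefix": "assistant", "suffix": "", "end": "<|eot_id|>" },
        "reverse-prompt": "<|eot_id|>",
        "systemuser-system-has-suffix": true,
        "systemuser-system-has-end": true,
        "systemuser-1st-user-has-begin": true,
        "systemuser-1st-user-has-prefix": true
    }
}
```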

Need to add a compile time initialized MapOfMapOfVariants with
the chat template details of models/standards already known to
the program, so that one can use llama.cpp and this new chat
template logic without the json dependency, if one doesn't
want it.
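Such a built-in MapOfMapOfVariants might be sketched as below. This is only an assumed shape: the type aliases, the gKnownTemplates name, the kv_get_str helper and the "llama3" entries are all illustrative, not the commit's actual identifiers.

```cpp
#include <map>
#include <string>
#include <variant>

// Hypothetical statically-populated template details, one inner map per
// known model/standard, so no json file is needed at runtime.
using kv_variant = std::variant<std::string, bool>;
using kv_map = std::map<std::string, kv_variant>;

static const std::map<std::string, kv_map> gKnownTemplates = {
    { "llama3", {
        { "reverse-prompt",                std::string("<|eot_id|>") },
        { "systemuser-system-has-suffix",  true },
        { "systemuser-system-has-end",     true },
    } },
};

// Fetch a string value for a group/key pair; returns fallback when the
// group or key is absent, or when the stored value is not a string.
inline std::string kv_get_str(const std::string &group, const std::string &key,
                              const std::string &fallback = "") {
    auto git = gKnownTemplates.find(group);
    if (git == gKnownTemplates.end()) return fallback;
    auto kit = git->second.find(key);
    if (kit == git->second.end()) return fallback;
    if (auto p = std::get_if<std::string>(&kit->second)) return *p;
    return fallback;
}
```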
HanishKVC 2024-05-12 01:36:03 +05:30
parent 444d2ccf9c
commit 1574201f71

@@ -361,6 +361,20 @@ inline bool chaton_meta_load(const std::string &fname) {
        std::string assistantEnd = curTmpl[K_ASSISTANT][K_END];
        gCT.set_value<std::string>(group, { K_ASSISTANT, K_END }, assistantEnd);
        std::string reversePrompt = curTmpl[K_REVERSE_PROMPT];
        gCT.set_value<std::string>(group, { K_REVERSE_PROMPT }, reversePrompt);
        bool systemHasSuffix = curTmpl[K_SYSTEMUSER_SYSTEM_HAS_SUFFIX];
        gCT.set_value(group, { K_SYSTEMUSER_SYSTEM_HAS_SUFFIX }, systemHasSuffix);
        bool systemHasEnd = curTmpl[K_SYSTEMUSER_SYSTEM_HAS_END];
        gCT.set_value(group, { K_SYSTEMUSER_SYSTEM_HAS_END }, systemHasEnd);
        bool userHasBegin = curTmpl[K_SYSTEMUSER_1ST_USER_HAS_BEGIN];
        gCT.set_value(group, { K_SYSTEMUSER_1ST_USER_HAS_BEGIN }, userHasBegin);
        bool userHasPrefix = curTmpl[K_SYSTEMUSER_1ST_USER_HAS_PREFIX];
        gCT.set_value(group, { K_SYSTEMUSER_1ST_USER_HAS_PREFIX }, userHasPrefix);
    }
LOGXLN("%s", gCT.dump("", "DBUG:ChatONMetaLoad:ChatTemplates:").c_str());
return true;
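The four system/user flags loaded above presumably control which template markers get emitted when a system message is followed by the first user message. A minimal sketch of that assumed downstream use is below; the struct, function name and bracketed marker strings are placeholders, not llama.cpp identifiers.

```cpp
#include <string>

// Assumed semantics of the systemuser-* flags from the loader above.
struct SystemUserFlags {
    bool systemHasSuffix;    // emit system suffix when a user msg follows?
    bool systemHasEnd;       // emit system end marker when a user msg follows?
    bool firstUserHasBegin;  // emit begin marker for the 1st user message?
    bool firstUserHasPrefix; // emit prefix marker for the 1st user message?
};

// Stitch a system message and the first user message, consulting the flags.
std::string wrap_system_then_first_user(const SystemUserFlags &f,
                                        const std::string &sys,
                                        const std::string &user) {
    std::string out = "[SYS]" + sys;          // placeholder begin marker
    if (f.systemHasSuffix)    out += "[/SYS-SUFFIX]";
    if (f.systemHasEnd)       out += "[/SYS-END]";
    if (f.firstUserHasBegin)  out += "[USR-BEGIN]";
    if (f.firstUserHasPrefix) out += "[USR-PREFIX]";
    out += user;
    return out;
}
```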