SimpleChat:ChatRequestOptions: max_tokens

Depending on the user's query, the AI model may sometimes get
into runaway generation, with repetitions and the like. Add
max_tokens to the request options to try to limit this runaway
behaviour, where possible.
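A minimal sketch of how such a shared options object might be folded into the JSON body sent to an LLM completions endpoint. The `buildRequestBody` helper and the `messages` field are assumptions for illustration, not part of this commit.

```javascript
// Shared request options, as added in this commit.
const gChatRequestOptions = {
    "temperature": 0.7,
    "max_tokens": 2048  // cap the number of generated tokens to curb runaway output
};

// Hypothetical helper: merge the shared options into each request payload.
function buildRequestBody(messages) {
    return JSON.stringify({ messages, ...gChatRequestOptions });
}

const body = buildRequestBody([{ role: "user", content: "Hello" }]);
console.log(body);
```

Because the options are spread into every payload, a single edit to `gChatRequestOptions` changes the behaviour of all requests.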
HanishKVC 2024-05-23 18:51:36 +05:30
parent 59f74c7de9
commit cbd853eda9

@@ -34,7 +34,8 @@ let gUsageMsg = `
 // Add needed fields wrt json object to be sent wrt LLM web services completions endpoint.
 let gChatRequestOptions = {
-    "temperature": 0.7
+    "temperature": 0.7,
+    "max_tokens": 2048
 };
 class SimpleChat {