SimpleChat:ChatRequestOptions: max_tokens
Sometimes, depending on the user's query, the AI model may get into a runaway kind of generation with repetitions and the like, so add max_tokens to try to limit this runaway behaviour, if possible.
This commit is contained in:
parent
59f74c7de9
commit
cbd853eda9
1 changed file with 2 additions and 1 deletion
|
@@ -34,7 +34,8 @@ let gUsageMsg = `
 
 // Add needed fields wrt json object to be sent wrt LLM web services completions endpoint.
 let gChatRequestOptions = {
-    "temperature": 0.7
+    "temperature": 0.7,
+    "max_tokens": 2048
 };
 
 class SimpleChat {
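The options object changed above is meant to be folded into the JSON body sent to the completions endpoint, so that max_tokens caps generation server-side. A minimal sketch of that merging step follows; the buildRequestBody helper and the messages shape are assumptions for illustration, not the actual SimpleChat code.

```javascript
// Options to attach to every completions request; max_tokens limits
// how many tokens the model may generate, curbing runaway output.
let gChatRequestOptions = {
    "temperature": 0.7,
    "max_tokens": 2048
};

// Hypothetical helper: build the JSON body for the completions endpoint
// by copying each configured option alongside the chat messages.
function buildRequestBody(messages) {
    let body = { "messages": messages };
    for (let key in gChatRequestOptions) {
        body[key] = gChatRequestOptions[key];
    }
    return JSON.stringify(body);
}

let payload = buildRequestBody([{ "role": "user", "content": "Hello" }]);
```

With this shape, adjusting gChatRequestOptions is enough to change what every request carries; no per-call plumbing is needed.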