updated README.md for main
Parent: c52b922d98
Commit: 0680710b06
1 changed file with 2 additions and 0 deletions
@@ -161,6 +161,8 @@ A value of -1 will enable infinite text generation, even though we have a finite
 If the pause is undesirable, a value of -2 will stop generation immediately when the context is filled.
 
+The `--no-context-shift` option allows you to stop the infinite text generation once the finite context window is full.
+
 It is important to note that the generated text may be shorter than the specified number of tokens if an End-of-Sequence (EOS) token or a reverse prompt is encountered. In interactive mode, text generation will pause and control will be returned to the user. In non-interactive mode, the program will end. In both cases, the text generation may stop before reaching the specified `--predict` value. If you want the model to keep going without ever producing End-of-Sequence on its own, you can use the `--ignore-eos` parameter.
 
 ### Temperature
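
Taken together, the options above might be exercised as in the sketch below. The flag names are the ones discussed in this diff; the `llama-cli` binary name, the model path, and the prompt are assumptions for illustration.

```sh
# Sketch only: the llama-cli binary name, model path, and prompt are assumptions;
# the flags themselves are the ones described above.

# Infinite generation (--predict -1), stopping once the finite context window
# is full instead of shifting the context.
./llama-cli -m ./models/model.gguf -p "Once upon a time" --predict -1 --no-context-shift

# A --predict value of -2 stops generation immediately when the context is filled.
./llama-cli -m ./models/model.gguf -p "Once upon a time" --predict -2

# --ignore-eos keeps the model from ending on its own End-of-Sequence (EOS) token.
./llama-cli -m ./models/model.gguf -p "Once upon a time" --predict 256 --ignore-eos
```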