From aee859519ec4475b9572b0ca93868dbd75fd2b42 Mon Sep 17 00:00:00 2001
From: Randall Fitzgerald
Date: Thu, 15 Jun 2023 01:50:54 -0700
Subject: [PATCH] Update README.md

Fixed a typo.
---
 examples/server/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/server/README.md b/examples/server/README.md
index 93c38ce5d..474a28b20 100644
--- a/examples/server/README.md
+++ b/examples/server/README.md
@@ -119,7 +119,7 @@ node .
 `top_p`: Limit the next token selection to a subset of tokens with a cumulative probability above a threshold P (default: 0.9).

-`n_predict`: Set the number of tokens to predict when generating text. **Note:** May exceed the the limit slightly if the last token is a partial multibyte character. (default: 128, -1 = infinity).
+`n_predict`: Set the number of tokens to predict when generating text. **Note:** May exceed the set limit slightly if the last token is a partial multibyte character. (default: 128, -1 = infinity).

 `n_keep`: Specify the number of tokens from the initial prompt to retain when the model resets its internal context. By default, this value is set to 0 (meaning no tokens are kept). Use `-1` to retain all tokens from the initial prompt.