fix temperature == 0.0f

Johannes Gäßler 2024-05-07 20:49:04 +02:00
parent e25ca431a5
commit ae8235dc0a
2 changed files with 4 additions and 4 deletions

@@ -260,7 +260,7 @@ node index.js
 `logit_bias`: Modify the likelihood of a token appearing in the generated text completion. For example, use `"logit_bias": [[15043,1.0]]` to increase the likelihood of the token 'Hello', or `"logit_bias": [[15043,-1.0]]` to decrease its likelihood. Setting the value to false, `"logit_bias": [[15043,false]]` ensures that the token `Hello` is never produced. The tokens can also be represented as strings, e.g. `[["Hello, World!",-0.5]]` will reduce the likelihood of all the individual tokens that represent the string `Hello, World!`, just like the `presence_penalty` does. Default: `[]`
-`n_probs`: If greater than 0, the response also contains the probabilities of top N tokens for each generated token given the sampling settings. Default: `0`
+`n_probs`: If greater than 0, the response also contains the probabilities of top N tokens for each generated token given the sampling settings. Note that for temperature < 0 the tokens are sampled greedily but token probabilities are still being calculated via a simple softmax of the logits without considering any other sampler settings. Default: `0`
 `min_keep`: If greater than 0, force samplers to return N possible tokens at minimum. Default: `0`
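As a reference for the note added above, the reported probabilities under greedy sampling come down to a plain softmax over the raw logits, ignoring top-k, top-p and the other samplers. Below is a minimal, self-contained sketch of that computation; it is illustrative only, not the server code, and all names are made up:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical illustration of the README note: when sampling is greedy, the token
// is the argmax of the logits, but the reported n_probs values are simply
// softmax(logits), without applying any other sampler settings.
static std::vector<float> softmax(const std::vector<float> & logits) {
    const float max_logit = *std::max_element(logits.begin(), logits.end());
    std::vector<float> probs(logits.size());
    float sum = 0.0f;
    for (size_t i = 0; i < logits.size(); ++i) {
        probs[i] = std::exp(logits[i] - max_logit); // subtract max for numerical stability
        sum += probs[i];
    }
    for (float & p : probs) {
        p /= sum;
    }
    return probs;
}

int main() {
    const std::vector<float> logits = {2.0f, 1.0f, 0.5f, -1.0f};
    const std::vector<float> probs  = softmax(logits);

    // Greedy pick: the token with the highest logit.
    const size_t best = std::max_element(logits.begin(), logits.end()) - logits.begin();
    std::printf("greedy token: %zu\n", best);
    for (size_t i = 0; i < probs.size(); ++i) {
        std::printf("token %zu: p = %.3f\n", i, probs[i]);
    }
    return 0;
}
```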

@@ -2271,12 +2271,12 @@ struct server_context {
     const size_t n_considered = slot.ctx_sampling->n_considered;
     // Make sure at least n_probs top tokens are at the front of the vector:
-    if (n_probs > n_considered) {
+    if (slot.sparams.temp == 0.0f && n_probs > n_considered) {
         llama_sample_top_k(ctx, &cur_p, n_probs, 0);
     }
-    if (slot.sparams.temp <= 0.0f) {
-        // With greedy sampling the probabilities were never calculated.
+    if (slot.sparams.temp == 0.0f) {
+        // With greedy sampling the probabilities have possibly not been calculated.
         for (size_t i = 0; i < n_probs; ++i) {
             result.probs.push_back({
                 cur_p.data[i].id,
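For context on the guard added above: the "at least n_probs top tokens at the front" step is essentially a partial sort of the candidates by logit, which the greedy path otherwise skips, so it only needs to run when the temperature is exactly 0. A small stand-alone sketch of that idea follows; the types are hypothetical and not the llama.cpp API:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical stand-in for a sampler candidate (token id + logit).
struct candidate {
    int   id;
    float logit;
};

// Roughly what "make sure at least n top tokens are at the front" amounts to:
// a partial sort by logit, descending, over the first n positions.
static void bring_top_n_to_front(std::vector<candidate> & cands, size_t n) {
    n = std::min(n, cands.size());
    std::partial_sort(cands.begin(), cands.begin() + n, cands.end(),
                      [](const candidate & a, const candidate & b) {
                          return a.logit > b.logit;
                      });
}

int main() {
    std::vector<candidate> cands = {{0, 0.1f}, {1, 2.3f}, {2, -0.5f}, {3, 1.7f}};
    const size_t n_probs = 2;

    bring_top_n_to_front(cands, n_probs);

    // After the partial sort, the first n_probs entries are the most likely tokens,
    // which is the ordering the server relies on when reporting top-token probabilities.
    for (size_t i = 0; i < n_probs; ++i) {
        std::printf("rank %zu: id=%d logit=%.2f\n", i, cands[i].id, cands[i].logit);
    }
    return 0;
}
```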