llama : remove n_threads from llama_decode_internal (#3614)
This commit removes `n_threads` from the `llama_decode_internal`
function's doc comment, as the parameter no longer exists.
It looks like this parameter was removed in commit 16bc66d947
("llama.cpp : split llama_context_params into model and context params").
Signed-off-by: Daniel Bevenius <daniel.bevenius@gmail.com>
This commit is contained in:
parent 424b6381c4
commit 2a4bcbacea
1 changed file with 0 additions and 1 deletion
@@ -5721,7 +5721,6 @@ static struct ggml_cgraph * llama_build_graph(
 //
 //   - lctx:      llama context
 //   - batch:     batch to evaluate
-//   - n_threads: number of threads to use
 //
 // return 0 on success
 // return positive int on warning