llama : make model stateless and context stateful (llama_state) (#1797)

* llama : make model stateless and context stateful

* llama : minor cleanup

* llama : update internal API declaration

* Apply suggestions from code review

fix style

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* Missing model memory release

* Fix style

* Add deprecated warning for public API function llama_init_from_file

* Update public API use cases: move away from deprecated llama_init_from_file

* Deprecate public API function llama_apply_lora_from_file

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Didzis Gosko 2023-06-24 11:47:58 +03:00 committed by GitHub
parent d7b7484f74
commit 527b6fba1d
13 changed files with 244 additions and 92 deletions

@@ -68,11 +68,12 @@ int main(int argc, char ** argv)
     llama_init_backend();
 
-    llama_context * ctx ;
+    llama_model * model;
+    llama_context * ctx;
 
-    ctx = llama_init_from_gpt_params( params );
+    std::tie(model, ctx) = llama_init_from_gpt_params( params );
 
-    if ( ctx == NULL )
+    if ( model == NULL )
     {
         fprintf( stderr , "%s: error: unable to load model\n" , __func__ );
         return 1;
     }
@@ -170,6 +171,7 @@ int main(int argc, char ** argv)
     } // wend of main loop
 
     llama_free( ctx );
+    llama_free_model( model );
 
     return 0;
 }