vbatts/llama.cpp
1640 commits · 380 branches · 3056 tags · 365 MiB
Commit graph (2 commits)

Author  SHA1        Message                                                       Date
crasm   4b1f70cb03  Fix bool return in llama_model_load, remove std::ignore use  2023-12-14 16:29:05 -05:00
crasm   3425e62745  llama : Add test for model load cancellation                  2023-12-14 04:47:54 -05:00