vbatts / llama.cpp
3219 commits · 380 branches · 3056 tags · 365 MiB · HEAD: 083bacce14
Commit graph (2 commits)

Author            SHA1         Message                                              Date
Johannes Gäßler   a818f3028d   CUDA: use MMQ instead of cuBLAS by default (#8075)   2024-06-24 17:43:42 +02:00
slaren            ae1f211ce2   cuda : refactor into multiple files (#6269)          2024-03-25 13:50:23 +01:00