llama.cpp / ggml (tree at commit 467576b6cc)
Latest commit: 467576b6cc, Johannes Gäßler, "CMake: default to -arch=native for CUDA build (#10320)", 2024-11-17 09:06:34 +01:00

..
include          ggml: new optimization interface (ggml/988)                2024-11-17 08:30:29 +02:00
src              CMake: default to -arch=native for CUDA build (#10320)    2024-11-17 09:06:34 +01:00
.gitignore       vulkan : cmake integration (#8119)                         2024-07-13 18:12:39 +02:00
CMakeLists.txt   ggml: new optimization interface (ggml/988)                2024-11-17 08:30:29 +02:00