vbatts/llama.cpp
llama.cpp/ggml at commit fee824a1a1
Latest commit: 9150f8fef9 by Frankie Robertson: Do not include arm_neon.h when compiling CUDA code (ggml/1028), 2024-11-27 11:10:27 +02:00
Name            Last commit                                                      Last commit date
include         ggml : add support for dynamic loading of backends (#10469)      2024-11-25 15:13:39 +01:00
src             Do not include arm_neon.h when compiling CUDA code (ggml/1028)   2024-11-27 11:10:27 +02:00
.gitignore      vulkan : cmake integration (#8119)                                2024-07-13 18:12:39 +02:00
CMakeLists.txt  ggml : add support for dynamic loading of backends (#10469)       2024-11-25 15:13:39 +01:00