llama.cpp/ggml
Latest commit: 7bf68a1ca6 by Ikko Eltociear Ashimine, 2025-02-10 12:34:08 +09:00
chore: update ggml-cpu-aarch64.cpp (fixes the typo "appropiate" -> "appropriate")
Name            Last commit                                                  Date
cmake           cmake: add ggml find package (#11369)                        2025-01-26 12:07:48 -04:00
include         CUDA: use mma PTX instructions for FlashAttention (#11583)   2025-02-02 19:31:09 +01:00
src             chore: update ggml-cpu-aarch64.cpp                           2025-02-10 12:34:08 +09:00
.gitignore      vulkan : cmake integration (#8119)                           2024-07-13 18:12:39 +02:00
CMakeLists.txt  cmake: Add ability to pass in GGML_BUILD_NUMBER (ggml/1096)  2025-02-04 12:59:15 +02:00
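The cmake entry above references the ggml find package added in #11369. A minimal sketch of how a downstream CMake project might consume an installed ggml; the ggml::ggml imported target name is an assumption based on common CMake packaging conventions, not confirmed by this listing:

    # CMakeLists.txt of a hypothetical downstream project
    cmake_minimum_required(VERSION 3.14)
    project(ggml_consumer C CXX)

    # Locate an installed ggml via its CMake package config
    # (may require -DCMAKE_PREFIX_PATH=/path/to/ggml/install).
    find_package(ggml REQUIRED)

    add_executable(demo main.cpp)

    # ggml::ggml is an assumed exported target name.
    target_link_libraries(demo PRIVATE ggml::ggml)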