llama.cpp/ggml/src/ggml-cpu
Latest commit: d2e518e9b4 (issixx, 2025-01-29 11:24:51 +02:00)

    ggml-cpu : fix ggml_graph_compute_thread did not terminate on abort. (ggml/1065)

    Some threads kept looping and failed to terminate properly after an abort during CPU execution.

    Co-authored-by: issi <issi@gmail.com>
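The bug fixed here is a classic worker-pool shutdown hazard: once one thread aborts graph execution, the remaining compute threads must observe the abort and exit instead of continuing to loop. Below is a minimal, self-contained sketch of that general pattern in C11 (pthreads plus a shared atomic abort flag). It is an illustration under assumed names, not the actual ggml-cpu.c code; worker_ctx, compute_chunk, and worker_thread are all hypothetical.

```c
// Sketch (hypothetical, not ggml's implementation): worker threads that
// poll a shared abort flag so they terminate instead of looping after
// any thread aborts.
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    atomic_bool abort_requested;  // set once when any thread hits an error
    atomic_int  next_chunk;       // shared work-queue cursor
    int         n_chunks;         // total units of work
} worker_ctx;

// Hypothetical per-chunk work; returns false to simulate an abort.
static bool compute_chunk(int chunk) {
    return chunk != 7;  // pretend chunk 7 fails
}

static void * worker_thread(void * arg) {
    worker_ctx * ctx = (worker_ctx *) arg;
    for (;;) {
        // Check the abort flag on every iteration: without this check, a
        // thread could keep pulling work after another thread aborted.
        if (atomic_load(&ctx->abort_requested)) {
            break;
        }
        int chunk = atomic_fetch_add(&ctx->next_chunk, 1);
        if (chunk >= ctx->n_chunks) {
            break;  // no more work
        }
        if (!compute_chunk(chunk)) {
            atomic_store(&ctx->abort_requested, true);  // tell the others
            break;
        }
    }
    return NULL;
}

int main(void) {
    enum { N_THREADS = 4 };
    worker_ctx ctx = { .n_chunks = 64 };
    atomic_init(&ctx.abort_requested, false);
    atomic_init(&ctx.next_chunk, 0);

    pthread_t threads[N_THREADS];
    for (int i = 0; i < N_THREADS; i++) {
        pthread_create(&threads[i], NULL, worker_thread, &ctx);
    }
    for (int i = 0; i < N_THREADS; i++) {
        pthread_join(threads[i], NULL);
    }
    printf("aborted: %s\n", atomic_load(&ctx.abort_requested) ? "yes" : "no");
    return 0;
}
```

Compiles with any C11 toolchain, e.g. `cc -std=c11 -pthread sketch.c`. The key design point mirrors the commit description: the abort flag is checked at the top of every loop iteration, so no thread can keep pulling work once any thread has aborted.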
Directory contents:

Name                  Last updated                Last commit
amx                   2024-12-12 19:02:49 +01:00  remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797)
cmake                 2024-11-14 18:04:35 +01:00  ggml : build backends as libraries (#10256)
llamafile             2025-01-08 12:54:19 +02:00  llamafile : ppc64le MMA INT8 implementation (#10912)
CMakeLists.txt        2024-12-31 15:23:33 +01:00  ggml : fixes for AVXVNNI instruction set with MSVC and Clang (#11027)
cpu-feats-x86.cpp     2024-12-04 14:45:40 +01:00  ggml : add predefined list of CPU backend variants to build (#10626)
ggml-cpu-aarch64.cpp  2025-01-07 16:11:57 +01:00  ggml-backend : only offload from host buffers (fix) (#11124)
ggml-cpu-aarch64.h    2024-12-07 14:37:50 +02:00  ggml : refactor online repacking (#10446)
ggml-cpu-hbm.cpp      2024-12-07 14:37:50 +02:00  ggml : refactor online repacking (#10446)
ggml-cpu-hbm.h        2024-12-07 14:37:50 +02:00  ggml : refactor online repacking (#10446)
ggml-cpu-impl.h       2024-11-29 21:54:58 +01:00  ggml : move AMX to the CPU backend (#10570)
ggml-cpu-quants.c     2025-01-16 11:11:49 +02:00  ggml: aarch64: implement SVE kernels for q4_K_q8_K vector dot (#11227)
ggml-cpu-quants.h     2024-11-14 18:04:35 +01:00  ggml : build backends as libraries (#10256)
ggml-cpu-traits.cpp   2024-12-07 14:37:50 +02:00  ggml : refactor online repacking (#10446)
ggml-cpu-traits.h     2024-12-07 14:37:50 +02:00  ggml : refactor online repacking (#10446)
ggml-cpu.c            2025-01-29 11:24:51 +02:00  ggml-cpu : fix ggml_graph_compute_thread did not terminate on abort. (ggml/1065)
ggml-cpu.cpp          2025-01-24 12:38:31 +01:00  CPU/CUDA: fix (GQA) mul mat back, add CUDA support (#11380)