llama.cpp/ggml/src/ggml-cpu
Latest commit c2082d93a8 by PAB, 2024-12-05 13:27:31 +02:00
ggml : add GGML_PAD_REFLECT_1D operation (ggml/1034)

* ggml_pad_reflect_1d defined in header
* implemented on CPU
* called the forward pass
* impl Metal kernel
* added Metal kernel
* added OP_PAD_REFLECT_1D in test-backend-ops.cpp
* add test-pad-reflect-1d test case
* test case support multiple backend
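As a rough illustration of what the new operation computes (a minimal sketch in plain C, not the ggml implementation; the helper name `pad_reflect_1d` and the `p0`/`p1` left/right pad-size parameters are assumptions for illustration), reflect padding mirrors the signal around its first and last samples instead of padding with a constant:

```c
#include <stdio.h>

// Sketch of 1D reflect padding: borders are mirrored without repeating
// the edge sample. p0/p1 are the left/right pad sizes and must be < n.
static void pad_reflect_1d(const float * src, int n, int p0, int p1, float * dst) {
    for (int i = 0; i < p0 + n + p1; i++) {
        int j = i - p0;                      // index in the source coordinate system
        if (j < 0)       j = -j;             // reflect off the left edge
        else if (j >= n) j = 2*(n - 1) - j;  // reflect off the right edge
        dst[i] = src[j];
    }
}

int main(void) {
    const float src[4] = {1, 2, 3, 4};
    float dst[2 + 4 + 3];
    pad_reflect_1d(src, 4, 2, 3, dst);
    for (int i = 0; i < 2 + 4 + 3; i++) printf("%g ", dst[i]); // 3 2 1 2 3 4 3 2 1
    printf("\n");
    return 0;
}
```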
Name                Last commit                                                            Date
amx                 ggml : automatic selection of best CPU backend (#10606)                2024-12-01 16:12:41 +01:00
cmake               ggml : build backends as libraries (#10256)                            2024-11-14 18:04:35 +01:00
llamafile           ggml : move AMX to the CPU backend (#10570)                            2024-11-29 21:54:58 +01:00
CMakeLists.txt      ggml : add predefined list of CPU backend variants to build (#10626)   2024-12-04 14:45:40 +01:00
cpu-feats-x86.cpp   ggml : add predefined list of CPU backend variants to build (#10626)   2024-12-04 14:45:40 +01:00
ggml-cpu-aarch64.c  ggml : automatic selection of best CPU backend (#10606)                2024-12-01 16:12:41 +01:00
ggml-cpu-aarch64.h  ggml-cpu: support IQ4_NL_4_4 by runtime repack (#10541)                2024-11-28 13:52:03 +01:00
ggml-cpu-impl.h     ggml : move AMX to the CPU backend (#10570)                            2024-11-29 21:54:58 +01:00
ggml-cpu-quants.c   ggml : fix I8MM Q4_1 scaling factor conversion (#10562)                2024-11-29 16:25:39 +02:00
ggml-cpu-quants.h   ggml : build backends as libraries (#10256)                            2024-11-14 18:04:35 +01:00
ggml-cpu.c          ggml : add GGML_PAD_REFLECT_1D operation (ggml/1034)                   2024-12-05 13:27:31 +02:00
ggml-cpu.cpp        ggml : add predefined list of CPU backend variants to build (#10626)   2024-12-04 14:45:40 +01:00