llama.cpp/ggml/src/ggml-cpu
amritahs-ibm 8cef75c743
llamafile : ppc64le MMA INT8 implementation (#10912)
This change upstreams llamafile's CPU matrix
multiplication kernels for ppc64le, using MMA
builtins for the quantised int8 data type.

This change results in a 10% - 70% improvement
in total speed (i.e. all tokens / total time)
across various batch sizes.

The patch was tested with the Meta-Llama-3-8B,
Mistral-7B, and Llama-2-7B-chat-hf models on an
IBM POWER10 machine.

Signed-off-by: Amrita H S <amritahs@linux.vnet.ibm.com>
2025-01-08 12:54:19 +02:00
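For orientation, here is a minimal sketch of the kind of POWER10 MMA tile multiply these kernels are built on, assuming a GCC/Clang toolchain targeting `-mcpu=power10`. The builtins shown (`__builtin_mma_xxsetaccz`, `__builtin_mma_xvi8ger4pp`, `__builtin_mma_disassemble_acc`) are the standard compiler-provided MMA intrinsics; this is an illustration of the mechanism, not code taken from the patch itself.

```c
// Sketch: multiply two 4x4 int8 tiles into a 4x4 int32 accumulator
// using POWER10 MMA builtins. Build with: gcc -mcpu=power10 mma.c
// Not the llamafile kernel; quantisation scaling and sign handling
// from the actual patch are omitted.
#include <altivec.h>
#include <stdint.h>
#include <stdio.h>

typedef vector unsigned char vec_t;

int main(void) {
    // Each 4x4 int8 tile packs into one 16-byte VSX register.
    int8_t a[16], b[16];
    for (int i = 0; i < 16; i++) { a[i] = 1; b[i] = 2; }

    __vector_quad acc;              // 512-bit accumulator: 4x4 int32
    __builtin_mma_xxsetaccz(&acc);  // zero the accumulator

    vec_t va = vec_xl(0, (unsigned char *)a);
    vec_t vb = vec_xl(0, (unsigned char *)b);

    // Rank-4 update with accumulate: acc[i][j] += sum_k a[i][k]*b[k][j].
    // Note: the ISA treats one operand as signed and the other as
    // unsigned; a real signed int8 kernel must compensate for this
    // (detail omitted here, values chosen so it does not matter).
    __builtin_mma_xvi8ger4pp(&acc, va, vb);

    // Unpack the accumulator into four vectors of 4 x int32.
    vector signed int rows[4];
    __builtin_mma_disassemble_acc(rows, &acc);

    for (int i = 0; i < 4; i++)
        printf("%d %d %d %d\n", rows[i][0], rows[i][1], rows[i][2], rows[i][3]);
    return 0;  // every element prints 8 = 4 * (1 * 2)
}
```

A batched kernel tiles the larger matrices, keeps several `__vector_quad` accumulators live at once, and only disassembles them after the inner loop, which is where the reported batch-size-dependent speedup comes from.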
| Name                 | Last commit message                                                   | Last commit date          |
|----------------------|-----------------------------------------------------------------------|---------------------------|
| amx                  | remove CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS (#10797)                      | 2024-12-12 19:02:49 +01:00 |
| cmake                | ggml : build backends as libraries (#10256)                           | 2024-11-14 18:04:35 +01:00 |
| llamafile            | llamafile : ppc64le MMA INT8 implementation (#10912)                  | 2025-01-08 12:54:19 +02:00 |
| CMakeLists.txt       | ggml : fixes for AVXVNNI instruction set with MSVC and Clang (#11027) | 2024-12-31 15:23:33 +01:00 |
| cpu-feats-x86.cpp    | ggml : add predefined list of CPU backend variants to build (#10626)  | 2024-12-04 14:45:40 +01:00 |
| ggml-cpu-aarch64.cpp | ggml-backend : only offload from host buffers (fix) (#11124)          | 2025-01-07 16:11:57 +01:00 |
| ggml-cpu-aarch64.h   | ggml : refactor online repacking (#10446)                             | 2024-12-07 14:37:50 +02:00 |
| ggml-cpu-hbm.cpp     | ggml : refactor online repacking (#10446)                             | 2024-12-07 14:37:50 +02:00 |
| ggml-cpu-hbm.h       | ggml : refactor online repacking (#10446)                             | 2024-12-07 14:37:50 +02:00 |
| ggml-cpu-impl.h      | ggml : move AMX to the CPU backend (#10570)                           | 2024-11-29 21:54:58 +01:00 |
| ggml-cpu-quants.c    | ggml : fixes for AVXVNNI instruction set with MSVC and Clang (#11027) | 2024-12-31 15:23:33 +01:00 |
| ggml-cpu-quants.h    | ggml : build backends as libraries (#10256)                           | 2024-11-14 18:04:35 +01:00 |
| ggml-cpu-traits.cpp  | ggml : refactor online repacking (#10446)                             | 2024-12-07 14:37:50 +02:00 |
| ggml-cpu-traits.h    | ggml : refactor online repacking (#10446)                             | 2024-12-07 14:37:50 +02:00 |
| ggml-cpu.c           | ggml : more perfo with llamafile tinyblas on x86_64 (#10714)          | 2024-12-24 18:54:49 +01:00 |
| ggml-cpu.cpp         | ggml : fix arm build (#10890)                                         | 2024-12-18 23:21:42 +01:00 |