add amx kernel for gemm (#8998)
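For context on what the new kernels build on: the heart of an AMX GEMM is the TMUL tile multiply. The snippet below is a minimal sketch of a single 16x16 int8 tile product using the AMX intrinsics from immintrin.h, assuming B was already repacked into the VNNI-interleaved tile layout that tdpbssd expects; it is illustrative only and is not the kernel added by this PR (which also handles packing, tails and the quant formats listed below).

// Minimal sketch of one AMX INT8 tile multiply (illustrative, not the PR's kernel).
// C(16x16, int32) += A(16x64, int8) * B(64x16, int8, pre-packed into a 16x64-byte tile).
// Build with: g++ -O2 -mamx-tile -mamx-int8; also requires the OS to have granted
// AMX tile state to the process (see the ggml_amx_init note further down).
#include <immintrin.h>
#include <cstdint>
#include <cstring>

struct tile_config_t {
    uint8_t  palette_id;
    uint8_t  start_row;
    uint8_t  reserved[14];
    uint16_t colsb[16];   // bytes per row of each tile register
    uint8_t  rows[16];    // rows of each tile register
};

void tmul_16x16_q8(const int8_t * A, const int8_t * B, int32_t * C) {
    tile_config_t cfg;
    std::memset(&cfg, 0, sizeof(cfg));
    cfg.palette_id = 1;
    cfg.rows[0] = 16; cfg.colsb[0] = 64;  // tmm0: C, 16 x 16 int32
    cfg.rows[1] = 16; cfg.colsb[1] = 64;  // tmm1: A, 16 x 64 int8
    cfg.rows[2] = 16; cfg.colsb[2] = 64;  // tmm2: B, pre-packed 16 x 64 int8
    _tile_loadconfig(&cfg);

    _tile_loadd(1, A, 64);       // load A with a 64-byte row stride
    _tile_loadd(2, B, 64);       // load pre-packed B
    _tile_zero(0);               // C = 0
    _tile_dpbssd(0, 1, 2);       // C += A * B (signed int8 x signed int8, 4-wide dot per lane)
    _tile_stored(0, C, 64);      // store the 16 x 16 int32 result
    _tile_release();
}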

add intel amx isa detection
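A sketch of what the ISA detection amounts to (not necessarily the exact code added here): CPUID leaf 7, sub-leaf 0 reports AMX-BF16, AMX-TILE and AMX-INT8 in EDX bits 22, 24 and 25 respectively.

// Hedged sketch of AMX ISA detection on x86-64 via CPUID leaf 7 / sub-leaf 0.
#include <cpuid.h>
#include <cstdio>

static bool cpu_has_amx_int8(void) {
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        return false;
    }
    const bool amx_tile = (edx & (1u << 24)) != 0;
    const bool amx_int8 = (edx & (1u << 25)) != 0;
    return amx_tile && amx_int8;   // AMX-INT8 implies AMX-TILE, checked anyway
}

int main() {
    printf("AMX_INT8 = %d\n", cpu_has_amx_int8() ? 1 : 0);
    return 0;
}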

add vnni kernel for gemv cases
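GEMV-shaped cases (a single row of activations) use AVX512-VNNI rather than full AMX tiles. Below is a minimal sketch of the underlying dot-product idiom (hypothetical helper, not the PR's kernel): _mm512_dpbusd_epi32 multiplies unsigned 8-bit values from one operand by signed 8-bit values from the other and accumulates groups of four products into 32-bit lanes.

// Hedged VNNI dot-product sketch; n is assumed to be a multiple of 64 here.
// Build with: g++ -O2 -mavx512f -mavx512vnni
#include <immintrin.h>
#include <cstdint>

int32_t dot_u8_s8_vnni(const uint8_t * a, const int8_t * b, int n) {
    __m512i acc = _mm512_setzero_si512();
    for (int i = 0; i < n; i += 64) {
        const __m512i va = _mm512_loadu_si512((const void *)(a + i));
        const __m512i vb = _mm512_loadu_si512((const void *)(b + i));
        acc = _mm512_dpbusd_epi32(acc, va, vb);  // acc[j] += sum of 4 u8*s8 products
    }
    return _mm512_reduce_add_epi32(acc);
}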

add vnni and amx kernel support for block_q8_0

code cleanup

fix packing B issue

enable openmp

fine tune amx kernel

switch to aten parallel pattern
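The "aten parallel pattern" refers to the way PyTorch's at::parallel_for splits a range into contiguous chunks and hands each thread one [begin, end) slice. A hedged sketch of that pattern on top of OpenMP (hypothetical helper, not the code in this PR):

// Each thread gets one contiguous chunk of [0, n); f(begin, end) processes it.
#include <omp.h>
#include <algorithm>

template <typename F>
void parallel_for_chunked(int n, const F & f) {
    #pragma omp parallel
    {
        const int nth   = omp_get_num_threads();
        const int ith   = omp_get_thread_num();
        const int chunk = (n + nth - 1) / nth;
        const int begin = std::min(ith * chunk, n);
        const int end   = std::min(begin + chunk, n);
        if (begin < end) {
            f(begin, end);
        }
    }
}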

add error message for nested parallelism

code cleanup

add f16 support in ggml-amx

add amx kernels for QK_K quant formats: Q4_K, Q5_K, Q6_K and IQ4_XS

update CMakeLists.txt

update README

fix some compilation warnings

fix compiler warning when amx is not enabled

minor change

ggml-ci

move ggml_amx_init from ggml.c to ggml-amx/mmq.cpp
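On Linux, a process has to ask the kernel for permission to use the AMX tile-data state before executing any tile instruction, which is the job of an init routine like ggml_amx_init. A hedged sketch (the arch_prctl constants come from the Linux x86 ABI; the body is illustrative, not necessarily the exact code that was moved):

#include <sys/syscall.h>
#include <unistd.h>

#define ARCH_REQ_XCOMP_PERM 0x1023   // request a dynamically enabled xstate feature
#define XFEATURE_XTILEDATA  18       // AMX tile data component

static bool amx_init_sketch(void) {
    // ask the kernel to enable AMX tile data for this process
    return syscall(SYS_arch_prctl, ARCH_REQ_XCOMP_PERM, XFEATURE_XTILEDATA) == 0;
}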

ggml-ci

update CMakeLists with -mamx-tile, -mamx-int8 and -mamx-bf16

ggml-ci

add amx as a ggml-backend

update header file: the old include path for immintrin.h has changed to ggml-cpu-impl.h

minor change

update CMakeLists.txt

minor change

apply weight prepacking in the set_tensor method of ggml-backend
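Prepacking in set_tensor means the weights are rearranged into a tile-friendly blocked layout once, when the tensor is uploaded to the AMX buffer, instead of on every matmul. The sketch below shows the general idea with a hypothetical block-major repack; the layout actually used by the AMX kernels (VNNI interleaving, per quant format) is more involved.

// Hedged illustration: repack a row-major [K x N] int8 matrix into column
// blocks of width NB so each block is contiguous (assumes N % NB == 0).
#include <cstdint>
#include <cstring>

static void repack_block_major(const int8_t * src, int8_t * dst, int K, int N, int NB) {
    const int nblocks = N / NB;
    for (int b = 0; b < nblocks; ++b) {
        for (int k = 0; k < K; ++k) {
            // block b, row k is stored contiguously in dst
            std::memcpy(dst + (b * K + k) * NB, src + k * N + b * NB, NB);
        }
    }
}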

fix compile error

ggml-ci

minor change

ggml-ci

update CMakeLists.txt

ggml-ci

add march dependency

minor change

ggml-ci

change ggml_backend_buffer_is_host to return false for amx backend

ggml-ci

fix supports_op

use device reg for AMX backend

ggml-ci

minor change

ggml-ci

minor change

fix rebase

set .buffer_from_host_ptr to false for the AMX backend
Author: Ma Mingfei, 2024-10-18 13:34:36 +08:00 (committed by GitHub)
Commit: 60ce97c9d8 (parent 8901755ba3)
14 changed files with 3204 additions and 7 deletions

Diff excerpt (one of the 14 changed files):

@@ -16,6 +16,14 @@
 # include "ggml-cann.h"
 #endif
+#ifndef __AMX_INT8__
+#undef GGML_USE_AMX
+#endif
+#ifdef GGML_USE_AMX
+# include "ggml-amx.h"
+#endif
 // TODO: replace with ggml API call
 #define QK_K 256
@@ -3533,6 +3541,7 @@ static size_t llama_get_device_memory(const llama_model & model, int device) {
 #else
     return 1;
 #endif
+    GGML_UNUSED(model);
     GGML_UNUSED(device);
 }
@@ -7031,7 +7040,14 @@ static bool llm_load_tensors(
     // assign cpu layers
     for (int i = 0; i < i_gpu_start; ++i) {
+#ifdef GGML_USE_AMX
+        model.buft_layer[i] = {
+            ggml_backend_amx_buffer_type(),
+            llama_default_buffer_type_cpu(model, true)
+        };
+#else
         model.buft_layer[i] = llama_default_buffer_type_cpu(model, true);
+#endif
     }
     if (split_mode == LLAMA_SPLIT_MODE_LAYER) {
@@ -21839,6 +21855,7 @@ const char * llama_print_system_info(void) {
     s += "AVX512_VBMI = " + std::to_string(ggml_cpu_has_avx512_vbmi()) + " | ";
     s += "AVX512_VNNI = " + std::to_string(ggml_cpu_has_avx512_vnni()) + " | ";
     s += "AVX512_BF16 = " + std::to_string(ggml_cpu_has_avx512_bf16()) + " | ";
+    s += "AMX_INT8 = " + std::to_string(ggml_cpu_has_amx_int8()) + " | ";
     s += "FMA = " + std::to_string(ggml_cpu_has_fma()) + " | ";
     s += "NEON = " + std::to_string(ggml_cpu_has_neon()) + " | ";
     s += "SVE = " + std::to_string(ggml_cpu_has_sve()) + " | ";