iq4_nl: squash commits for easier rebase
* Basics (quantize, dequantize)
* CUDA dequantize and dot product
* Slightly faster CUDA dot product (120 t/s)
* Switch to 6-bit scales
* Scalar dot product
* AVX2 dot product
* ARM_NEON dot product
* Works on Metal, but still slow
* Slightly better Metal dot product
* Another small Metal improvement
* Metal dot product is getting there
* Faster CUDA dot product
* Add 1/8 ffn_down layers as Q5_K when no imatrix has been provided
* Report the actual bpw
* Add _xs mix that is 4.05 bpw for non-MoE models
* Remove IQ4_XS for now, slightly adjust kvalues_iq4nl
* AVX2 dot product uses Q8_0 instead of Q8_K
* Add to test-backend-ops
* Minor fix
* Also use Q5_K for attn_output in MoE models
* Fixes after merging latest master
* Switching to blocks of 32
* AVX2 for blocks of 32
* Scalar dot product for blocks of 32
* ARM_NEON dot product for blocks of 32
* Metal kernels for blocks of 32
* Slightly faster Metal kernels
This commit is contained in:
parent
15499eb942
commit
10a47fa678
1 changed file with 1 addition and 0 deletions
@@ -9523,6 +9523,7 @@ void ggml_vec_dot_iq4_nl_q8_0(int n, float * restrict s, size_t bs, const void *

    float sumf = 0;

    for (int ib = 0; ib < nb; ib += 2) {

        q4bits.val[0] = vld1q_u8(x[ib+0].qs);
        q4bits.val[1] = vld1q_u8(x[ib+1].qs);
        q8b.val[0]    = vld1q_s8(y[ib+0].qs);