mirror of https://github.com/ikawrakow/ik_llama.cpp.git
synced 2026-02-10 16:30:12 +00:00
Faster ARM_NEON GEMM implementation for legacy quants (#546)
* iq2_kt and iq3_kt work with the new int trellis. Much slower than the fp16-based trellis. I guess Apple doesn't have int8_t SIMD on the M2-Max GPU.
* q4_0: 83.6 t/s -> 128.4 t/s. q4_0_r8 is at 123.5 t/s.
* q5_0: 74.2 t/s -> 128.5 t/s. q5_0_r4 is at 111.4 t/s.
* q6_0: 74.2 t/s -> 128.8 t/s. q6_0_r4 is at 107.2 t/s.
* q8_0: 84.5 t/s -> 128.7 t/s. q8_0_r8 is at 131 t/s.
* iq4_nl: 84.5 t/s -> 128.1 t/s. iq4_nl_r4 is at 120.4 t/s.
* q4_1: 74.4 t/s -> 115.4 t/s. There is no repacked variant.
* q5_1: 64.2 t/s -> 114.9 t/s. There is no repacked variant.

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
@@ -18722,7 +18722,7 @@ static std::pair<ggml_type, int> interleaved_properties(ggml_type type) {
     { GGML_TYPE_IQ5_KS_R4, { GGML_TYPE_IQ5_KS, 4} },
     { GGML_TYPE_IQ5_K_R4, { GGML_TYPE_IQ5_K, 4} },
     { GGML_TYPE_Q8_KV_R8, { GGML_TYPE_Q8_KV, 8} },
-    { GGML_TYPE_Q8_K_R8, { GGML_TYPE_Q8_K, 8} },
+    { GGML_TYPE_Q8_K_R8, { GGML_TYPE_Q8_0, 8} },
     { GGML_TYPE_BF16_R16, { GGML_TYPE_BF16, 16} },
 };
 if (auto it = k_map.find(type); it != k_map.end()) return it->second;