Use Q8_K_128 for IQ1_S_R4 and IQ1_M_R4 matrix multiplications (#194)

* iq1_s_r4: Use Q8_K_128 instead of Q8_1_X4 for gemm (AVX2/Zen4)

* iq1_m_r4: Use Q8_K_128 instead of Q8_1_X4 for gemm (AVX2/Zen4)

* iq1_s_r4: Use Q8_K_128 instead of Q8_1_X4 for gemm (Neon)

* iq1_m_r4: Use Q8_K_128 instead of Q8_0_X4 for gemm (Neon)

* Simdify q8_K128 quantization also on Neon

* Cleanup

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Author: Kawrakow
Date: 2025-02-09 09:14:52 +02:00
Committed by: GitHub
Parent: 716508d196
Commit: 6658922b94
6 changed files with 169 additions and 43 deletions


@@ -415,6 +415,7 @@ extern "C" {
     GGML_TYPE_Q8_K16  = 147,
     GGML_TYPE_Q8_K32  = 148,
     GGML_TYPE_Q8_KR8  = 149,
+    GGML_TYPE_Q8_K128 = 150,
     GGML_TYPE_Q4_0_R8 = 202,
     GGML_TYPE_Q5_0_R4 = 206,