Mirror of https://github.com/ikawrakow/ik_llama.cpp.git (synced 2026-04-30 19:31:48 +00:00)
Use Q8_K_128 for IQ1_S_R4 and IQ1_M_R4 matrix multiplications (#194)
* iq1_s_r4: Use Q8_K_128 instead of Q8_1_X4 for gemm (AVX2/Zen4)
* iq1_m_r4: Use Q8_K_128 instead of Q8_1_X4 for gemm (AVX2/Zen4)
* iq1_s_r4: Use Q8_K_128 instead of Q8_1_X4 for gemm (Neon)
* iq1_m_r4: Use Q8_K_128 instead of Q8_0_X4 for gemm (Neon)
* Simdify q8_K128 quantization also on Neon
* Cleanup

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
@@ -415,6 +415,7 @@ extern "C" {
     GGML_TYPE_Q8_K16  = 147,
     GGML_TYPE_Q8_K32  = 148,
     GGML_TYPE_Q8_KR8  = 149,
+    GGML_TYPE_Q8_K128 = 150,

     GGML_TYPE_Q4_0_R8 = 202,
     GGML_TYPE_Q5_0_R4 = 206,