Mirror of https://github.com/ikawrakow/ik_llama.cpp.git (synced 2026-02-20 21:24:08 +00:00)
* q6_K dequantizing GEMM

* Much easier: just use different vec_dot types!

* WIP

* Finally q6_K x q8_2_x4 dot product works

* Very slightly better

* We don't need the changes in ggml.c

* Fix AVX2

* iq2_xs

* Fix AVX2

* iq2_s

* q3_K

* Fix q8_k_r8 on Zen4

* q3_K: repack to q8_k_r8 instead of q8_0_r8

  With that we hit 360 t/s for LLaMA-3.1-8B on a Ryzen-7950X. q8_k_r8 is 386 t/s, so for a batch size of 512 repacking costs ~7% of the time taken by the actual GEMM (worked numbers below).

* q3_K: don't scale when all quants in a block are <= 127 when repacking (see the sketch below)

* iq2_s: repack to q8_k_r8 instead of q8_0_r8

* iq2_xs: repack to q8_k_r8

* WIP

* iq2_xs: repack to q8_k_r8

* iq3_xxs: repack to q8_k_r8

* iq3_s: use q8_k_r8

* iq1_s: repack to q8_k_r8

* iq1_m: repack to q8_k_r8

* iq1_m: slightly faster

* Slightly faster

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
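For reference, the ~7% figure above follows directly from the two quoted throughputs: per token the repacking path takes 1/360 s versus 1/386 s for the pure q8_k_r8 GEMM, so the extra cost relative to the GEMM time is 386/360 - 1 ≈ 7.2%, i.e. ~7%.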
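Below is a minimal sketch of the "don't scale when all quants in a block are <= 127" idea from the q3_K repacking item. This is not the repository's implementation: the struct, the function name repack_block_sketch, the flat 256-quant block, and the rescale fallback are illustrative assumptions. It only shows the decision itself: if the largest magnitude already fits into int8, the quants are stored verbatim and the block scale is kept, so the rescale pass (and the rounding error it brings) is skipped.

```cpp
// Illustrative sketch only -- not taken from ik_llama.cpp.
// Block layout, names, and the rescale fallback are assumptions.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdlib>

struct block_q8_sketch {
    float  d;        // block scale
    int8_t qs[256];  // int8 quants (q8_k_r8 interleaves rows; kept flat here for clarity)
};

// q holds the block's quants already combined with their sub-block scales,
// so the values may or may not fit into int8; d is the block's float scale.
static void repack_block_sketch(const int32_t * q, float d, block_q8_sketch * dst) {
    int amax = 0;
    for (int i = 0; i < 256; ++i) amax = std::max(amax, std::abs(q[i]));
    if (amax <= 127) {
        // Every quant already fits into int8: copy verbatim, keep the scale,
        // and skip the rescale pass entirely (the "don't scale" case).
        dst->d = d;
        for (int i = 0; i < 256; ++i) dst->qs[i] = (int8_t)q[i];
    } else {
        // Otherwise map into [-127, 127] and fold the factor back into d.
        const float scale = 127.0f / amax;
        dst->d = d / scale;
        for (int i = 0; i < 256; ++i) dst->qs[i] = (int8_t)std::lround(scale * q[i]);
    }
}
```

Skipping the rescale when it would be a no-op saves the divide-and-round loop and keeps the repacked block bit-exact with the source quants.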