ik_llama.cpp/ggml
Kawrakow 0f8f8b32e2 Much faster CPU prompt processing (part 1) (#531)
* q6_K dequantizing GEMM

* Much easier: just use different vec_dot types!
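
  The idea, sketched below under the assumption that ik_llama.cpp keeps
  ggml's convention of a per-type vec_dot_type trait (the enum values and
  struct here are illustrative, not the actual code): each quantized weight
  type declares which 8-bit format activations are quantized to before the
  dot product, so routing q6_K through the new kernel only requires
  declaring a different vec_dot type.

      // Illustrative sketch only -- names are hypothetical.
      #include <cstdio>

      enum class VecDotType { Q8_K, Q8_2_X4 };

      struct TypeTraits {
          const char *name;
          VecDotType  vec_dot_type; // format activations are quantized to
      };

      // Changing this one field is enough: the generic matmul path
      // quantizes the activation rows to vec_dot_type and then picks the
      // kernel matching the (weight, activation) type pair.
      constexpr TypeTraits q6_K_traits = { "q6_K", VecDotType::Q8_2_X4 }; // was Q8_K

      int main() {
          std::printf("%s activations -> type %d\n",
                      q6_K_traits.name, (int)q6_K_traits.vec_dot_type);
      }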

* WIP

* Finally q6_K x q8_2_x4 dot product works

* Very slightly better

* We don't need the changes in ggml.c

* Fix AVX2

* iq2_xs

* Fix AVX2

* iq2_s

* q3_K

* Fix q8_k_r8 on Zen4

* q3_K: repack to q8_k_r8 instead of q8_0_r8

With that we hit 360 t/s for LLaMA-3.1-8B on a Ryzen-7950X. A model
quantized directly to q8_k_r8 runs at 386 t/s, so at a batch size of 512
the repacking step costs ~7% of the time taken by the actual GEMM.
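(The figure follows directly from the two throughputs:
386/360 - 1 ≈ 7.2% overhead per token.)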

* q3_K: when repacking, skip the rescaling pass if all quants in a block are <= 127
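
  Roughly what that fast path looks like (a minimal sketch of my reading of
  the change; the q8_k_r8 block layout and the helper below are assumptions,
  not the actual ik_llama.cpp code): the target format stores int8 quants
  plus a per-block float scale, so if the integer quants produced from the
  q3_K block already fit into int8 they can be stored verbatim and the
  renormalization pass skipped.

      // Hypothetical sketch: repack n integer quants q[] with scale d
      // into int8 quants plus a float block scale.
      #include <algorithm>
      #include <cmath>
      #include <cstdint>
      #include <cstdlib>

      void repack_block(const int *q, int n, float d,
                        int8_t *out, float *out_scale) {
          int amax = 0;
          for (int i = 0; i < n; ++i) amax = std::max(amax, std::abs(q[i]));
          if (amax <= 127) {
              // Fast path: everything already fits in int8, so copy the
              // quants and keep the block scale unchanged.
              for (int i = 0; i < n; ++i) out[i] = (int8_t)q[i];
              *out_scale = d;
          } else {
              // Slow path: rescale into [-127, 127] and fold the factor
              // into the block scale so d_new * out[i] ~= d * q[i].
              const float s = 127.0f / (float)amax;
              for (int i = 0; i < n; ++i)
                  out[i] = (int8_t)std::lround(s * (float)q[i]);
              *out_scale = d / s;
          }
      }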

* iq2_s: repack to q8_k_r8 instead of q8_0_r8

* iq2_xs: repack to q8_k_r8

* WIP

* iq3_xxs: repack to q8_k_r8

* iq3_s: use q8_k_r8

* iq1_s: repack to q8_k_r8

* iq1_m: repack to q8_k_r8

* iq1_m: slightly faster

* Slightly faster

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2025-06-17 07:12:48 +03:00