ik_llama.cpp/ggml/src
Kawrakow 5236c98b41 CUDA: MMQ for iqX_r4 quants (#557)
* cuda: MMQ for iq2_k_r4

* cuda: MMQ for iq3_k_r4

* cuda: MMQ for iq4_k_r4

* cuda: MMQ for iq5_k_r4

* iqk_r4 quants: use MMQ only for batches < 1024 tokens

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2025-06-26 08:50:49 +02:00
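The last bullet of the commit message describes a size-based dispatch: the quantized MMQ kernels for the iqX_k_r4 types are used only when the batch holds fewer than 1024 tokens, with larger batches presumably taking the usual dequantize + cuBLAS path. A minimal sketch of that decision follows, assuming that threshold; the names (`choose_matmul_path_iqk_r4`, `kMmqBatchLimit`) are illustrative and not part of ggml's actual API.

```cpp
#include <cstdint>

// Hypothetical sketch, not the actual ik_llama.cpp code.
enum class MatmulPath { MMQ, DequantizeCublas };

// Threshold taken from the commit message: batches below 1024 tokens
// go through the quantized MMQ kernel.
constexpr int64_t kMmqBatchLimit = 1024;

// Pick a matmul path for an iqX_k_r4 weight tensor based on batch size
// (number of token rows in the activation matrix).
static MatmulPath choose_matmul_path_iqk_r4(int64_t n_tokens) {
    return n_tokens < kMmqBatchLimit ? MatmulPath::MMQ
                                     : MatmulPath::DequantizeCublas;
}
```

The usual rationale for such a cutoff in llama.cpp-style CUDA backends is that MMQ tends to win at small batch sizes, where dequantization overhead dominates, while a dequantize + cuBLAS GEMM typically pulls ahead once the batch is large enough to saturate the tensor cores.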