Mirror of https://github.com/ikawrakow/ik_llama.cpp.git, synced 2026-02-24 07:04:11 +00:00
* cuda: MMQ for iq2_k_r4
* cuda: MMQ for iq3_k_r4
* cuda: MMQ for iq4_k_r4
* cuda: MMQ for iq5_k_r4
* iqk_r4 quants: use MMQ only for batches < 1024 tokens

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
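The last bullet describes a dispatch heuristic: for the iqX_k_r4 quant types, the quantized matmul (MMQ) kernels are used only for small batches, with larger batches taking another path. A minimal sketch of such a threshold check, assuming a hypothetical helper name and the 1024-token limit from the commit message (this is illustrative, not the actual ik_llama.cpp code):

```cpp
#include <cstdint>

// Batch-size threshold from the commit message: MMQ is preferred
// only for batches smaller than 1024 tokens.
constexpr int64_t MMQ_BATCH_LIMIT = 1024;

// Hypothetical dispatch helper: returns true when the iqX_k_r4
// MMQ kernels should handle this matmul, false when a larger-batch
// path (e.g. dequantize + dense GEMM) should be used instead.
bool use_mmq_for_r4(int64_t n_tokens) {
    return n_tokens < MMQ_BATCH_LIMIT;
}
```

The actual kernel selection in the repository involves more factors (device capability, quant type); this only captures the batch-size cutoff stated above.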