ik_llama.cpp/ggml
Kawrakow 51db1bf2d2 CUDA: quantized GEMM for IQ4_K, IQ5_K, IQ6_K (#417)
* MMQ for iq4_k: WIP (not working)

* MMQ for iq4_k: working now

* MMQ for iq5_k

* Cleanup

* MMQ for iq5_k: slightly faster

* MMQ for iq6_k

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2025-05-14 14:04:11 +03:00