ik_llama.cpp/ggml
Iwan Kawrakow 5b6999970e Fix Q5_0 flash attention
When I changed iqk_mul_mat to use type-1 dot products for type-0
legacy quants, I forgot to also change the vec_dot_type for the case
where the dot product is done via ggml, as in flash attention.
This commit fixes it.
2024-10-01 15:49:03 +03:00