ik_llama.cpp/ggml
Kawrakow d2c74a369b Fix Q5_0 flash attention (#75)
When I changed iqk_mul_mat to use type-1 dot products for type-0
legacy quants, I forgot to also change the vec_dot_type for the
case where the dot product is done via ggml, as in flash attention.
This commit fixes it.

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-01 15:52:35 +03:00