Kawrakow c606c19101 CPU Flash Attention improvements (#172)
* Slightly faster FA for bf16 KV cache

On the order of ~2-3%. Sadly, once we go beyond 8k tokens the
advantage largely goes away.

* Slightly faster FA for Q8_0 KV cache

* FA: allow bf16 for V-cache with any supported K-cache

E.g., -ctk q8_0 -ctv bf16 is slightly faster than
-ctk q8_0 -ctv q8_0 on Zen4 for not-too-long context lengths
(say, <= 4096).

* FA: much better bf16 kv-cache speed for large contexts

We now hit 122 t/s for LLaMA-3.1-8B (quantized as iq4_xs and
run-time-repacked) with a context of 32768. IIRC, the previous
best for such a large context was ~90 t/s.
Non-negligible improvements at 16384 and 8192 as well:
173.4 and 214 t/s, respectively.

* FA: slightly better quantized kv-cache speed for large contexts

E.g., for q8_0 and a context of 32768, we are now at 113 t/s
for LLaMA-3.1-8B.

Also simplified the quantized K*Q multiplication.

* Fix q8_0 KV cache when not using FA - WIP (AVX2)

1. We add new types GGML_TYPE_Q8_0_X4 and GGML_TYPE_Q8_1_X4, and use
   those to quantize activations for quants that use Q8_0 or Q8_1
   as their vec_dot type (a sketch of the idea follows after this list).
2. We revert the changes to quantize_row_q8_0 and quantize_row_q8_1.
3. We use GGML_TYPE_Q8_0_X4 and GGML_TYPE_Q8_1_X4 as the vec_dot type.
4. We change the FA implementation to use GGML_TYPE_Q8_0 rather than
   GGML_TYPE_Q8_0_X4 as the K and V types.
5. We change the expected type to GGML_TYPE_Q8_0_X4/GGML_TYPE_Q8_1_X4
   in iqk_mul_mat.
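
For illustration, a minimal sketch of what such an interleaved x4 type
can look like. The struct names and exact layout here are assumptions
made for the sketch, not the actual ik_llama.cpp definitions.

    #include <cstdint>

    #define QK8_0 32
    typedef uint16_t ggml_half;   // fp16 storage

    // Plain Q8_0: one fp16 scale followed by 32 int8 quants.
    typedef struct {
        ggml_half d;
        int8_t    qs[QK8_0];
    } block_q8_0;

    // Hypothetical x4 variant: four consecutive blocks packed together,
    // with the four scales grouped up front so a SIMD kernel can load
    // them in one go and then stream the 128 quants contiguously.
    // A GGML_TYPE_Q8_1_X4 analogue would do the same for Q8_1,
    // additionally carrying the block sums its dot product needs.
    typedef struct {
        ggml_half d[4];
        int8_t    qs[4*QK8_0];
    } block_q8_0_x4;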

Also added an optimization in ggml_compute_forward_mul_mat when
ne12*ne13 > 1 (K*Q and V*softmax(K*Q)): the threads are partitioned
via GCD(ne12*ne13, nthread), so that several heads are processed
simultaneously with nthread/GCD(ne12*ne13, nthread) threads per head
(see the sketch below). This results in a non-negligible performance
gain for large contexts.
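
A minimal sketch of this kind of GCD-based partitioning (the names are
hypothetical; this is not the actual ggml_compute_forward_mul_mat code):

    #include <numeric>   // std::gcd

    struct head_partition {
        int groups;            // head groups running in parallel
        int threads_per_head;  // threads cooperating on one head
        int heads_per_group;   // heads each group iterates over
    };

    static head_partition partition_heads(int ne12, int ne13, int nthread) {
        const int nheads = ne12*ne13;
        const int g      = std::gcd(nheads, nthread);
        // g groups of nthread/g threads each; every group walks over
        // nheads/g of the K*Q (or V*softmax(K*Q)) multiplications.
        return { g, nthread/g, nheads/g };
    }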

Question: why is it not allowed to use quantized V-cache when
not using FA?

* Fix q8_0 KV cache when not using FA - NEON

* Fix AVX2

Again the issue with _mm256_maddubs_epi16 saturating/overflowing
that I keep forgetting about.
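
For reference: _mm256_maddubs_epi16 multiplies unsigned bytes with signed
bytes and adds adjacent pairs into saturating int16 lanes, so two large
products can silently clamp at INT16_MAX. A saturation-free alternative
(a sketch, not necessarily the fix applied in this commit) is to widen to
int16 first and accumulate with _mm256_madd_epi16, which produces exact
int32 partial sums for 8-bit inputs:

    #include <immintrin.h>

    static inline __m256i dot_i8_no_saturation(__m256i x, __m256i y) {
        // sign-extend the low and high 16 bytes of each operand to int16
        __m256i xl = _mm256_cvtepi8_epi16(_mm256_castsi256_si128(x));
        __m256i xh = _mm256_cvtepi8_epi16(_mm256_extracti128_si256(x, 1));
        __m256i yl = _mm256_cvtepi8_epi16(_mm256_castsi256_si128(y));
        __m256i yh = _mm256_cvtepi8_epi16(_mm256_extracti128_si256(y, 1));
        // multiply int16 pairs and add adjacent products into int32 lanes
        return _mm256_add_epi32(_mm256_madd_epi16(xl, yl),
                                _mm256_madd_epi16(xh, yh));
    }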

* FA: don't use large Q steps on AVX2 for fp16 K-cache

* On Zen4 it is also better to not use large Q steps for fp16 K-cache

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2025-01-15 18:19:22 +02:00