ik_llama.cpp/ggml
Kawrakow d98a6753a6 ARM_NEON Flash Attention (#49)
* NEON Flash Attention - first working version

Simply reuse the Zen4/AVX2 implementation, but use
f16 for the K*Q multiplication and V*softmax(K*Q) accumulation.
This makes the FlashMS portion somewhat awkward because we
do not have fast f16 implementations for expf (and tanh when
softcap is enabled), so we need to convert back and forth
to f32.

FA is slightly faster than no-FA for the 4B TriLM model,
but slightly slower for Gemma-2b.

* NEON Flash Attention - convert Q to f16 before computing Q*K

* NEON Flash Attention - use fp32 for K*Q operations

Otherwise I get wrong results for LLaMA-3.1-8B (but it works for
Gemma-2b).
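
One way to see why f16 K*Q accumulation can fail for some models: repeated f16 rounding stalls a running sum once each increment drops below half an ulp of the accumulator. The conversion helpers below are deliberately simplified (round-to-nearest with ties up, f16 subnormals flushed to zero) and written just for this demo; they are not the ggml conversion routines:

```c
#include <stdint.h>
#include <string.h>

/* Simplified f32<->f16 conversions for demonstration only. */
static uint16_t f32_to_f16(float f) {
    uint32_t x; memcpy(&x, &f, 4);
    uint32_t sign = (x >> 16) & 0x8000;
    int32_t  e = (int32_t)((x >> 23) & 0xff) - 112; /* rebias 127 -> 15 */
    uint32_t m = x & 0x7fffffu;
    if (e <= 0)  return (uint16_t)sign;             /* underflow -> 0   */
    if (e >= 31) return (uint16_t)(sign | 0x7c00);  /* overflow -> inf  */
    uint32_t h = sign | ((uint32_t)e << 10) | (m >> 13);
    if (m & 0x1000) h++; /* round up; carry may bump the exponent */
    return (uint16_t)h;
}

static float f16_to_f32(uint16_t h) {
    uint32_t sign = (uint32_t)(h & 0x8000) << 16;
    uint32_t e = (h >> 10) & 0x1f;
    uint32_t m = h & 0x3ffu;
    uint32_t bits = (e == 0) ? sign /* treat subnormals as zero */
                             : sign | ((e + 112) << 23) | (m << 13);
    float f; memcpy(&f, &bits, 4);
    return f;
}

/* Accumulate x n times, rounding the sum to f16 after every add,
 * versus keeping the accumulator in f32. */
float sum_f16(float x, int n) {
    uint16_t acc = 0;
    for (int i = 0; i < n; i++) acc = f32_to_f16(f16_to_f32(acc) + x);
    return f16_to_f32(acc);
}
float sum_f32(float x, int n) {
    float acc = 0.0f;
    for (int i = 0; i < n; i++) acc += x;
    return acc;
}
```

Summing 0.0001 ten thousand times gives roughly 1.0 in f32 but stalls near 0.25 in f16, once 0.0001 falls below half an f16 ulp of the running sum. Drift of this kind is model-dependent, which is consistent with f16 K*Q working for Gemma-2b but not for LLaMA-3.1-8B.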

* Delete commented out stuff

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-09-11 10:26:49 +03:00