Add softcap to flash attention

Just CPU and CUDA for now (but, as we know, flash attention
on the CPU is useless in llama.cpp).

On CUDA this improves PP performance quite a bit, especially for
long contexts. E.g., for PP-16384 I now get 3777 t/s.
Without this change one cannot use FA at all for models that
require soft-capping, and one gets 2300 t/s with softcap fused
into softmax, or 2000 t/s without the fused softcap+softmax.

In comparison, mainline llama.cpp gets PP-16384 = 1549 t/s before
PR-8542 (where Johannes Gaessler also added softcap to FA),
and PP-16384 = 3097 t/s after that PR.
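
For context, the soft-capping in question squashes each attention
logit with a scaled tanh before softmax. A minimal sketch of the
per-score operation (the function name and the "softcap <= 0
disables capping" convention are illustrative assumptions, not
taken from this commit):

    #include <math.h>

    // Tanh-based soft-capping of one attention logit, as used by e.g. Gemma-2.
    // Keeps scores within (-softcap, +softcap) while staying smooth near zero.
    static inline float softcap_score(float score, float softcap) {
        if (softcap <= 0.0f) return score; // assumed convention: 0 disables capping
        return softcap * tanhf(score / softcap);
    }

Fusing this into the FA kernel means the capped scores are produced
on the fly instead of in a separate pass over the KQ matrix, which
is where the long-context gains come from.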
Iwan Kawrakow
2024-08-26 18:22:29 +03:00
parent 7168adfe71
commit 46862d725b
12 changed files with 257 additions and 105 deletions

@@ -1811,7 +1811,8 @@ extern "C" {
             struct ggml_tensor  * v,
             struct ggml_tensor  * mask,
             float                 scale,
-            float                 max_bias);
+            float                 max_bias,
+            float                 softcap);
 
     GGML_API void ggml_flash_attn_ext_set_prec(
             struct ggml_tensor * a,
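
With the extra parameter, a call site passes the capping value
straight into the fused kernel. A hedged sketch of such a call
(ctx, the tensors, and head_dim come from elsewhere in the graph
build; 50.0f is Gemma-2's attention capping value, and passing
0.0f to leave scores uncapped is an assumed convention):

    // Hypothetical call site; ctx, q, k, v, mask, head_dim built elsewhere.
    struct ggml_tensor * kqv = ggml_flash_attn_ext(
            ctx, q, k, v, mask,
            1.0f/sqrtf((float)head_dim), // scale: the usual 1/sqrt(head_dim)
            0.0f,                        // max_bias: ALiBi disabled
            50.0f);                      // softcap: new in this commit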