DeepSeek FA support (CPU only) (#200)

* Adding support for K head size != V head size

This is relevant for DeepSeek models.
At this point ggml CPU FA works.
The next step is to change iqk FA to make it work
with Dk != Dv.
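
The reason Dk != Dv needs explicit support: attention scores are Dk-dimensional dot products, while the output is a Dv-dimensional weighted sum of V rows, so the two head sizes enter the kernel independently. A minimal single-query sketch (not the actual ggml kernel, just an illustration of the shapes involved, with DeepSeek using Dk = 192, Dv = 128):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Single-query attention sketch with K head size Dk != V head size Dv.
// q: [Dk], K: [n_kv * Dk] row-major, V: [n_kv * Dv] row-major.
// Returns the [Dv] output: softmax(q.K^T / sqrt(Dk)) . V
std::vector<float> attn_one_query(const std::vector<float>& q,
                                  const std::vector<float>& K,
                                  const std::vector<float>& V,
                                  int n_kv, int Dk, int Dv) {
    std::vector<float> s(n_kv);
    const float scale = 1.0f/std::sqrt((float)Dk);
    float smax = -INFINITY;
    for (int j = 0; j < n_kv; ++j) {
        float dot = 0.0f;
        for (int d = 0; d < Dk; ++d) dot += q[d]*K[j*Dk + d];  // Dk-dim dot product
        s[j] = dot*scale;
        smax = std::max(smax, s[j]);
    }
    float sum = 0.0f;
    for (int j = 0; j < n_kv; ++j) { s[j] = std::exp(s[j] - smax); sum += s[j]; }
    std::vector<float> out(Dv, 0.0f);
    for (int j = 0; j < n_kv; ++j) {
        const float w = s[j]/sum;
        for (int d = 0; d < Dv; ++d) out[d] += w*V[j*Dv + d];  // Dv-dim accumulation
    }
    return out;
}
```

The real FA kernels tile and fuse these loops, but the shape constraint is the same: nothing forces Dk == Dv, the check in llama.cpp was simply conservative.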

* iqk support for K head size != V head size

To keep compilation time from exploding, only
Dk = 192, Dv = 128 is supported for now (DeepSeek).
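
Why compile time is the constraint: the iqk kernels are templated on the head sizes, so every additional (Dk, Dv) pair is another full kernel instantiation. A hypothetical dispatch sketch (names are illustrative, not the actual iqk code) of wiring up only the one unequal pair:

```cpp
#include <cstdio>

// Placeholder for a head-size-templated FA kernel; each distinct (Dk, Dv)
// pair produces a separate instantiation, which is what drives compile time.
template <int Dk, int Dv>
void flash_attn_impl() {
    std::printf("FA kernel Dk=%d Dv=%d\n", Dk, Dv);
}

// Returns false for unsupported (Dk, Dv) combinations so the caller can
// fall back to the non-FA attention path.
bool dispatch_fa(int Dk, int Dv) {
    if (Dk == Dv) {
        switch (Dk) {             // equal-head-size cases already instantiated
            case  64: flash_attn_impl< 64,  64>(); return true;
            case 128: flash_attn_impl<128, 128>(); return true;
            default:  return false;
        }
    }
    // The only unequal pair wired up for now: DeepSeek's Dk = 192, Dv = 128.
    if (Dk == 192 && Dv == 128) { flash_attn_impl<192, 128>(); return true; }
    return false;
}
```

Adding more unequal pairs later is just more `case`/`if` lines plus their instantiations, traded against build time.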

* FA: very slightly faster for nq = 1 (TG)

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Author: Kawrakow (committed by GitHub)
Date: 2025-02-11 14:46:30 +02:00
commit 3c98bfb33d (parent a366a3d17d)
4 changed files with 221 additions and 129 deletions

@@ -17768,10 +17768,10 @@ struct llama_context * llama_new_context_with_model(
         params.flash_attn = false;
     }
-    if (params.flash_attn && model->hparams.n_embd_head_k != model->hparams.n_embd_head_v) {
-        LLAMA_LOG_WARN("%s: flash_attn requires n_embd_head_k == n_embd_head_v - forcing off\n", __func__);
-        params.flash_attn = false;
-    }
+    //if (params.flash_attn && model->hparams.n_embd_head_k != model->hparams.n_embd_head_v) {
+    //    LLAMA_LOG_WARN("%s: flash_attn requires n_embd_head_k == n_embd_head_v - forcing off\n", __func__);
+    //    params.flash_attn = false;
+    //}
     if (params.type_v != GGML_TYPE_F16 && params.type_v != GGML_TYPE_BF16 && !params.flash_attn) {
         LLAMA_LOG_ERROR("%s: V cache quantization requires flash_attn\n", __func__);