🔀 #48 - AVX2 Flash Attention

Author ikawrakow
State Closed
Created 2024-09-10
Updated 2024-09-10

Description

We don't gain as much as on a Zen4 system: AVX2 provides fewer vector registers (16 vs. 32), so data must be loaded/stored much more often. Still, we do get a small gain in performance.

For now it supports only fp16 kv-cache. Quantized kv-cache will be added later.
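For readers unfamiliar with the technique: flash attention processes the KV cache in blocks, maintaining a running max and sum so the softmax never needs the full score row in memory. The scalar sketch below illustrates that streaming update for a single query; all names are illustrative, and it is not the actual vectorized ik_llama.cpp kernel.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Streaming ("flash") attention for one query vector q over a KV cache,
// processed in blocks. A running max m and running sum s let us rescale the
// accumulator on the fly instead of materializing all T scores first.
// Hypothetical helper, not the real kernel.
std::vector<float> flash_attn_1q(const std::vector<float>& q,
                                 const std::vector<std::vector<float>>& K,
                                 const std::vector<std::vector<float>>& V,
                                 int block_size) {
    const int D = (int)q.size();     // head dimension
    const int T = (int)K.size();     // number of cached tokens
    const float scale = 1.0f / std::sqrt((float)D);
    float m = -INFINITY;             // running max of scores
    float s = 0.0f;                  // running softmax denominator
    std::vector<float> acc(D, 0.0f); // running weighted sum of V rows
    for (int j0 = 0; j0 < T; j0 += block_size) {
        const int j1 = std::min(j0 + block_size, T);
        for (int j = j0; j < j1; ++j) {
            float dot = 0.0f;
            for (int d = 0; d < D; ++d) dot += q[d] * K[j][d];
            dot *= scale;
            const float m_new = std::max(m, dot);
            const float corr  = std::exp(m - m_new);  // rescale old state
            const float w     = std::exp(dot - m_new);
            s = s * corr + w;
            for (int d = 0; d < D; ++d) acc[d] = acc[d] * corr + w * V[j][d];
            m = m_new;
        }
    }
    for (int d = 0; d < D; ++d) acc[d] /= s;
    return acc;
}
```

The vectorized kernel keeps `acc` (and ideally a K/V block) resident in vector registers across the inner loop; with AVX2's 16 registers that working set spills sooner than with Zen4's 32, which is where the extra load/store traffic comes from.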