ik_llama.cpp/ggml
Iwan Kawrakow 093cf3ec9b FA: slightly better quantized kv-cache speed for large contexts
For example, with a q8_0 KV cache and a context of 32768 tokens, we are now at 113 t/s for LLaMA-3.1-8B.

Also simplified the quantized K*Q multiplication.
2025-01-15 09:03:46 +02:00