ik_llama.cpp/src
Iwan Kawrakow 0280b8d52b q8_KV: be able to use it for K cache
This required quite a few fixes in ggml and llama.cpp:
* ggml: do not calculate row size as n/block_size*type_size. I had
  removed most of it when implementing the quants with per-row scales,
  but it was still lurking in ggml_copy. Not sure if these were the last
  remnants of ggml-style row sizes, or if there are still places left.
* llama.cpp: get rid of the 1D K-cache assumption. Create and manage
  the K-cache as a 2D tensor so we can have per-row metadata as needed
  by q8_KV.

Using q8_KV for the K-cache results in non-negligible performance gains.
More details to follow, but for DeepSeek-Lite with MLA we get an
18% speedup for PP-8192 compared to a q8_0 K-cache.
2025-02-19 10:03:15 +02:00