mirror of
https://github.com/ikawrakow/ik_llama.cpp.git
synced 2026-02-25 15:44:10 +00:00
q8_KV_r8 - repacked q8_KV
On Zen4 it is slower than q8_k_r8 (292 vs 370 t/s). This makes no sense whatsoever, as the q8_KV_r8 GEMM is basically the q8_k_r8 GEMM with the unnecessary block handling removed, so one would expect it to be faster.
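To put the numbers in the message in perspective, a quick calculation of the relative slowdown (values taken from the text above):

```python
# Throughput comparison from the commit message.
q8_kv_r8 = 292.0  # t/s on Zen4
q8_k_r8 = 370.0   # t/s on Zen4

slowdown = 1.0 - q8_kv_r8 / q8_k_r8
print(f"q8_KV_r8 is {slowdown:.1%} slower than q8_k_r8")
```

So the simpler GEMM is roughly a fifth slower, which is what makes the result surprising.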
@@ -207,6 +207,7 @@ extern "C" {
     LLAMA_FTYPE_MOSTLY_IQ4_K_R4 = 340, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ5_K_R4 = 341, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ4_KS_R4 = 345, // except 1d tensors
+    LLAMA_FTYPE_MOSTLY_Q8_KV_R8 = 398, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_Q8_K_R8 = 399, // except 1d tensors

     LLAMA_FTYPE_GUESSED = 1024, // not specified in the model file