Mirror of https://github.com/ikawrakow/ik_llama.cpp.git (synced 2026-04-28 18:32:04 +00:00)
Q2_K_R4 (#136)
* q2_k_r4: Zen4. PP-512(LLaMA-3.1-8B) = 256 t/s.
* q3_k_r4: AVX2
* q2_k_r4: AVX2. We get PP-512(LLaMA-3.1-8B) = 287 t/s. Also cherry-picked the q3_k_r4 AVX2 adaptation that I somehow forgot to push upstream.
* q2_k_r4: NEON. We get PP-512(LLaMA-3.1-8B) = 106.2 t/s. TG-128 is 36.02 t/s, which is ~10% higher than q2_K_S.
* Make sure rows per thread are a multiple of 4

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
@@ -183,6 +183,7 @@ extern "C" {
     LLAMA_FTYPE_MOSTLY_Q4_0_R4 = 202, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_Q8_0_R4 = 207, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_Q5_0_R4 = 208, // except 1d tensors
+    LLAMA_FTYPE_MOSTLY_Q2_K_R4 = 210, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_Q3_K_R4 = 211, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_Q4_K_R4 = 214, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_Q5_K_R4 = 216, // except 1d tensors