mirror of
https://github.com/ikawrakow/ik_llama.cpp.git
synced 2026-03-05 03:20:00 +00:00
Q6_K_R4 (#130)
* Adding q6_k_r4

* q6_k_r4: 1st functional AVX2 version

* q6_k_r4: AVX2 and simple Zen4

  "Simple" as in processing 4 instead of 8 rows at once.
  On Zen4 we get PP-512(LLaMA-3.1-8B) = 238.3 t/s vs 195.2 t/s for Q6_K.
  TG-128 @ 1 thread is 7.94 t/s vs 5.38 t/s for Q6_K.

* q6_k_r4: 1st NEON version

  PP-512(LLaMA-3.1-8B) = 78 t/s vs 57.6 t/s for q6_K.
  TG-128 is slightly lower than q6_K for a low number of threads, and becomes very slightly better at 8 threads.

* q6_k_r4: slightly faster NEON

  PP-512(LLaMA-3.1-8B) = 83.25 t/s

* q6_k_r4: slightly faster Zen4

  238.3 t/s -> 243.2 t/s

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
@@ -184,6 +184,7 @@ extern "C" {
     LLAMA_FTYPE_MOSTLY_Q8_0_R4   = 207, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_Q5_0_R4   = 208, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_Q4_K_R4   = 214, // except 1d tensors
+    LLAMA_FTYPE_MOSTLY_Q6_K_R4   = 218, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ4_NL_R4 = 225, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ4_XS_R4 = 230, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_Q6_0_R4   = 235, // except 1d tensors
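The "_R4" idea the commit message refers to — repacking 4 consecutive rows so a matrix-multiply kernel can process 4 rows per pass — can be sketched roughly as follows. This is an illustrative assumption about the layout, not the actual q6_k_r4 structs or repacking code from ik_llama.cpp (the real format interleaves quantized blocks with their scales):

```c
#include <assert.h>

/* Hypothetical sketch: given 4 rows, each split into BLOCKS_PER_ROW blocks,
 * store block i of all 4 rows adjacently.  A GEMM kernel can then load one
 * contiguous chunk and update 4 output rows at once, which is where the
 * PP-512 speedups quoted in the commit message come from. */
#define BLOCKS_PER_ROW 4

void repack_r4(int src[4][BLOCKS_PER_ROW], int dst[BLOCKS_PER_ROW][4]) {
    for (int b = 0; b < BLOCKS_PER_ROW; ++b)   /* for each block index      */
        for (int r = 0; r < 4; ++r)            /* gather it from all 4 rows */
            dst[b][r] = src[r][b];
}
```

After repacking, dst[b] holds block b of rows 0..3 side by side, so one SIMD load per row-group replaces four scattered loads.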