Mirror of https://github.com/ikawrakow/ik_llama.cpp.git
IQ1_S_R4: better 1.5 bpw quants (#185)
* iq1_s_r4: basics - quantize/dequantize
* iq1_s_r4: gemm/gemv works on AVX2/Zen4
* Don't forget to make sure we have a multiple of 4 rows per thread
* iq1_s_r4: this is better
* iq1_s_r4: fix Zen4 after AVX2 changes
* iq1_s_r4: NEON gemm/gemv
* iq1_s_r4: more bits for shared experts

  With this mix we arrive at PPL(512) = 9.4140 for Deepseek-Lite using 1.766 bpw
  for the repeating layers. On the Ryzen-7950X we get PP-512 = 494 t/s and
  TG-128 = 52 t/s @ 16 threads.

* Forgotten counter increment
* iq1_s_r4: slightly faster AVX2/Zen4 gemm/gemv
* Compiler warnings

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
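One bullet above ("multiple of 4 rows per thread") is a direct consequence of the _R4 layout: four consecutive rows are interleaved into one packed block, so a thread's slice of the weight matrix must never split a 4-row group. Below is a minimal sketch of one way to partition rows under that constraint; `split_rows` is a hypothetical helper for illustration, not the commit's actual code.

```c
/* Hypothetical sketch (not the commit's code): divide nrows rows of an
 * *_R4 tensor among nth threads so that every thread gets a multiple
 * of 4 rows, since 4 consecutive rows share one interleaved block. */
#include <stdio.h>

static void split_rows(int nrows, int nth, int ith, int *first, int *count) {
    int chunks     = nrows / 4;                 /* nrows assumed a multiple of 4       */
    int per_thread = (chunks + nth - 1) / nth;  /* 4-row chunks per thread, rounded up */
    int c0 = ith * per_thread;
    if (c0 > chunks) c0 = chunks;               /* surplus threads get zero rows       */
    int c1 = c0 + per_thread < chunks ? c0 + per_thread : chunks;
    *first = 4 * c0;                            /* both values stay multiples of 4     */
    *count = 4 * (c1 - c0);
}

int main(void) {
    /* 32 rows over 3 threads: 12 + 12 + 8, never splitting a 4-row group */
    for (int ith = 0; ith < 3; ++ith) {
        int first, count;
        split_rows(32, 3, ith, &first, &count);
        printf("thread %d: rows [%d, %d)\n", ith, first, first + count);
    }
    return 0;
}
```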
@@ -427,6 +427,7 @@ extern "C" {
     GGML_TYPE_IQ2_XXS_R4= 216,
     GGML_TYPE_IQ2_XS_R4 = 217,
     GGML_TYPE_IQ3_XXS_R4= 218,
+    GGML_TYPE_IQ1_S_R4  = 219,
     GGML_TYPE_IQ4_NL_R4 = 220,
     GGML_TYPE_IQ3_S_R4  = 221,
     GGML_TYPE_IQ2_S_R4  = 222,
@@ -510,6 +511,7 @@ extern "C" {
     GGML_FTYPE_MOSTLY_IQ2_XXS_R4= 215, // except 1d tensors
     GGML_FTYPE_MOSTLY_IQ2_XS_R4 = 216, // except 1d tensors
     GGML_FTYPE_MOSTLY_IQ3_XXS_R4= 217, // except 1d tensors
+    GGML_FTYPE_MOSTLY_IQ1_S_R4  = 218, // except 1d tensors
     GGML_FTYPE_MOSTLY_IQ4_NL_R4 = 219, // except 1d tensors
     GGML_FTYPE_MOSTLY_IQ3_S_R4  = 220, // except 1d tensors
     GGML_FTYPE_MOSTLY_IQ2_S_R4  = 221, // except 1d tensors
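For context on the "1.5 bpw" in the title: assuming the block_iq1_s layout used by mainline ggml (a 16-bit scale plus packed grid indices per 256-weight super-block), the raw per-weight cost works out to 1.5625 bits. This is an illustrative calculation, not code from this commit; the R4 variant repacks four rows into interleaved blocks but stores the same amount of data per weight.

```c
/* Illustrative only: per-weight storage of IQ1_S, assuming the
 * mainline ggml block layout. The IQ1_S_R4 variant interleaves
 * 4 rows per block but keeps the same data per weight. */
#include <stdio.h>
#include <stdint.h>

#define QK_K 256                /* weights per super-block */

typedef struct {
    uint16_t d;                 /* fp16 super-block scale (2 bytes)        */
    uint8_t  qs[QK_K/8];        /* low bits of grid indices (32 bytes)     */
    uint16_t qh[QK_K/32];       /* high bits + per-group shifts (16 bytes) */
} block_iq1_s;

int main(void) {
    double bpw = 8.0 * sizeof(block_iq1_s) / QK_K;   /* 8 * 50 / 256 */
    printf("IQ1_S: %.4f bits per weight\n", bpw);    /* 1.5625       */
    return 0;
}
```

The 1.766 bpw quoted above for Deepseek-Lite's repeating layers sits above this floor because, per the commit message, the shared experts are given more bits.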