Mirror of https://github.com/ikawrakow/ik_llama.cpp.git, synced 2026-05-01 03:41:53 +00:00
Q4_0_R4 (#119)
* Adding q4_0_r4 - q4_0 repacked

  We get PP-512(LLaMA-3.1-8B) = 278 t/s on a Ryzen-7950X CPU, so ~5-6% faster than iq4_nl_x4.

* q4_0_r4: NEON

  Here we get 115.8 t/s, so also ~5% better than iq4_nl_x4.

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
@@ -407,6 +407,7 @@ extern "C" {
     GGML_TYPE_IQ2_KS = 145,
     GGML_TYPE_IQ4_KSS = 146,

+    GGML_TYPE_Q4_0_R4 = 202,
     GGML_TYPE_IQ4_NL_X4 = 220,
     GGML_TYPE_COUNT,
 };
@@ -467,6 +468,7 @@ extern "C" {
     GGML_FTYPE_MOSTLY_IQ2_KS = 138, // except 1d tensors
     GGML_FTYPE_MOSTLY_IQ4_KSS = 139, // except 1d tensors
     //
+    GGML_FTYPE_MOSTLY_Q4_0_R4 = 202, // except 1d tensors
     GGML_FTYPE_MOSTLY_IQ4_NL_X4 = 219, // except 1d tensors
 };