Mirror of https://github.com/ikawrakow/ik_llama.cpp.git, synced 2026-04-29 02:41:47 +00:00
IQ4_NL_X4 (#118)
* Adding iq4_nl_x4

  Looks very promising - I get PP-512(LLaMA-3.1-8B) = 230 t/s on the Ryzen-7950X!
  This is faster than any other quant and ~40% faster than iq4_nl.

* iq4_nl_x4: getting amazing

  This Zen4 variant gets us to PP-512(LLaMA-3.1-8B) = 263 t/s!

* iq4_nl_x4: AVX2

  Here we gain only 25% compared to iq4_nl

* iq4_nl_x4: NEON

  On M2-Max we get PP-512(LLaMA-3.1-8B) = 109.7 t/s, up from 82.4 t/s for iq4_nl.

* iq4_nl_x4: minor NEON improvement and cleanup

  This gets us to 110.3 t/s. In comparison, IQ4_NL_4_4 in mainline llama.cpp achieves 92.3 t/s.

* iq4_nl_x4: NEON specialization for matrix x vector

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
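The speedups above come from row interleaving: instead of storing each row's IQ4_NL blocks separately, the quants of four consecutive rows are packed side by side, so the matrix-multiplication kernel can load the scales and 4-bit indices of four rows at once and keep its SIMD lanes full. The sketch below only illustrates the idea: QK4_NL and block_iq4_nl match upstream ggml, but block_iq4_nl_x4 and repack_iq4_nl_x4() are assumed, simplified names for illustration, not the repository's exact definitions - the actual Zen4/AVX2/NEON kernels interleave the nibbles in whatever order their SIMD loads expect.

    #include <stdint.h>
    #include <string.h>

    #define QK4_NL 32              // values per IQ4_NL block (as in upstream ggml)
    typedef uint16_t ggml_half;    // fp16 scale, shown here as a plain 16-bit word

    typedef struct {
        ggml_half d;               // per-block scale
        uint8_t   qs[QK4_NL/2];    // 32 x 4-bit indices into the IQ4_NL value table
    } block_iq4_nl;

    // Assumed x4 layout: one block carries the same block position of four rows,
    // so their scales and quants can be fetched with adjacent loads.
    typedef struct {
        ggml_half d[4];            // scales of the four interleaved rows
        uint8_t   qs[4*QK4_NL/2];  // quants of the four rows, stored together
    } block_iq4_nl_x4;

    // Minimal repacking sketch: gather block i of rows 0..3 into one x4 block.
    // Real kernels permute the nibbles for the SIMD load pattern; here we simply
    // concatenate row by row to show the data movement.
    static void repack_iq4_nl_x4(const block_iq4_nl * rows[4], int nblocks, block_iq4_nl_x4 * dst) {
        for (int i = 0; i < nblocks; ++i) {
            for (int r = 0; r < 4; ++r) {
                dst[i].d[r] = rows[r][i].d;
                memcpy(dst[i].qs + r*QK4_NL/2, rows[r][i].qs, QK4_NL/2);
            }
        }
    }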
@@ -406,6 +406,8 @@ extern "C" {
         GGML_TYPE_IQ4_KS    = 144,
         GGML_TYPE_IQ2_KS    = 145,
         GGML_TYPE_IQ4_KSS   = 146,
+        //
+        GGML_TYPE_IQ4_NL_X4 = 220,
         GGML_TYPE_COUNT,
     };
 
@@ -464,6 +466,8 @@ extern "C" {
         GGML_FTYPE_MOSTLY_IQ4_KS    = 137, // except 1d tensors
         GGML_FTYPE_MOSTLY_IQ2_KS    = 138, // except 1d tensors
         GGML_FTYPE_MOSTLY_IQ4_KSS   = 139, // except 1d tensors
+        //
+        GGML_FTYPE_MOSTLY_IQ4_NL_X4 = 219, // except 1d tensors
     };
 
     // available tensor operations:
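For context on the two hunks: GGML_TYPE_* identifies the storage type of an individual tensor, while GGML_FTYPE_* describes the dominant quantization of a whole model file, which is why the commit adds one value to each enum. The two are normally paired up in ggml's ggml_ftype_to_ggml_type(). A minimal sketch of that pairing, assuming only the declarations from ggml.h (the helper name below is made up for illustration and is not code from this commit):

    #include "ggml.h"

    // Hypothetical helper showing the usual GGML_FTYPE_MOSTLY_* -> GGML_TYPE_*
    // pairing; the real mapping lives in ggml's own ftype-to-type switch.
    static enum ggml_type iq4_nl_x4_ftype_to_type(enum ggml_ftype ftype) {
        switch (ftype) {
            case GGML_FTYPE_MOSTLY_IQ4_NL_X4: return GGML_TYPE_IQ4_NL_X4; // new pair from this commit
            case GGML_FTYPE_MOSTLY_IQ4_NL:    return GGML_TYPE_IQ4_NL;    // the non-interleaved original
            default:                          return GGML_TYPE_F32;       // fallback for everything else
        }
    }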