* q2_k_r4: Zen4

We get PP-512(LLaMA-3.1-8B) = 256 t/s.

* q3_k_r4: AVX2

* q2_k_r4: AVX2

We get PP-512(LLaMA-3.1-8B) = 287 t/s.

Also cherry-picked the q3_k_r4 AVX2 adaptation that I somehow
forgot to push upstream.

* q2_k_r4: NEON

We get PP-512(LLaMA-3.1-8B) = 106.2 t/s.
TG-128 is 36.02 t/s, which is ~10% higher than Q2_K_S.

* Make sure the number of rows per thread is a multiple of 4 (see the sketch below)
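For context, here is a minimal sketch of the general r4 idea, not the actual ik_llama.cpp kernels: repacking interleaves the blocks of 4 consecutive rows so a matrix-multiplication kernel can feed 4 rows per pass from the same activations, which is also why per-thread row ranges must stay multiples of 4. All names (Block, repack_r4, thread_row_range) and the block layout are simplified placeholders, not the real ggml types:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Simplified stand-in for a quantized super-block. The real block_q2_K
    // packs 2-bit quants plus 4-bit scales/mins and two fp16 super-block
    // scales; the repacking shuffle only moves whole blocks, so the exact
    // layout does not matter here.
    struct Block { uint8_t data[84]; };

    // Interleave the blocks of 4 consecutive rows: after repacking, block b
    // of rows r..r+3 sits contiguously, so a GEMM kernel can load one chunk
    // and feed 4 accumulators (one per original row) at a time.
    std::vector<Block> repack_r4(const Block* src, int nrows, int blocks_per_row) {
        std::vector<Block> dst((size_t)nrows * blocks_per_row);
        size_t k = 0;
        for (int r = 0; r < nrows; r += 4) {           // nrows must be a multiple of 4
            for (int b = 0; b < blocks_per_row; ++b) {
                for (int i = 0; i < 4; ++i) {
                    dst[k++] = src[(size_t)(r + i) * blocks_per_row + b];
                }
            }
        }
        return dst;
    }

    // Because each 4-row group is now one indivisible unit, a thread's row
    // range must not split a group: distribute groups, then scale back to rows.
    void thread_row_range(int nrows, int nthreads, int ith, int& first, int& last) {
        const int ngroups    = nrows / 4;
        const int per_thread = (ngroups + nthreads - 1) / nthreads;
        first = 4 * std::min(ngroups, ith * per_thread);
        last  = 4 * std::min(ngroups, (ith + 1) * per_thread);  // exclusive
    }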

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>

@@ -31,6 +31,7 @@ static const std::vector<struct quant_option> QUANT_OPTIONS = {
 { "IQ2_BN", LLAMA_FTYPE_MOSTLY_IQ2_BN, " 2.00 bpw quantization (Bitnet)", },
 { "IQ2_BN_R4",LLAMA_FTYPE_MOSTLY_IQ2_BN_R4," 2.00 bpw quantization (Bitnet)", },
 { "Q2_K", LLAMA_FTYPE_MOSTLY_Q2_K, " 2.63G, +0.6717 ppl @ LLaMA-v1-7B", },
+{ "Q2_K_R4", LLAMA_FTYPE_MOSTLY_Q2_K_R4, "Q2_K_S repacked", },
 { "Q2_K_S", LLAMA_FTYPE_MOSTLY_Q2_K_S, " 2.16G, +9.0634 ppl @ LLaMA-v1-7B", },
 { "IQ3_XXS", LLAMA_FTYPE_MOSTLY_IQ3_XXS, " 3.06 bpw quantization", },
 { "IQ3_S", LLAMA_FTYPE_MOSTLY_IQ3_S, " 3.44 bpw quantization", },