* Adding q4_0_r4 - q4_0 repacked with 4 interleaved rows

We get PP-512(LLaMA-3.1-8B) = 278 t/s on a Ryzen-7950X CPU,
i.e., ~5-6% faster than iq4_nl_x4.

* q4_0_r4: NEON

Here we get 115.8 t/s, so again ~5% better than iq4_nl_x4.
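The "r4" layout interleaves the quantized blocks of 4 consecutive rows so that a 4-row tile can be consumed with contiguous loads in the matrix-multiplication kernel. A minimal sketch of the idea is below; the struct and function names (`block_q4_0_r4`, `repack_q4_0_r4`) are hypothetical stand-ins for illustration, and the real definitions in ggml differ in detail (fp16 scale type, padding, trait plumbing):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define QK4_0 32  // q4_0 block size: 32 weights per block

// Simplified q4_0 block: one scale + 32 4-bit quants packed two per byte.
// (In ggml the scale is an fp16 value; uint16_t holds its raw bits here.)
typedef struct {
    uint16_t d;
    uint8_t  qs[QK4_0 / 2];
} block_q4_0;

// Hypothetical r4 block: the same data for 4 rows, interleaved so one
// 4x32 tile sits contiguously in memory.
typedef struct {
    uint16_t d[4];
    uint8_t  qs[4 * QK4_0 / 2];
} block_q4_0_r4;

// Repack nblk blocks from each of 4 source rows into nblk r4 blocks.
static void repack_q4_0_r4(const block_q4_0 *rows[4], int nblk,
                           block_q4_0_r4 *dst) {
    for (int ib = 0; ib < nblk; ++ib) {
        for (int r = 0; r < 4; ++r) {
            dst[ib].d[r] = rows[r][ib].d;
            memcpy(dst[ib].qs + r * QK4_0 / 2, rows[r][ib].qs, QK4_0 / 2);
        }
    }
}
```

The payoff is purely a memory-layout one: during PP the GEMM kernel multiplies 4 rows at a time, and with this layout their scales and quants arrive in one stream instead of 4 strided ones.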

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Author: Kawrakow
Date: 2024-12-02 17:01:48 +01:00
Committed by: GitHub
Commit: 239a344f99 (parent 6d0462d4a3)
9 changed files with 304 additions and 15 deletions


@@ -180,6 +180,7 @@ extern "C" {
     LLAMA_FTYPE_MOSTLY_IQ2_KS    = 147, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ4_KSS   = 148, // except 1d tensors
     //
+    LLAMA_FTYPE_MOSTLY_Q4_0_R4   = 202, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ4_NL_X4 = 225, // except 1d tensors
     LLAMA_FTYPE_GUESSED          = 1024, // not specified in the model file