* Adding q8_0_r4

We get PP-512(LLaMA-3.1-8B) = 268 t/s on a Ryzen-7950X compared
to 175.6 t/s for Q8_0.

* q8_0_r4: NEON

We get PP-512(LLaMA-3.1-8B) = 112.6 t/s on M2-Max.

* q8_0_r4: Zen4 matrix-vector specialization

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Kawrakow
2024-12-03 06:15:29 +01:00
committed by GitHub
parent 61304f5c04
commit 6b26cb05f5
9 changed files with 320 additions and 1 deletions

@@ -181,6 +181,7 @@ extern "C" {
     LLAMA_FTYPE_MOSTLY_IQ4_KSS    = 148,  // except 1d tensors
     //
     LLAMA_FTYPE_MOSTLY_Q4_0_R4    = 202,  // except 1d tensors
+    LLAMA_FTYPE_MOSTLY_Q8_0_R4    = 207,  // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ4_NL_X4  = 225,  // except 1d tensors
     LLAMA_FTYPE_GUESSED           = 1024, // not specified in the model file