Adding IQ1_KT - 1.75 bpw SOTA quants (#616)

* iq1_kt: basics
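
For orientation, a minimal layout sketch of what a 1.75 bpw super-block has to look like. The field names and the exact split of index/scale bits are assumptions, not the actual definition; the only hard constraint is the bit budget: 256 weights * 1.75 bpw / 8 = 56 bytes per block.

    #include <cstdint>

    constexpr int QK_K = 256;  // super-block size used throughout the k-quants

    // Hypothetical field split; only the 56-byte total is forced by 1.75 bpw.
    struct block_iq1_kt_sketch {
        uint8_t sh[QK_K/32];   // assumed: per-group 4-bit scales (+ spare bits)
        uint8_t ql[QK_K/8];    // assumed: low 8 bits of the trellis indices
        uint8_t qh[QK_K/16];   // assumed: high 4 bits of the trellis indices
    };
    static_assert(sizeof(block_iq1_kt_sketch) == 56, "1.75 bpw * 256 / 8 = 56 bytes");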

* iq1_kt: CUDA dequantize

Testing with LLaMA-3.1-8B-Instruct, we get almost the same PPL
as iq2_xxs, i.e., roughly the same quality at about 0.2 bpw fewer.
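
The *_KT quants decode weights from a QTIP-style "3INST" trellis generator rather than a stored codebook: an LCG step whose 32-bit state is reinterpreted as two fp16 values and summed, giving approximately Gaussian codebook entries. A scalar sketch of such a generator follows; the constants are the ones published with QTIP, and whether iq1_kt uses exactly these is an assumption.

    #include <cstdint>
    #include <cstring>

    static float fp16_to_fp32(uint16_t h) {
        // minimal fp16 -> fp32 for normal numbers; sufficient here, because
        // the masking below always yields a normal fp16 exponent
        uint32_t sign = uint32_t(h & 0x8000) << 16;
        uint32_t em   = h & 0x7fff;
        uint32_t bits = sign | ((em + 0x1c000u) << 13);  // rebias exponent 15 -> 127
        float f; std::memcpy(&f, &bits, sizeof(f));
        return f;
    }

    static float trellis_3inst(uint32_t & state) {
        // constants as published for QTIP's 3INST generator; assumed here,
        // not verified against this PR
        constexpr uint32_t ka = 89226354u;       // LCG multiplier
        constexpr uint32_t kb = 64248484u;       // LCG increment
        constexpr uint32_t kmask = 0x8fff8fffu;  // keep sign + low bits of each half
        constexpr uint32_t km32  = 0x3b603b60u;  // force a fixed fp16 exponent range
        state = ka*state + kb;
        const uint32_t s = (state & kmask) ^ km32;
        return fp16_to_fp32(uint16_t(s & 0xffff)) + fp16_to_fp32(uint16_t(s >> 16));
    }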

* iq1_kt: CUDA MMQ

* iq1_kt: CUDA MMVQ
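
For context: MMQ is the quantized matrix-matrix path, MMVQ the matrix-vector path where activations are quantized to 8-bit blocks on the fly. A rough sketch of the per-block inner product MMVQ has to evaluate, reusing trellis_3inst from above; the names and the single-seed simplification are illustrative, since the real kernel decodes one packed index per small group of weights.

    // Hypothetical helper: dot product of decoded weights with q8 activations.
    // A single running trellis state is used here for brevity.
    static float vec_dot_iq1_kt_sketch(uint32_t seed, float wscale,
                                       const int8_t * q8, float ascale, int n) {
        uint32_t state = seed;
        float sum = 0.0f;
        for (int j = 0; j < n; ++j) {
            sum += trellis_3inst(state) * float(q8[j]);  // decode weight, multiply
        }
        return wscale * ascale * sum;  // fold in weight and activation scales
    }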

* iq1_kt: AVX2 GEMM/GEMV

* iq1_kt: convert/repack to q8_0_r8 (AVX2)
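
q8_0_r8 is a repacked q8_0 variant that interleaves 8 rows, so the GEMM microkernel can stream row groups with full-width SIMD loads. A hedged sketch of the repack step (iq1_kt rows would first be dequantized and requantized to q8_0); struct names and the exact interleaving order are assumptions.

    #include <cstdint>

    struct block_q8_0    { uint16_t d;    int8_t qs[32];   };  // fp16 scale + 32 int8
    struct block_q8_0_r8 { uint16_t d[8]; int8_t qs[8*32]; };  // assumed: 8 rows interleaved

    // Gather one q8_0 block from each of 8 rows into one interleaved r8 block.
    // The simplest row-major interleave is shown; the real layout is likely
    // chunked at SIMD-register granularity.
    static void repack_to_q8_0_r8_sketch(const block_q8_0 * rows[8],
                                         block_q8_0_r8 * out, int nblocks) {
        for (int ib = 0; ib < nblocks; ++ib) {
            for (int r = 0; r < 8; ++r) {
                out[ib].d[r] = rows[r][ib].d;
                for (int j = 0; j < 32; ++j) {
                    out[ib].qs[32*r + j] = rows[r][ib].qs[j];
                }
            }
        }
    }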

* iq1_kt: slightly faster GEMV

18.6 t/s -> 19.4 t/s

* iq1_kt: NEON GEMM/GEMV

Pathetic as usual
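
The NEON decode can at least exploit vector fp16 arithmetic: the two half-precision lanes of each 32-bit trellis state can be summed pairwise in registers. A sketch assuming the ARMv8.2 FP16 extension (__ARM_FEATURE_FP16_VECTOR_ARITHMETIC); the intrinsic choice is illustrative, not the actual kernel.

    #include <arm_neon.h>

    // Decode 4 trellis states at once: LCG step, mask/XOR into fp16 pairs,
    // pairwise-add the two halves of each state, widen to fp32.
    // Constants are the assumed QTIP 3INST ones from the scalar sketch above.
    static float32x4_t trellis_gen4(uint32x4_t & state) {
        state = vmlaq_u32(vdupq_n_u32(64248484u), state, vdupq_n_u32(89226354u));
        uint32x4_t s = veorq_u32(vandq_u32(state, vdupq_n_u32(0x8fff8fffu)),
                                 vdupq_n_u32(0x3b603b60u));
        float16x8_t h = vreinterpretq_f16_u32(s);
        // lanes (2i, 2i+1) hold the fp16 pair of state i; add adjacent pairs
        float16x4_t sum = vpadd_f16(vget_low_f16(h), vget_high_f16(h));
        return vcvt_f32_f16(sum);  // widen the 4 sums to fp32
    }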

* iq1_kt: slightly faster NEON - still pathetic

* iq1_kt: tiny bit better GEMV on NEON

* iq1_kt: convert/repack to q8_0_r8 (NEON)

* iq1_kt: very slightly faster convert/repack to q8_0_r8 on NEON

* Adding forgotten file

* iq1_kt: add to constants.py

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Authored by Kawrakow on 2025-07-20 10:05:23 +02:00, committed by GitHub
parent d0bc1f8296, commit e1164e1fd8
21 changed files with 930 additions and 6 deletions


@@ -206,6 +206,7 @@ extern "C" {
     LLAMA_FTYPE_MOSTLY_IQ4_KT = 153, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ3_KS = 154, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_IQ2_KL = 155, // except 1d tensors
+    LLAMA_FTYPE_MOSTLY_IQ1_KT = 156, // except 1d tensors
     //
     LLAMA_FTYPE_MOSTLY_Q4_0_R8 = 202, // except 1d tensors
     LLAMA_FTYPE_MOSTLY_Q8_0_R8 = 207, // except 1d tensors