mirror of
https://github.com/ikawrakow/ik_llama.cpp.git
synced 2026-05-01 03:41:53 +00:00
Adding IQ1_KT - 1.75 bpw SOTA quants (#616)
* iq1_kt: basics
* iq1_kt: CUDA dequantize

  Testing with LLaMA-3.1-8B-Instruct, we get almost the same PPL as iq2_xxs, so about 0.2 bpw fewer bits for the same quality.
* iq1_kt: CUDA MMQ
* iq1_kt: CUDA MMVQ
* iq1_kt: AVX2 GEMM/GEMV
* iq1_kt: convert/repack to q8_0_r8 (AVX2)
* iq1_kt: slightly faster GEMV (18.6 t/s -> 19.4 t/s)
* iq1_kt: NEON GEMM/GEMV (pathetic as usual)
* iq1_kt: slightly faster NEON - still pathetic
* iq1_kt: tiny bit better GEMV on NEON
* iq1_kt: convert/repack to q8_0_r8 (NEON)
* iq1_kt: very slightly faster convert/repack to q8_0_r8 on NEON
* Adding forgotten file
* iq1_kt: add to constants.py

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
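The headline numbers above can be sanity-checked with some back-of-envelope arithmetic. A minimal sketch, assuming the 2.0625 bpw commonly cited for iq2_xxs and an approximate 8B parameter count for LLaMA-3.1-8B (both figures are assumptions, not from the commit); only the 1.75 bpw for IQ1_KT comes from the commit title:

```python
# Back-of-envelope model-size estimate for two bits-per-weight (bpw) levels.
# ASSUMPTIONS: 2.0625 bpw for iq2_xxs and ~8.03e9 parameters for
# LLaMA-3.1-8B are illustrative values, not taken from this commit.

def quantized_size_gib(n_params: float, bpw: float) -> float:
    """Approximate size of the quantized tensor data in GiB
    (ignores per-model metadata and any unquantized tensors)."""
    return n_params * bpw / 8 / 1024**3

n_params = 8.03e9  # approximate parameter count, assumption
size_iq1_kt  = quantized_size_gib(n_params, 1.75)    # from the commit title
size_iq2_xxs = quantized_size_gib(n_params, 2.0625)  # assumed bpw

print(f"IQ1_KT  @ 1.75   bpw: {size_iq1_kt:.2f} GiB")
print(f"IQ2_XXS @ 2.0625 bpw: {size_iq2_xxs:.2f} GiB")
print(f"savings: {size_iq2_xxs - size_iq1_kt:.2f} GiB")
```

Under these assumptions, roughly 0.3 bpw of savings translates to a few hundred MiB on an 8B model, which is consistent with the commit's "about 0.2 bpw fewer bits for the same quality" framing against iq2_xxs.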
@@ -436,6 +436,7 @@ extern "C" {
         GGML_TYPE_IQ4_KT = 155,
         GGML_TYPE_IQ3_KS = 156,
         GGML_TYPE_IQ2_KL = 157,
+        GGML_TYPE_IQ1_KT = 158,

         GGML_TYPE_Q4_0_R8 = 202,
         GGML_TYPE_Q5_0_R4 = 206,
@@ -530,6 +531,7 @@ extern "C" {
         GGML_FTYPE_MOSTLY_IQ4_KT = 144, // except 1d tensors
         GGML_FTYPE_MOSTLY_IQ3_KS = 145, // except 1d tensors
         GGML_FTYPE_MOSTLY_IQ2_KL = 146, // except 1d tensors
+        GGML_FTYPE_MOSTLY_IQ1_KT = 147, // except 1d tensors
         //
         GGML_FTYPE_MOSTLY_Q4_0_R8 = 202, // except 1d tensors
         GGML_FTYPE_MOSTLY_Q8_0_R8 = 207, // except 1d tensors