Mirror of https://github.com/ikawrakow/ik_llama.cpp.git (synced 2026-04-30 19:31:48 +00:00)
Adding IQ3_KS quants (#566)
* iq3_ks: basics
* iq3_ks: CUDA dequantize
* iq3_ks: CUDA mmvq
* iq3_ks: mmq
* iq3_ks: faster mmq
* iq3_ks: Zen4
* iq3_ks: AVX2 convert to q8_k_r8
  This gives us PP-512 = 360 t/s.
* iq3_ks: AVX2 GEMM/GEMV
* iq3_ks: NEON GEMM/GEMV
* iq3_ks: NEON convert to q8_k_r8
  This gives us PP-512 = 164 t/s.
* iq3_ks: Metal dequantize
* iq3_ks: Metal gemv - pathetic performance

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
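The PP-512 figures above are prompt-processing throughput at a 512-token prompt. A hedged usage sketch, assuming ik_llama.cpp follows upstream llama.cpp conventions for its quantize and bench tools; the binary names, the `IQ3_KS` type string, and the model filenames here are assumptions for illustration, not taken from this commit:

```shell
# Quantize an f16 GGUF model to the new IQ3_KS type
# (type string assumed to match the enum name)
./llama-quantize input-f16.gguf output-iq3_ks.gguf IQ3_KS

# Measure prompt-processing throughput at 512 tokens (the PP-512 metric above)
./llama-bench -m output-iq3_ks.gguf -p 512
```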
@@ -429,6 +429,7 @@ extern "C" {
     GGML_TYPE_IQ2_KT = 153,
     GGML_TYPE_IQ3_KT = 154,
     GGML_TYPE_IQ4_KT = 155,
+    GGML_TYPE_IQ3_KS = 156,

     GGML_TYPE_Q4_0_R8 = 202,
     GGML_TYPE_Q5_0_R4 = 206,
@@ -521,6 +522,7 @@ extern "C" {
     GGML_FTYPE_MOSTLY_IQ2_KT = 142, // except 1d tensors
     GGML_FTYPE_MOSTLY_IQ3_KT = 143, // except 1d tensors
     GGML_FTYPE_MOSTLY_IQ4_KT = 144, // except 1d tensors
+    GGML_FTYPE_MOSTLY_IQ3_KS = 145, // except 1d tensors
     //
     GGML_FTYPE_MOSTLY_Q4_0_R8 = 202, // except 1d tensors
     GGML_FTYPE_MOSTLY_Q8_0_R8 = 207, // except 1d tensors