BF16_R16 - 16 interleaved bf16 rows (#142)

* Not working bf16_r4

* Adding bf16_r8

A small performance gain compared to bf16: 258 t/s vs 234 t/s.
I suspect this is still sub-optimal.

* bf16_rx: Very slightly faster by interleaving 16 rows

258 t/s -> 263 t/s

* Rename bf16_r4 to bf16_r16

We are interleaving 16 rows now.

* Cleanup unused stuff
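
The row interleaving described above can be sketched roughly as follows. This is an illustrative repack, not the actual ggml implementation: it takes 16 source rows of bf16 values (stored here as raw `uint16_t`) and stores element j of all 16 rows contiguously, so a GEMM kernel can load one 16-wide chunk per column index and keep 16 accumulators hot. The function name is hypothetical.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch of bf16_r16-style repacking: interleave 16 rows so
// that column j of all 16 rows is adjacent in memory.
std::vector<uint16_t> repack_bf16_r16(const std::vector<std::vector<uint16_t>>& rows) {
    const size_t nrows = rows.size();      // expected to be 16
    const size_t ncols = rows[0].size();
    std::vector<uint16_t> packed(nrows * ncols);
    for (size_t j = 0; j < ncols; ++j)
        for (size_t r = 0; r < nrows; ++r)
            packed[j * nrows + r] = rows[r][j];  // column j of every row, contiguous
    return packed;
}
```

With this layout, a dot-product kernel walks the packed buffer linearly while broadcasting one activation value per 16-element chunk, which is what makes wider interleaving pay off on wide SIMD units.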

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Kawrakow
2024-12-15 09:54:21 +01:00
committed by GitHub
parent 20758edcae
commit 85c5a1a995
9 changed files with 175 additions and 2 deletions


@@ -420,6 +420,7 @@ extern "C" {
GGML_TYPE_Q6_K_R4 = 214,
GGML_TYPE_IQ4_NL_R4 = 220,
GGML_TYPE_IQ4_XS_R4 = 223,
GGML_TYPE_BF16_R16 = 230,
GGML_TYPE_Q6_0_R4 = 233,
GGML_TYPE_IQ2_BN_R4 = 335,
GGML_TYPE_IQ4_K_R4 = 339,
@@ -493,6 +494,7 @@ extern "C" {
GGML_FTYPE_MOSTLY_Q6_K_R4 = 214, // except 1d tensors
GGML_FTYPE_MOSTLY_IQ4_NL_R4 = 219, // except 1d tensors
GGML_FTYPE_MOSTLY_IQ4_XS_R4 = 222, // except 1d tensors
GGML_FTYPE_MOSTLY_BF16_R16 = 224, // except 1d tensors
GGML_FTYPE_MOSTLY_Q6_0_R4 = 227, // except 1d tensors
GGML_FTYPE_MOSTLY_IQ2_BN_R4 = 329, // except 1d tensors
GGML_FTYPE_MOSTLY_IQ4_K_R4 = 332, // except 1d tensors