ik_llama.cpp/ggml
Iwan Kawrakow 473e280500 Fusing a mat mul op followed by scale op on the CPU
This is useful for Bitnet, where almost all matrix
multiplications are followed by scale operations.
As a result, we get a ~2% boost in Bitnet PP performance.

Implementation is easy when the matrix multiplication is done
by iqk_mul_mat. But if iqk_mul_mat is not implemented for the
quant type/architecture, we would need to add the scaling to
llamafile sgemm and to ggml itself, which is much messier,
so I haven't done it yet.
Given that Bitnet is just a niche thing for now, I'll just
leave this on a branch.
2024-07-27 10:45:56 +03:00