Mirror of https://github.com/ikawrakow/ik_llama.cpp.git (synced 2026-04-24 00:19:19 +00:00)
This is useful for Bitnet, where almost all matrix multiplications are followed by scale operations. Fusing the scale into the multiplication gives a ~2% boost in Bitnet PP performance. The implementation is easy when the matrix multiplication is done by iqk_mul_mat. But if iqk_mul_mat is not implemented for the quant type/architecture, the scaling would need to be added to llamafile sgemm and to ggml itself, which is much messier, so I haven't done that yet. Given that Bitnet is just a niche thing for now, I'll leave this on a branch.