ik_llama.cpp/ggml
Iwan Kawrakow 1d2cdf2f58 q6_0_r4: NEON
We get PP-512(LLaMA-3.1-8B) = 95 t/s on M2-Max.
In terms of ops, q6_0_r4 is identical to q5_0_r4,
except that loading the high bits uses
vld1q_u8_x2 instead of vld1q_u8. It is strange that
this can make a 5% difference in performance, especially
considering that the load is amortized (re-used) across 8 columns
of the right matrix. Or am I running out of vector registers?
2024-12-03 14:32:03 +01:00