ik_llama.cpp/ggml
Kawrakow fc8920282f iqk_mul_mat(ARM_NEON): adding bf16 support (#41)
It looks like the ARMv8 ISA has support for bf16, but my M2 Max
does not have it, so we resort to bf16 -> f32 conversion and do
the computations in f32. This is 2x slower than f16, but 8x better
than what I get if I try to run a bf16 model on the M2
(NEON and Metal).

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-09-16 16:47:36 +03:00