Commit Graph

3507 Commits

Author SHA1 Message Date
Kawrakow
3283fbeb82 q8_k_r8: this change for NEON got lost? 2024-12-16 13:31:21 +01:00
Kawrakow
e811de75e9 BF16_R16 - 16 interleaved bf16 rows (#142)
* Not working bf16_r4

* Adding bf16_r8

Small performance gain compared to bf16 - 258 t/s vs 234 t/s.
I guess this is still suboptimal.

* bf16_rx: Very slightly faster by interleaving 16 rows

258 t/s -> 263 t/s

* Rename bf16_r4 to bf16_r16

We are interleaving 16 rows now (see the sketch after this entry).

* Cleanup unused stuff

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-15 09:54:21 +01:00
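The _r4/_r8/_r16 suffixes refer to row-interleaved variants: N consecutive rows are repacked block by block, so a matmul kernel can read the same block of all N rows with one contiguous load. A minimal repacking sketch, with an illustrative block type and chunk size (not the actual ik_llama.cpp structs):

```cpp
#include <cstdint>
#include <vector>

// Illustrative "block" of bf16 values; the real chunk size may differ.
struct block_bf16 { uint16_t v[32]; };

// Repack rows so that, within each group of 16 rows, block 0 of all 16 rows
// comes first, then block 1, and so on. nrows is assumed to be a multiple of 16.
std::vector<block_bf16> interleave_r16(const block_bf16 * rows, int nrows, int nblocks) {
    const int N = 16;                                  // rows per interleaved group
    std::vector<block_bf16> out(size_t(nrows) * nblocks);
    for (int ig = 0; ig < nrows / N; ++ig) {           // group of 16 rows
        for (int ib = 0; ib < nblocks; ++ib) {         // block index along the row
            for (int ir = 0; ir < N; ++ir) {           // the 16 rows of the group
                out[size_t(ig*nblocks + ib)*N + ir] = rows[size_t(ig*N + ir)*nblocks + ib];
            }
        }
    }
    return out;
}
```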
Kawrakow
e885c1e59b Q8_K_R8: Fastest quantized matrix multiplications (#141)
* q8_k_r8: fastest matrix multiplication known to humankind

We get PP-512(LLaMA-3.1-8B) = 370 t/s on a Ryzen-7950X!

* q8_k_r8: AVX2

I was worried that we don't have enough vector registers on
AVX2, but it looks like it handles it just fine. We get
PP-512(LLaMA-3.1-8B) = 354 t/s on a Ryzen-5975WX.
Slightly slower than the Zen4 version with double the threads,
but still a huge upgrade compared to Q8_0_R4.

* q8_k_r4: NEON

We get PP-512(LLaMA-3.1-8B) = 159.2 t/s.
Compare this to the 128 t/s we have for Q8_0_R4.

* q8_k_r4: go to signed ints

Why?
* On AVX2 _mm256_maddubs_epi16() may overflow, so we need to
  stay within the signed int range and use _mm256_sign_epi8
  (see the sketch after this entry). Not yet tested on the
  AVX2 machine, but I expect a major slowdown.
* It is almost 10% faster on ARM_NEON. Somehow the veorrq_u8()
  needed to convert from unsigned to signed seems to be extremely
  slow on the M2-Max.
* We only lose ~0.5% in performance on Zen4 (there the exclusive
  or that we now use to convert from signed to unsigned seems to be
  much faster than on M2-Max).

* Shut up useless compiler warnings

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-14 09:24:30 +01:00
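The "go to signed ints" change revolves around _mm256_maddubs_epi16, which multiplies unsigned by signed bytes and adds adjacent products into saturating int16 lanes; with full-range unsigned inputs the pair sums can saturate. The usual workaround is to move the sign of one operand onto the other with _mm256_sign_epi8. A minimal sketch of that pattern (the function name is illustrative):

```cpp
#include <immintrin.h>

// With a up to 255 and |b| up to 127, the int16 pair sums of
// _mm256_maddubs_epi16(a, b) can saturate. Applying the sign of x to y and
// feeding |x| keeps them in range (2*128*127 = 32512 < 32767).
static inline __m256i mul_add_i8_pairs(__m256i x, __m256i y) {
    const __m256i ax = _mm256_sign_epi8(x, x);  // |x|, zeroed where x == 0
    const __m256i sy = _mm256_sign_epi8(y, x);  // y with the sign of x applied
    return _mm256_maddubs_epi16(ax, sy);        // 16 int16 pair sums of x[i]*y[i]
}
```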
Kawrakow
eae584dc98 Faster R4 quants on Zen4 (#139)
* q3_k_r4: faster Zen4

* q3_k_r4: faster Zen4

256.2 -> 272.7 t/s for PP-512

* q6_k_r4: faster Zen4

243.2 -> 261.3 t/s for PP-512

* q4_k_r4: slightly faster Zen4

262.4 t/s -> 268.1 t/s

* q5_k_r4: slightly faster Zen4

248.3 t/s -> 256.7 t/s

* iq4_xs_r4: slightly faster Zen4

256.8 t/s -> 272.0 t/s

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-13 15:47:59 +01:00
Kawrakow
6dd42c1b1c Another fix 2024-12-13 10:14:53 +02:00
Kawrakow
49e10a4bdd Adding lost q4_k_r4 case
Not sure how it got lost.
2024-12-13 10:09:04 +02:00
Kawrakow
ce97b0325e IQ4_K_R4 (#138)
* iq4_k_r4: WIP

* iq4_k_r4: Zen4 and hopefully AVX2

On Zen4 we get PP-512(LLaMA-3.1-8B) = 232.6 t/s, up from 182.2 t/s
for iq4_k. Applying the extra shift costs a ~6% performance penalty.

* iq4_k_r4: AVX2

PP-512 = 227.60 t/s. The shifts are really costly.

* iq4_k_r4: NEON

We get PP-512(LLaMA-3.1-8B) = 108 t/s, up from 58.2 t/s for iq4_k.

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-12 16:04:20 +01:00
Kawrakow
66ade83e56 Fix AVX2 implementation of iq4_nl_r4 (#137)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-11 18:55:21 +01:00
Kawrakow
0f6621d410 Q2_K_R4 (#136)
* q2_k_r4: Zen4

PP-512(LLaMA-3.1-8B) = 256 t/s

* q3_k_r4: AVX2

* q2_k_r4: AVX2

We get PP-512(LLaMA-3.1-8B) = 287 t/s.

Also cherry-picked the q3_k_r4 AVX2 adaptation that I somehow
forgot to push upstream.

* q2_k_r4: NEON

We get PP-512(LLaMA-3.1-8B) = 106.2 t/s.
TG-128 is 36.02 t/s, which is ~10% higher than q2_K_S.

* Make sure rows per thread are a multiple of 4 (see the sketch after this entry)

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-11 18:16:49 +01:00
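The last item concerns work splitting: with rows interleaved in groups of 4, each thread's row range has to start and end on a group boundary. A minimal sketch of such a split, with a hypothetical helper rather than the actual ik_llama.cpp scheduler code:

```cpp
// Give thread ith (of nth) a row range whose length is a multiple of the
// interleave factor (4 for the _r4 types). Illustrative only.
struct row_range { int first, last; };

static row_range rows_for_thread(int nrows, int nth, int ith, int interleave = 4) {
    const int ngroups    = nrows / interleave;          // nrows assumed a multiple of 4
    const int per_thread = (ngroups + nth - 1) / nth;   // groups per thread, rounded up
    int first = ith       * per_thread * interleave;
    int last  = (ith + 1) * per_thread * interleave;
    if (first > nrows) first = nrows;
    if (last  > nrows) last  = nrows;
    return { first, last };
}
```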
Kawrakow
680d96ad6f Better ARM_NEON implementation for R4 quants (#135)
* q6_k_r4: Better ARM implementation

PP-512(LLaMA-3.1-8B) is now 104.2 t/s up from 83.2 t/s.
I.e., q6_k_r4 now beats q6_0_r4.

* q5_k_r4: Better ARM implementation

PP-512(LLaMA-3.1-8B) is now 107.8 t/s up from 96.9 t/s.
I.e., q5_k_r4 now beats q5_0_r4.

* q4_k_r4: Better ARM implementation

PP-512(LLaMA-3.1-8B) is now 122.1 t/s up from 110 t/s.
I.e., q4_k_r4 is now (nearly) on par with q4_0_r4.

* iq4_xs_r4: Better ARM implementation

PP-512(LLaMA-3.1-8B) is now 131.3 t/s up from 115.8 t/s.
iq4_xs_r4 is now the prompt processing champion on ARM.

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-11 14:20:27 +01:00
Kawrakow
4872f2f57e Q3_K_R4 (#134)
* q3_k_r4: Zen4 works, but not as good as it should be

238 t/s, so slightly slower than q6_k_r4.

* q3_k_r4: NEON

We get PP-512(LLaMA-3.1-8B) = 106.9 t/s.
This is 1.93X faster than q3_K_S!

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-11 11:19:00 +01:00
Kawrakow
e78e47b857 Q5_K_R4 (#132)
* q5_k_r4: WIP

* q5_k_r4: Zen4 and AVX2

We get PP-512(LLaMA-3.1-8B) = 248.3 t/s on Zen4.
Q5_K_S has PP-512 = 190 t/s.

* q5_k_r4: NEON

We get PP-512(LLaMA-3.1-8B) = 96.1 t/s.

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-10 18:13:47 +01:00
Kawrakow
3a8795d422 Slightly faster Q4_K_R4 and IQ4_XS_R4 on Zen4 (#131)
* iq4_k_r4: slightly faster on Zen4

* iq4_xs_r4: very slightly faster Zen4

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-10 14:14:40 +01:00
Kawrakow
b7e2f656f5 Q6_K_R4 (#130)
* Adding q6_k_r4

* q6_k_r4: 1st functional AVX2 version

* q6_k_r4: AVX2 and simple Zen4

"Simple" as in processing 4 instead of 8 rows at once.
On Zen4 we get PP-512(LLaMA-3.1-8B) = 238.3 t/s vs
195.2 t/s for Q6_K. TG-128 @ 1 thread is 7.94 t/s
vs 5.38 t/s for Q6_K.

* q6_k_r4: 1st NEON version

PP-512(LLaMA-3.1-8B) = 78 t/s vs 57.6 t/s for q6_K.
TG-128 is slightly lower than q6_K at low thread counts and
becomes very slightly better at 8 threads.

* q6_k_r4: slightly faster NEON

PP-512(LLaMA-3.1-8B) = 83.25 t/s

* q6_k_r4: slightly faster Zen4

238.3 t/s -> 243.2 t/s

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-10 12:26:40 +01:00
Kawrakow
13126ce100 Q4_K_R4 (#129)
* Something is still wrong

* Simply don't see what is wrong

* q4_k_r4: finally works on Zen4

I had forgotten to prevent token_embd.weight from being
quantized with q4_k_r4!

* q4_k_r4: AVX2

We get PP-512(LLaMA-3.1-8B) = 267 t/s on a Ryzen-5975WX.
This is ~30% better than Q4_K_S.

* q4_k_r4: NEON

We get PP-512(LLaMA-3.1-8B) = 110 t/s.
Not quite as good as q4_0_r4, but still a massive
improvement compared to the 69 t/s for q4_K.

* q4_k_r4: slightly better AVX2

PP-512 goes from 267 t/s to 282 t/s on Ryzen-5975WX

* Minor

* Minor

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-09 16:59:18 +01:00
Kawrakow
b39bbb0405 Faster IQ4_XS_R4 on Zen4 (#128)
* Faster iq4_xs_r4 on Zen4

The trick is to simply prepare the Q8 block sums for
blocks of 32 as floats (see the sketch after this entry).
This brings PP-512 up to 254.6 t/s from 224 t/s.

* Fix broken matrix x vector product on Zen4

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-08 15:27:13 +01:00
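A sketch of the block-sum idea from the first commit above: the Q8 activation side precomputes the sum of each 32-value sub-block and stores it scaled to float, so a kernel that needs a per-block constant times the activation sum can use a single float multiply instead of an integer reduction. The struct layout here is illustrative, not the real Q8 block type:

```cpp
#include <cstdint>

// Hypothetical Q8 activation block covering a 256-value chunk (8 sub-blocks of 32).
struct block_q8_sketch {
    float  d;          // activation scale
    int8_t qs[256];    // quantized activations
    float  bsums[8];   // d * (sum of each 32-value sub-block), stored as float
};

// Precompute the per-32 sums once; the dot-product kernel can then add
// min_b * bsums[b] directly in float.
static void compute_block_sums(block_q8_sketch & b) {
    for (int ib = 0; ib < 8; ++ib) {
        int s = 0;
        for (int j = 0; j < 32; ++j) s += b.qs[32*ib + j];
        b.bsums[ib] = b.d * s;
    }
}
```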
Kawrakow
daf5f52022 Rename iq4_nl_x4 to iq4_nl_r4 (#126)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-08 09:34:42 +01:00
Kawrakow
cc9acdbcff R4 improvements on ARM_NEON (#125)
* q4_0_r4: 6% faster PP on NEON

* qx_0_r4_q8_0 template

Applied to q4_0_r4 and q5_0_r4. It makes q5_0_r4 PP
~7% faster.

* Apply qx_0_r4_q8_0 template also to q6_0_r4 and iq4_nl_x4

* Simplify

* Minor iq4_xs_r4 improvement on NEON

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-08 09:13:10 +01:00
Kawrakow
612a207676 iq2_bn_r4: fastest Bitnet CPU implementation on the planet (#124)
* Adding iq2_bn_r4

This Zen4-only implementation achieves PP-512 = 826 t/s (!!!)
for Bitnet-1.58b-3B, up from 620 t/s for iq2_bn.

* Make sure rows per thread are a multiple of the number of interleaved rows

With this I can run iq2_bn_r4 with 32 threads and this increases
PP-512 to 872 t/s.

* iq2_bn_r4: 1st shot at NEON

PP-512 is already faster than iq2_bn (284 t/s vs 246 t/s
for Bitnet-1.58b-3B). TG-128 is ~5% slower.

* iq2_bn_r4: NEON

PP-512 is now 296 t/s. TG-128 is ~20% faster than iq2_bn
for 1 thread, but saturates to about the same 93 t/s at
8 threads.

* iq2_bn_r4: Experimenting on NEON

The matrix x vector multiplication is erratic.
iq2_bn_r4 is faster at 1, 2, and 4 threads, but
saturates to a lower t/s at 8 threads compared to
iq2_bn. iq2_bn actually manages 99 t/s at 8 threads
and not 93 as I wrote in the last commit. iq2_bn_r4
performance has huge fluctuations at 4 and 8 threads.

* Some cleanup

* iq2_bn_r4: AVX2

As expected, PP is slightly slower as we just don't have
enough vector registers (690 vs 710 t/s). TG is slightly faster
(18.2 vs 16.7 t/s at 1 thread).

* iq2_bn_r4: use AVX2 implementation on Zen4 for matrix x vector

It is faster - we get 29.6 t/s at 1 thread vs 25.9 t/s for iq2_bn.

* iq2_bn_r4: simdify q8_K16 quantization (AVX2)

PP-512 becomes 834 t/s and TG-128 now saturates to the same
performance as iq2_bn for 4 threads.

* iq2_bn_r4: simdify q8_K16 quantization (NEON)

PP-512 is now 304.7 t/s, and TG-128 @ 8 threads
very slightly outperforms iq2_bn (100.7 t/s vs 99.6 t/s)

* iq2_bn_r4: fix AVX2 after breaking it two commits ago

* iq2_bn_r4: better AVX2

As we don't have enough vector registers on AVX2, it is better
to do two passes per row, needing only half of the accumulator
registers that way (see the sketch after this entry).
With this, we now beat iq2_bn PP also on AVX2 by a small margin.

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-06 12:15:39 +01:00
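A sketch of the register-pressure argument behind the final AVX2 change: AVX2 offers 16 ymm registers, so keeping one accumulator per output column group can spill; doing the columns in two passes keeps only half the accumulators live at a time. Purely illustrative (a plain float vector-matrix product, not the quantized kernel):

```cpp
#include <cstddef>
#include <immintrin.h>

// y[c] = sum_k x[k] * B[k][c] for ncols columns (ncols a multiple of 8, <= 128).
// Processing the columns in two passes halves the number of live accumulators.
static void vec_mat_two_pass(const float * x, const float * B, float * y, int K, int ncols) {
    const int half = ncols / 2;
    for (int pass = 0; pass < 2; ++pass) {
        const int c0 = pass * half;
        __m256 acc[8];                                  // at most half/8 <= 8 in use
        for (int c = 0; c < half/8; ++c) acc[c] = _mm256_setzero_ps();
        for (int k = 0; k < K; ++k) {
            const __m256 xk = _mm256_set1_ps(x[k]);
            for (int c = 0; c < half/8; ++c) {
                acc[c] = _mm256_fmadd_ps(xk, _mm256_loadu_ps(B + (size_t)k*ncols + c0 + 8*c), acc[c]);
            }
        }
        for (int c = 0; c < half/8; ++c) _mm256_storeu_ps(y + c0 + 8*c, acc[c]);
    }
}
```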
Kawrakow
9119023a4b IQ4_XS_R4 (#123)
* Adding iq4_xs_r4

This is a 1st working version on Zen4.
We get PP-512(LLaMA-3.1-8B) = 226 t/s, so 16% slower
than iq4_nl_x4.

* iq4_xs_r4: WIP

* iq4_xs_r4: Use AVX2 version for matrix x vector on Zen4

* iq4_xs_r4: NEON

We get PP-512(LLaMA-3.1-8B) = 115.6 t/s on M2-Max,
up from 68.2 t/s for iq4_xs!

* DRY

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-04 15:20:07 +01:00
Kawrakow
bb699e1e6b Q6_0_R4 (#122)
* Adding q6_0_r4

We get PP-512(LLaMA-3.1-8B) = 257 t/s on a Ryzen-7950X.

* q6_0_r4: NEON

We get PP-512(LLaMA-3.1-8B) = 95 t/s on M2-Max.
In terms of ops, q6_0_r4 is identical to q5_0_r4
except that loading the high bits uses vld1q_u8_x2
instead of vld1q_u8 (see the sketch after this entry).
It is strange that this can make a 5% difference in
performance, especially considering that this load is
amortized (re-used) over 8 columns in the right matrix.
Or am I running out of vector registers?

* Fix AVX2

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-03 14:48:26 +01:00
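On the vld1q_u8 vs vld1q_u8_x2 remark: q5_0-style blocks carry one high bit per value while q6_0-style blocks carry two, so the interleaved kernel has to pull in twice as many high-bit bytes per step. A minimal illustration of just that load difference (block layout details omitted):

```cpp
#include <arm_neon.h>
#include <cstdint>

// 16 bytes of high bits per step for the 5-bit case, 32 bytes for the 6-bit case.
static inline uint8x16_t   load_high_bits_q5(const uint8_t * qh) { return vld1q_u8(qh);    }
static inline uint8x16x2_t load_high_bits_q6(const uint8_t * qh) { return vld1q_u8_x2(qh); }
```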
Kawrakow
d9593f3689 Q5_0_R4 (#121)
* Adding q5_0_r4

We get PP-512(LLaMA-3.1-8B) = 256.7 t/s on a Ryzen-7950X.
We even get TG-128 improvement to 11.7 t/s from 11.1 t/s.

* q5_0_r4: NEON

We get PP-512(LLaMA-3.1-8B) = 99.6 t/s on M2-Max,
up from 71.0 t/s for Q5_0. The difference to mainline llama.cpp
is no longer funny: they get 26.5 t/s for Q5_0.

For TG, we are not able to fully saturate memory bandwidth
and arrive at 22.1 t/s @ 8 threads. Mainline llama.cpp gets
20.6 t/s for Q5_0.

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-03 12:59:22 +01:00
Kawrakow
6b26cb05f5 Q8_0_R4 (#120)
* Adding q8_0_r4

We get PP-512(LLaMA-3.1-8B) = 268 t/s on a Ryzen-7950X compared
to 175.6 t/s for Q8_0.

* q8_0_r4: NEON

We get PP-512(LLaMA-3.1-8B) = 112.6 t/s on M2-Max.

* q8_0_r4: Zen4 matrix-vector specialization

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-03 06:15:29 +01:00
Kawrakow
61304f5c04 Q4_0_R4 (#119)
* Adding q4_0_r4 - q4_0 repacked

We get PP-512(LLaMA-3.1-8B) = 278 t/s on a Ryzen-7950X CPU,
so ~5-6% faster than iq4_nl_x4.

* q4_0_r4: NEON

Here we get 115.8 t/s, so also ~5% better than iq4_nl_x4.

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-02 17:01:48 +01:00
Kawrakow
72d94fbf22 IQ4_NL_X4 (#118)
* Adding iq4_nl_x4

Looks very promising - I get PP-512(LLaMA-3.1-8B) = 230 t/s
on the Ryzen-7950X! This is faster than any other quant and
~40% faster than iq4_nl.

* iq4_nl_x4: getting amazing

This Zen4 variant gets us to PP-512(LLaMA-3.1-8B) = 263 t/s!

* iq4_nl_x4: AVX2

Here we gain only 25% compared to iq4_nl

* iq4_nl_x4: NEON

On M2-Max we get PP-512(LLaMA-3.1-8B) = 109.7 t/s, up from
82.4 t/s for iq4_nl.

* iq4_nl_x4: minor NEON improvement and cleanup

This gets us to 110.3 t/s. In comparison,
IQ4_NL_4_4 in mainline llama.cpp achieves 92.3 t/s.

* iq4_nl_x4: NEON specialization for matrix x vector

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-12-02 07:25:39 +01:00
Nexes the Elder
6c73f704ca Use Q6_0 instead of Q5_1 for tensors incompatible with IQ5_K/Q5_K (#116) 2024-11-21 12:01:23 +02:00
Kawrakow
f0a0503ec0 MMQ for Q6_0 (#115)
* MMQ for Q6_0

* Add Q6_0 MMQ to template generator

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-11-21 07:12:11 +01:00
Kawrakow
7e5af2073c Faster MoE inference (#112)
* multi_sdd: WIP

* multi_sdd: CPU works

* multi_add: CUDA

* multi_add: simplify

* multi_add: Metal

* Metal: speed up mul_mat_id

For the Granite-1B MoE model PP-512 goes from
156 t/s to 890 t/s, so nearly a 6X speedup!

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-31 12:05:27 +01:00
Kawrakow
ba3f7a2e94 Use fused mul - unary op also for MoE models (#111)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-26 18:23:54 +02:00
Kawrakow
cd96f6c4e5 Bitnet: use the fused mul-silu in the FFN network (#110)
I had forgotten that build_bitnet() does not use the standard
llm_build_ffn function, so the fused mul-silu didn't get used
for Bitnet when I added it to llm_build_ffn.

This gives us another ~1% speedup for TG-128.

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-26 17:40:32 +02:00
Kawrakow
8ccd9bc7e5 Bitnet CUDA improvements (#109)
* iq1_bn: improve CUDA TG

On RTX-3080 TG-128(Bitnet-1.58b-3B) goes from 318 t/s to 340 t/s.
I see I have on the front page 301 t/s, so pretty nice improvement
since then.

* iq2_bn(CUDA): quants are not 4-byte aligned

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-26 16:26:04 +02:00
Kawrakow
3b5fa426f1 Improve Bitnet PP on Metal (#108)
iq1_bn goes from 702 t/s to 716 t/s
iq2_bn goes from 714 t/s to 743 t/s

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-26 15:13:45 +02:00
Kawrakow
fdfbd98022 Faster IQ1_BN Metal implementation (#107)
* iq1_bn: faster Metal dot product

82 t/s -> 87.9 t/s

* iq1_bn(Metal): 87.9 -> 89.0 t/s for TG-128

* iq1_bn(Metal): 89.0 -> 94.7 t/s for TG-128

So, total improvement is ~15%. Not bad.

* iq1_bn(Metal): 686 -> 702 t/s for PP-512

* iq2_bn(Metal): 710 -> 714 t/s for PP-512

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-26 10:59:59 +02:00
Kawrakow
856376a9af Remove forgotten IQ1_TN, IQ2_TN enum values 2024-10-25 14:14:56 +03:00
Kawrakow
4b35340f45 Bitnet changes (#106)
* Adapting iq2_bn to work without separate scale tensors

Why? It is becoming burdensome to maintain the special Bitnet
conversion in convert_hf_to_gguf.py, so I think it is better
to make iq1_bn and iq2_bn just work with the mainline
conversion script (which does not generate scales).

* Adapting iq1_bn to work without separate scale tensors

* Adapting iq2_bn: CUDA dequantize

* Adapting iq2_bn: CUDA works

* Adapting iq1_bn: CUDA works

* Adapting iq1_bn, iq2_bn: NEON

* Adapting iq1_bn, iq2_bn: Metal

Dequantize works, but there is still something wrong
with the dot products.

* WIP

Absolutely don't see what is wrong with the iq1_bn and iq2_bn
vector dot product kernels.

* Remove iq1_tn and iq2_tn - Part 1

Now that iq1_bn and iq2_bn have per row scales, there is no
reason to also have iq1_tn and iq2_tn.

* Remove iq1_tn and iq2_tn - Part 2

* Bitnet: use the standard llm_build_kv to build self attention

My main motivation was to enable FA. But FA does not work anyway
because head size is 100 for the Bitnet ternary models
(and I had forgotten this little detail).

* Revert "Avoid rebuild of GGML graph for each token (#98)"

This reverts commit f2d315b46f.
As far as I can tell, the commit breaks Metal TG.

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-25 13:08:43 +02:00
Kawrakow
b535dcd416 Fix quantized k-cache without FA (#105)
* Added Johannes' changes, still getting NaNs with quantized k-cache.

Also getting NaNs on Johannes' mainline branch.

* This fixes it

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-24 12:20:30 +02:00
Kawrakow
33a582466d Add support for Granite and GraniteMoE models (#102)
* Add Granite and GraniteMoE models

* Granite: avoid NaNs on CUDA by scaling Q before K*Q multiplication

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-22 17:28:14 +02:00
Kawrakow
0f3a424166 Enable q6_0 for flash attention (#101)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-22 11:34:49 +02:00
Kawrakow
7c5a91daf1 Enable IQ4_NL for KV-cache in token generation using Flash Attention (#99)
* Enable IQ4_NL for V-cache in token generation

* We don't need these

* Update printour of allowed quantized KV-cache combinations

* Add IQ4_NL + IQ4_NL to FA

This is a better alternative than Q4_0 + Q4_0 for the VRAM poor.

* Remove file added by mistake

* Fix typo, which is not really a bug

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-21 12:16:54 +02:00
agray3
d336410509 Avoid rebuild of GGML graph for each token (#98)
Introduces caching of the GGML graph to avoid an unnecessary full rebuild between tokens.
KV cache parameters, which change with each token, are updated directly in the cached GGML
graph. Can be disabled with the GGML_DISABLE_GRAPH_CACHING environment variable (see the
sketch after this entry).
2024-10-20 08:36:16 +02:00
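A sketch of how an environment-variable switch like GGML_DISABLE_GRAPH_CACHING is typically checked; the actual gating code in the PR may look different:

```cpp
#include <cstdlib>

// Graph caching stays on unless GGML_DISABLE_GRAPH_CACHING is set in the environment.
static bool graph_caching_enabled() {
    return std::getenv("GGML_DISABLE_GRAPH_CACHING") == nullptr;
}
```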
Kawrakow
b091a3513e Bitnet: make the scale tensors optional (#97)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-19 18:52:58 +02:00
Nexes the Elder
b94179b741 Quant strategies: attn_q Q4 & attn_v Q6 for Llama 3.1 Q5_K_S (#96)
* attn_q Q4 & attn_v Q6 for Llama 3.1 Q5_K_S

Pattern worth testing on more quants and on L3 8B.
PPL 512 = -0.024 for 70b; -0.005 for 8b
Size = -640MiB for 70b; -64MiB for 8b

70b Q5_K_S now beats Q5_K_M by -0.012 ppl

I suspect the same goes for L3 as well, which was quite insensitive to attn_q quantization.

* indent
2024-10-19 17:24:43 +02:00
Kawrakow
a049537904 Attempt to blindly fix Windows build failure (#93)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-19 11:43:04 +02:00
Nexes the Elder
2b1af6bade CLI - Specify GGML_TYPE to quantize for the main tensors. (#91)
To complement token_embd.weight and output.weight:

attn_v.weight
attn_k.weight
attn_q.weight
attn_output.weight
attn_qkv.weight
ffn_gate
ffn_down
ffn_up
2024-10-18 09:48:15 +02:00
Kawrakow
f369c6f921 Adding IQ4_KSS: 4.0 bpw quants (#89)
* iq4_kss: WIP

* iq4_kss: CUDA dequantize works

So we can run perplexity. Sadly, the result does not look good
on the bpw vs quantization error plot.

* iq4_kss: slightly better quantization

* iq4_kss: another small quantization improvement

* iq4_kss: CUDA works

TG-128 performance is very decent with 131 t/s for LLaMA-3.1-8B.
In comparison, we have 123 t/s for q4_0 and 128 t/s for iq4_ks.
I.e., the reduced model size more than offsets the additional
bit fiddling required for iq4_kss.

* iq4_kss: new bit arrangement - CUDA and Zen4 work

Did not lose performance on CUDA. Zen4 is decent, but not great:
PP-512(LLaMA-3.1-8B) = 163 t/s.
TG-128 is of course better than other 4-bit quants due to smaller model size.
We get 14.5 t/s @ 8 threads.

* iq4_kss: ARM_NEON. Predictably very slow

* iq4_kss: Metal

PP is not too bad - just 10% slower than q4_0.
But TG is 30% slower, i.e., predictably bad.

* iq4_kss: somewhat faster Metal dot product

45.75 t/s -> 48.75 t/s.
Still 22% slower than q4_0

* iq4_kss: AVX2

Bad, but better than I expected.
PP-512(LLaMA-3.1-8B) = 167 t/s on the Ryzen-5950X.
I.e., with 32 AVX2 threads we get the performance of
16 Zen4 threads.

* iq4_kss: very slightly faster Metal dot product

48.7 t/s -> 49.3 t/s

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-16 15:18:26 +03:00
Kawrakow
a09de6eaef iq4_ks: faster dot product on Metal (#90)
TG-128(LLaMA-3.1-8B) goes to 52.5 t/s up from 48.4 t/s.

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-16 14:13:03 +03:00
Kawrakow
1882040c70 Minor iq3_k tweak 2024-10-14 18:13:11 +03:00
Kawrakow
250c325e7e iq3_k: fix and optimize Metal dot product (#87)
* iq3_k: fix Metal dot product

I was accessing the scales as if they were 4-byte aligned, but iq3_k is
not 4-byte aligned. Instead of throwing an error (as happens
on CUDA when one makes this mistake), Metal silently accepts it
and we get garbage (see the sketch after this entry).

* iq3_k: slightly faster Metal dot product

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-14 10:46:41 +03:00
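The alignment bug above has a generic CPU-side analogue: reading a multi-byte word from packed quant data that is not 4-byte aligned must go through a byte-wise copy rather than a pointer cast. A small sketch of the pattern (not the Metal kernel itself):

```cpp
#include <cstdint>
#include <cstring>

// Load a 32-bit word from packed quant data that may not be 4-byte aligned.
// Casting the pointer to const uint32_t* is the mistake; on Metal the equivalent
// cast silently yields garbage instead of faulting.
static inline uint32_t load_u32_unaligned(const uint8_t * p) {
    uint32_t v;
    std::memcpy(&v, p, sizeof(v));
    return v;
}
```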
Kawrakow
f61bd33a04 Fix and optimize iq2k Metal implementation (#86)
* I somehow broke iq2_k on Metal? - fix dequantize

* I somehow broke iq2_k on Metal? - fix dot product

* iq2_k: optimize Metal dot product

42.6 t/s -> 46.2 t/s

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-13 14:30:30 +03:00
Kawrakow
67817fb5b9 IQ2_KS: 2.1875 bpw non-linear quantization (#85)
* Experimenting

* iq2k: Try make_qx_quants for the scale

Slightly better for LLaMA-3.1 and Gemma-2, slightly worse for
Qwen2.5.

* iq2k with make_qx_quants: adjust scale

* iq2ks: basics

* iq2_ks: CUDA works

* iq2_ks: WIP

* iq2_ks: WIP

* iq2_ks: Zen4

* iq2_ks: AVX2

* iq2_ks: scalar dot product

* iq2_ks: ARM_NEON

* iq2_ks: Metal

* iq2_ks: faster Metal

LLaMA-3.1-8B:
PP-512 = 475.22 ± 0.37 t/s
TG-128 =  45.32 ± 0.03 t/s

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-10-13 13:34:30 +03:00