Commit Graph

3387 Commits

Author SHA1 Message Date
Iwan Kawrakow
07d3b4caec iq6_k: CUDA dot product
90.2 t/s for LLaMA-3.1-8B. Q6_K gives 91.2 t/s, so we are good.
2024-08-07 19:24:09 +03:00
Iwan Kawrakow
b3d6e10c7d iq6_k: CUDA dequantize
We get a slightly better PPL for LLaMA-3.1-8B compared to q6_K
(0.14% vs 0.26% quantization error).
2024-08-07 17:25:21 +03:00
Iwan Kawrakow
85f448e2b1 iq6_k: WIP (quantize/dequantize) 2024-08-07 16:49:43 +03:00
Iwan Kawrakow
54ce23bb61 iq6_k: WIP (nothing works) 2024-08-07 15:24:16 +03:00
Kawrakow
a9f302ebe2 Adding IQ2_TN for use with ternary models (#13)
* iq2_tn: TriLM specific 2.0625 bpw quantization

Quantize/dequantize/scale dot product.

I get 46 t/s for the TriLM-3.9B without any SIMD!
Finally a compiler doing a decent job auto-vectorizing the
scalar implementation.
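
Roughly how such a scalar kernel looks, as a minimal sketch: assume a block of 256
ternary weights with one fp16 scale (256*2 + 16 = 528 bits, i.e. 2.0625 bpw); the
names and exact layout here are illustrative, not the repo's actual block_iq2_tn.

```cpp
#include <cstdint>

// qs: 64 bytes holding 256 two-bit codes (0,1,2 -> -1,0,+1), four codes per byte
// q8: 256 int8 activations; d: block scale (already decoded from fp16); d8: activation scale
static float dot_ternary_block(const uint8_t * qs, const int8_t * q8, float d, float d8) {
    int sum = 0;
    for (int i = 0; i < 64; ++i) {
        for (int j = 0; j < 4; ++j) {
            int t = (qs[i] >> (2*j)) & 3;   // 2-bit code: 0, 1 or 2
            sum += q8[4*i + j] * (t - 1);   // ternary weight: -1, 0 or +1
        }
    }
    return d * d8 * float(sum);  // a plain loop like this is what the compiler auto-vectorizes
}
```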

* iq2_tn: AVX512

Just reusing the k-quants template gets us to PP-512 = 376 t/s,
TG-128 = 47.6 t/s for TriLM-3.9B.

* iq2_tn: AVX512

With this tweak we get to PP-512 = 431 t/s.

* iq2_tn: AVX512

With this tweak we get TG-128 = 19.58 / 35.18 t/s for 1 / 2 threads.
At 4 threads we saturate at 48.41 t/s, and then performance slowly
degrades as the number of threads increases.

* iq2_tn: AVX2

PP512 = 440 t/s on the Ryzen-5975WX.
We should be able to do better.

* iq2_tn: initial NEON version

* iq2_tn: NEON

For TriLM-3.9B running on the M2-Max we get PP-512 = 193.5 t/s,
TG-128 = 75.5 t/s. This is in line with what we have for
iq2_bn and the 3.3B Bitnet.

* iq2_tn: Metal

For TriLM-3.9B on a 30-core M2-Max we get PP-512 = 890 t/s,
TG-128 = 98.5 t/s.

* iq2_tn: CUDA

For TriLM-3.9B running on RTX-4080 we get PP-512 = 9936 t/s,
TG-128 = 299.2 t/s.

* iq2_tn: AVX2 PP improvement

We now get PP-512 = 490.73 t/s for TriLM-3.9B on the Ryzen-5975WX.
We have PP-512 = 636.61 t/s for Bitnet-3B quantized with iq2_bn.
Bitnet-3B is actually 3.4B, TriLM-3.9B is 3.99B, so we would
expect 3.43/3.99 * 636 = 546 t/s. It seems we still have something
that is not quite optimal in iq2_tn.

* iq2_tn: small NEON improvement

For TriLM-3.9B we now get PP-512 = 206.6 t/s and TG-128 = 76.4 t/s.

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-08-07 07:56:09 +02:00
Iwan Kawrakow
b409c15363 q2_K: allow it to detect ternary nets and quantize accordingly 2024-08-05 11:39:10 +02:00
Kawrakow
c11c7c8cae Update README.md
There have been a few minor improvements here and there, so I updated the AVX2 Bitnet performance values to the current main branch.
2024-08-05 07:35:30 +02:00
Iwan Kawrakow
6901b3bf14 iq3_k, iq5_k: faster quantization
Just use the same trick as iq4_k
2024-08-05 07:18:18 +02:00
Iwan Kawrakow
e830f4a5f7 iq4_k: speedup quantization by a factor of ~2 2024-08-03 18:38:39 +02:00
Iwan Kawrakow
3d1446b937 Add copyright notice 2024-08-01 09:38:06 +02:00
Iwan Kawrakow
b572dd5347 iq2/3_k: tiny bit faster Metal dot products 2024-08-01 09:38:06 +02:00
Iwan Kawrakow
394ed3913c iq3_k: slightly faster Metal dequantize kernel
PP-512 goes to 473 t/s up from 452 t/s.
2024-08-01 09:38:06 +02:00
Iwan Kawrakow
062313dab4 iq3_k: Metal dot product
Quite slow: 43 t/s for a 7B model
2024-08-01 09:38:06 +02:00
Iwan Kawrakow
57df5ccdd7 iq2_k: Metal dot product finally works
It is slow: 45.4 t/s for a 7B model vs 50 t/s for iq2_xs,
or 63.3 t/s for q2_K_S.
2024-08-01 09:38:06 +02:00
Iwan Kawrakow
30d2d1b1eb iq3_k: Metal dequantize 2024-08-01 09:38:06 +02:00
Iwan Kawrakow
4c2c644dcc iq3_k: NEON 2024-08-01 09:38:06 +02:00
Iwan Kawrakow
93d09d1935 iq3_k: AVX2 iqk_mul_mat
We get PP-512 = 196 t/s for LLaMA-3.1-8B on the Ryzen-5975WX.
2024-08-01 09:38:06 +02:00
Iwan Kawrakow
9d0cf7a399 iq3_k: AVX512 iqk_mul_mat
We get PP-512 = 180 t/s, TG-128 (4 threads) = 16.35 t/s on the Ryzen-7950X
for LLaMA-3.1-8B.
In comparison, iq3_s has PP-512 = 96 t/s, TG-128 = 7.6 t/s with
iqk_mul_mat, and PP-512 = 28 t/s, TG-128 = 6.8 t/s in mainline llama.cpp.
2024-08-01 09:38:06 +02:00
Iwan Kawrakow
fd1ae85a32 iq3_k: faster CUDA dot product
138 t/s for LLaMA-3.1-8B, which is almost on par with iq3_s.
2024-08-01 09:38:06 +02:00
Iwan Kawrakow
0d19d19af8 iq3_k: CUDA dot product
Slightly slower than iq3_s - 132 t/s vs 138 t/s for
LLaMA-3.1-8B.
2024-08-01 09:38:06 +02:00
Iwan Kawrakow
4f237d44f6 iq3_k: Basics
Quantize/dequantize, CUDA dequantize.
PPL of LLaMA-3.1-8B is better than iq3_s and iq3_m.
2024-08-01 09:38:06 +02:00
Iwan Kawrakow
36204c4ec7 iq2_k: very slightly better CUDA dot product
169.2 t/s vs 167.8 t/s before.
2024-08-01 09:38:06 +02:00
Iwan Kawrakow
e950b17125 iq2_k: better CUDA dot product
Almost on par with iq2_xs (168 t/s vs 172 t/s).
2024-08-01 09:38:06 +02:00
Iwan Kawrakow
ab4f9e1fdb iq2_k: CUDA dot product finally works
Performance is pathetic: 140 t/s for LLaMA-3.1-8B vs
172 t/s for iq2_xs.
2024-08-01 09:38:06 +02:00
Iwan Kawrakow
69842c6ad8 iq5_k: CUDA dot product finally works 2024-08-01 09:38:06 +02:00
Iwan Kawrakow
f6813cac0e Factor out iqk CUDA dot products
I cannot possibly wait for a 5-minute nvcc compilation
each time I touch vecdotq.cuh.

Also, cmake was adding --options-file X.rsp to the nvcc
compile commands, which confuses clangd, so I have turned
that off.
2024-08-01 09:38:06 +02:00
Iwan Kawrakow
22d1568c1c iq5_k: CUDA dot product still not working 2024-08-01 09:38:06 +02:00
Iwan Kawrakow
d8d022a01b iq5_k: Metal
Performance is roughly on par with q5_0.
2024-08-01 09:38:06 +02:00
Iwan Kawrakow
bd36ade98d iq5_k: NEON 2024-08-01 09:38:06 +02:00
Iwan Kawrakow
c0d0607f19 iq5_k: AVX512 2024-08-01 09:38:06 +02:00
Iwan Kawrakow
c56ddee38c iq5_k: AVX2 2024-08-01 09:38:06 +02:00
Iwan Kawrakow
5d341757bc iq5_k: Basics
Quantize/dequantize, CUDA dequantize
2024-08-01 09:38:06 +02:00
Iwan Kawrakow
06e255ac9d iq2_k: Metal. Dot product is wrong 2024-08-01 09:38:06 +02:00
Iwan Kawrakow
f476ea3b50 iq2_k: NEON 2024-08-01 09:38:06 +02:00
Iwan Kawrakow
c0fe03b5c8 iq2_k: slightly faster AVX512 2024-08-01 09:38:06 +02:00
Iwan Kawrakow
7d08719975 iq2_k: simplify AVX512 2024-08-01 09:38:06 +02:00
Iwan Kawrakow
13091d39e8 iq2_k: AVX2 2024-08-01 09:38:06 +02:00
Iwan Kawrakow
c85e139c68 iq2_k: Basics
Quantize/dequantize, CUDA dequantize, AVX512 iqk_mul_mat.
2024-08-01 09:38:06 +02:00
Kawrakow
291066e6df IQ4_K: SOTA 4-bit quantization (#6)
* iq4_k: basics

* quantize/dequantize works
* CUDA dequantize works and one can run PPL calcs. I get
  PPL = 6.5258 for LLaMA-3.1-8B, which is 1.77% above fp16.
  In comparison, q4_K_S (same size) is 2.88% above fp16.
* TG on CUDA does not work. Johannes has changed the way i-quant dot
  products are done, so I need to sort out what he had in mind.
* iqk_mul_mat is not implemented.
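
A rough sketch of the non-linear 4-bit dequantization idea, using ggml's iq4_nl value
table for illustration; the actual IQ4_K block layout (super-block scales, extra bits)
is more involved, so treat this as the concept rather than the real format.

```cpp
#include <cstdint>

// Non-uniform 4-bit levels as used by ggml's iq4_nl (shown here only as an example table).
static const int8_t kvalues_nl4[16] = {
    -127, -104, -83, -65, -49, -35, -22, -10, 1, 13, 25, 38, 53, 69, 89, 113
};

// Dequantize 32 weights packed into 16 bytes: low nibbles give the first 16 values,
// high nibbles the remaining 16; d is the (already decoded) block scale.
static void dequant_nl4(const uint8_t * qs, float d, float * y) {
    for (int i = 0; i < 16; ++i) {
        y[i]      = d * kvalues_nl4[qs[i] & 0xf];
        y[i + 16] = d * kvalues_nl4[qs[i] >>  4];
    }
}
```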

* iq4_k: TG now works on CUDA

* iq4_k: AVX512 implementation

For LLaMA-3.1-8B we get PP-512 = 182.6 t/s, TG-128 = 13.6 t/s,
so almost the same as q4_K_S.

* iq4_k: AVX2 implementation

For LLaMA-3.1-8B we get PP-512 = 203.1 t/s, TG-128 = 12.9 t/s
on the Ryzen-5975WX.

* iq4_k: NEON implementation

For LLaMA-3.1-8B we get PP-512 = 60.7 t/s, TG-128 = 25.0 t/s
on the M2-Max. TG is on par with q4_K_S, PP is ~10% slower.

* iq4_k: Metal implementation

For LLaMA-3.1-8B we get PP-512 = 445 t/s, TG-128 = 46.3 t/s
on a 30-core M2-Max GPU. This is to be compared with (currently)
PP-512 = 460 t/s, TG-128 = 51 t/s for q4_K_S.

* iq4_k: scalar dot product

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-28 12:11:59 +02:00
Kawrakow
f62615b44f Simdify and multi-thread tanh (#4)
It seemed Gemma-2 performance was lower than expected for its size.
Looking at the architecture, I noticed that tanh is used in each layer,
and then at the end for soft-capping the final output. ggml had tanh
set to be computed with a single thread. Combined with tanh(x) being a
pretty expensive operation, this resulted in a significant fraction
of the time being spent in the tanh operation.

After multi-threading ggml_vec_soft_max_f32 and SIMD-ifying the
tanh computation, I observe a 33% gain in prompt processing speed (!!!).
TG is of course memory bound, but despite this, we still get a
~2% boost at 4 threads (which gives max TG performance on my
Ryzen-7950X).

SIMD-ifying:
We have
   tanh(x) = (exp(2*x) - 1)/(exp(2*x) + 1),
so we can just use Justine Tunney's SIMD exp implementation.
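
A minimal scalar sketch of that identity (the repo's actual code swaps in the
vectorized exp; the function name below is made up for illustration):

```cpp
#include <cmath>

// tanh(x) = (exp(2x) - 1) / (exp(2x) + 1); once exp is SIMD-ified, so is tanh.
static inline float tanh_via_exp(float x) {
    if (x >  10.0f) return  1.0f;   // clamp so exp(2x) stays well-behaved for large |x|
    if (x < -10.0f) return -1.0f;
    const float e = expf(2.0f * x);
    return (e - 1.0f) / (e + 1.0f);
}
```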

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-27 08:44:18 +02:00
Kawrakow
154e0d75fc Merge mainline llama.cpp (#3)
* Merging mainline - WIP

* Merging mainline - WIP

AVX2 and CUDA appear to work.
CUDA performance seems slightly (~1-2%) lower, as is so often
the case with llama.cpp/ggml after some "improvements" have been made.

* Merging mainline - fix Metal

* Remove check

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-27 07:55:01 +02:00
Kawrakow
0684c3e9c7 Offload Bitnet token embeddings to the GPU - the right way (#2)
OK, I should have checked how it was done for Gemma and done
the same for Bitnet. But better late than never.

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-26 12:57:23 +02:00
Kawrakow
94b5916319 Offload Bitnet token embeddings to the GPU (#1)
* bitnet: put token embeddings on the GPU

* Update README with the new CUDA/Metal performance

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-26 09:41:04 +02:00
Iwan Kawrakow
c2158c15d9 iqk_mul_mat(NEON): adding forgotten fp16 matrix x vector implementation 2024-07-25 08:37:13 +02:00
Kawrakow
28fb349db4 Update README.md 2024-07-24 19:55:06 +02:00
Kawrakow
eb246cd0ae Update README.md
Trying to avoid line breaks in table
2024-07-24 19:44:52 +02:00
Kawrakow
fc07ca7847 Update README.md 2024-07-24 19:20:46 +02:00
Iwan Kawrakow
770f3585c2 Add copyright notices
Only on the files where I have contributed in a significant way,
or the files I wrote myself.
2024-07-24 20:11:42 +03:00
Iwan Kawrakow
9eee03f4ee Remove unused file 2024-07-24 19:33:19 +03:00
Iwan Kawrakow
3d83f58654 Remove security 2024-07-24 19:25:21 +03:00