ik_llama.cpp/ggml
Iwan Kawrakow dbe085474a iq2_kt: SOTA
We arrive at
PPL(LLaMA-3.1-8B-Instruct, 8192) = 9.0297
PPL(LLaMA-2-7B,            4096) = 6.3913
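
For context, the PPL figures above are standard perplexity, i.e. the exponential of the mean per-token negative log-likelihood over the evaluation text. A minimal sketch (the function name is illustrative, not from the repo):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood per token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Sanity check: uniform probability 1/4 per token gives perplexity 4.
print(perplexity([math.log(0.25)] * 100))
```

Lower is better: a smaller PPL means the quantized model assigns higher probability to the reference text, so the 8192- and 4096-token context figures above measure how little quality the iq2_kt quantization gives up.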

Ah, quantization is faster too. About 20% faster.
2024-11-21 08:16:41 +02:00