ik_llama.cpp/ggml
commit 4213ab1cb3 — iq2_kt: SOTA (Iwan Kawrakow)
We arrive at:
PPL(LLaMA-3.1-8B-Instruct, 8192) = 8.9627
PPL(LLaMA-2-7B,            4096) = 6.3825

Quantization is faster too: ~200 seconds for LLaMA-3.1-8B on a Ryzen-5975WX.
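The PPL figures above are perplexity scores at the given evaluation context lengths (8192 and 4096 tokens); lower is better. As a reminder of what the metric measures, here is a minimal sketch of perplexity as the exponential of the mean negative log-likelihood over the evaluated tokens (the function name and toy inputs are illustrative, not part of the repository):

```python
import math

def perplexity(token_logprobs):
    # PPL = exp(-mean log-likelihood) over the evaluated tokens;
    # token_logprobs holds the model's natural-log probability of
    # each ground-truth token in the evaluation text.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# toy example: three tokens assigned probabilities 0.5, 0.25, 0.125
ppl = perplexity([math.log(0.5), math.log(0.25), math.log(0.125)])
print(ppl)  # → 4.0
```

A perfect model (probability 1 for every token) would score 1.0, so the gap between a quantized model's PPL and the fp16 baseline's is the usual way quantization quality is compared.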
2024-11-21 08:16:41 +02:00