Kawrakow b94cd3b632 Refactor iqk_mul_mat.cpp (#435)
* Refactor iqk: WIP

* Refactor iqk: Factor out float GEMM (AVX2/AVX512)

* Refactor iqk: Factor out GEMM for legacy quants (AVX2/AVX512)

* Refactor iqk: Factor out GEMM for k-quants (AVX2/AVX512)

* Refactor iqk: fix AVX2

* Refactor iqk: Factor out GEMM for i-quants (AVX2/AVX512)

* Refactor iqk: fix AVX2

* Refactor iqk: Factor out GEMM for iqk-quants (AVX2/AVX512)

* Refactor iqk: fix AVX2

* Refactor iqk: Factor out GEMM for 1-bit quants (AVX2/AVX512)

* Refactor iqk: fix AVX2

* Refactor iqk: Factor out GEMM for iq1_bn, iq2_bn, iq2_bn_r4

* Refactor iqk: Factor out GEMM for repacked legacy quants

* Refactor iqk: Factor out GEMM for q8_K_R8, q8_KV

* Refactor iqk: Factor out GEMM for repacked i-quants

* Refactor iqk: GEMM kernels are refactored on AVX2/AVX512

* Refactor iqk: factor out 1-bit quants (NEON)

* Refactor iqk: factor out k-quants (NEON)

* Refactor iqk: factor out floats (NEON)

* Also iq4_xs belongs to k-quants

* Refactor iqk: factor out iqk quants (NEON)

* Refactor iqk: factor out legacy quants (NEON)

* Refactor iqk: factor out repacked legacy quants (NEON)

* Refactor iqk: factor out repacked k-quants (NEON)

* Refactor iqk: factor out repacked iqk quants (NEON)

* Refactor iqk: GEMM kernels are refactored on NEON
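
To make the structure of the refactor concrete: the GEMM code is now grouped by quant family (floats, legacy quants, k-quants, i-quants, iqk-quants, 1-bit quants, and their repacked variants), with separate AVX2/AVX512 and NEON back ends and a small dispatcher that picks the kernel set for a given type combination. The C++ sketch below only illustrates that layout; the type and function names are hypothetical and not taken from iqk_mul_mat.cpp.

```cpp
// Hypothetical sketch of the per-quant-family grouping; names are
// illustrative and do not necessarily match the actual sources.
#include <cstddef>

// Signature of a fused GEMM micro-kernel for a fixed number of B columns.
using MulMatFunc = void (*)(int n, const void* A, size_t strideA,
                            const void* B, size_t strideB,
                            float* C, size_t strideC, int nrc_B);

struct MulMatKernels {
    MulMatFunc funcs[8] = {};   // specializations for 1..8 columns of B
};

// In the refactored layout each of these would be defined in its own
// translation unit (per quant family, per instruction set), so every file
// instantiates far fewer templates and compiles faster.
bool iqk_set_kernels_float  (int typeA, int typeB, MulMatKernels& out);  // fp32/fp16/bf16
bool iqk_set_kernels_legacy (int typeA, int typeB, MulMatKernels& out);  // q4_0, q5_0, q8_0, ...
bool iqk_set_kernels_kquants(int typeA, int typeB, MulMatKernels& out);  // q2_K ... q6_K, iq4_xs
bool iqk_set_kernels_iquants(int typeA, int typeB, MulMatKernels& out);  // iq2_xxs, iq3_s, ...
bool iqk_set_kernels_1bit   (int typeA, int typeB, MulMatKernels& out);  // iq1_s, iq1_bn, iq2_bn, ...

// Thin dispatcher: the first family that recognizes typeA fills in the kernels.
inline bool iqk_select_kernels(int typeA, int typeB, MulMatKernels& out) {
    return iqk_set_kernels_float  (typeA, typeB, out) ||
           iqk_set_kernels_legacy (typeA, typeB, out) ||
           iqk_set_kernels_kquants(typeA, typeB, out) ||
           iqk_set_kernels_iquants(typeA, typeB, out) ||
           iqk_set_kernels_1bit   (typeA, typeB, out);
}
```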

* Refactor iqk: FA compiles

Whether it works is a different story.
Current compile time: 107.3 seconds on the Ryzen-7950X.

* Refactor iqk: FA refactored (Zen4)

Compile time for the FA files is now ~21 seconds on my
Ryzen-7950X, so still slightly too long for my taste
but much better than the 142 seconds we had before.

* Adding forgotten file

* Most helpers don't need to be templates

Also hide Q4_0 and Q8_KV behind IQK_FA_ALL_QUANTS.

Compilation time drops to 14 seconds on the Ryzen-5975WX
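
To illustrate the compile-time gating mentioned above: the less common KV-cache types are only instantiated when IQK_FA_ALL_QUANTS is defined, so the default build compiles far fewer flash-attention kernels. The sketch below is hypothetical; only the macro name comes from this commit, the enum and helper names are made up.

```cpp
// Hypothetical illustration of gating FA instantiations behind
// IQK_FA_ALL_QUANTS; helper names are invented, only the macro is real.
enum class KVType { F16, BF16, Q8_0, Q6_0, Q4_0, Q8_KV };

template <KVType K, KVType V>
void flash_attn_impl(/* ... q, k, v, mask, output ... */) {
    // heavy templated kernel body would live here
}

using FAFunc = void (*)();

inline FAFunc select_fa_kernel(KVType k, KVType v) {
    if (k == KVType::F16  && v == KVType::F16 ) return &flash_attn_impl<KVType::F16,  KVType::F16 >;
    if (k == KVType::Q8_0 && v == KVType::Q8_0) return &flash_attn_impl<KVType::Q8_0, KVType::Q8_0>;
#ifdef IQK_FA_ALL_QUANTS
    // Rarely used cache types are only instantiated when explicitly requested.
    if (k == KVType::Q4_0  && v == KVType::Q4_0 ) return &flash_attn_impl<KVType::Q4_0,  KVType::Q4_0 >;
    if (k == KVType::Q8_KV && v == KVType::Q8_KV) return &flash_attn_impl<KVType::Q8_KV, KVType::Q8_KV>;
#endif
    return nullptr;  // combination not compiled in
}
```

Converting helpers from templates to plain functions works in the same direction: each helper is compiled once instead of once per (K, V) type combination.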

* Fix bf16

* Refactor iqk: FA refactored (NEON)

* Forgotten MMQ ref and typo (#431)

* Adding forgotten iq5_k_r4

* Fix iq4_k_r4 on NEON

* Fix iq4_ks on NEON

It was broken before the refactoring (the shifts were not correctly
applied).

* Fix q8_0 on NEON

* Fix q6_0 K cache

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Nexes the Elder <124105151+Nexesenex@users.noreply.github.com>
2025-05-22 10:05:51 +03:00

ik_llama.cpp: llama.cpp fork with better CPU performance

License: MIT

TL;DR

This repository is a fork of llama.cpp with better CPU and hybrid GPU/CPU performance, new SOTA quantization types, first-class Bitnet support, better DeepSeek performance via MLA, FlashMLA, fused MoE operations, tensor overrides for hybrid GPU/CPU inference, row-interleaved quant packing, and more.

Latest News

  • May 12 2025: User can now control if/which operations with tensors held in RAM are offloaded to the GPU. See PR 405
  • May 12 2025: Compatibility issues with mainline llama.cpp GGUFs for DeepSeek models with MLA enabled were resolved in PR 394. The lower prompt processing performance resulting from using llama.cpp-style MLA GGUFs was recovered in PR 409.
  • May 11 2025: 🚀 Slightly faster flash attention for DeepSeek models on CUDA, along with extending compatibility to Turing or newer GPUs. See PR 408
  • May 9 2025: Support for LLaMA-3-Nemotron models added, see PR 377
  • May 7 2025: 🚀 Faster TG for DeepSeek models with GPU or hybrid GPU/CPU inference. See PR 386 for details. Caveat: Ampere or newer Nvidia GPU required
  • May 4 2025: 🚀 Significant token generation performance improvement on CUDA with Flash Attention for GQA models. For details and benchmarks see PR #370
  • April 29 2025: Qwen3 support added, see PR 355
  • April 26 2025: GLM-4 support added, see PR 344
  • April 26 2025: Command-A support added, see PR 341
  • April 22 2025: Support for the latest Microsoft Bitnet model added, see PR 337
  • April 21 2025: ik_llama.cpp builds and runs successfully on Android (using termux), see PR 336
  • April 17 2025: 🚀 Better CPU Flash Attention token generation performance, see PR 332
  • April 13 2025: IQ1_M quantization improvements, see PR 327
  • April 10 2025: LLaMA-4 support added, see PR 321. The PR also provides some custom quantization recipes for L4-Scout.
  • April 7 2025: IQ2_XS quantization improvements, see PR 312
  • April 3 2025: 🚀 Much faster MoE implementation on Metal, see PR 307
  • April 1 2025: Quantization improvements for Q2_K, Q4_K, Q5_K, Q4_1, Q5_1, see PR 302
  • March 28 2025: Quantization improvements for Q4_0, Q5_0, Q6_0, Q3_K, Q6_K, IQ4_XS, IQ4_NL, see PR 295
  • March 25 2025: 🚀 Better MoE performance on CUDA
  • March 23 2025: 🚀 Better batched processing speed for DeepSeek models
  • March 22 2025: Gemma3 support added
  • March 21 2025: 🚀 FlashMLA-3: fastest CPU-only inference for DeepSeek models
  • March 18 2025: Reduce compute buffer size
  • March 17 2025: 🚀 FlashMLA-2 performance improvements
  • March 12 2025: Allow Q8_0 KV cache with FlashMLA-2 on CUDA
  • March 10 2025: 🚀 Better TG performance for MoE models on CUDA
  • March 9 2025: 🚀 FlashMLA on CUDA
  • March 8 2025: 🚀 Faster FlashMLA CPU implementation
  • March 7 2025: Custom quantization mixes using regular expressions
  • March 5 2025: 🚀 FlashMLA on CUDA
  • March 3 2025: 🚀 Introducing FlashMLA - MLA with Flash Attention
  • March 1 2025: Smart Expert Reduction for faster DeepSeek inference
  • Feb 27 2025: MLA without transposed cache
  • Feb 25 2025: Tensor overrides for better control where model weights are stored (GPU or CPU)
  • Feb 23 2025: 🚀 Fused FFN ops for faster MoE inference
  • Feb 23 2025: sweep-bench - better performance benchmarking
  • Feb 20 2025: 🚀 Fast GEMM/GEMV for IQ1_S
  • Feb 19 2025: Q8_KV - new type for 8-bit KV-cache quantization
  • Feb 13 2025: Allow Q8_0 quantized cache with MLA
  • Feb 11 2025: 🚀 Flash Attention support for DeepSeek models
  • Feb 9 2025: 🚀 MLA for DeepSeek models
  • Jan 23 2025: DeepSeek-V3 support added

Resources

There is no single point of reference describing all new ik_llama.cpp features. Pull requests often contain detailed information, so browsing the PRs is the best way to learn about new features and how to use them. In addition:

  • The Wiki page has performance comparisons to mainline llama.cpp
  • This guide is a good place to start if you came here because of DeepSeek models
  • This discussion is about running DeepSeek-V3/R1 on a 16 x 3090 setup
  • This discussion describes the new quantization types available in ik_llama.cpp

Contributing

Contributions in the form of pull requests, issue submissions (bug reports, feature requests), or general discussions are welcome.

License

MIT
