Kawrakow e30198a553 WIP: Qwen3Next (#1266)
* qwen3next: add architecture support and recurrent-state fixes

* qwen3next: optimize broadcast sub and single-seq ssm conv

* cuda: build MoE row mapping on device in mul_mat_id

* cuda: add guarded multi-seq fast path for ssm_conv

* docs: update qwen3next perf report for cuda MoE/SSM tuning

* cuda: reduce qwen3next moe/ssm sync overhead and refresh eval

* qwen3next: split cpu/cuda eval builds and tune PP scheduling

* qwen3next: harden seq-state flow and support optional dense FFN layers

* qwen3next: trim delta-net graph overhead in chunking path

* qwen3next: remove redundant v_conv cont in delta path

* qwen3next: avoid extra cont on linear attention output

* qwen3next: drop redundant cont before recurrent state flatten

* qwen3next: keep recurrent state in 4d layout through delta path

* qwen3next: add fused delta-net op and wire model path

* tests: add backend-op coverage for ggml_delta_net

* qwen3next: add runtime switch for fused delta-net path

* docs: refresh qwen3next perf review and benchmark matrix

* qwen3next: default fused delta-net off and document quality checks

* qwen3next: add decode-only fused delta mode

* qwen3next: make fused delta safe by default and fix fused tensor layout

* qwen3next: warn when forcing fused decode mode

* qwen3next: add fused-delta regression runner script

* qwen3next: integrate fused regression into eval harness

* qwen3next: clean up chunked delta-net shape handling

* qwen3next: add absolute sanity guards to fused regression

* qwen3next: add unified regression runner script

* qwen3next: disable flash-attn for cpu-only contexts

* docs: reconcile qwen3next status and remaining upstream gaps

* common: add qwen3next fused-delta runtime flag

* cuda: add qwen3next delta-net kernel dispatch override

* docs: update qwen3next quality and serving baseline findings

* qwen3next: keep fused delta on safe path and remove PR artifacts

* qwen3next: align autoregressive delta-net decode layout

* Revert "qwen3next: align autoregressive delta-net decode layout"

This reverts commit 9241164a5e.

* cuda: port solve-tri fast-paths for qwen3next delta-net

* qwen3next: add fused-delta runtime flag and drop env toggle

* qwen3next: make fused delta single-flag and default on

* Account for GPU arch differences

* Revert "cuda: build MoE row mapping on device in mul_mat_id"

This reverts commit 89e9ecfa84.

* qwen3next: drop non-essential MoE scheduling and split heuristics

* qwen3next: avoid generic ggml_sub broadcast changes

* llama: restore only_active_experts log message

* Remove unnecessary hacks, disable fusion for now.

* qwen3next: port hybrid recurrent state memory semantics

* qwen3next: clean up recurrent state slot plumbing

* qwen3next: fix hybrid V-cache layout plumbing

* qwen3next: guard recurrent state slots against kv capacity

* qwen3next: persist recurrent state in session data

- serialize/restore qwen3next cache.s_l in state/session paths
- bump session and sequence-state file versions for format change
- fallback to single-token chunking for mixed repeated seq_id batches

* qwen3next: drop unused fused-delta builder path

- remove dead build_delta_net_fused lambda
- remove unused llm_build_context::fused_delta member

* qwen3next: remove unused fused-delta CLI/context plumbing

- drop -fd/-no-fd options and related YAML dump field
- remove fused_delta fields from public/internal context params
- remove fused_delta assignment and logging in context init

* ggml: remove unused DELTA_NET operator stack

* Missing include

* Reorder ops/unary ops

So we don't change the enum values of the mul mat ops again

* Minor

* Discard unnecessary changes in llama-build-context.cpp

* Minor

* Revert "Discard unnecessary changes in llama-build-context.cpp"

This reverts commit edadb80ed6.

* Increase GGML_SCHED_MAX_SPLITS - required for larger u-batches

* Fix CPU concat in the TG case: 7.25 -> 10.5 t/s for Qwen3Next

* Fix CPU sum_rows: 10.5 -> 13.6 t/s for Qwen3Next

It was single-threaded and was taking ~25% of the computation time
during TG. It is now down to 2%.

Strangely enough, I measure 13.6 t/s with llama-bench, but if I
let the model give me an actual response with llama-cli, I get close
to 17 t/s.
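
A minimal sketch (not the repository's kernel) of the general idea behind this fix: split the rows of the reduction across the worker threads instead of letting a single thread do everything. The function name and the ith/n_threads worker convention are illustrative assumptions.

    #include <cstdint>

    // Sum the elements of each row of an nrows x ncols float matrix into dst[r].
    // Thread `ith` of `n_threads` processes every n_threads-th row.
    static void sum_rows_mt(const float * src, float * dst,
                            int64_t nrows, int64_t ncols,
                            int ith, int n_threads) {
        for (int64_t r = ith; r < nrows; r += n_threads) {
            const float * row = src + r*ncols;
            float sum = 0.0f;
            for (int64_t c = 0; c < ncols; ++c) {
                sum += row[c];
            }
            dst[r] = sum;
        }
    }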

* Fix CPU scale: 13.6 -> 16.7 t/s for Qwen3Next

For Qwen3Next there is a scale op on a largish tensor (548k elements)
that has a single row for TG, so it was done in a single thread.
We now simply use blocks of 1024 elements.
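
A minimal sketch (not the repository's code) of the blocking scheme described above: a single long row is cut into 1024-element blocks that the worker threads pick up round-robin. The function name and the ith/n_threads convention are illustrative assumptions.

    #include <algorithm>
    #include <cstdint>

    // Scale a single row of n floats by s, split into 1024-element blocks
    // so that all n_threads workers get a share of the work.
    static void scale_row_blocked(float * x, int64_t n, float s, int ith, int n_threads) {
        constexpr int64_t kBlock = 1024;                     // block size from the commit message
        const int64_t n_blocks = (n + kBlock - 1)/kBlock;    // number of blocks in the row
        for (int64_t b = ith; b < n_blocks; b += n_threads) {
            const int64_t i0 = b*kBlock;
            const int64_t i1 = std::min(i0 + kBlock, n);
            for (int64_t i = i0; i < i1; ++i) {
                x[i] *= s;
            }
        }
    }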

* Optimize CPU mul: 16.7 -> 17.6 t/s for Qwen3Next

* CPU: fuse transpose -> cont -> sum_rows -> transpose: 17.6 -> 23.1 t/s for Qwen3Next

* Optimize CPU repeat: 176 -> 200 t/s for Qwen3Next PP-512

* Multithreading for OP_SUB

* Don't commit with timing trace on

* Multithread neg and sigmoid

* Be able to turn on/off fusion more easily (CPU)

* Name the mul_mat ops so we know where the time goes

* WIP

* Much better PP on CUDA

* CUDA: fuse transpose -> cont -> sum_rows -> transpose

Needs a non-contiguous variant of sum_rows.
On the CPU this gave a 30+% improvement in TG performance,
on CUDA it is a disappointing 6-7%. I guess this is because
Georgi's cont CPU implementation was so bad that skipping
it made such a big difference.
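
A minimal sketch (not the repository's kernel) of what a non-contiguous sum_rows variant looks like: rows are addressed through explicit byte strides, so a transposed view can be reduced directly and the intermediate cont copy can be skipped. The function name and the byte-stride parameters are illustrative assumptions.

    #include <cstddef>
    #include <cstdint>

    // Reduce each of nrows rows of ncols floats into dst[r], reading the source
    // through explicit byte strides so transposed (non-contiguous) views work too.
    static void sum_rows_strided(const char * src, float * dst,
                                 int64_t nrows, int64_t ncols,
                                 size_t nb_row, size_t nb_col) {
        for (int64_t r = 0; r < nrows; ++r) {
            const char * row = src + r*nb_row;
            float sum = 0.0f;
            for (int64_t c = 0; c < ncols; ++c) {
                sum += *(const float *)(row + c*nb_col);  // strided element load
            }
            dst[r] = sum;
        }
    }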

* CUDA: faster mul for special case relevant for Qwen3Next

Worth 1% in TG

* Fix CPU OP_CONT

---------

Co-authored-by: yurko <yurko@local>
Co-authored-by: Yurko <yurko@example.com>
Co-authored-by: yurko <yurko@pop-os.tail5a1a6b.ts.net>
Co-authored-by: Yurko Hoshko <YurkoHoshko@users.noreply.github.com>
2026-02-16 06:50:28 +01:00

ik_llama.cpp: llama.cpp fork with better CPU performance

License: MIT

TL;DR

This repository is a fork of llama.cpp with better CPU and hybrid GPU/CPU performance, new SOTA quantization types, first-class Bitnet support, better DeepSeek performance via MLA and FlashMLA, fused MoE operations, tensor overrides for hybrid GPU/CPU inference, row-interleaved quant packing, and more.

Latest News

Step by step guide for ik_llama.cpp in podman/docker container including llama-swap

Model Support

LlaMA-3-Nemotron PR 377, Qwen3 PR 355, GLM-4 PR 344, Command-A PR 341, bitnet-b1.58-2B-4T PR 337, LLaMA-4 PR 321, Gemma3 PR 276, DeepSeek-V3 PR 176, Kimi-2 PR 609, dots.llm1 PR 573, Hunyuan PR 565, GLM-4.5 PR 668 (4.5/4.6/4.7/AIR), Ernie 4.5 MOE and 0.3B PR 759, grok-2 PR 782, Ling/Ring (Bailing-MoE2) PR 833, Qwen3-VL PR 883, SmolLM3 PR 934, GigaChat3 PR 995, ministral3 PR 1030, Mimo-V2-Flash PR 1096, GLM-4.7-Flash PR 1168, Seed-OSS PR 1218, Step-3.5-Flash PR 1231

Quantization

Quantization additions

Trellis quants (IQ1_KT, IQ2_KT, IQ3_KT, IQ4_KT)

Information and the original CUDA implementation in PR 113. Additional implementations: Metal PR 475, Neon PR 471, CPU PR 441. IQ1_KT was added more recently in PR 616. Note: these are based on a novel, integer-based trellis, which makes it possible to achieve reasonable CPU performance, see PR 529 and the PRs quoted there for details.

IQK quants

Information can be found in Discussion 8.

Initial implementations (Zen4, AVX2, NEON): IQ5_KS_R4 PR 426, IQ5_KS PR 422, IQ4_KS_R4 PR 150, IQ5_K_R4 PR 149, IQ2_K_R4 PR 146, IQ3_K_R4 PR 145, IQ4_K_R4 PR 138, IQ4_KSS PR 89, IQ2_KS PR 85, IQ4_KS PR 83, IQ6_K PR 14, IQ2_K, IQ3_K and IQ5_K PR 7, IQ4_K PR 6

CUDA implementations: IQ4_KS_R4 and IQ5_KS_R4 PR 493, IQ1_S_R4 PR 492, IQ1_M_R4 PR 494. IQ4_KS_R4 and IQ5_KS_R4 PR 462, IQ2_K_R4, IQ3_K_R4, IQ4_K_R4, IQ5_K_R4 PR 461, IQ4_K, IQ5_K, IQ6_K PR 417, IQ2_KS, IQ2_K, IQ3_K PR 418

IQ2_KL is a more recent addition in PR 602

Hadamard transforms for K-cache

CPU PR 1033 and CUDA PR 1034
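
For orientation only, a rough sketch of the kind of transform involved (not the repository's implementation): an in-place Walsh-Hadamard butterfly over a power-of-two length, which is what applying a Hadamard transform to a K-cache vector amounts to.

    #include <cstdint>

    // In-place Walsh-Hadamard transform of x[0..n-1]; n must be a power of two.
    // Scaling by 1/sqrt(n) (omitted here) would make the transform orthonormal.
    static void fwht_inplace(float * x, int64_t n) {
        for (int64_t len = 1; len < n; len <<= 1) {
            for (int64_t i = 0; i < n; i += 2*len) {
                for (int64_t j = i; j < i + len; ++j) {
                    const float a = x[j];
                    const float b = x[j + len];
                    x[j]       = a + b;   // butterfly sum
                    x[j + len] = a - b;   // butterfly difference
                }
            }
        }
    }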

MXFP4 as used in gpt-oss models

Implemented for Zen4, AVX2, ARM_NEON, Metal, CUDA PR 682

Quantization improvements

IQ1_M PR 327, IQ2_XS PR 312, Q2_K, Q4_K, Q5_K, Q4_1, Q5_1 PR 302, Q4_0, Q5_0, Q6_0, Q3_K, Q6_K, IQ4_XS, IQ4_NL PR 295

Quantization performance improvements

  • Much faster CPU prompt processing for all non-interleaved quants. Initial idea in PR 515 and PR 531, with many follow up PRs to apply to all quantization types for the 3 supported CPU platforms.
  • All quantization types now have quantized matrix multiplication CUDA kernels, see PR 557 and several others
  • Faster CPU prompt processing for Trellis quants and MoE models. PR 488
  • Trellis quants: faster CPU prompt processing PR 482.
  • Minor (~2%) iq2_ks TG performance improvement on CUDA PR 468
  • Faster IQ3_KT and IQ4_KT PR 453
  • Zen4: Faster PP for IQ2_KS, IQ4_KS, IQ5_KS PR 428
  • Fast GEMM/GEMV for IQ1_S PR 212

Features

  • New split mode "graph" for multi GPU setups PR 1022
  • Function call support PR 628
  • jinja template support PR 677
  • Webui: New Features for Conversations, Settings, and Chat Messages PR 618
  • Legacy quants conversion schemes in convert_hf_to_gguf.py PR 449, Q6_0 in PR 483
  • Adaptive-P Sampler PR 1100, implemented as designed by its author; supported in the Webui
  • Multi-modal Vision support in llama-mtmd-cli PR 798 and in llama-server PR 901
  • mikupad as an alternative WebUI PR 558
  • June 8 2025: Webui updated (legacy still available when --path ./examples/server/public_legacy is passed) PR 481
  • June 8 2025: RPC improvements PR 480
  • June 7 2025: Add an endpoint that lists all the saved prompt caches to server PR 502
  • June 6 2025: Make prompt cache saving and restoring MLA aware PR 497
  • June 3 2025: Added samplers: XTC PR 486, top-n σ PR 489.
  • May 22 2025: Refactor iqk_mul_mat.cpp which speeds up compilation time significantly. PR 435
  • May 17 2025: Option to enable or disable the CPU FA kernels PR 429.
  • May 12 2025: User can now control if/which operations with tensors held in RAM are offloaded to the GPU. See PR 405
  • May 12 2025: Compatibility issues with mainline llama.cpp GGUFs for DeepSeek models with MLA enabled were resolved in PR 394. The lower prompt processing performance resulting from using llama.cpp-style MLA GGUFs was recovered in PR 409.
  • April 21 2025: ik_llama.cpp builds and runs successfully on Android (using termux), see PR 336
  • March 1 2025: Smart Expert Reduction for faster DeepSeek inference PR 239
  • Feb 25 2025: Tensor overrides for better control where model weights are stored (GPU or CPU) PR 232
  • Feb 23 2025: sweep-bench - better performance benchmarking PR 225
  • Feb 19 2025: Q8_KV - new type for 8-bit KV-cache quantization PR 208
  • March 7 2025: Custom quantization mixes using regular expressions PR 244

Performance improvements

  • Better GPU offload strategy for MoE models when using hybrid GPU/CPU inference, see PR 520
  • Much faster rng sampling PR 1187
  • May 13 2025: Better CPU FA performance for DeepSeek-Lite. PR 410
  • May 11 2025: Slightly faster flash attention for DeepSeek models on CUDA, along with extending compatibility to Turing or newer GPUs. PR 408
  • May 4 2025: Significant token generation performance improvement on CUDA with Flash Attention for GQA models. For details and benchmarks, see PR 370.
  • April 17 2025: Better CPU Flash Attention token generation performance. PR 332
  • April 3 2025: Much faster MoE implementation on Metal. PR 307
  • March 25 2025: Better MoE performance on CUDA PR 283
  • March 23 2025: Better batched processing speed for DeepSeek models PR 282
  • March 18 2025: Reduce compute buffer size PR 237
  • March 10 2025: Better TG performance for MoE models on CUDA PR 248
  • Feb 23 2025: Fused FFN ops for faster MoE inference PR 229

Flash-MLA

  • May 7 2025: 🚀 FlashMLA-3 for DeepSeek models on CUDA. PR 386. Caveat: Ampere or newer Nvidia GPU required
  • March 21 2025: 🚀 FlashMLA-3: fastest CPU-only inference for DeepSeek models PR 273
  • March 17 2025: 🚀 FlashMLA-2 performance improvements PR 253
  • March 12 2025: Allow Q8_0 KV cache with FlashMLA-2 on CUDA PR 265
  • March 9 2025: 🚀 FlashMLA on CUDA PR 247
  • March 8 2025: 🚀 Faster FlashMLA CPU implementation PR 243
  • March 3 2025: 🚀 Introducing FlashMLA - MLA with Flash Attention PR 240
  • Feb 27 2025: MLA without transposed cache PR 235
  • Feb 13 2025: Allow Q8_0 quantized cache with MLA PR 206
  • Feb 11 2025: 🚀 Flash Attention support for DeepSeek models PR 200
  • Feb 9 2025: 🚀 MLA for DeepSeek models PR 188

Fixes

  • Fix bug in MMVQ kernel PR 446
  • Fix AVX2 implementation of IQ4_K, IQ4_KS, IQ5_K, IQ6_K PR 427
  • Fix standard attention on the CPU PR 421
  • Fix imatrix calculation for MLA models PR 411
  • Fix new CUDA FA on Turing PR 413
  • Fix SER. CPU: PR 415 CUDA: PR 416

Resources

There is no single point of reference describing all new ik_llama.cpp features. Pull requests often contain detailed information, so browsing the PRs is usually the best way to learn about new features and how to use them. In addition:

  • The Wiki page has performance comparisons to mainline llama.cpp
  • This guide is a good place to start if you came here because of DeepSeek models
  • This discussion is about running DeepSeek-V3/R1 on a 16 x 3090 setup
  • This discussion describes the new quantization types available in ik_llama.cpp

Testing

Function Calls Tests

To run the function calls test suite:

cd build
cmake --build . --target test-function-calls
./bin/test-function-calls

The test suite covers parser functionality, streaming, error handling, content cleaning, and server integration. All tests should pass to ensure production readiness.

Contributing

Contributions in the form of pull requests, issue submissions (bug reports, feature requests), or general discussions are welcome.

License

MIT
