Kawrakow e23b2a7cc9 MXFP4 (#682)
* mxfp4: basics

* mxfp4: Zen4 GEMM

* mxfp4: repacked GEMM (AVX2/Zen4)

* mxfp4: AVX2 GEMM

* mxfp4: NEON GEMM

* mxfp4: repacked GEMM (NEON)

* mxfp4: Metal

* Fix quantized K cache without FA (#680)

* Prevent assert with quantized K cache and no FA

* Fix MMQ when running with quantized K cache without FA

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>

* Fix for Deepseek r1 parsing (#676)

* Implement function calling / tools for ik_llama.cpp for Kimi K2

* Implement basic tool choice

* Backport llama.cpp tool calls support

* Enhance function calls with improved chat parser and string utilities

- Add new chat.h/chat.cpp and chat-parser.h/chat-parser.cpp for better chat handling
- Improve function calls parsing with fallback to llama.cpp builder pattern
- Add string utility functions (starts_with, ends_with, find_partial_stop)
- Update README with function calls testing instructions
- Enhance Kimi K2 parser and function calls documentation
- Add comprehensive test suite for function calls
- Update CMakeLists.txt and Makefile for new components

* Enhance function calling with unified streaming and parser improvements

- Fix streaming content cleanup to prevent function syntax in output
- Unify content extraction patterns with llama.cpp approach
- Improve Kimi K2 parser robustness and partial content handling
- Add comprehensive test coverage for function call scenarios
- Optimize chat message parsing and diff computation

* Replace hardcoded values in kimi_k2_parser.hpp with named constants

- Add compile-time constants for all token format markers
- Add compile-time constants for XML format markers
- Add compile-time constants for simple format patterns
- Replace all hardcoded string literals with named constants
- Use compile-time length calculation to avoid manual counting
- Improve maintainability and reduce magic numbers throughout parser
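
As an illustration of the pattern (the names here are hypothetical; the actual constants live in kimi_k2_parser.hpp):

```cpp
#include <cstddef>

// Hypothetical examples of the named-constant pattern; the real marker
// strings and names are defined in kimi_k2_parser.hpp.
static constexpr const char TOOL_CALL_OPEN[]  = "<tool_call>";
static constexpr const char TOOL_CALL_CLOSE[] = "</tool_call>";

// sizeof() on a string literal includes the trailing '\0', so subtract 1
// to get the length at compile time instead of counting characters by hand.
static constexpr std::size_t TOOL_CALL_OPEN_LEN  = sizeof(TOOL_CALL_OPEN)  - 1;
static constexpr std::size_t TOOL_CALL_CLOSE_LEN = sizeof(TOOL_CALL_CLOSE) - 1;
```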

* Fix duplicate common_chat_parse definition

- Remove duplicate implementation from chat-parser.cpp
- Keep single implementation in chat.cpp following llama.cpp patterns
- Resolves linker error: multiple definition of common_chat_parse

* Fix JSON assertion failure in function call parsing

- Add proper validation that 'function' field is an object before accessing nested keys
- Handle missing 'arguments' field gracefully with default "{}"
- Prevents crash when parsing malformed tool call JSON structures
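
A minimal sketch of the guarded access described above, assuming nlohmann::json (which the project uses); the actual call-site code may differ:

```cpp
#include <string>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

// Validate a tool-call object before touching nested keys; return false
// for malformed input instead of triggering a JSON type assertion.
static bool extract_tool_call(const json & tc, std::string & name, std::string & args) {
    if (!tc.contains("function") || !tc["function"].is_object()) {
        return false;                                  // malformed structure
    }
    const json & fn = tc["function"];
    name = fn.value("name", "");
    // a missing "arguments" field falls back to "{}" instead of crashing
    args = fn.contains("arguments") ? fn["arguments"].dump() : "{}";
    return !name.empty();
}
```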

* Add comprehensive Qwen3 XML tool calling support with unit tests

- Implement Qwen3 XML parser with <tool_call>{"name": "func", "arguments": {...}}</tool_call> format (see the sketch after this list)
- Add model detection and routing for Qwen3 vs Kimi-K2 formats
- Create 8 comprehensive unit tests covering parsing, streaming, error handling
- Fix token format cleaning bug in kimi_k2_parser.hpp processing order
- Remove progressive parsing code and related utilities
- Add tool injection support for Qwen3 format in server utils
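
A minimal sketch of the extraction step for this wrapper format (hypothetical helper; the real parser also validates each JSON payload and handles streaming):

```cpp
#include <string>
#include <vector>

// Hypothetical helper: collect the raw JSON payloads wrapped in
// <tool_call>...</tool_call>. The real parser also validates and
// deserializes each payload.
static std::vector<std::string> extract_tool_call_payloads(const std::string & text) {
    static const std::string open_tag  = "<tool_call>";
    static const std::string close_tag = "</tool_call>";
    std::vector<std::string> payloads;
    size_t pos = 0;
    while ((pos = text.find(open_tag, pos)) != std::string::npos) {
        const size_t beg = pos + open_tag.size();
        const size_t end = text.find(close_tag, beg);
        if (end == std::string::npos) break;   // partial tag: wait for more tokens
        payloads.push_back(text.substr(beg, end - beg));
        pos = end + close_tag.size();
    }
    return payloads;
}
```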

* Add DeepSeek R1 function calling support with comprehensive unit tests

- Implement complete DeepSeek R1 tool call parsing in common_chat_parser.cpp
- Add DeepSeek R1 model detection and tool injection in deepseek_r1_tools.hpp
- Update function_calls.hpp with DeepSeek R1 integration and content extraction
- Update documentation to reflect support for Kimi-K2, Qwen3, and DeepSeek R1 models
- Add comprehensive unit tests for DeepSeek R1 reasoning, tool calls, and integration
- Port exact implementation patterns from original llama.cpp for compatibility

Key features:
- Native DeepSeek R1 format: <|tool▁calls▁begin|>function<|tool▁sep|>name```json{}```<|tool▁call▁end|><|tool▁calls▁end|>
- Reasoning content extraction from <think>...</think> tags
- Multiple tool calls support with separate call blocks
- Model detection for deepseek-r1, deepseek_r1 naming patterns
- Integration with incremental parsing and streaming support
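
For illustration, a minimal sketch of splitting one such call into name and arguments, assuming the marker strings shown above (the shipped parser additionally handles multiple call blocks and partial streamed input):

```cpp
#include <optional>
#include <string>

struct r1_tool_call {
    std::string name;
    std::string arguments;
};

// Sketch only: locate the first native-format call and split it at the
// <|tool▁sep|> marker and the ```json ... ``` fence around the arguments.
static std::optional<r1_tool_call> parse_first_r1_call(const std::string & text) {
    static const std::string sep      = "<|tool▁sep|>";
    static const std::string json_beg = "```json";
    static const std::string json_end = "```";
    const size_t s = text.find(sep);
    if (s == std::string::npos) return std::nullopt;
    const size_t name_beg = s + sep.size();
    const size_t jb = text.find(json_beg, name_beg);
    if (jb == std::string::npos) return std::nullopt;
    const size_t args_beg = jb + json_beg.size();
    const size_t je = text.find(json_end, args_beg);
    if (je == std::string::npos) return std::nullopt;  // partial: wait for more tokens
    return r1_tool_call{
        text.substr(name_beg, jb - name_beg),          // function name
        text.substr(args_beg, je - args_beg),          // raw JSON arguments
    };
}
```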

* Add partial parsing support for JSON and regex

- json-partial.h/cpp: JSON partial parsing functionality
- regex-partial.h/cpp: Regex partial parsing functionality
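
A sketch of the healing idea behind partial JSON parsing, under the simplifying assumption that closing open brackets is enough to make a truncated prefix parseable (the actual json-partial implementation is more careful about escapes, literals cut mid-token, and marking the healed region):

```cpp
#include <string>
#include <vector>

// Close whatever a truncated JSON stream left open so the prefix parses.
static std::string heal_truncated_json(const std::string & prefix) {
    std::vector<char> open_brackets;
    bool in_string = false;
    for (size_t i = 0; i < prefix.size(); ++i) {
        const char c = prefix[i];
        if (in_string) {
            if (c == '\\') { ++i; }                    // skip the escaped character
            else if (c == '"') { in_string = false; }
        } else if (c == '"') {
            in_string = true;
        } else if (c == '{' || c == '[') {
            open_brackets.push_back(c);
        } else if ((c == '}' || c == ']') && !open_brackets.empty()) {
            open_brackets.pop_back();
        }
    }
    std::string healed = prefix;
    if (in_string) healed += '"';                      // close an unterminated string
    while (!open_brackets.empty()) {                   // close containers, innermost first
        healed += open_brackets.back() == '{' ? '}' : ']';
        open_brackets.pop_back();
    }
    return healed;
}
```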

* Add format_chat integration tests for Qwen3 tool injection

- Add test_qwen3_format_chat_integration() to validate tool injection pipeline
- Test tool injection conditions and system message enhancement
- Verify JSON formatting and anti-preamble instructions
- Add comprehensive test documentation

Tests confirm tool injection works correctly - conversational preamble
issue is not in ik_llama.cpp but likely in UI configuration.

* Fix Qwen3 tool call parsing - pass model name to parser

Server was not passing model name to parse_chat_message_incremental(),
causing Qwen3 to fall back to Kimi-K2 parser and return tool calls
as content instead of proper tool_calls array.

* Fix non-streaming path to use model-specific parsing

Non-streaming responses were hardcoded to use Kimi-K2 format,
causing Qwen3 XML tool calls to be returned as content instead
of proper tool_calls array. Now uses same model detection as
streaming path for consistency.

* Update Qwen3 function call handling in server and tests

- Enhanced server function call detection and response formatting
- Improved test coverage for Qwen3 tool call scenarios
- Refined XML parsing for better tool execution support

* Add DeepSeek-R1 function call parsing support

Implements comprehensive parsing for all 4 DeepSeek-R1 function call formats:
- Format 1: Standard function call syntax (already supported)
- Format 2: Alternative function call patterns (already supported)
- Format 3: Tools array format - function\n```json\n{"tools": [...]}
- Format 4: XML wrapped format - <tool_call>function</think>Name\n```json\n{...}```</tool_call>

Key changes:
- Added parse_deepseek_r1_tools_array() following original parse_prefixed_json_tool_call_array pattern
- Added parse_deepseek_r1_xml_wrapped() following Hermes-2-Pro XML wrapper patterns
- Integrated both parsers into exception handling chain for robust fallback
- Added comprehensive TDD test coverage for all formats
- Anonymized all confidential information while preserving functionality

Resolves tool_calls_count=0 issue where DeepSeek-R1 models generated valid tool calls
but server failed to parse them correctly.

* Update function_calls.md documentation for DeepSeek-R1 Format 4

- Added Format 4 (XML wrapped) documentation with examples
- Updated implementation notes with correct parser order (3→4→1→2)
- Marked all DeepSeek-R1 formats as working (July 2025 update)
- Updated test status for Format 3 and 4 as passing
- Added parse_deepseek_r1_xml_wrapped() function reference
- Corrected implementation file line numbers

* Fix merge conflict in test-function-calls.cpp

- Removed incomplete merge conflict marker from line 3027
- Ensured all tests compile and pass successfully
- All DeepSeek-R1 formats (1-4) working correctly
- All streaming and content cleaning tests passing

* Fix DeepSeek R1 parsing issue with responses wrapped in think tags

Restore missing consume_rest() call from working PR #648 implementation.
When responses don't contain tool calls, remaining content after reasoning
parsing must be preserved as displayable content.

Fixes issue where entire responses wrapped in <think> tags resulted in
empty content output.

* Implement proper reasoning handling following original llama.cpp patterns

- Add missing reasoning_format and reasoning_in_content fields to common_chat_syntax
- Update try_parse_reasoning to match original llama.cpp logic exactly
- Add TDD test case with reasoning_in_content=true for DeepSeek R1
- Following TDD: test should now pass with proper syntax configuration

Based on original llama.cpp implementation patterns.

* TDD SUCCESS: Fix DeepSeek R1 thinking tag termination issue

- Test passes with reasoning_in_content=true configuration
- Content properly preserved: '<think>content</think>' displays fully
- Reasoning field empty as expected
- Following TDD: test-first approach validates the fix

Next: Update server to automatically apply this configuration.

* Complete server integration fix for DeepSeek R1 thinking tag termination

- Server now automatically sets reasoning_in_content=true for DeepSeek R1 models
- Fixes issue where responses wrapped in <think> tags appear empty to users
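
A minimal sketch of the behavior this configures, with assumed field and function names (the real logic lives in try_parse_reasoning):

```cpp
#include <string>

struct parsed_msg {
    std::string content;
    std::string reasoning;
};

// With reasoning_in_content=true the <think> block stays in the displayable
// content (reasoning stays empty); otherwise it is moved to the reasoning field.
static parsed_msg split_reasoning(const std::string & text, bool reasoning_in_content) {
    const std::string open_tag = "<think>", close_tag = "</think>";
    const size_t b = text.find(open_tag);
    const size_t e = text.find(close_tag);
    if (reasoning_in_content || b == std::string::npos || e == std::string::npos || e < b) {
        return { text, "" };   // pass the response through verbatim
    }
    parsed_msg out;
    out.reasoning = text.substr(b + open_tag.size(), e - (b + open_tag.size()));
    out.content   = text.substr(0, b) + text.substr(e + close_tag.size());
    return out;
}
```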

* Add TDD test case for DeepSeek R1 thinking tag termination issue

- Test reproduces the exact failure scenario reported by user
- Validates that reasoning_in_content=true fixes the issue
- Demonstrates empty content problem and working solution

* Add remaining TDD test changes for DeepSeek R1 thinking tag fix

* Add debug output after upstream merge

* Remove temporary benchmark and debug files

- Remove tests/benchmark-progressive-parsing.cpp (development tool, not part of core functionality)
- Remove tests/reproduce_bug.sh (debugging script, not needed for PR)

* Port cpu moe options from mainline (#672)

* Port cpu moe options from mainline

* Use strdup and int32_t to follow coding guidelines

* mxfp4: CUDA dequantize

* mxfp4: CUDA GEMV

* mxfp4: CUDA MMQ

* mxfp4: minor CUDA tweaks

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Co-authored-by: Anton Sokolchenko <wsevendays@gmail.com>
Co-authored-by: Parsa <61601745+TheLegendOfKitty@users.noreply.github.com>

ik_llama.cpp: llama.cpp fork with better CPU performance

License: MIT

TL;DR

This repository is a fork of llama.cpp with better CPU and hybrid GPU/CPU performance, new SOTA quantization types, first-class Bitnet support, better DeepSeek performance via MLA, FlashMLA, and fused MoE operations, tensor overrides for hybrid GPU/CPU inference, row-interleaved quant packing, and more.

Latest News

Model Support

LLaMA-3-Nemotron PR 377, Qwen3 PR 355, GLM-4 PR 344, Command-A PR 341, bitnet-b1.58-2B-4T PR 337, LLaMA-4 PR 321, Gemma3 PR 276, DeepSeek-V3 PR 176, Kimi-K2 PR 609, dots.llm1 PR 573, Hunyuan PR 565

Quantization

Quantization additions

Trellis quants (IQ1_KT, IQ2_KT, IQ3_KT, IQ4_KT)

Information and the original CUDA implementation in PR 113. Additional implementations: Metal PR 475, NEON PR 471, CPU PR 441. IQ1_KT was added more recently in PR 616. Note: these are based on a novel, integer-based trellis, which makes it possible to achieve reasonable CPU performance; see PR 529 and the PRs quoted there for details.

IQK quants

Information can be found in Discussion 8.

Initial implementations (Zen4, AVX2, NEON): IQ5_KS_R4 PR 426, IQ5_KS PR 422, IQ4_KS_R4 PR 150, IQ5_K_R4 PR 149, IQ2_K_R4 PR 146, IQ3_K_R4 PR 145, IQ4_K_R4 PR 138, IQ4_KSS PR 89, IQ2_KS PR 85, IQ4_KS PR 83, IQ6_K PR 14, IQ2_K, IQ3_K and IQ5_K PR 7, IQ4_K PR 6

CUDA implementations: IQ4_KS_R4 and IQ5_KS_R4 PR 493, IQ1_S_R4 PR 492, IQ1_M_R4 PR 494. IQ4_KS_R4 and IQ5_KS_R4 PR 462, IQ2_K_R4, IQ3_K_R4, IQ4_K_R4, IQ5_K_R4 PR 461, IQ4_K, IQ5_K, IQ6_K PR 417, IQ2_KS, IQ2_K, IQ3_K PR 418

IQ2_KL is a more recent addition in PR 602

Quantization improvements

IQ1_M PR 327, IQ2_XS PR 312, Q2_K, Q4_K, Q5_K, Q4_1, Q5_1 PR 302, Q4_0, Q5_0, Q6_0, Q3_K, Q6_K, IQ4_XS, IQ4_NL PR 295

Quantization performance improvements

  • Much faster CPU prompt processing for all non-interleaved quants. Initial idea in PR 515 and PR 531, with many follow-up PRs applying it to all quantization types on the three supported CPU platforms.
  • All quantization types now have quantized matrix multiplication CUDA kernels, see PR 557 and several others
  • Faster CPU prompt processing for Trellis quants and MoE models. PR 488
  • Trellis quants: faster CPU prompt processing PR 482.
  • Minor (~2%) iq2_ks TG performance improvement on CUDA PR 468
  • Faster IQ3_KT and IQ4_KT PR 453
  • Zen4: Faster PP for IQ2_KS, IQ4_KS, IQ5_KS PR 428
  • Fast GEMM/GEMV for IQ1_S PR 212

Features

  • Function call support PR 628
  • Webui: New Features for Conversations, Settings, and Chat Messages PR 618
  • Legacy quants conversion schemes in convert_hf_to_gguf.py PR 449, Q6_0 in PR 483
  • June 8 2025: Webui updated (legacy still available when --path ./examples/server/public_legacy is passed) PR 481
  • June 8 2025: RPC improvements PR 480
  • June 7 2025: Add an endpoint that lists all the saved prompt caches to server PR 502
  • June 6 2025: Make prompt cache saving and restoring MLA aware PR 497
  • June 3 2025: Added samplers: XTC PR 486 and top-n σ PR 489.
  • May 22 2025: Refactor iqk_mul_mat.cpp, which significantly reduces compilation time. PR 435
  • May 17 2025: Option to enable or disable the CPU FA kernels PR 429.
  • May 12 2025: User can now control if/which operations with tensors held in RAM are offloaded to the GPU. See PR 405
  • May 12 2025: Compatibility issues with mainline llama.cpp GGUFs for DeepSeek models with MLA enabled were resolved in PR 394. The lower prompt processing performance resulting from using llama.cpp-style MLA GGUFs was recovered in PR 409.
  • April 21 2025: ik_llama.cpp builds and runs successfully on Android (using termux), see PR 336
  • March 7 2025: Custom quantization mixes using regular expressions PR 244
  • March 1 2025: Smart Expert Reduction for faster DeepSeek inference PR 239
  • Feb 25 2025: Tensor overrides for better control over where model weights are stored (GPU or CPU) PR 232
  • Feb 23 2025: sweep-bench - better performance benchmarking PR 225
  • Feb 19 2025: Q8_KV - new type for 8-bit KV-cache quantization PR 208

Performance improvements

  • Better GPU offload strategy for MoE models when using hybrid GPU/CPU inference, see PR 520
  • May 13 2025: Better CPU FA performance for DeepSeek-Lite. PR 410
  • May 11 2025: Slightly faster flash attention for DeepSeek models on CUDA, along with extended compatibility to Turing or newer GPUs. PR 408
  • May 4 2025: Significant token generation performance improvement on CUDA with Flash Attention for GQA models. For details and benchmarks, see PR 370.
  • April 17 2025: Better CPU Flash Attention token generation performance. PR 332
  • April 3 2025: Much faster MoE implementation on Metal. PR 307
  • March 25 2025: Better MoE performance on CUDA PR 283
  • March 23 2025: Better batched processing speed for DeepSeek models PR 282
  • March 18 2025: Reduce compute buffer size PR 237
  • March 10 2025: Better TG performance for MoE models on CUDA PR 248
  • Feb 23 2025: Fused FFN ops for faster MoE inference PR 229

Flash-MLA

  • May 7 2025: 🚀 FlashMLA-3 for DeepSeek models on CUDA. PR 386. Caveat: Ampere or newer Nvidia GPU required
  • March 21 2025: 🚀 FlashMLA-3: fastest CPU-only inference for DeepSeek models PR 273
  • March 17 2025: 🚀 FlashMLA-2 performance improvements PR 253
  • March 12 2025: Allow Q8_0 KV cache with FlashMLA-2 on CUDA PR 265
  • March 9 2025: 🚀 FlashMLA on CUDA PR 247
  • March 8 2025: 🚀 Faster FlashMLA CPU implementation PR 243
  • March 3 2025: 🚀 Introducing FlashMLA - MLA with Flash Attention PR 240
  • Feb 27 2025: MLA without transposed cache PR 235
  • Feb 13 2025: Allow Q8_0 quantized cache with MLA PR 206
  • Feb 11 2025: 🚀 Flash Attention support for DeepSeek models PR 200
  • Feb 9 2025: 🚀 MLA for DeepSeek models PR 188

Fixes

  • Fix bug in MMVQ kernel PR 446
  • Fix AVX2 implementation of IQ4_K, IQ4_KS, IQ5_K, IQ6_K PR 427
  • Fix standard attention on the CPU PR 421
  • Fix imatrix calculation for MLA models PR 411
  • Fix new CUDA FA on Turing PR 413
  • Fix SER (CPU: PR 415, CUDA: PR 416)

Resources

There is no single point of reference describing all new ik_llama.cpp features. Pull requests often contain detailed information, so browsing them is the best way to learn about new features and how to use them. In addition:

  • The Wiki page has performance comparisons to mainline llama.cpp
  • This guide is a good place to start if you came here because of DeepSeek models
  • This discussion is about running DeepSeek-V3/R1 on a 16 x 3090 setup
  • This discussion describes the new quantization types available in ik_llama.cpp

Testing

Function Calls Tests

To run the function calls test suite:

```
cd build
cmake --build . --target test-function-calls
./bin/test-function-calls
```

The test suite covers parser functionality, streaming, error handling, content cleaning, and server integration. All tests should pass to ensure production readiness.

Contributing

Contributions in the form of pull requests, issue submissions (bug reports, feature requests), or general discussions are welcome.

License

MIT
