firecoperana 07d08e15ad webui update (#1003)
webui: add system message in export conversation, support upload conversation with system message
Webui: show upload only when in new conversation
Webui: Add model name
webui: increase height of chat message window when editing
Webui: auto-close settings dialog dropdown and maximize screen width when zoomed in
webui: fix date issues and add more dates
webui: change error to toast.error.
server: add n_past and slot_id in props_simple
webui: add cache tokens, context and prompt speed in chat
webui: modernize ui
webui: change welcome message
webui: change speed display
webui: change run python icon
webui: add config to use server defaults for sampler
webui: put speed on left and context on right

webui: recognize AsciiDoc files as valid text files (#16850)

* webui: recognize AsciiDoc files as valid text files

* webui: add an updated static webui build

* webui: add the updated dependency list

* webui: re-add an updated static webui build

Add a setting to display message generation statistics (#16901)

* feat: Add setting to display message generation statistics

* chore: build static webui output

webui: add HTML/JS preview support to MarkdownContent with sandboxed iframe (#16757)

* webui: add HTML/JS preview support to MarkdownContent with sandboxed iframe dialog

Extended MarkdownContent to flag previewable code languages,
add a preview button alongside copy controls, manage preview
dialog state, and share styling for the new button group

Introduced CodePreviewDialog.svelte, a sandboxed iframe modal
for rendering HTML/JS previews with consistent dialog controls
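
A minimal sketch of the sandboxing idea (illustrative DOM code, not the actual Svelte component):

  // Render untrusted HTML/JS in a sandboxed iframe so the preview cannot
  // touch the parent page, its storage, or its cookies.
  function openPreview(code: string): HTMLIFrameElement {
    const frame = document.createElement('iframe');
    // 'allow-scripts' lets the snippet execute; omitting 'allow-same-origin'
    // keeps it in an opaque origin, isolated from the app.
    frame.setAttribute('sandbox', 'allow-scripts');
    frame.srcdoc = code; // inline document, no network round-trip
    document.body.appendChild(frame);
    return frame;
  }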

* webui: fullscreen HTML preview dialog using bits-ui

* Update tools/server/webui/src/lib/components/app/misc/CodePreviewDialog.svelte

Co-authored-by: Aleksander Grygier <aleksander.grygier@gmail.com>

* Update tools/server/webui/src/lib/components/app/misc/MarkdownContent.svelte

Co-authored-by: Aleksander Grygier <aleksander.grygier@gmail.com>

* webui: pedantic style tweak for CodePreviewDialog close button

* webui: remove overengineered preview language logic

* chore: update webui static build

---------

Co-authored-by: Aleksander Grygier <aleksander.grygier@gmail.com>

webui: auto-refresh /props on inference start to resync model metadata (#16784)

* webui: auto-refresh /props on inference start to resync model metadata

- Add no-cache headers to /props and /slots
- Throttle slot checks to 30s
- Prevent concurrent fetches with promise guard
- Trigger refresh from chat streaming for legacy and ModelSelector
- Show dynamic serverWarning when using cached data
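
The promise guard can be pictured as follows (a hedged TypeScript sketch; names are illustrative, not the actual webui identifiers):

  let propsInFlight: Promise<unknown> | null = null;

  async function refreshProps(): Promise<unknown> {
    // Reuse the pending request instead of firing a concurrent fetch.
    if (propsInFlight) return propsInFlight;
    propsInFlight = fetch('/props', {
      headers: { 'Cache-Control': 'no-cache' }, // skip any cached copy
    })
      .then((r) => r.json())
      .finally(() => {
        propsInFlight = null; // allow the next refresh once settled
      });
    return propsInFlight;
  }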

* fix: restore proper legacy behavior in webui by using unified /props refresh

Updated assistant message bubbles to show each message's stored model when available,
falling back to the current server model only when the per-message value is missing

When the model selector is disabled, now fetches /props and prioritizes that model name
over chunk metadata, then persists it with the streamed message so legacy mode properly
reflects the backend configuration

* fix: detect first valid SSE chunk and refresh server props once

* fix: removed the slots availability throttle constant and state

* webui: purge ai-generated cruft

* chore: update webui static build

feat(webui): improve LaTeX rendering with currency detection (#16508)

* webui : Revised LaTeX formula recognition

* webui : Further examples containing amounts

* webui : vitest for maskInlineLaTeX
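
The ambiguity being tested is between currency amounts and inline math. A minimal sketch of the masking idea (assumed behavior; the real maskInlineLaTeX lives in lib/utils):

  // Protect amounts like "$5" or "$19.99" so the LaTeX pass does not
  // mistake their dollar signs for inline-math delimiters.
  const CURRENCY = /\$(\d+(?:[.,]\d+)?)/g;

  function maskCurrency(text: string): string {
    // "$$" in a replacement string is a literal "$"; prefix it with a
    // backslash so math-aware renderers treat it as plain text.
    return text.replace(CURRENCY, '\\$$$1');
  }

  console.log(maskCurrency('Lunch was $12.50, dinner $30.'));
  // -> "Lunch was \$12.50, dinner \$30."
  console.log(maskCurrency('Identity: $e^{i\\pi} + 1 = 0$ stays intact.'));
  // -> unchanged: no digit directly follows either "$"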

* webui: Moved preprocessLaTeX to lib/utils

* webui: LaTeX in table-cells

* chore: update webui build output (use theirs)

* webui: backslash in LaTeX-preprocessing

* chore: update webui build output

* webui: look-behind backslash-check

* chore: update webui build output

* Apply suggestions from code review

Code maintenance (variable names, code formatting, string handling)

Co-authored-by: Aleksander Grygier <aleksander.grygier@gmail.com>

* webui: Moved constants to lib/constants.

* webui: package woff2 inside base64 data

* webui: LaTeX-line-break in display formula

* chore: update webui build output

* webui: Bugfix (font embedding)

* webui: Bugfix (font embedding)

* webui: vite embeds assets

* webui: don't suppress 404 (fonts)

* refactor: KaTeX integration with SCSS

Moves KaTeX styling to SCSS for better customization and font embedding.

This change includes:
- Adding `sass` as a dev dependency.
- Introducing a custom SCSS file to override KaTeX variables and disable TTF/WOFF fonts, relying solely on WOFF2 for embedding.
- Adjusting the Vite configuration to resolve `katex-fonts` alias and inject SCSS variables.
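
The Vite side of this might look roughly like the sketch below (the alias path and SCSS variable names are illustrative assumptions; defineConfig, resolve.alias, and css.preprocessorOptions are real Vite API):

  // vite.config.ts
  import { defineConfig } from 'vite';

  export default defineConfig({
    resolve: {
      // let the custom SCSS resolve KaTeX's font files through an alias
      alias: { 'katex-fonts': 'node_modules/katex/dist/fonts' },
    },
    css: {
      preprocessorOptions: {
        scss: {
          // inject variables so only WOFF2 @font-face rules are emitted
          additionalData: '$use-ttf: false; $use-woff: false;',
        },
      },
    },
  });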

* fix: LaTeX processing within blockquotes

* webui: update webui build output

---------

Co-authored-by: Aleksander Grygier <aleksander.grygier@gmail.com>

server : add props.model_alias (#16943)

* server : add props.model_alias

webui: fix keyboard shortcuts for new chat & edit chat title (#17007)

Better UX for handling multiple attachments in WebUI (#17246)

webui: add OAI-Compat Harmony tool-call streaming visualization and persistence in chat UI (#16618)

* webui: add OAI-Compat Harmony tool-call live streaming visualization and persistence in chat UI

- Purely visual and diagnostic change, no effect on model context, prompt
  construction, or inference behavior

- Captured assistant tool call payloads during streaming and non-streaming
  completions, and persisted them in chat state and storage for downstream use

- Exposed parsed tool call labels beneath the assistant's model info line
  with graceful fallback when parsing fails

- Added tool call badges beneath assistant responses that expose JSON tooltips
  and copy their payloads when clicked, matching the existing model badge styling

- Added a user-facing setting to toggle tool call visibility to the Developer
  settings section directly under the model selector option
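
The persisted shape implied above can be sketched as follows (field names are illustrative, not the actual chat-store schema):

  interface StoredToolCall {
    name: string;      // parsed tool/function label shown on the badge
    arguments: string; // raw JSON payload, shown as tooltip and copied on click
  }

  interface AssistantMessageMeta {
    model?: string;               // per-message model info line
    toolCalls?: StoredToolCall[]; // captured for streaming and non-streaming completions
  }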

* webui: remove scroll listener causing unnecessary layout updates (model selector)

* Update tools/server/webui/src/lib/components/app/chat/ChatMessages/ChatMessageAssistant.svelte

Co-authored-by: Aleksander Grygier <aleksander.grygier@gmail.com>

* Update tools/server/webui/src/lib/components/app/chat/ChatMessages/ChatMessageAssistant.svelte

Co-authored-by: Aleksander Grygier <aleksander.grygier@gmail.com>

* chore: npm run format & update webui build output

* chore: update webui build output

---------

Co-authored-by: Aleksander Grygier <aleksander.grygier@gmail.com>

webui: Fix clickability around chat processing statistics UI (#17278)

* fix: Better pointer events handling in chat processing info elements

* chore: update webui build output

Fix merge error

webui: Add a "Continue" Action for Assistant Message (#16971)

* feat: Add "Continue" action for assistant messages

* feat: Continuation logic & prompt improvements

* chore: update webui build output

* feat: Improve logic for continuing the assistant message

* chore: update webui build output

* chore: Linting

* chore: update webui build output

* fix: Remove synthetic prompt logic, use the prefill feature by sending the conversation payload ending with assistant message
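
In other words, continuation reuses the OpenAI-compatible chat endpoint with the partial assistant message last in the payload (a hedged sketch; the webui's actual request code differs):

  async function continueAssistant(
    messages: { role: string; content: string }[] // ends with the partial assistant turn
  ): Promise<ReadableStream<Uint8Array> | null> {
    const res = await fetch('/v1/chat/completions', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      // With the last message from the assistant, the server prefills it
      // and generates a continuation instead of opening a new turn.
      body: JSON.stringify({ messages, stream: true }),
    });
    return res.body; // SSE stream carrying the continuation tokens
  }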

* chore: update webui build output

* feat: Enable "Continue" button based on config & non-reasoning model type

* chore: update webui build output

* chore: Update packages with `npm audit fix`

* fix: Remove redundant error

* chore: update webui build output

* chore: Update `.gitignore`

* fix: Add missing change

* feat: Add auto-resizing for Edit Assistant/User Message textareas

* chore: update webui build output

Improved file naming & structure for UI components (#17405)

* refactor: Component files naming & structure

* chore: update webui build output

* refactor: Dialog titles + components naming

* chore: update webui build output

* refactor: Imports

* chore: update webui build output

webui: hide border of button

webui: update

webui: update

webui: update

add vision

webui: minor settings reorganization and add disable autoscroll option (#17452)

* webui: added a dedicated 'Display' settings section that groups visualization options

* webui: added a Display setting to toggle automatic chat scrolling

* chore: update webui build output

Co-authored-by: firecoperana <firecoperana>

ik_llama.cpp: llama.cpp fork with better CPU performance

License: MIT

TL;DR

This repository is a fork of llama.cpp with better CPU and hybrid GPU/CPU performance, new SOTA quantization types, first-class Bitnet support, better DeepSeek performance via MLA, FlashMLA, fused MoE operations and tensor overrides for hybrid GPU/CPU inference, row-interleaved quant packing, etc.

Latest News

Model Support

LlaMA-3-Nemotron PR 377, Qwen3 PR 355, GLM-4 PR 344, Command-A PR 341, bitnet-b1.58-2B-4T PR 337, LLaMA-4 PR 321, Gemma3 PR 276, DeepSeek-V3 PR 176, Kimi-2 PR 609, dots.llm1 PR 573, Hunyuan PR 565

Quantization

Quantization additions

Trellis quants (IQ1_KT, IQ2_KT, IQ3_KT, IQ4_KT)

Information and the original CUDA implementation in PR 113. Additional implementations: Metal PR 475, Neon PR 471, CPU PR 441. IQ1_KT was added more recently in PR 616. Note: these are based on a novel, integer-based trellis, which makes it possible to achieve reasonable CPU performance; see PR 529 and the PRs quoted there for details.

IQK quants

Information can be found in Discussion 8.

Initial implementations (Zen4, AVX2, NEON): IQ5_KS_R4 PR 426, IQ5_KS PR 422, IQ4_KS_R4 PR 150, IQ5_K_R4 PR 149, IQ2_K_R4 PR 146, IQ3_K_R4 PR 145, IQ4_K_R4 PR 138, IQ4_KSS PR 89, IQ2_KS PR 85, IQ4_KS PR 83, IQ6_K PR 14, IQ2_K, IQ3_K and IQ5_K PR 7, IQ4_K PR 6

CUDA implementations: IQ4_KS_R4 and IQ5_KS_R4 PR 493, IQ1_S_R4 PR 492, IQ1_M_R4 PR 494. IQ4_KS_R4 and IQ5_KS_R4 PR 462, IQ2_K_R4, IQ3_K_R4, IQ4_K_R4, IQ5_K_R4 PR 461, IQ4_K, IQ5_K, IQ6_K PR 417, IQ2_KS, IQ2_K, IQ3_K PR 418

IQ2_KL is a more recent addition in PR 602

Quantization improvements

IQ1_M PR 327, IQ2_XS PR 312, Q2_K, Q4_K, Q5_K, Q4_1, Q5_1 PR 302, Q4_0, Q5_0, Q6_0, Q3_K, Q6_K, IQ4_XS, IQ4_NL PR 295

Quantization performance improvements

  • Much faster CPU prompt processing for all non-interleaved quants. Initial idea in PR 515 and PR 531, with many follow-up PRs applying it to all quantization types on the 3 supported CPU platforms.
  • All quantization types now have quantized matrix multiplication CUDA kernels, see PR 557 and several others
  • Faster CPU prompt processing for Trellis quants and MoE models. PR 488
  • Trellis quants: faster CPU prompt processing PR 482.
  • Minor (~2%) iq2_ks TG performance improvement on CUDA PR 468
  • Faster IQ3_KT and IQ4_KT PR 453
  • Zen4: Faster PP for IQ2_KS, IQ4_KS, IQ5_KS PR 428
  • Fast GEMM/GEMV for IQ1_S PR 212

Features

  • Function call support PR 628
  • Webui: New Features for Conversations, Settings, and Chat Messages PR 618
  • Legacy quants conversion schemes in convert_hf_to_gguf.py PR 449, Q6_0 in PR 483
  • June 8 2025: Webui updated (legacy still available when --path ./examples/server/public_legacy is passed) PR 481
  • June 8 2025: RPC improvements PR 480
  • June 7 2025: Add an endpoint that lists all the saved prompt caches to server PR 502
  • June 6 2025: Make prompt cache saving and restoring MLA aware PR 497
  • June 3 2025: Added samplers: XTC PR 486 and top-n σ PR 489.
  • May 22 2025: Refactored iqk_mul_mat.cpp, which significantly speeds up compilation. PR 435
  • May 17 2025: Option to enable or disable the CPU FA kernels PR 429.
  • May 12 2025: User can now control if/which operations with tensors held in RAM are offloaded to the GPU. See PR 405
  • May 12 2025: Compatibility issues with mainline llama.cpp GGUFs for DeepSeek models with MLA enabled were resolved in PR 394. The lower prompt processing performance resulting from using llama.cpp-style MLA GGUFs was recovered in PR 409.
  • April 21 2025: ik_llama.cpp builds and runs successfully on Android (using termux), see PR 336
  • March 7 2025: Custom quantization mixes using regular expressions PR 244
  • March 1 2025: Smart Expert Reduction for faster DeepSeek inference PR 239
  • Feb 25 2025: Tensor overrides for better control where model weights are stored (GPU or CPU) PR 232
  • Feb 23 2025: sweep-bench - better performance benchmarking PR 225
  • Feb 19 2025: Q8_KV - new type for 8-bit KV-cache quantization PR 208

Performance improvements

  • Better GPU offload strategy for MoE models when using hybrid GPU/CPU inference, see PR 520
  • May 13 2025: Better CPU FA performance for DeepSeek-Lite. PR 410
  • May 11 2025: Slightly faster flash attention for DeepSeek models on CUDA, along with extending compatibility to Turing or newer GPUs. PR 408
  • May 4 2025: Significant token generation performance improvement on CUDA with Flash Attention for GQA models. For details and benchmarks, see PR 370.
  • April 17 2025: Better CPU Flash Attention token generation performance. PR 332
  • April 3 2025: Much faster MoE implementation on Metal. PR 307
  • March 25 2025: Better MoE performance on CUDA PR 283
  • March 23 2025: Better batched processing speed for DeepSeek models PR 282
  • March 18 2025: Reduce compute buffer size PR 237
  • March 10 2025: Better TG performance for MoE models on CUDA PR 248
  • Feb 23 2025: Fused FFN ops for faster MoE inference PR 229

Flash-MLA

  • May 7 2025: 🚀 FlashMLA-3 for DeepSeek models on CUDA. PR 386. Caveat: Ampere or newer Nvidia GPU required
  • March 21 2025: 🚀 FlashMLA-3: fastest CPU-only inference for DeepSeek models PR 273
  • March 17 2025: 🚀 FlashMLA-2 performance improvements PR 253
  • March 12 2025: Allow Q8_0 KV cache with FlashMLA-2 on CUDA PR 265
  • March 9 2025: 🚀 FlashMLA on CUDA PR 247
  • March 8 2025: 🚀 Faster FlashMLA CPU implementation PR 243
  • March 3 2025: 🚀 Introducing FlashMLA - MLA with Flash Attention PR 240
  • Feb 27 2025: MLA without transposed cache PR 235
  • Feb 13 2025: Allow Q8_0 quantized cache with MLA PR 206
  • Feb 11 2025: 🚀 Flash Attention support for DeepSeek models PR 200
  • Feb 9 2025: 🚀 MLA for DeepSeek models PR 188

Fixes

  • Fix bug in MMVQ kernel PR 446
  • Fix AVX2 implementation of IQ4_K, IQ4_KS, IQ5_K, IQ6_K PR 427
  • Fix standard attention on the CPU PR 421
  • Fix imatrix calculation for MLA models PR 411
  • Fix new CUDA FA on Turing PR 413
  • Fix SER (CPU: PR 415, CUDA: PR 416)

Resources

There is no single point of reference describing all new ik_llama.cpp features. Pull requests often contain detailed information, so browsing the PRs is the best way to learn about new features and how to use them. In addition:

  • The Wiki page has performance comparisons to mainline llama.cpp
  • This guide is a good place to start if you came here because of DeepSeek models
  • This discussion is about running DeepSeek-V3/R1 on a 16 x 3090 setup
  • This discussion describes the new quantization types available in ik_llama.cpp

Testing

Function Calls Tests

To run the function calls test suite:

cd build
cmake --build . --target test-function-calls
./bin/test-function-calls

The test suite covers parser functionality, streaming, error handling, content cleaning, and server integration. All tests should pass to ensure production readiness.

Contributing

Contributions in the form of pull requests, issue submissions (bug reports, feature requests), or general discussions are welcome.

License

MIT
