* Revive fused delta-net
* Add command line argument for fused delta net
* Simplify/improve CUDA delta-net
* Add -fdn to llama-bench
* More CUDA fused delta net optimizations
* CPU optimizations
* Much faster fused delta-net on the CPU
It appears to be faster than the chunked implementation!
* Change meaning of fdn from bool flag to threshold value
* Use eps = 1e-6
* Give some nodes a name
* Don't re-apply L2 norm - it has already been done
* This seems quite a bit better
* More tweaks
* Restore per context buffer size log
Not everybody uses models split into 2000 parts, and those who do
actually want to see the buffer sizes.
* server: enable checkpoint for recurrent models
create checkpoint after cancel
fix ban string and rm context during rewind
add checkpoint interval
only save recurrent cache
* save checkpoint during pp
---------
Co-authored-by: firecoperana <firecoperana>
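The fdn change above turns the flag into a threshold. A minimal sketch of what that gating could look like; the names (`select_delta_net_path`, `fdn_threshold`) are illustrative, not the actual ikllama API:

```cpp
#include <cassert>

// Hypothetical sketch: with -fdn carrying a threshold instead of a bool,
// the fused delta-net path is taken only while the number of tokens in
// the batch stays at or below that threshold; larger batches fall back
// to the chunked implementation.
enum class delta_net_path { fused, chunked };

delta_net_path select_delta_net_path(int n_tokens, int fdn_threshold) {
    // fdn_threshold == 0 keeps the old "disabled" behaviour
    if (fdn_threshold > 0 && n_tokens <= fdn_threshold) {
        return delta_net_path::fused;
    }
    return delta_net_path::chunked;
}
```

A threshold subsumes the old boolean: 0 disables the fused path, and any positive value bounds the batch sizes it applies to.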
* Display the size of the tensors overridden during tensor loading
Ex:
`Tensor blk.60.ffn_gate_exps.weight buffer type overriden to CPU
Tensor blk.60.ffn_up_exps.weight buffer type overriden to CPU`
become
`Tensor blk.60.ffn_up_exps.weight (size = 668467200 bytes) buffer type overriden to CPU
Tensor blk.60.ffn_gate_exps.weight (size = 668467200 bytes) buffer type overriden to CPU`
And demote to debug level the size of the unnamed buffer overrides displayed later.
Ex : `llm_load_tensors: CPU buffer size = XXX.XX MiB`
That duplicate display clutters the screen without being very informative.
* Change bytes display to MiB.
Co-authored-by: Kawrakow <iwankawrakow@gmail.com>
---------
Co-authored-by: Kawrakow <iwankawrakow@gmail.com>
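The bytes-to-MiB change can be sketched as a small formatting helper; the function name and exact format string are illustrative, not the code actually used:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Sketch of the bytes -> MiB conversion for the override log line,
// e.g. 668467200 bytes -> "637.50 MiB".
std::string format_size_mib(uint64_t n_bytes) {
    char buf[64];
    snprintf(buf, sizeof(buf), "%.2f MiB", n_bytes / (1024.0 * 1024.0));
    return buf;
}
```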
* Partial Requant feature for llama-quantize
- Inspired by the recently ported --dry-run feature.
- Allows partially requantizing a split quantized .gguf by requantizing only the splits missing from the destination directory.
- Works both for GGUFs split tensor by tensor and for GGUFs split into groups of several tensors (though the latter is not well tested beyond 2 tensors per split).
- Vibe coded.
* Create output directory if it doesn't exist in llama-quantize
* Create output directory if it doesn't exist in gguf-split
* Add exit when directory fails to be created on Windows
* Use std::filesystem
* cleanup
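The "create output directory if missing" commits above, after the switch to `std::filesystem`, could look roughly like this; the helper name and its exact behaviour are an assumption, not the actual llama-quantize code:

```cpp
#include <filesystem>
#include <string>
#include <system_error>

// Sketch: ensure the parent directory of the output file exists,
// returning false on failure so the caller can exit (covering the
// Windows failure case mentioned above). std::filesystem makes this
// portable across Windows and POSIX.
bool ensure_output_dir(const std::string & out_path) {
    namespace fs = std::filesystem;
    fs::path dir = fs::path(out_path).parent_path();
    if (dir.empty() || fs::exists(dir)) {
        return true; // nothing to create
    }
    std::error_code ec;
    fs::create_directories(dir, ec); // creates intermediate dirs too
    return !ec;
}
```

`create_directories` (plural) is the key call: unlike `create_directory`, it creates missing intermediate components as well.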
When multiple prompts are sent in a single /v1/completions request,
each response needs to carry the correct index so the client can
match results to their corresponding prompts. The index field was
not being set on partial responses, final responses, or embedding
responses, causing batch results to all report index 0.
Set res->index = slot.task->index in send_partial_response,
send_final_response, and send_embedding.
Generated with [Devin](https://cli.devin.ai/docs)
Co-authored-by: Joshua Jolley <jjolley@clearwateranalytics.com>
Co-authored-by: Devin <noreply@cognition.ai>
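The essence of the index fix can be sketched with stand-in types; `server_task` and `server_result` here are simplified illustrations of the server's structs, not their real definitions:

```cpp
// Minimal sketch of the fix: every response carries the index of the
// prompt it belongs to instead of defaulting to 0, so a batched
// /v1/completions client can match results to prompts.
struct server_task   { int index; };
struct server_result { int index = 0; };

server_result make_response(const server_task & task) {
    server_result res;
    res.index = task.index; // the assignment that was missing in
                            // partial, final, and embedding responses
    return res;
}
```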
* Fix tool call for Qwen3.5
Loosely based on mainline changes from:
* https://github.com/ggml-org/llama.cpp/pull/19635
* https://github.com/ggml-org/llama.cpp/pull/19765
Also need to change the grammar to allow the model to make multiple
tool calls in a row. This was likely broken for Qwen3 Coder prior to
this commit.
* Fix the grammar for the subsequent parameters after the first one
* wip: port MTP architecture
Ports the Multi-Token Prediction (MTP) architecture to the older `llama.cpp` codebase used by `ikllama`.
Changes include:
- Updating `llama_batch` to support `mtp_params`.
- Modifying `llama_decode_internal` (and `encode`) to handle MTP operations (Warmup, Update, Draft).
- Adding public APIs for MTP state management (`llama_set_draft_input_hidden_state`).
- Adapting the embedding extraction logic to skip MTP update passes.
* Refactors `server_slot` to support generic speculative decoding (MTP or Draft Model).
* core: enable hybrid outputs (logits + embeddings) for MTP support
* fix(mtp): correct KV-cache slot finding for updates
* fix(mtp): persist hidden states to prevent context corruption during drafting
* refactor(mtp): clean unused code
* fix(mtp): update server to new functions name
* fix(mtp): fix graph and save hidden state
* mtp: refactor integration, context params and kv cache search
* mtp: fix hidden state extraction and speculative acceptance flow
* server: fix MTP warmup for long prompts and reset token buffer
* llama: refactor MTP operation state to context parameters
* server: fix n_past calculation in MTP acceptance
* llama: fix mtp enable flags
* speculative: refactor MTP to use common_speculative interface
* context: remove unused signatures
* clip: fix deprecated enum-enum conversion warning
* common: fix format string crash in help message
* context: fix mtp activation logic
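The `common_speculative` refactor above hinges on the standard verification step of speculative decoding. A generic sketch of that acceptance logic, not ikllama's actual code:

```cpp
#include <cstddef>
#include <vector>

// Greedy verification used by speculative decoding (whether the draft
// comes from a draft model or an MTP head): the target model scores the
// drafted tokens, the longest prefix where both agree is accepted, and
// the caller then appends the target's own token at the first
// disagreement as the correction.
typedef int llama_token;

size_t count_accepted(const std::vector<llama_token> & draft,
                      const std::vector<llama_token> & target_picks) {
    size_t n = 0;
    while (n < draft.size() && n < target_picks.size() &&
           draft[n] == target_picks[n]) {
        n++;
    }
    return n;
}
```

Because only tokens the target model would have produced anyway are accepted, the scheme changes throughput but not the sampled output (under greedy decoding).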
* raw parameters.md
* fix small typos in common.cpp
* Update build args in parameters.md
* Update parameters.md
- format as table
- sections
* Update README.md
- quickstart
- build and run
* Update parameters.md
other tools examples
* add PR links
* multiple updates to parameters.md
- description
- add jargon section
- add suggestions from feedbacks
* don't imply that only linux is supported in README.md
* add alias to parameters.md
* Update README.md with recent models and features
* Update parameters.md with latest features
* address suggestions
- no-ooae
- placeholder for common commands
- no-kv-offload
- llama-sweep-bench
- placeholder for unique parameters
* specify Linux distro in README.md
* adaptive p: update internal state only if not rewinding
* adaptive p: conditional update for speculative decoding
* adaptive p: refactor to rewind instead of update
* adaptive p fix: better comments
* fix rewind check
* add record to handle multi-token rewind
* better comment
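The record-based multi-token rewind above can be sketched as follows; the state contents and update rule are hypothetical, only the record/rewind mechanism is illustrated:

```cpp
#include <cstddef>
#include <vector>

// Sketch: each accepted token pushes a snapshot of the sampler's
// internal state, and a multi-token rewind restores the snapshot taken
// before the first rewound token instead of trying to invert updates.
struct adaptive_p_state { float p = 0.5f; };

struct adaptive_p_sampler {
    adaptive_p_state state;
    std::vector<adaptive_p_state> records;

    void accept(float observed) {
        records.push_back(state); // record before updating
        state.p = 0.9f * state.p + 0.1f * observed; // hypothetical EMA update
    }
    void rewind(size_t n_tokens) { // undo the last n_tokens updates
        if (n_tokens == 0 || records.empty()) return;
        size_t idx = records.size() > n_tokens ? records.size() - n_tokens : 0;
        state = records[idx];
        records.resize(idx);
    }
};
```

Recording snapshots sidesteps the problem the earlier commits wrestled with: a speculative-decoding rewind can span several tokens, and replaying or inverting the updates is fragile compared with restoring a saved state.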
* Optimizing q3next TG
* Fused add -> softplus -> mul on CUDA
* Remove forgotten debug log
* Increase ggml context size
Required for Qwen3-Next with batch/u-batch size of 4096
* WIP
* Avoid some contiguous ops
* Avoid some repeats
* Avoid some more repeats
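A scalar CPU reference for the op chain the fused CUDA kernel above computes; this only pins down the math (out = softplus(x + b) * g), not the actual kernel:

```cpp
#include <cmath>

// Reference for the fused add -> softplus -> mul chain:
//   out = softplus(x + b) * g, softplus(s) = log(1 + exp(s)).
// A fused kernel evaluates the whole chain per element instead of
// materializing the two intermediate tensors, saving memory traffic.
float fused_add_softplus_mul(float x, float b, float g) {
    float s = x + b;
    // for large s, exp(s) overflows but softplus(s) ~ s, so pass through
    float sp = s > 20.0f ? s : std::log1p(std::exp(s));
    return sp * g;
}
```

Using `log1p(exp(s))` with the large-argument pass-through keeps the result accurate across the whole range, which matters when the fused result must match the unfused graph.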