Wrap the two slot-level sample/accept call sites in
try/catch (std::exception). On exception: log, send_error to the
task, release the slot, continue serving. Matches the existing
try/catch around common_sampler_init in the same file.
Without this, llama_grammar_accept_token throwing
"Unexpected empty grammar stack after accepting piece: <pad> (0)"
(reproducible on Gemma 4 + json_schema + ctx_shift, see #1725)
unwinds out of update_slots -> queue start_loop -> main, hits
std::terminate, and aborts the whole server process.
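A minimal sketch of the guarded call site, assuming the server's send_error/release helpers and SRV_ERR log macro; the surrounding loop is illustrative, not the actual code:
```cpp
try {
    const llama_token id = common_sampler_sample(slot.smpl, ctx, slot.i_batch);
    common_sampler_accept(slot.smpl, id, /* accept_grammar = */ true);
} catch (const std::exception & e) {
    SRV_ERR("slot %d: sampling failed: %s\n", slot.id, e.what());
    send_error(slot, e.what()); // surface the grammar error to the task
    slot.release();             // free the slot, keep the server alive
    continue;                   // keep serving the remaining slots
}
```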
* server: spec checkpoints for recurrent models
* fix: save/restore sampler state during speculative checkpoint
When speculative decoding rejects draft tokens and restores the
recurrent state checkpoint, the sampler (RNG, grammar, prev tokens)
must also be restored to maintain consistency. Without this, the
sampler state reflects the rejected draft tokens, leading to
potential divergence.
Uses common_sampler_clone() to snapshot the sampler before the
speculative batch decode, and restores it on rejection.
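A sketch of that flow; common_sampler_clone()/common_sampler_free() are the common-library calls, while decode_and_verify_draft() is a stand-in for the verification step:
```cpp
// snapshot before the speculative batch, restore on rejection
common_sampler * snapshot = common_sampler_clone(slot.smpl);

const int n_accepted = decode_and_verify_draft(slot, draft);

if (n_accepted < (int) draft.size()) {
    // the live sampler accepted draft tokens that were just rolled back;
    // restore RNG, grammar state, and prev-token history from the snapshot
    common_sampler_free(slot.smpl);
    slot.smpl = snapshot;
} else {
    common_sampler_free(snapshot);
}
```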
* server: snapshot recurrent state in tensor
* reset ngram mod state for rejected tokens
* server: refactor checkpoint state logic
* speculative: fix sampler for checkpoints
* recurrent model: implement recurrent kernel checkpoint
* recurrent model: refactor api
* spec: free rbudget before overwriting
Previously, the end-of-turn token would be added to the prompt cache by
`slot.cache_tokens.push_back(slot.sampled)`, without going through
`llama_decode()` first. As such, the token doesn't exist yet in the
actual KV cache. Then, in the next turn, processing of this EOT token
will be skipped since it is already in the prompt cache.
The expected sequence (with Qwen3.5 chat template):
</tool_call><|im_end|>
<|im_start|>user
<tool_response>
The actual sequence that the model sees:
</tool_call>
<|im_start|>user
<tool_response>
As the conversation goes on, the model starts losing the ability to
generate an EOT token after `</tool_call>`, since it hasn't seen this
pattern in the context. With `--parallel-tool-calls`, the grammar
allows `</tool_call>` to be followed by either another tool call or an
EOT token. So, this eventually leads to the model making infinite tool
calls.
We fix this by deferring all the processing of the EOT token to the
next turn.
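A before/after sketch; the slot fields follow the commit text, the control flow around them is assumed:
```cpp
// before: EOT recorded in the prompt cache without ever being decoded,
// so the next turn skips it and the KV cache never contains it
//   slot.cache_tokens.push_back(slot.sampled);

// after: leave the EOT out of the cache; the next turn's prompt
// processing runs it through llama_decode() like any other token
if (llama_vocab_is_eog(vocab, slot.sampled)) {
    slot.stop = STOP_TYPE_EOS;
    // no cache_tokens.push_back() here on purpose
}
```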
Fixes #1661
Two related issues that manifest as 'llama_decode ret=-3' on hybrid
architectures (e.g. Qwen3.5/3.6 MoE, Qwen3-Next), matching the symptom
reported in #1576.
1) server_context::apply_checkpoint() was written around transformer KV
semantics (pos_min / pos_max per-token window). For hybrid and pure
recurrent models the per-token pos_min threshold does not apply: the
recurrent state is a single snapshot, and the server-side checkpoint
is a whole-prefix record. The old selector 'cur.pos_min < pos_min_thold'
can succeed on a checkpoint whose pos_max is past the current n_past,
and — more commonly — fall through to do_reset = true, which zeros
slot.n_past / slot.n_past_prompt. Zeroing in-place while the recurrent
state in the context is still populated makes the next decode batch
disagree with the live state, returning ret=-3.
This change gates the checkpoint path on
llama_model_has_recurrent(llama_get_model(slot.ctx)):
- selector uses pos_max <= slot.n_past && pos_max < pos_next
(whole-prefix match, leaves at least one token to decode);
- on miss, slot state is preserved rather than zeroed, letting
update_slots() continue from the already-valid n_past_prompt;
- the erase loop drops any checkpoint whose pos_max > pos_next,
matching the rewind semantics for recurrent state.
Transformer behavior is unchanged.
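A sketch of the gated selector described above, assuming a checkpoint record with a pos_max field; only llama_model_has_recurrent() is the public API named in this text:
```cpp
const bool recurrent = llama_model_has_recurrent(llama_get_model(slot.ctx));

const server_checkpoint * best = nullptr;
if (recurrent) {
    for (const auto & cur : slot.checkpoints) {
        // whole-prefix match that leaves at least one token to decode
        if (cur.pos_max <= slot.n_past && cur.pos_max < pos_next) {
            best = &cur;
        }
    }
    if (best == nullptr) {
        // miss: keep slot.n_past / slot.n_past_prompt intact; zeroing them
        // while the live recurrent state is still populated is what
        // produced the ret=-3 disagreement
    }
}
```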
2) stop_internal_decode is a file-static global in src/llama.cpp, set by
llama_decode_stop() (called on client disconnect) and polled inside
the decode loop to bail out with ret=-3. The flag is only cleared on
one conditional path in server_slot::release(), so a stop signal that
arrives after the interrupted llama_decode() has already returned
bleeds into the NEXT decode call and causes an immediate ret=-3 with
no work performed. Clear it at the top of the public llama_decode()
entry so the signal is scoped to the in-flight decode it was meant
for.
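A sketch of the entry-point clear; the real function body is larger, this only shows where the flag reset goes:
```cpp
int32_t llama_decode(llama_context * ctx, llama_batch batch) {
    // clear any stale disconnect signal so it only affects the decode it
    // was raised against, never the next one
    stop_internal_decode = false;
    return llama_decode_internal(*ctx, batch);
}
```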
Build-verified: llama-server with GGML_CUDA=ON, -DCMAKE_CUDA_ARCHITECTURES=86
(sm_86), IQK flash-attn + matmul enabled. No new APIs introduced —
llama_model_has_recurrent is already public and already used elsewhere in
server-context.cpp.
Closes #1576
* Autoparser - complete refactoring of parser architecture
Autoparser: add optional argument reshuffle capability
Autoparser: True streaming (#20177)
* Relax atomicity constraint for nicer, more pleasant True Streaming parsing
* Whitespace
* Remove redundant atomics
Revert to OAI-compatible args (#20213)
* Revert to OAI-compatible args
* Apply workaround::func_args_not_string
Fix structured outputs (#20223)
* Fix structured outputs
* Update common/chat-auto-parser-generator.cpp
Co-authored-by: Aldehir Rojas <hello@alde.dev>
---------
Co-authored-by: Aldehir Rojas <hello@alde.dev>
Fix compile bug (#20203)
* Fix compile bug
* Update common/chat-auto-parser-helpers.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
# Conflicts:
# common/chat-auto-parser-helpers.cpp
common : gracefully handle incomplete output (#20191)
* common : handle incomplete UTF-8 at end of input in PEG parser
* cont : if reached end prematurely, emit needs_more_input to propagate partial output
* cont: refactor peg parse context to add lenient flag
* cont : remove partial flag, keep lenient flag
PEG parser for LFM2 (#20251)
* PEG parser for LFM2
* Simplify using python_value()
common: map developer role to system (#20215)
* Map developer role to system
* Simplify
common: consolidate PEG string parsers (#20263)
* common : consolidate PEG string parsers
* cont : fix json_string_content()
examples : fix empty items in json_schema_to_grammar.py [no ci] (#19968)
* Fix logic for retrieving schema items in `json_schema_to_grammar.py`
If `schema['items']` is `{}` and `prefixItems` is not in `schema`, then since `{}` is falsy, the original code here will raise an error.
I think if `schema['items']` is `{}`, then items should just be `{}`.
* Apply suggestion from @CISC
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Add tests for arrays with empty items
Add two unit tests to `tests/test-json-schema-to-grammar.cpp` that validate handling of arrays when 'items' is an empty schema and when 'prefixItems' is present alongside an empty 'items'. Both tests expect the same generated grammar, ensuring the JSON Schema->grammar conversion treats an empty 'items' schema (and the presence of 'prefixItems') correctly and covers this edge case.
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Reduce level of content parser warning message to avoid log spam on non-debug verbosity (#20347)
do not return if template parse failed
add arg to enable parallel tool call
common : fix incorrect uses of stoul (#20313)
# Conflicts:
# common/arg.cpp
# src/llama-grammar.cpp
Add support for MiroThinker with new jinja template
common/parser: handle reasoning budget (#20297)
* v1
* Finished!
* Handle CLI
* Reasoning sampler
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* Less explosive terminology :)
* Add utf-8 case and tests
* common : migrate reasoning budget sampler to common
* cont : clean up
* cont : expose state and allow passing as initial state
* cont : remove unused imports
* cont : update state machine doc string
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Alde Rojas <hello@alde.dev>
common/parser: use nlohmann::ordered_json to preserve parameter order (#20385)
common/parser: add GigaChatV3/3.1 models support (#19931)
Co-authored-by: Mishusha <pmv26021975@gmail.com>
common/parser: gracefully handle undetected tool parser, print error message. (#20286)
fix: prevent nullptr dereference (#20552)
common : fix iterator::end() dereference (#20445)
# Conflicts:
# common/regex-partial.cpp
jinja : add capability check for object args (#20612)
common/parser: add `--skip-chat-parsing` to force a pure content parser. (#20289)
* Add `--force-pure-content` to force a pure content parser.
* Update common/arg.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
common : rework gpt-oss parser (#20393)
* common : rework gpt-oss parser
* cont : fix gpt-oss tests
* cont : add structured output test
* cont : rename final to final_msg
common : fix gpt-oss content removal (#20745)
common/parser: add proper reasoning tag prefill reading (#20424)
* Implement proper prefill extraction
* Refactor cli parameters, update docs, move reasoning budget sampler part to common/reasoning-budget.cpp
* Update tools/server/server-task.cpp
* refactor: move grammars to variant, remove grammar_external, handle exception internally
* Make code less C++y
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
chat : handle tool calls with no required args in TAG_WITH_TAGGED format (#20764)
* chat : handle tool calls with no required args in TAG_WITH_TAGGED format
* Update tests/test-chat.cpp [no ci]
Co-authored-by: Aldehir Rojas <hello@alde.dev>
---------
Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
Co-authored-by: Aldehir Rojas <hello@alde.dev>
common/parser : fix out_of_range crash in throw path (#20424 regression) (#20777)
* chat : fix out_of_range crash in throw path (#20424 regression)
#20424 introduced effective_input = generation_prompt + input, but the
throw path uses input.substr(result.end) where result.end is a position
within effective_input. Every thinking model with a non-empty
generation_prompt crashes with std::out_of_range instead of the intended
error message.
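A minimal standalone reproduction of the offset mismatch (strings illustrative):
```cpp
#include <iostream>
#include <string>

int main() {
    const std::string generation_prompt = "<think>";
    const std::string input             = "partial reasoning";
    const std::string effective_input   = generation_prompt + input;

    // a parser position computed against effective_input ...
    const size_t end = effective_input.size();

    // ... throws std::out_of_range when applied to the shorter `input`;
    // indexing the string the position came from is the fix:
    std::cout << effective_input.substr(end) << "\n"; // ok (empty)
    // std::cout << input.substr(end) << "\n";        // out_of_range
    return 0;
}
```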
Test crashes on unpatched master, passes with fix:
cmake -B build -DLLAMA_BUILD_TESTS=ON -DLLAMA_BUILD_TOOLS=OFF
cmake --build build --target test-chat
./build/bin/test-chat
* Update test-chat.cpp
* Update test-chat.cpp
* Update test-chat.cpp
---------
Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
jinja : fix heap OOB read in value equality comparison (#20782)
Address GHSA-q9j6-4hhc-rq9p and GHSA-2q4c-9gq5-5vfp.
The three-iterator overload of std::equal in value_array_t::equivalent()
and value_object_t::equivalent() reads past the end of the shorter
container when comparing arrays or objects of different lengths.
Use the four-iterator overload (C++14) which checks both range lengths.
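A standalone illustration of the difference between the two overloads:
```cpp
#include <algorithm>
#include <cassert>
#include <vector>

int main() {
    const std::vector<int> shorter = {1, 2};
    const std::vector<int> longer  = {1, 2, 3};

    // three-iterator overload: walks `shorter` for longer.size() steps,
    // reading past its end when the lengths differ
    // std::equal(longer.begin(), longer.end(), shorter.begin()); // OOB read

    // four-iterator overload (C++14): length mismatch is checked first
    assert(!std::equal(longer.begin(), longer.end(),
                       shorter.begin(), shorter.end()));
    return 0;
}
```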
Found-by: Pwno
common : fix typo in debug log ('extracft' -> 'extract') (#20807)
common/parser: fix nasty bug causing subtle corruption of generation prompt (#20825)
jinja : refactor token advancement (#20864)
* refactor token advancement
* exercise sub-expressions
common/autoparser : detect reasoning markers when enable_thinking changes system prompt (#20859)
common : replace wrap_for_generation with a prefix convenience function and fix gpt-oss (#20912)
jinja: fix macro with kwargs (#20960)
* jinja: fix macro with kwargs
* Apply suggestions from code review
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* fix newline problem
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
common : inhibit lazy grammar sampler while reasoning is active (#20970)
* common : inhibit grammar while reasoning budget is active
* cont : update force_pos in accept
* cont : fix tests
* cont : tweak should apply logic
* cont : return early not using grammar sampler
* Add tests
* cont : prevent backend sampling when reasoning budget enabled
* cont : fix typo
---------
Co-authored-by: Piotr Wilkin <piotr.wilkin@syndatis.com>
# Conflicts:
# common/reasoning-budget.h
# common/sampling.cpp
# tools/cli/cli.cpp
# tools/server/server-common.cpp
# tools/server/server-task.cpp
common/parser: fix reasoning whitespace bugs + extra parser tests (#21085)
* fix whitespace reasoning issues + add reconstruction tests
* Proper fix
* fix Nemotron autoparser test expectations to include newline in marker
common : add reasoning_format = none support to gpt-oss (#21094)
common/json-schema: fix: handle non-capturing groups (?:...) in JSON schema pattern converter (#21124)
The regex-to-grammar converter in _visit_pattern() crashes with SIGSEGV
when a JSON schema "pattern" field contains a non-capturing group (?:...).
Root cause: when the parser sees '(' followed by '?', it pushes a warning
but does not advance past '?:'. The recursive transform() call then
interprets '?' as a quantifier and calls seq.back() on an empty vector,
causing undefined behavior.
This commonly occurs when serving OpenAI-compatible tool calls from
clients that include complex regex patterns in their JSON schemas (e.g.,
date validation patterns like ^(?:(?:\d\d[2468][048]|...)-02-29|...)$).
The fix:
- Skip '?:' after '(' to treat non-capturing groups as regular groups
- For unsupported syntax (?=, ?!, etc.), skip to matching ')' safely,
handling escaped characters to avoid miscounting parenthesis depth
- Adjust the ')' unbalanced-parentheses check using direct char
comparisons instead of substr
- Add test cases for non-capturing groups (C++ only, as the JS/Python
implementations do not yet support this syntax)
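A sketch of the group handling from the first fix above; the helper name and shape are illustrative, not the converter's actual structure:
```cpp
#include <cstddef>
#include <string>

static size_t handle_group_open(const std::string & pattern, size_t i) {
    // precondition: pattern[i] == '('
    ++i;
    if (i + 1 < pattern.size() && pattern[i] == '?' && pattern[i + 1] == ':') {
        i += 2; // consume "?:" so (?:...) parses like a plain group
    }
    return i; // first character of the group body
}
```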
common/parser: fix handling of tool definition with missing properties key (#21128)
jinja : handle empty expressions correctly (#20913)
* Reject empty computed member expressions before returning slices[0] from parse_member_expression_arguments().
* Treat empty computed member expressions with Jinja2 undefined semantics
Treat empty computed member expressions like `a[]` as undefined instead of
raising a parser error, to match Jinja2 behavior.
- return a noop expression for empty computed member arguments
- return undefined when a computed member key evaluates to undefined
- add Jinja tests covering `a[]|default('fallback')` and `a[] is undefined`
* Handle undefined computed member properties
Move undefined-property handling to the common member access path, and add a test covering `a[undefined] is undefined`.
* Use default undefined value in member access
Initialize val and then return it when property is undefined.
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* empty statement parses to blank_expression instead of noop_statement
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
common : gpt-oss handle builtin and unsolicited tool calls (#21213)
fix: tool call parsing for LFM2 and LFM2.5 models (#21242)
* fix: tool call parsing for LFM2 and LFM2.5 models
* refactor: add test / break out lfm2 and lfm2.5 parsing logic
# Conflicts:
# common/chat.cpp
Relax prefill parser to allow space. (#21240)
* Relax prefill parser to allow space.
* Move changes from prefix() to parser generation
* Only allow spaces if we're not having a pure content parser next
common : add commentary rules for gpt-oss-20b (#21286)
add reasoning budget
model, mtmd: fix gguf conversion for audio/vision mmproj (#21309)
* fix gguf conversion for audio/vision mmproj
* fix test
# Conflicts:
# convert_hf_to_gguf.py
# examples/eval-callback/eval-callback.cpp
# examples/mtmd/CMakeLists.txt
# examples/mtmd/clip-impl.h
# examples/mtmd/mtmd.cpp
# gguf-py/gguf/constants.py
# gguf-py/gguf/gguf_writer.py
# gguf-py/gguf/tensor_mapping.py
# src/CMakeLists.txt
# src/llama-arch.cpp
# src/llama-arch.h
# src/llama-model.cpp
# src/llama-model.h
# src/llama-vocab.cpp
# src/models/models.h
# tests/test-llama-archs.cpp
# tools/mtmd/clip-graph.h
# tools/mtmd/clip-model.h
# tools/mtmd/clip.cpp
# tools/mtmd/models/models.h
fix: gemma 4 template (#21326)
chat : avoid including json in chat.h (#21306)
jinja: coerce input for string-specific filters (#21370)
common : fix tool call type detection for nullable and enum schemas (#21327)
* common : fix tool call type detection for nullable and enum schemas
* common, tests : fix grammar delegation for nullable/enum schemas and add tests
Fix enum type inference to scan all enum values (not just index 0) so
schemas like {"enum": [0, "celsius"]} correctly detect string type.
Fix schema_delegates in peg-parser to handle nullable type arrays
(["string", "null"]) and typeless enum schemas in raw mode, allowing
the tagged parser to use raw text instead of JSON-formatted strings.
Add test cases for Qwen3-Coder (TAG_WITH_TAGGED format):
- nullable string ["string", "null"]
- nullable string with null first ["null", "string"]
- nullable integer ["integer", "null"]
- enum without explicit type key
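A sketch of the all-values scan from the enum fix above, assuming the vendored nlohmann::json; the helper name is illustrative:
```cpp
#include <nlohmann/json.hpp>

static bool enum_has_string(const nlohmann::ordered_json & schema) {
    if (!schema.contains("enum")) {
        return false;
    }
    for (const auto & v : schema["enum"]) {
        if (v.is_string()) {
            return true; // {"enum": [0, "celsius"]} -> string type detected
        }
    }
    return false;
}
```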
common/parser: fix call ID detection (Mistral parser mostly) + atomicity for tag-json parsers (#21230)
* Fix call ID detection (Mistral parser mostly) + atomicity for tag-json parsers
* Rename
* Update common/chat-auto-parser-generator.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
common : add gemma 4 specialized parser (#21418)
* common : add gemma4 dedicated parser
* cont : add '<|tool_response>' as eog
* cont : emit JSON from Gemma4 tool call AST
* cont : more fixes
* cont : refactor convert function
* cont : refine rules and mapping
* cont : add more tests
* cont : clean up
* cont : remove autoparser gemma4 implementation
* cont : more cleanup
* cont : rename gemma4.jinja to match the others
* cont : add custom template to support interleaved thinking
* cont : preserve reasoning in model turns
* cont : fix initializer error
* cont : fix unused vars
* cont : fix accidental static
* cont : fix specialized_template signature
* fix extra semicolon
* remove debug line and extra space [no ci]
fix reasoning budget
parser: fix MiniMax handling (#21573)
jinja : support ensure_ascii=true, string repetition and int/float self-filtering (#21623)
* feat: jinja engine improvements for reka-edge
Port three Jinja engine improvements needed for the reka-edge model:
1. Python-style string repetition ("ab" * 3 → "ababab")
2. ensure_ascii=true support for tojson filter (escapes non-ASCII to \uXXXX)
3. int() builtin on value_int_t (identity, needed for Reka Edge template)
* fix: escape invalid utf8 bytes when ensure_ascii=true
The json_ensure_ascii_preserving_format function does not correctly
handle an edge case where if UTF-8 parsing fails, it adds the non-ascii
character back to the output as a raw byte.
This commit fixes that by adding the Unicode standard replacement
character \ufffd to the output instead. This is the standard behavior
in various programming languages like Python, Rust, Go, etc.
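A sketch of the fallback; the function name and signature are assumed:
```cpp
#include <string>

// on a failed UTF-8 parse, emit the replacement character instead of the
// raw invalid byte
static void append_invalid_byte(std::string & out, bool ensure_ascii) {
    if (ensure_ascii) {
        out += "\\ufffd";      // escaped form for ASCII-only output
    } else {
        out += "\xEF\xBF\xBD"; // UTF-8 bytes of U+FFFD
    }
}
```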
* chore: address PR comments
1. Add todo comment for supporting string repetition for array/tuples
2. Add support for float identity operation
3. Move invalid ascii test case to test_fuzzing
* chore: accept suggestion for common/jinja/value.cpp
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
common : simplify autoparser tagged parser rules (#21216)
* common : simplify autoparser tagged parser rules
* cont : remove upper limit on optional args
* cont : revert changes to parsing at the end
* cont : undo arbitrary ordering of optional args
* cont : fix uninitialized required parameters
* revert to simplify merge
* re-apply patches
* restore flexible optional arg ordering tests
common : fix ambiguous grammar rule in gemma4 (#21661)
* common : fix ambiguous grammar rule in gemma4
* cont : fix missing comma...
common : enable reasoning budget sampler for gemma4 (#21697)
* fix: enable reasoning budget sampler for gemma4
Add thinking_start_tag and thinking_end_tag to
common_chat_params_init_gemma4(). Without these, the reasoning
budget sampler never activates for gemma4.
Make the newline after "thought" optional in the PEG parser to
handle budget=0 (sampler forces end tag before the newline).
Add test case for empty thinking block.
Fixes #21487
* use p.space() instead of p.optional(p.literal("\n")) in gemma4 thought parser
common : better align to the updated official gemma4 template (#21704)
fix: Fix broken structured output when using $refs in json_schema (#21699)
chat: dedicated DeepSeek v3.2 parser + "official" template (#21785)
Hide render_message_to_json warning
common/gemma4 : handle parsing edge cases (#21760)
common: skip reasoning budget sampler when no budget is requested (#21870)
* common: skip reasoning budget sampler when no budget is requested
After I added thinking_start_tag / thinking_end_tag for gemma4 in #21697, the reasoning budget sampler gets unconditionally created even when no budget is configured (the default -1). The same applies to kimi_k2, lfm2, lfm2_5, and ministral_3 which also set these tags. The budget gets converted to INT_MAX, so the sampler never actually forces any tokens but still runs per-token checks (start tag matching in IDLE state, token-to-piece conversion + UTF-8 checks in COUNTING state).
More importantly, the mere existence of the sampler (non-null rbudget) disables backend sampling. Backend sampling lets the GPU select tokens directly, avoiding a full logits transfer from GPU to CPU every token. This could explain the 30% speed regression reported in #21784 (98 t/s to 70 t/s on Vulkan).
So I added a reasoning_budget_tokens >= 0 check to the sampler creation condition. When the budget is unlimited, the sampler is not created, backend sampling stays enabled, and no per-token overhead is added. When a budget is explicitly set (0, 128, 1024, etc.), the sampler is created and works as before.
* common: preserve rbudget when grammar is lazy
Following up on the review feedback on #21870: keep the reasoning budget sampler when grammar_lazy is true, so the thinking-block grammar suppression from #20970 still works when tools are in use. This way, we only skip the sampler when both no budget is set AND grammar is not lazy.
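A sketch of the resulting creation condition; struct and helper names are illustrative, not the actual symbols:
```cpp
struct rbudget_cfg {
    int  reasoning_budget_tokens; // -1 = unlimited (the default)
    bool grammar_lazy;            // tools in use -> lazy grammar
    bool has_thinking_tags;       // template provides start/end tags
};

static bool should_create_rbudget(const rbudget_cfg & cfg) {
    if (!cfg.has_thinking_tags) {
        return false;
    }
    // keep the sampler when a budget is explicitly set, or when a lazy
    // grammar still needs the thinking-block suppression from #20970;
    // otherwise skip it so backend (GPU-side) sampling stays enabled
    return cfg.reasoning_budget_tokens >= 0 || cfg.grammar_lazy;
}
```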
autoparser: support case of JSON_NATIVE with per-call markers (test case: Reka-Edge) (#21892)
* fix grammar
* fix add sampled token
---------
Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
Co-authored-by: firecoperana <firecoperana>
* wip: separate llama_context for MTP with graph reuse
* wip: fix KV cache desync with separate MTP context
* refactor: remove dead mtp logic code, encapsulate KV mirroring
* mtp-context: derive args directly from the main model's context
* mtp: fix kv cache positions
* clean small comments
* minor refactor for context shift
* wip: build spec tuner for specific args
* wip: test different reward system
* spec-tune: fix the reward to find best params given a good TPS
* spec-tune: refactor logic into its own file
* minor clean for comments and modules
* wip: port MTP architecture
Ports the Multi-Token Prediction (MTP) architecture to the older `llama.cpp` codebase used by `ikllama`.
Changes include:
- Updating `llama_batch` to support `mtp_params`.
- Modifying `llama_decode_internal` (and `encode`) to handle MTP operations (Warmup, Update, Draft).
- Adding public APIs for MTP state management (`llama_set_draft_input_hidden_state`).
- Adapting the embedding extraction logic to skip MTP update passes.
* Refactors `server_slot` to support generic speculative decoding (MTP or Draft Model).
* core: enable hybrid outputs (logits + embeddings) for MTP support
* fix(mtp): correct KV-cache slot finding for updates
* fix(mtp): persist hidden states to prevent context corruption during drafting
* refactor(mtp): clean unused code
* fix(mtp): update server to new functions name
* fix(mtp): fix graph and save hidden state
* mtp: refactor integration, context params and kv cache search
* mtp: fix hidden state extraction and speculative acceptance flow
* server: fix MTP warmup for long prompts and reset token buffer
* llama: refactor MTP operation state to context parameters
* server: fix n_past calculation in MTP acceptance
* llama: fix mtp enable flags
* speculative: refactor MTP to use common_speculative interface
* context: remove unused signatures
* clip: fix deprecated enum-enum conversion warning
* common: fix format string crash in help message
* context: fix mtp activation logic
* llama: always use the extracted embedding
* llama: get all embeddings to kv cache
* llama: revert logit to not run mtp for not supported arch
* llama: allocate all the n_outputs for MTP
* wip
* server-context: get only the last embedding for hidden state
* ggml-backend: fix out-of-bounds array access in debug build
* server-context: run mtp kv update for each prompt batch
* revert segmentation fault fixes
* glm-mtp(feat): optimize graph embedding and recursive drafting
* simpler n_rewind logic, delete old comments
* use more consistent names, add updt_w_cur to json schema
* align comments
* refactor review logic, update struct/variable names
* revert cosmetic changes
* check enable/disable in llama_prep_adaptive_p_impl()
* delete extra whitespaces after statement
* show target in debug prints
* more concise debug print
* delete old comments
* update with loop instead of move()
* comment out all adaptive p debug prints
* more debug prints
* move review() variables: common_sampler struct -> common_sampler_review() args
* match n_unsent type
* fix merge bugs, delete adaptive p references in buffer_and_check_string_ban()
* restore accidental erasure
* Revert "adaptive p: collect probability before logit bias"
This reverts commit 1434878461.
* server : support multi-modal context checkpoints and prompt caching
do not create checkpoint right after image processing
improve mtmd check for slot ops
fix context shift
do not abort if template parse failed
* change to debug message when detecting ban token
---------
Co-authored-by: firecoperana <firecoperana>
---------
Co-authored-by: Piotr Wilkin <piotr.wilkin@syndatis.com>
common : add nemotron 3 parsing (#18077)
common : add parser for ministral/mistral large 3/devstral 2 (#17713)
common : default content to an empty string (#18485)
chat: make tool description and parameters optional per OpenAI spec (#18478)
Per the OpenAI API specification, both 'description' and 'parameters'
fields in tool function definitions are optional. Previously, the parser
would throw an exception if these fields were missing.
Attempts to fix #17667
common : implement new jinja template engine (#18462)
---------
Co-authored-by: Alde Rojas <hello@alde.dev>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
jinja: correct member access rule (#18905)
jinja : fix lexing of float literals with sign (#18901)
jinja : add missing tojson filter for bool (#18900)
jinja : attribute support for join, map and sort (#18883)
jinja : fix object item order (and properly implement dictsort) (#18904)
tests : add test-jinja -py option for cross-checking (#18906)
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
ci : run test-jinja -py on high perf [no ci] (#18916)
jinja : fix undefined keys and attributes and int/float as bool (#18924)
jinja: support none|string (#18995)
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
jinja : implement mixed type object keys (#18955)
---------
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
jinja : undefined should be treated as sequence/iterable (return string/array) by filters/tests (#19147)
`tojson` is not a supported `undefined` filter
keep it DRY and fix some types
jinja : do not pass empty tools and add some none filters (#19176)
jinja : add unordered_map include to value.h [no ci] (#19205)
jinja : add missing 'in' test to template engine (#19004) (#19239)
The jinja template parser was missing the 'in' test from
global_builtins(), causing templates using reject("in", ...),
select("in", ...), or 'x is in(y)' to fail with
"selectattr: unknown test 'in'".
This broke tool-calling for Qwen3-Coder and any other model
whose chat template uses the 'in' test.
Added test_is_in supporting array, string, and object containment
checks, mirroring the existing 'in' operator logic in runtime.cpp.
Includes test cases for all three containment types plus
reject/select filter usage.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
---------
Co-authored-by: Sid Mohan <sidmohan0@users.noreply.github.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>
Add Jinja support for "indent" string filter (#19529)
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
add vendor
refactor chat
server : support preserving reasoning_content in assistant message (#18994)
chat : fix translategemma crash on common_chat_format_example (#19019)
chat: fix language input for translategemma (#19052)
Co-authored-by: Aldehir Rojas <hello@alde.dev>
---------
Co-authored-by: Aldehir Rojas <hello@alde.dev>
chat: fix case where template accepts type content only (#19419)
mtmd : chat : Fix extra \n between text and media marker (#19595)
Thanks to @tugot17 for detecting and reporting the issue.
For vision models (e.g. LFM2.5-VL-1.6B and Qwen/Qwen3-VL-4B-Instruct) `llama-mtmd-cli` produces identical output to HF implementation.
However `llama-server` doesn't. I traced it down to extra newline
inserted after `<__media__>`.
This happens in `to_json_oaicompat`, which treats media markers as text
and joins all parts with a `\n` separator.
PR introduces new type `media_marker` and uses it for media markers.
Extra logic is added to prevent insertion of newlines before and after
media markers.
With this change, the number of input tokens is identical to the HF
implementation, and as a result the output is also identical.
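A sketch of the separator logic with assumed type names:
```cpp
#include <string>
#include <vector>

enum class part_type { text, media_marker };
struct msg_part { part_type type; std::string text; };

// join text parts with "\n", but never insert the separator adjacent to
// a media marker part
static std::string join_parts(const std::vector<msg_part> & parts) {
    std::string out;
    for (size_t i = 0; i < parts.size(); ++i) {
        if (i > 0 &&
            parts[i].type     != part_type::media_marker &&
            parts[i - 1].type != part_type::media_marker) {
            out += "\n";
        }
        out += parts[i].text;
    }
    return out;
}
```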
I explored other ways to address the issue:
* completely remove `\n` between text parts in `to_json_oaicompat`
* merge text messages in server-common.cpp before sending them to `to_json_oaicompat`
Please propose alternative ways of fixing this issue.
Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
---------
Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
common : merge qwen3-coder and nemotron nano 3 parsers (#19765)
common : fix improper trimming in XML parser on complete message (#19805)
Co-authored-by: Jules LEIDELINGER <11395311+julio75012@users.noreply.github.com>
jinja: correct stats for tojson and string filters (#19785)
jinja : correct default size for string slices (#19913)
common : handle unicode during partial json parsing (#16526)
common : fix json schema with '\' in literals (#17307)
add back qwen_coder_xml and mirothinker
Co-authored-by: Aldehir Rojas <hello@alde.dev>
* fix adaptive p sampler rewinding too far back
* update comments
* correct default value for total_weight, more comments
* new variables/names
* update comment for n_rewind
* move null pointer check back to common_sampler_review()
* refactor weighted_sum and total_weight to vector<pair>, better boundary check in llama_review_adaptive_p_impl()
* server: enable checkpoint for recurrent models
create checkpoint after cancel
fix ban string and rm context during rewind
add checkpoint interval
only save recurrent cache
* save checkpoint during pp
---------
Co-authored-by: firecoperana <firecoperana>
When multiple prompts are sent in a single /v1/completions request,
each response needs to carry the correct index so the client can
match results to their corresponding prompts. The index field was
not being set on partial responses, final responses, or embedding
responses, causing batch results to all report index 0.
Set res->index = slot.task->index in send_partial_response,
send_final_response, and send_embedding.
Generated with [Devin](https://cli.devin.ai/docs)
Co-authored-by: Joshua Jolley <jjolley@clearwateranalytics.com>
Co-authored-by: Devin <noreply@cognition.ai>
* adaptive p: update internal state only if not rewinding
* adaptive p: conditional update for speculative decoding
* adaptive p: refactor to rewind instead of update
* adaptive p fix: better comments
* fix rewind check
* add record to handle multi-token rewind
* better comment
* spec : add self speculative decoding and ngram-mod and refactor
common : use common_ prefix for common library function
llama : use LLAMA_TOKEN_NULL
spec : add self speculative decoding (no draft model required) + refactor
spec : add ngram-mod
spec : various improvements to ngram-map + docs
spec : fix the check-rate logic of ngram-simple
common : add common_speculative_is_compat()
spec : simplify time measurement using common_time_meas
refactor common_sampler_init
refactor common_token_to_piece
refactor and fix cur_p bug
clean up
* spec : remove check rate
* spec: show warnings instead of aborting
---------
Co-authored-by: firecoperana <firecoperana>
Co-authored-by: Sascha Rogmann <59577610+srogmann@users.noreply.github.com>
This implements the ability to load, unload, and scale control vectors
(representation engineering) mid-inference, following the existing
task-queue pattern used by LoRA adapters.
New Endpoints:
- GET /control-vectors
- POST /control-vectors/load
- POST /control-vectors/unload
- POST /control-vectors/apply (handles scaling)
Technical Notes:
- Centralizes vector aggregation logic to share implementation between
load, unload, and apply tasks.
- Vectors are applied globally to the model context.
- Enforces dimension validation on load to safely reject incompatible
vectors.
Co-authored-by: Gapeleon <gapeleon@users.noreply.github.com>
* server: improve speed of speculative decoding
change logs
rpc: add recompute
spec dec fix
* Fix n_batch_size not set to context size for draft model
---------
Co-authored-by: firecoperana <firecoperana>