Commit Graph

47 Commits

firecoperana
9f1deefa71 server: revert checkpoint fix (#1716)
Co-authored-by: firecoperana <firecoperana>
2026-05-02 16:09:28 +03:00
Kawrakow
a8aecbf159 Disable k-shift for split mode graph (#1714) 2026-04-30 18:03:29 +02:00
Samuel Oliveira Alves
67e6346225 Support for Qwen 3.5 MTP (dense models only) (#1698)
* qwen-mtp: add dense mtp for one draft

* add support for smaller qwen mtp commit

* qwen-mtp: fix graph for qwen dense variants

* Squashed commit of the following:

commit a92a154b38c7fddc84460f8852c900f8d6ce907e
Author: SamuelOliveirads <samueloliveira32df@gmail.com>
Date:   Mon Apr 20 13:30:21 2026 -0300

    recurrent model: refactor api

commit dfac8f19f6
Author: SamuelOliveirads <samueloliveira32df@gmail.com>
Date:   Mon Apr 20 12:22:29 2026 -0300

    recurrent model: implement recurrent kernel checkpoint

commit 9c44b117f9
Author: SamuelOliveirads <samueloliveira32df@gmail.com>
Date:   Sat Apr 18 11:52:39 2026 -0300

    speculative: fix sampler for checkpoints

commit e7006393bc
Author: SamuelOliveirads <samueloliveira32df@gmail.com>
Date:   Fri Apr 17 14:08:25 2026 -0300

    server: refactor checkpoint state logic

commit 57eabf04df
Merge: dc4797b7 64234e3c
Author: SamuelOliveirads <samueloliveira32df@gmail.com>
Date:   Fri Apr 17 13:53:41 2026 -0300

    Merge branch 'main' into fix/hybrid-cache-speculative

commit dc4797b723
Author: SamuelOliveirads <samueloliveira32df@gmail.com>
Date:   Fri Apr 17 13:12:40 2026 -0300

    reset ngram mod state for rejected tokens

commit 8ff2d943a3
Author: SamuelOliveirads <samueloliveira32df@gmail.com>
Date:   Fri Apr 17 13:08:04 2026 -0300

    server: snapshot recurrent state in tensor

commit d93dfb5e6b
Author: SamuelOliveirads <samueloliveira32df@gmail.com>
Date:   Thu Apr 16 22:36:37 2026 -0300

    fix: save/restore sampler state during speculative checkpoint

    When speculative decoding rejects draft tokens and restores the
    recurrent state checkpoint, the sampler (RNG, grammar, prev tokens)
    must also be restored to maintain consistency. Without this, the
    sampler state reflects the rejected draft tokens, leading to
    potential divergence.

    Uses common_sampler_clone() to snapshot the sampler before the
    speculative batch decode, and restores it on rejection.

commit d670cf85cd
Author: SamuelOliveirads <samueloliveira32df@gmail.com>
Date:   Thu Apr 16 21:53:52 2026 -0300

    server: spec checkpoints for recurrent models

* server: fix context leak between requests

* qwen3: allow mtp to run with split graph

* qwen3 mtp: selects rows before the ffn
2026-04-28 07:47:50 +02:00
Samuel Oliveira Alves
ea94afe777 Speculative checkpoints for recurrent models (#1669)
* server: spec checkpoints for recurrent models

* fix: save/restore sampler state during speculative checkpoint

When speculative decoding rejects draft tokens and restores the
recurrent state checkpoint, the sampler (RNG, grammar, prev tokens)
must also be restored to maintain consistency. Without this, the
sampler state reflects the rejected draft tokens, leading to
potential divergence.

Uses common_sampler_clone() to snapshot the sampler before the
speculative batch decode, and restores it on rejection.

* server: snapshot recurrent state in tensor

* reset ngram mod state for rejected tokens

* server: refactor checkpoint state logic

* speculative: fix sampler for checkpoints

* recurrent model: implement recurrent kernel checkpoint

* recurrent model: refactor api

* spec: free rbudget before overwriting
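
The save/restore flow described in the message above can be sketched as follows. This is a minimal illustration, not the actual server code: a tiny value type stands in for the sampler state (RNG, grammar, prev tokens), and copying it stands in for `common_sampler_clone()`.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: the sampler state is snapshotted before applying
// draft tokens, and restored when a draft token is rejected, so the
// sampler never reflects tokens the target model did not accept.
struct sampler_state { std::vector<int> prev_tokens; };

// Returns the number of accepted draft tokens.
int speculative_step(sampler_state& smpl,
                     const std::vector<int>& draft,
                     const std::vector<int>& target) {
    sampler_state snapshot = smpl;               // common_sampler_clone()
    int n_accepted = 0;
    for (size_t i = 0; i < draft.size(); ++i) {
        smpl.prev_tokens.push_back(draft[i]);
        if (i >= target.size() || draft[i] != target[i]) {
            smpl = snapshot;                     // restore on rejection
            // re-accept only the verified prefix
            for (int j = 0; j < n_accepted; ++j) {
                smpl.prev_tokens.push_back(draft[j]);
            }
            return n_accepted;
        }
        ++n_accepted;
    }
    return n_accepted;
}
```

Without the restore step, `prev_tokens` would keep the rejected token and diverge from the actual context.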
2026-04-24 09:59:30 +02:00
Yap Sok Ann
1c13288164 Fix infinite tool calls loop with --parallel-tool-calls (#1679)
Previously, the end-of-turn token would be added to prompt cache by
`slot.cache_tokens.push_back(slot.sampled)`, without going through
`llama_decode()` first. As such, the token doesn't exist yet in the
actual KV cache. Then, in the next turn, processing of this EOT token
will be skipped since it is already in the prompt cache.

The expected sequence (with Qwen3.5 chat template):

    </tool_call><|im_end|>
    <|im_start|>user
    <tool_response>

The actual sequence that the model sees:

    </tool_call>
    <|im_start|>user
    <tool_response>

As the conversation goes on, the model starts losing the ability to
generate an EOT token after `</tool_call>`, since it hasn't seen this
pattern in the context. With `--parallel-tool-calls`, the grammar
allows `</tool_call>` to be followed by either another tool call or an
EOT token. So, this eventually leads to the model making infinite tool
calls.

We fix this by deferring all the processing of the EOT token to the
next turn.

Fixes #1661
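
The cache desync described above can be modeled in a few lines. This is a hypothetical sketch with illustrative names, not the actual `server_slot` code: it only shows why pushing a sampled token into the prompt cache before it has gone through `llama_decode()` breaks the invariant that the prompt cache mirrors the KV cache.

```cpp
#include <vector>

// Hypothetical model of the two caches involved.
struct slot_state {
    std::vector<int> cache_tokens; // tokens the prompt cache claims exist
    std::vector<int> kv_tokens;    // tokens actually decoded into KV cache
};

// Buggy path: push the sampled EOT token to the cache without decoding it.
// Fixed path (defer_eot = true): defer, so it is decoded (and then cached)
// at the start of the next turn.
void accept_sampled(slot_state& s, int token, bool defer_eot) {
    if (!defer_eot) {
        s.cache_tokens.push_back(token); // bug: token is not in KV cache yet
    }
}

bool caches_consistent(const slot_state& s) {
    return s.cache_tokens == s.kv_tokens;
}
```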
2026-04-24 09:48:21 +02:00
markaalonzo
48819dadaf server: fix ret=-3 on hybrid/recurrent prompt cache, and clear sticky stop flag (#1673)
Two related issues that manifest as 'llama_decode ret=-3' on hybrid
architectures (e.g. Qwen3.5/3.6 MoE, Qwen3-Next), matching the symptom
reported in #1576.

1) server_context::apply_checkpoint() was written around transformer KV
   semantics (pos_min / pos_max per-token window). For hybrid and pure
   recurrent models the per-token pos_min threshold does not apply: the
   recurrent state is a single snapshot, and the server-side checkpoint
   is a whole-prefix record. The old selector 'cur.pos_min < pos_min_thold'
   can succeed on a checkpoint whose pos_max is past the current n_past,
   and — more commonly — fall through to do_reset = true, which zeros
   slot.n_past / slot.n_past_prompt. Zeroing in-place while the recurrent
   state in the context is still populated makes the next decode batch
   disagree with the live state, returning ret=-3.

   This change gates the checkpoint path on
   llama_model_has_recurrent(llama_get_model(slot.ctx)):
   - selector uses pos_max <= slot.n_past && pos_max < pos_next
     (whole-prefix match, leaves at least one token to decode);
   - on miss, slot state is preserved rather than zeroed, letting
     update_slots() continue from the already-valid n_past_prompt;
   - the erase loop drops any checkpoint whose pos_max > pos_next,
     matching the rewind semantics for recurrent state.

   Transformer behavior is unchanged.

2) stop_internal_decode is a file-static global in src/llama.cpp, set by
   llama_decode_stop() (called on client disconnect) and polled inside
   the decode loop to bail out with ret=-3. The flag is only cleared on
   one conditional path in server_slot::release(), so a stop signal that
   arrives after the interrupted llama_decode() has already returned
   bleeds into the NEXT decode call and causes an immediate ret=-3 with
   no work performed. Clear it at the top of the public llama_decode()
   entry so the signal is scoped to the in-flight decode it was meant
   for.

Build-verified: llama-server with GGML_CUDA=ON, -DCMAKE_CUDA_ARCHITECTURES=86
(sm_86), IQK flash-attn + matmul enabled. No new APIs introduced —
llama_model_has_recurrent is already public and already used elsewhere in
server-context.cpp.

Closes #1576
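
The selector condition from item 1 can be sketched as below. Names and the checkpoint struct are illustrative, not the actual server-context types; the point is the whole-prefix rule: `pos_max <= n_past` (checkpoint covers only already-valid prefix) and `pos_max < pos_next` (at least one token left to decode).

```cpp
#include <cstdint>
#include <vector>

// Hypothetical whole-prefix checkpoint record for a recurrent model.
struct checkpoint { int64_t pos_max; };

// Returns the index of the best (largest) usable checkpoint, or -1 on a
// miss. On a miss the caller preserves slot state instead of zeroing
// n_past, so the next decode agrees with the live recurrent state.
int select_recurrent_checkpoint(const std::vector<checkpoint>& cps,
                                int64_t n_past, int64_t pos_next) {
    int best = -1;
    for (int i = 0; i < (int) cps.size(); ++i) {
        if (cps[i].pos_max <= n_past && cps[i].pos_max < pos_next &&
            (best < 0 || cps[i].pos_max > cps[best].pos_max)) {
            best = i;
        }
    }
    return best;
}
```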
2026-04-23 09:19:17 +02:00
firecoperana
e0596bf614 Autoparser - complete refactoring of parser architecture (#1376)
* Autoparser - complete refactoring of parser architecture

Autoparser: add optional argument reshuffle capability

Autoparser: True streaming (#20177)

* Relax atomicity constraint for nicer, more pleasant, True Streaming parsing

* Whitespace

* Remove redundant atomics

Revert to OAI-compatible args (#20213)

* Revert to OAI-compatible args

* Apply workaround::func_args_not_string

Fix structured outputs (#20223)

* Fix structured outputs

* Update common/chat-auto-parser-generator.cpp

Co-authored-by: Aldehir Rojas <hello@alde.dev>

---------

Co-authored-by: Aldehir Rojas <hello@alde.dev>

Fix compile bug (#20203)

* Fix compile bug

* Update common/chat-auto-parser-helpers.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
# Conflicts:
#	common/chat-auto-parser-helpers.cpp

common : gracefully handle incomplete output (#20191)

* common : handle incomplete UTF-8 at end of input in PEG parser

* cont : if reached end prematurely, emit needs_more_input to propagate partial output

* cont: refactor peg parse context to add lenient flag

* cont : remove partial flag, keep lenient flag

PEG parser for LFM2 (#20251)

* PEG parser for LFM2

* Simplify using python_value()

common: map developer role to system (#20215)

* Map developer role to system
* Simplify

common: consolidate PEG string parsers (#20263)

* common : consolidate PEG string parsers
* cont : fix json_string_content()

examples : fix empty items in json_schema_to_grammar.py [no ci] (#19968)

* Fix logic for retrieving schema items in `json_schema_to_grammar.py`

If `schema['items']` is `{}` and `prefixItems` is not in `schema`, then, since `{}` is falsy, the original code here will raise an error.

I think if `schema['items']` is `{}`, then items should just be `{}`

* Apply suggestion from @CISC

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Add tests for arrays with empty items

Add two unit tests to `tests/test-json-schema-to-grammar.cpp` that validate handling of arrays when 'items' is an empty schema and when 'prefixItems' is present alongside an empty 'items'. Both tests expect the same generated grammar, ensuring the JSON Schema->grammar conversion treats an empty 'items' schema (and the presence of 'prefixItems') correctly and covering this edge case.

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

Reduce level of content parser warning message to avoid log spam on non-debug verbosity (#20347)

do not return if template parse failed

add arg to enable parallel tool call

common : fix incorrect uses of stoul (#20313)
# Conflicts:
#	common/arg.cpp
#	src/llama-grammar.cpp

Add support for MiroThinker with new jinja template

common/parser: handle reasoning budget (#20297)

* v1

* Finished!

* Handle cli

* Reasoning sampler

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Less explosive terminology :)

* Add utf-8 case and tests

* common : migrate reasoning budget sampler to common

* cont : clean up

* cont : expose state and allow passing as initial state

* cont : remove unused imports

* cont : update state machine doc string

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Alde Rojas <hello@alde.dev>

common/parser: use nlohmann::ordered_json to preserve parameter order (#20385)

common/parser: add GigaChatV3/3.1 models support (#19931)

Co-authored-by: Mishusha <pmv26021975@gmail.com>

common/parser: gracefully handle undetected tool parser, print error message. (#20286)

fix: prevent nullptr dereference (#20552)

common : fix iterator::end() dereference (#20445)
# Conflicts:
#	common/regex-partial.cpp

jinja : add capability check for object args (#20612)

common/parser: add `--skip-chat-parsing` to force a pure content parser. (#20289)

* Add `--force-pure-content` to force a pure content parser.

* Update common/arg.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

common : rework gpt-oss parser (#20393)

* common : rework gpt-oss parser

* cont : fix gpt-oss tests

* cont : add structured output test

* cont : rename final to final_msg

common : fix gpt-oss content removal (#20745)

common/parser: add proper reasoning tag prefill reading (#20424)

* Implement proper prefill extraction

* Refactor cli parameters, update docs, move reasoning budget sampler part to common/reasoning-budget.cpp

* Update tools/server/server-task.cpp

* refactor: move grammars to variant, remove grammar_external, handle exception internally

* Make code less C++y

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

chat : handle tool calls with no required args in TAG_WITH_TAGGED format (#20764)

* chat : handle tool calls with no required args in TAG_WITH_TAGGED format

* Update tests/test-chat.cpp [no ci]

Co-authored-by: Aldehir Rojas <hello@alde.dev>

---------

Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
Co-authored-by: Aldehir Rojas <hello@alde.dev>

common/parser : fix out_of_range crash in throw path (#20424 regression) (#20777)

* chat : fix out_of_range crash in throw path (#20424 regression)

#20424 introduced effective_input = generation_prompt + input, but the
throw path uses input.substr(result.end) where result.end is a position
within effective_input. Every thinking model with a non-empty
generation_prompt crashes with std::out_of_range instead of the intended
error message.

Test crashes on unpatched master, passes with fix:

  cmake -B build -DLLAMA_BUILD_TESTS=ON -DLLAMA_BUILD_TOOLS=OFF
  cmake --build build --target test-chat
  ./build/bin/test-chat

* Update test-chat.cpp

* Update test-chat.cpp

* Update test-chat.cpp

---------

Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
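
The indexing rule behind the fix above can be reduced to a small sketch. The function name is hypothetical; the point is that a position computed against `generation_prompt + input` must index that same concatenated string, because calling `input.substr(end)` with such a position can throw `std::out_of_range`.

```cpp
#include <string>

// Hypothetical sketch: slice the unparsed remainder using the same string
// the parser position was computed against.
std::string remaining_after(const std::string& generation_prompt,
                            const std::string& input, size_t end) {
    std::string effective_input = generation_prompt + input;
    // Using input.substr(end) here would throw whenever
    // end > input.size(), i.e. for any non-empty generation_prompt.
    return effective_input.substr(end);
}
```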

jinja : fix heap OOB read in value equality comparison (#20782)

Address GHSA-q9j6-4hhc-rq9p and GHSA-2q4c-9gq5-5vfp.

The three-iterator overload of std::equal in value_array_t::equivalent()
and value_object_t::equivalent() reads past the end of the shorter
container when comparing arrays or objects of different lengths.

Use the four-iterator overload (C++14) which checks both range lengths.

Found-by: Pwno
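
The overload difference the message describes can be shown in isolation. This is an illustrative sketch, not the actual `value_array_t` code: the three-iterator `std::equal(a.begin(), a.end(), b.begin())` assumes the second range is at least as long as the first, while the four-iterator overload (C++14) checks both lengths and simply returns false on a mismatch.

```cpp
#include <algorithm>
#include <vector>

// Safe equality for containers of possibly different lengths, using the
// four-iterator std::equal overload that never reads past either end.
template <typename T>
bool equivalent(const std::vector<T>& a, const std::vector<T>& b) {
    return std::equal(a.begin(), a.end(), b.begin(), b.end());
}
```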

common : fix typo in debug log ('extracft' -> 'extract') (#20807)

common/parser: fix nasty bug causing subtle corruption of generation prompt (#20825)

jinja : refactor token advancement (#20864)

* refactor token advancement

* exercise sub-expressions

common/autoparser : detect reasoning markers when enable_thinking changes system prompt (#20859)

common : replace wrap_for_generation with a prefix convenience function and fix gpt-oss (#20912)

jinja: fix macro with kwargs (#20960)

* jinja: fix macro with kwargs

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* fix newline problem

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

common : inhibit lazy grammar sampler while reasoning is active (#20970)

* common : inhibit grammar while reasoning budget is active

* cont : update force_pos in accept

* cont : fix tests

* cont : tweak should apply logic

* cont : return early not using grammar sampler

* Add tests

* cont : prevent backend sampling when reasoning budget enabled

* cont : fix typo

---------

Co-authored-by: Piotr Wilkin <piotr.wilkin@syndatis.com>
# Conflicts:
#	common/reasoning-budget.h
#	common/sampling.cpp
#	tools/cli/cli.cpp
#	tools/server/server-common.cpp
#	tools/server/server-task.cpp

common/parser: fix reasoning whitespace bugs + extra parser tests (#21085)

* fix whitespace reasoning issues + add reconstruction tests

* Proper fix

* fix Nemotron autoparser test expectations to include newline in marker

common : add reasoning_format = none support to gpt-oss (#21094)

common/json-schema: fix: handle non-capturing groups (?:...) in JSON schema pattern converter (#21124)

The regex-to-grammar converter in _visit_pattern() crashes with SIGSEGV
when a JSON schema "pattern" field contains a non-capturing group (?:...).

Root cause: when the parser sees '(' followed by '?', it pushes a warning
but does not advance past '?:'. The recursive transform() call then
interprets '?' as a quantifier and calls seq.back() on an empty vector,
causing undefined behavior.

This commonly occurs when serving OpenAI-compatible tool calls from
clients that include complex regex patterns in their JSON schemas (e.g.,
date validation patterns like ^(?:(?:\d\d[2468][048]|...)-02-29|...)$).

The fix:
- Skip '?:' after '(' to treat non-capturing groups as regular groups
- For unsupported syntax (?=, ?!, etc.), skip to matching ')' safely,
  handling escaped characters to avoid miscounting parenthesis depth
- Adjust the ')' unbalanced-parentheses check using direct char
  comparisons instead of substr
- Add test cases for non-capturing groups (C++ only, as the JS/Python
  implementations do not yet support this syntax)
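
The first bullet of the fix can be sketched as follows. This is a hypothetical helper, not the actual `_visit_pattern()` code: after consuming `(`, it skips a `?:` so the recursive transform never sees `?` as a quantifier applied to an empty sequence.

```cpp
#include <string>

// Returns the index of the first character of the group body, treating a
// non-capturing group "(?:...)" like a regular group "(...)".
size_t group_body_start(const std::string& pattern, size_t lparen) {
    size_t i = lparen + 1;
    if (i + 1 < pattern.size() && pattern[i] == '?' && pattern[i + 1] == ':') {
        i += 2; // skip "?:" so the body starts after the marker
    }
    return i;
}
```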

common/parser: fix handling of tool definition with missing properties key (#21128)

jinja : handle empty expressions correctly (#20913)

* Reject empty computed member expressions before returning slices[0] from parse_member_expression_arguments().

* Treat empty computed member expressions with Jinja2 undefined semantics

Treat empty computed member expressions like `a[]` as undefined instead of
raising a parser error, to match Jinja2 behavior.

- return a noop expression for empty computed member arguments
- return undefined when a computed member key evaluates to undefined
- add Jinja tests covering `a[]|default('fallback')` and `a[] is undefined`

* Handle undefined computed member properties

Move undefined-property handling to the common member access path, and add a test covering `a[undefined] is undefined`.

* Use default undefined value in member access

Initialize val and then return it when property is undefined.

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* empty statement parses to blank_expression instead of noop_statement

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

common : gpt-oss handle builtin and unsolicited tool calls (#21213)

fix: tool call parsing for LFM2 and LFM2.5 models (#21242)

* fix: tool call parsing for LFM2 and LFM2.5 models'

* refactor: add test / break out lfm2 and lfm2.5 parsing logic
# Conflicts:
#	common/chat.cpp

Relax prefill parser to allow space. (#21240)

* Relax prefill parser to allow space.

* Move changes from prefix() to parser generation

* Only allow spaces if we're not having a pure content parser next

common : add commentary rules for gpt-oss-20b (#21286)

add reasoning budget

model, mtmd: fix gguf conversion for audio/vision mmproj (#21309)

* fix gguf conversion for audio/vision mmproj

* fix test
# Conflicts:
#	convert_hf_to_gguf.py
#	examples/eval-callback/eval-callback.cpp
#	examples/mtmd/CMakeLists.txt
#	examples/mtmd/clip-impl.h
#	examples/mtmd/mtmd.cpp
#	gguf-py/gguf/constants.py
#	gguf-py/gguf/gguf_writer.py
#	gguf-py/gguf/tensor_mapping.py
#	src/CMakeLists.txt
#	src/llama-arch.cpp
#	src/llama-arch.h
#	src/llama-model.cpp
#	src/llama-model.h
#	src/llama-vocab.cpp
#	src/models/models.h
#	tests/test-llama-archs.cpp
#	tools/mtmd/clip-graph.h
#	tools/mtmd/clip-model.h
#	tools/mtmd/clip.cpp
#	tools/mtmd/models/models.h

fix: gemma 4 template (#21326)

chat : avoid including json in chat.h (#21306)

jinja: coerce input for string-specific filters (#21370)

common : fix tool call type detection for nullable and enum schemas (#21327)

* common : fix tool call type detection for nullable and enum schemas

* common, tests : fix grammar delegation for nullable/enum schemas and add tests

Fix enum type inference to scan all enum values (not just index 0) so
schemas like {"enum": [0, "celsius"]} correctly detect string type.

Fix schema_delegates in peg-parser to handle nullable type arrays
(["string", "null"]) and typeless enum schemas in raw mode, allowing
the tagged parser to use raw text instead of JSON-formatted strings.

Add test cases for Qwen3-Coder (TAG_WITH_TAGGED format):
- nullable string ["string", "null"]
- nullable string with null first ["null", "string"]
- nullable integer ["integer", "null"]
- enum without explicit type key
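
The enum inference fix above boils down to scanning every value rather than only index 0. A minimal sketch with a stand-in value type (the real code inspects JSON values):

```cpp
#include <vector>

// Stand-in for the JSON value kinds relevant to type inference.
enum class vtype { number, string };

// Detect whether any enum value is a string, so a schema like
// {"enum": [0, "celsius"]} is treated as containing string type.
bool enum_contains_string(const std::vector<vtype>& enum_values) {
    for (vtype v : enum_values) {
        if (v == vtype::string) return true;
    }
    return false;
}
```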

common/parser: fix call ID detection (Mistral parser mostly) + atomicity for tag-json parsers (#21230)

* Fix call ID detection (Mistral parser mostly) + atomicity for tag-json parsers

* Rename

* Update common/chat-auto-parser-generator.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

common : add gemma 4 specialized parser (#21418)

* common : add gemma4 dedicated parser

* cont : add '<|tool_response>' as eog

* cont : emit JSON from Gemma4 tool call AST

* cont : more fixes

* cont : refactor convert function

* cont : refine rules and mapping

* cont : add more tests

* cont : clean up

* cont : remove autoparser gemma4 implementation

* cont : more cleanup

* cont : rename gemma4.jinja to match the others

* cont : add custom template to support interleaved thinking

* cont : preserve reasoning in model turns

* cont : fix initializer error

* cont : fix unused vars

* cont : fix accidental static

* cont : fix specialized_template signature

* fix extra semicolon

* remove debug line and extra space [no ci]

fix reasoning budget

parser: fix MiniMax handling (#21573)

jinja : support ensure_ascii=true, string repetition and int/float self-filtering (#21623)

* feat: jinja engine improvements for reka-edge

Port three Jinja engine improvements needed for the reka-edge model:
1. Python-style string repetition ("ab" * 3 → "ababab")
2. ensure_ascii=true support for tojson filter (escapes non-ASCII to \uXXXX)
3. int() builtin on value_int_t (identity, needed for Reka Edge template)

* fix: escape invalid utf8 bytes when ensure_ascii=true

The json_ensure_ascii_preserving_format function does not correctly
handle an edge case where if UTF-8 parsing fails, it adds the non-ascii
character back to the output as a raw byte.

This commit fixes that by adding the unicode standard replacement
character \\ufffd to the output instead. This is the standard behavior
for various programming languages like Python, Rust, Go, etc.

* chore: address PR comments

1. Add todo comment for supporting string repetition for array/tuples
2. Add support for float identity operation
3. Move invalid ascii test case to test_fuzzing

* chore: accept suggestion for common/jinja/value.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
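
The `ensure_ascii` escaping rule above can be sketched on already-decoded code points (the real code also has to decode UTF-8 first, substituting U+FFFD for invalid bytes). Function name is illustrative; BMP-only for brevity.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Escape every non-ASCII code point to \uXXXX, leaving ASCII untouched.
std::string escape_non_ascii(const std::vector<uint32_t>& codepoints) {
    std::string out;
    for (uint32_t cp : codepoints) {
        if (cp < 0x80) {
            out += (char) cp;
        } else {
            char buf[8];
            std::snprintf(buf, sizeof(buf), "\\u%04x",
                          (unsigned int) (cp & 0xFFFF));
            out += buf;
        }
    }
    return out;
}
```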

common : simplify autoparser tagged parser rules (#21216)

* common : simplify autoparser tagged parser rules

* cont : remove upper limit on optional args

* cont : revert changes to parsing at the end

* cont : undo arbitrary ordering of optional args

* cont : fix uninitialized required parameters

* revert to simplify merge

* re-apply patches

* restore flexible optional arg ordering tests

common : fix ambiguous grammar rule in gemma4 (#21661)

* common : fix ambiguous grammar rule in gemma4

* cont : fix missing comma...

common : enable reasoning budget sampler for gemma4 (#21697)

* fix: enable reasoning budget sampler for gemma4

Add thinking_start_tag and thinking_end_tag to
common_chat_params_init_gemma4(). Without these, the reasoning
budget sampler never activates for gemma4.

Make the newline after "thought" optional in the PEG parser to
handle budget=0 (sampler forces end tag before the newline).

Add test case for empty thinking block.

Fixes #21487

* use p.space() instead of p.optional(p.literal("\n")) in gemma4 thought parser

common : better align to the updated official gemma4 template (#21704)

fix: Fix broken structured output when using $refs in json_schema (#21699)

chat: dedicated DeepSeek v3.2 parser + "official" template (#21785)

Hide render_message_to_json warning

common/gemma4 : handle parsing edge cases (#21760)

common: skip reasoning budget sampler when no budget is requested (#21870)

* common: skip reasoning budget sampler when no budget is requested

After I added thinking_start_tag / thinking_end_tag for gemma4 in #21697, the reasoning budget sampler gets unconditionally created even when no budget is configured (the default -1). The same applies to kimi_k2, lfm2, lfm2_5, and ministral_3 which also set these tags. The budget gets converted to INT_MAX, so the sampler never actually forces any tokens but still runs per-token checks (start tag matching in IDLE state, token-to-piece conversion + UTF-8 checks in COUNTING state).

More importantly, the mere existence of the sampler (non-null rbudget) disables backend sampling. Backend sampling lets the GPU select tokens directly, avoiding a full logits transfer from GPU to CPU every token. This could explain the 30% speed regression reported in #21784 (98 t/s to 70 t/s on Vulkan).

So I added a reasoning_budget_tokens >= 0 check to the sampler creation condition. When the budget is unlimited, the sampler is not created, backend sampling stays enabled, and no per-token overhead is added. When a budget is explicitly set (0, 128, 1024, etc.), the sampler is created and works as before.

* common: preserve rbudget when grammar is lazy

Following up on the review feedback on #21870: keep the reasoning budget sampler when grammar_lazy is true, so the thinking-block grammar suppression from #20970 still works when tools are in use. This way, we only skip the sampler when both no budget is set AND grammar is not lazy.
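
The resulting creation condition can be stated in one line. A hypothetical sketch with illustrative names: the sampler is built only when a budget is explicitly requested (>= 0), or when the grammar is lazy so the thinking-block grammar suppression still applies with tools in use.

```cpp
// Decide whether the reasoning budget sampler should exist at all.
// When it does not, backend (GPU-side) sampling stays enabled and no
// per-token overhead is added.
bool should_create_rbudget(int reasoning_budget_tokens, bool grammar_lazy) {
    return reasoning_budget_tokens >= 0 || grammar_lazy;
}
```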

autoparser: support case of JSON_NATIVE with per-call markers (test case: Reka-Edge) (#21892)

* fix grammar

* fix add sampled token

---------

Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
Co-authored-by: firecoperana <firecoperana>
2026-04-22 10:04:13 +02:00
firecoperana
7b6507ddac server: fix usage stats (#1647)
Co-authored-by: firecoperana <firecoperana>
2026-04-17 07:27:47 +02:00
Kawrakow
539d1cf989 Disallow speculation for hybrid/recurrent models (#1645) 2026-04-16 17:21:44 +02:00
Samuel Oliveira Alves
470d3a3b5b Add support for parallel graphs to GLM MTP (#1637)
* mtp: fix split graph assert

* Add mtp split graph mode

* remove unused ffn function for unsupported mtp

* revert cuda context synchronization
2026-04-16 08:05:34 +02:00
dungquixote42
869b83bc49 Add Unicode allowlist (#1597)
* initial commit

* cleanup

* fix whitelist arg parsing and simplify keyword search state

* rename white* to allow*

* add vocab_pieces init function, rename update functions, delete accidentally added file

* delete temporary bias code

* auto-generate fill function with script data inside

* deduplicate allowlist unicode rule parsing

* minor cleanup

* delete unnecessary header

* refactor allowlist to support sequential rule sets via keywords

* add early exit for zero-rules case

* delete accidentally added file
2026-04-10 18:22:57 +02:00
Samuel Oliveira Alves
557b674f63 Add llama_context to MTP (#1601)
* wip: separate llama_context for MTP with graph reuse

* wip: fix KV cache desync with separate MTP context

* refactor: remove dead mtp logic code, encapsulate KV mirroring

* mtp-context: derive args directly from the main model's context

* mtp: fix kv cache positions

* clean small comments

* minor refactor for context shift
2026-04-09 15:33:56 +02:00
Samuel Oliveira Alves
3de81530c5 Allow tuning of the best args for speculative decoding. (#1595)
* wip: build spec tuner for specific args

* wip: test different reward system

* spec-tune: fix the reward to find best params given a good TPS

* spec-tune: refactor logic for its own file

* minor clean for comments and modules
2026-04-08 08:02:42 +02:00
firecoperana
5e8bb724ce server: support slot save/restore/erase for mtmd tokens and checkpoints (#1584)
Co-authored-by: firecoperana <firecoperana>
2026-04-05 08:41:04 +02:00
hksdpc255
46f9f0fb31 fix #1524 (#1543) 2026-03-29 18:50:09 +02:00
Samuel Oliveira Alves
1f3e832cb3 Improve mtp acceptance rate (#1499)
* wip: port MTP architecture

Ports the Multi-Token Prediction (MTP) architecture to the older `llama.cpp` codebase used by `ikllama`.

Changes include:
- Updating `llama_batch` to support `mtp_params`.
- Modifying `llama_decode_internal` (and `encode`) to handle MTP operations (Warmup, Update, Draft).
- Adding public APIs for MTP state management (`llama_set_draft_input_hidden_state`).
- Adapting the embedding extraction logic to skip MTP update passes.

* Refactors `server_slot` to support generic speculative decoding (MTP or Draft Model).

* core: enable hybrid outputs (logits + embeddings) for MTP support

* fix(mtp): correct KV-cache slot finding for updates

* fix(mtp): persist hidden states to prevent context corruption during drafting

* refactor(mtp): clean unused code

* fix(mtp): update server to new functions name

* fix(mtp): fix graph and save hidden state

* mtp: refactor integration, context params and kv cache search

* mtp: fix hidden state extraction and speculative acceptance flow

* server: fix MTP warmup for long prompts and reset token buffer

* llama: refactor MTP operation state to context parameters

* server: fix n_past calculation in MTP acceptance

* llama: fix mtp enable flags

* speculative: refactor MTP to use common_speculative interface

* context: remove unused signatures

* clip: fix deprecated enum-enum conversion warning

* common: fix format string crash in help message

* context: fix mtp activation logic

* llama: always use the extracted embedding

* llama: get all embeddings to kv cache

* llama: revert logit to not run mtp for not supported arch

* llama: allocate all the n_outputs for MTP

* wip

* server-context: get only the last embedding for hidden state

* ggml-backend: fix out-of-bounds array access in debug build

* server-context: run mtp kv update for each prompt batch

* revert segmentation fault fixes

* glm-mtp(feat): optimize graph embedding and recursive drafting
2026-03-25 10:20:22 +01:00
firecoperana
cdf9142aa5 fix grammar stack empty error for qwen3.5 (#1490)
* fix grammar stack empty error for qwen3.5

* Add to --help

---------

Co-authored-by: firecoperana <firecoperana>
2026-03-24 07:48:20 +01:00
firecoperana
0c9bc3ed28 server: support --minilog to log request message for completions/response/anthropic and response (#1477)
Co-authored-by: firecoperana <firecoperana>
2026-03-20 16:13:43 +01:00
firecoperana
f9b7fe9749 llama: add --dry-run option (#1462)
Co-authored-by: firecoperana <firecoperana>
2026-03-18 17:20:17 +01:00
dungquixote42
be2940f57a Adaptive P sampler: update review logic, delete old code comments, put prep stage after logit bias (#1386)
* simpler n_rewind logic, delete old comments

* use more consistent names, add updt_w_cur to json schema

* align comments

* refactor review logic, update struct/variable names

* revert cosmetic changes

* check enable/disable in llama_prep_adaptive_p_impl()

* delete extra whitespaces after statement

* show target in debug prints

* more concise debug print

* delete old comments

* update with loop instead of move()

* comment out all adaptive p debug prints

* more debug prints

* move review() variables: common_sampler struct -> common_sampler_review() args

* match n_unsent type

* fix merge bugs, delete adaptive p references in buffer_and_check_string_ban()

* restore accidental erasure

* Revert "adaptive p: collect probability before logit bias"

This reverts commit 1434878461.
2026-03-14 12:34:12 +01:00
firecoperana
433531ddae server : support multi-modal context checkpoints and prompt caching (#1398)
* server : support multi-modal context checkpoints and prompt caching

do not create checkpoint right after image processing

improve mtmd check for slot ops

fix context shift

do not abort if template parse failed

* change to debug message when detecting ban token

---------

Co-authored-by: firecoperana <firecoperana>
2026-03-13 08:07:57 +01:00
SneedwareInc
4a247593dc Make string ban more robust and add regex ban (#1243)
* Test new ctx_sampling->n_rewind system

* CRLF quickfix

* Adaptive p check

* merge banned_n

* Fix attempt 1

* Fix attempt 2
2026-03-11 15:30:27 +01:00
firecoperana
ab1d74074b common : introduce composable PEG parser combinators for chat parsing and new jinja template engine (#1369)
---------

Co-authored-by: Piotr Wilkin <piotr.wilkin@syndatis.com>

common : add nemotron 3 parsing (#18077)

common : add parser for ministral/mistral large 3/devstral 2 (#17713)

common : default content to an empty string (#18485)

chat: make tool description and parameters optional per OpenAI spec (#18478)

Per the OpenAI API specification, both 'description' and 'parameters'
fields in tool function definitions are optional. Previously, the parser
would throw an exception if these fields were missing.

Attempts to fix #17667

common : implement new jinja template engine (#18462)
---------

Co-authored-by: Alde Rojas <hello@alde.dev>
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

jinja: correct member access rule (#18905)

jinja : fix lexing of float literals with sign (#18901)

jinja : add missing tojson filter for bool (#18900)

jinja : attribute support for join, map and sort (#18883)

jinja : fix object item order (and properly implement dictsort) (#18904)

tests : add test-jinja -py option for cross-checking (#18906)

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

ci : run test-jinja -py on high perf [no ci] (#18916)

jinja : fix undefined keys and attributes and int/float as bool (#18924)

jinja: support none|string (#18995)

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

jinja : implement mixed type object keys (#18955)

---------

Co-authored-by: Xuan Son Nguyen <son@huggingface.co>

jinja : undefined should be treated as sequence/iterable (return string/array) by filters/tests (#19147)

`tojson` is not a supported `undefined` filter

keep it DRY and fix some types

jinja : do not pass empty tools and add some none filters (#19176)

jinja : add unordered_map include to value.h [no ci] (#19205)

jinja : add missing 'in' test to template engine (#19004) (#19239)

The jinja template parser was missing the 'in' test from
global_builtins(), causing templates using reject("in", ...),
select("in", ...), or 'x is in(y)' to fail with
"selectattr: unknown test 'in'".

This broke tool-calling for Qwen3-Coder and any other model
whose chat template uses the 'in' test.

Added test_is_in supporting array, string, and object containment
checks, mirroring the existing 'in' operator logic in runtime.cpp.

Includes test cases for all three containment types plus
reject/select filter usage.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Sid Mohan <sidmohan0@users.noreply.github.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Xuan Son Nguyen <son@huggingface.co>

Add Jinja support for "indent" string filter (#19529)

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

add vendor

refactor chat

server : support preserving reasoning_content in assistant message (#18994)

chat : fix translategemma crash on common_chat_format_example (#19019)

chat: fix language input for translategemma (#19052)

Co-authored-by: Aldehir Rojas <hello@alde.dev>

---------

Co-authored-by: Aldehir Rojas <hello@alde.dev>

chat: fix case where template accepts type content only (#19419)

mtmd : chat : Fix extra \n between text and media marker (#19595)

Thanks to @tugot17 for detecting and reporting the issue.

For vision models (e.g. LFM2.5-VL-1.6B and Qwen/Qwen3-VL-4B-Instruct) `llama-mtmd-cli` produces output identical to the HF implementation.

However `llama-server` doesn't. I traced it down to an extra newline
inserted after `<__media__>`.

This happens in `to_json_oaicompat`, which treats media markers as text
and joins all parts with a `\n` separator.

The PR introduces a new type, `media_marker`, and uses it for media markers.
Extra logic is added to prevent insertion of newlines before and after
media markers.

With this change the number of input tokens is identical to the HF
implementation, and as a result the output is also identical.

I explored other ways to address the issue:
* completely remove the `\n` between text parts in `to_json_oaicompat`
* merge text messages in server-common.cpp before sending them to `to_json_oaicompat`

Please propose alternative ways of fixing this issue.

Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>

---------

Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
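The separator logic described in the commit above can be sketched as follows. This is an illustrative Python sketch only, assuming a simplified `(kind, text)` part representation; the actual `to_json_oaicompat` is C++ server code, and the names here are hypothetical.

```python
# Hypothetical sketch of the fix: join text parts with '\n', but never
# insert a separator immediately before or after a media_marker part.
MEDIA_MARKER = "<__media__>"

def join_parts(parts):
    """parts: list of (kind, text) tuples, kind is 'text' or 'media_marker'."""
    out = []
    for i, (kind, text) in enumerate(parts):
        # only separate two adjacent plain-text parts
        if out and kind != "media_marker" and parts[i - 1][0] != "media_marker":
            out.append("\n")
        out.append(text)
    return "".join(out)
```

With this rule, `text + marker + text` is joined without any extra newlines, matching the token count of the HF implementation as the commit describes.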

common : merge qwen3-coder and nemotron nano 3 parsers (#19765)

common : fix improper trimming in XML parser on complete message (#19805)

Co-authored-by: Jules LEIDELINGER <11395311+julio75012@users.noreply.github.com>

jinja: correct stats for tojson and string filters (#19785)

jinja : correct default size for string slices (#19913)

common : handle unicode during partial json parsing (#16526)

common : fix json schema with '\' in literals (#17307)

add back qwen_coder_xml and mirothinker

Co-authored-by: Aldehir Rojas <hello@alde.dev>
2026-03-09 11:03:33 +01:00
dungquixote42
a903409a5e fix adaptive p sampler rewinding too far back (#1359)
* fix adaptive p sampler rewinding too far back

* update comments

* correct default value for total_weight, more comments

* new variables/names

* update comment for n_rewind

* move null pointer check back to common_sampler_review()

* refactor weighted_sum and total_weight to vector<pair>, better boundary check in llama_review_adaptive_p_impl()
2026-03-04 13:26:25 +01:00
Kawrakow
fd16a418de Fix clang warnings on macOS (#1354)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2026-03-03 16:27:16 +01:00
firecoperana
8f9e19d57c server: add checkpoint tolerance and fix grammar_trigger init (#1346)
Co-authored-by: firecoperana <firecoperana>
2026-03-02 07:45:32 +01:00
firecoperana
3fac78c48b server: enable checkpoint for recurrent models (#1310)
* server: enable checkpoint for recurrent models

create checkpoint after cancel

fix ban string and rm context during rewind

add checkpoint interval

only save recurrent cache

* save checkpoint during pp

---------

Co-authored-by: firecoperana <firecoperana>
2026-02-26 06:51:18 +01:00
Joshua Jolley
68431b049a server: propagate task index to response objects for batch requests (#1303)
When multiple prompts are sent in a single /v1/completions request,
each response needs to carry the correct index so the client can
match results to their corresponding prompts. The index field was
not being set on partial responses, final responses, or embedding
responses, causing batch results to all report index 0.

Set res->index = slot.task->index in send_partial_response,
send_final_response, and send_embedding.

Generated with [Devin](https://cli.devin.ai/docs)

Co-authored-by: Joshua Jolley <jjolley@clearwateranalytics.com>
Co-authored-by: Devin <noreply@cognition.ai>
2026-02-24 15:39:38 +01:00
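The index propagation above matters because a client batching several prompts into one `/v1/completions` request must map each result back to its prompt. A minimal client-side sketch, assuming an illustrative response shape with `index` and `text` fields (names follow the OpenAI-style API, but this is not the server's code):

```python
# Sketch: reorder batched completion results by their 'index' field so
# each result lines up with the prompt that produced it. Responses may
# arrive in any order; without a correct index they would all map to 0.
def match_results(prompts, responses):
    """responses: list of dicts with 'index' and 'text' (illustrative shape)."""
    ordered = [None] * len(prompts)
    for r in responses:
        ordered[r["index"]] = (prompts[r["index"]], r["text"])
    return ordered
```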
Samuel Oliveira Alves
09a88c9ae5 Add MTP decoding support for GLM-4.x MoE (#1270)
* wip: port MTP architecture

Ports the Multi-Token Prediction (MTP) architecture to the older `llama.cpp` codebase used by `ikllama`.

Changes include:
- Updating `llama_batch` to support `mtp_params`.
- Modifying `llama_decode_internal` (and `encode`) to handle MTP operations (Warmup, Update, Draft).
- Adding public APIs for MTP state management (`llama_set_draft_input_hidden_state`).
- Adapting the embedding extraction logic to skip MTP update passes.

* Refactors `server_slot` to support generic speculative decoding (MTP or Draft Model).

* core: enable hybrid outputs (logits + embeddings) for MTP support

* fix(mtp): correct KV-cache slot finding for updates

* fix(mtp): persist hidden states to prevent context corruption during drafting

* refactor(mtp): clean unused code

* fix(mtp): update server to new function names

* fix(mtp): fix graph and save hidden state

* mtp: refactor integration, context params and kv cache search

* mtp: fix hidden state extraction and speculative acceptance flow

* server: fix MTP warmup for long prompts and reset token buffer

* llama: refactor MTP operation state to context parameters

* server: fix n_past calculation in MTP acceptance

* llama: fix mtp enable flags

* speculative: refactor MTP to use common_speculative interface

* context: remove unused signatures

* clip: fix deprecated enum-enum conversion warning

* common: fix format string crash in help message

* context: fix mtp activation logic
2026-02-22 18:14:39 +01:00
firecoperana
66323b92f7 Qwen3.5-MoE: fix regenerating message error (#1295)
Co-authored-by: firecoperana <firecoperana>
2026-02-21 18:24:12 +01:00
dungquixote42
0f411b02e2 Fix adaptive p sampler bug with string ban (#1287)
* adaptive p: update internal state only if not rewinding

* adaptive p: conditional update for speculative decoding

* adaptive p: refactor to rewind instead of update

* adaptive p fix: better comments

* fix rewind check

* add record to handle multi-token rewind

* better comment
2026-02-20 07:11:36 +01:00
rkozuch
b855bf92de Fix slot prompt updating. (#1285)
Co-authored-by: Rkozuch <you@example.com>
2026-02-19 08:15:49 +01:00
Samuel Oliveira Alves
88f98c891d server: add string ban in speculative path (#1274) 2026-02-17 12:33:28 +01:00
RodriMora
102f77b7d3 server: add /v1/responses support (#1184)
* server: add /v1/responses support

* server: fix Responses API model fallback and SSE branching
2026-02-14 08:30:18 +01:00
firecoperana
1cb7e1bf39 spec : add self speculative decoding, ngram and refactor (#1261)
* spec : add self speculative decoding and ngram-mod and refactor

common : use common_ prefix for common library function

llama : use LLAMA_TOKEN_NULL

spec : add self speculative decoding (no draft model required) + refactor

spec : add ngram-mod

spec : various improvements to ngram-map + docs

spec : fix the check-rate logic of ngram-simple

common : add common_speculative_is_compat()

spec : simplify time measurement using common_time_meas

refactor common_sampler_init

refactor common_token_to_piece

refactor and fix cur_p bug

clean up

* spec : remove check rate

* spec: show warnings instead of aborting

---------

Co-authored-by: firecoperana <firecoperana>
Co-authored-by: Sascha Rogmann <59577610+srogmann@users.noreply.github.com>
2026-02-13 19:04:55 +01:00
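The self-speculative "ngram" idea above needs no draft model: the generated history itself serves as the draft source. A hedged sketch of the basic technique (often called prompt-lookup decoding) — the function and parameter names here are illustrative, not the commit's actual API:

```python
# Sketch of ngram-based self-speculation: find an earlier occurrence of
# the current trailing n-gram in the token history and propose the
# tokens that followed it as a draft, to be verified by the main model.
def ngram_draft(tokens, n=3, max_draft=4):
    if len(tokens) < n:
        return []
    key = tuple(tokens[-n:])
    # scan backwards, excluding the trailing occurrence itself
    for i in range(len(tokens) - n - 1, -1, -1):
        if tuple(tokens[i:i + n]) == key:
            return tokens[i + n:i + n + max_draft]
    return []
```

If the history repeats itself (common in code and structured output), the draft is accepted often and decoding speeds up; otherwise the draft is simply rejected.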
firecoperana
f1ccf340dd fix model name missing in final response (#1250)
Co-authored-by: firecoperana <firecoperana>
2026-02-07 18:31:39 +02:00
firecoperana
8d952ff183 Server: add string ban (#1185)
* server: add string ban

* increase rewind limit

* init n_buffer

---------

Co-authored-by: firecoperana <firecoperana>
2026-02-05 08:12:34 +02:00
gapeleon
17d101863d server: add dynamic control vector management endpoints (#1223)
This implements the ability to load, unload, and scale control vectors
(representation engineering) mid-inference, following the existing
task-queue pattern used by LoRA adapters.

New Endpoints:
- GET  /control-vectors
- POST /control-vectors/load
- POST /control-vectors/unload
- POST /control-vectors/apply (handles scaling)

Technical Notes:
- Centralizes vector aggregation logic to share implementation between
  load, unload, and apply tasks.
- Vectors are applied globally to the model context.
- Enforces dimension validation on load to safely reject incompatible
  vectors.

Co-authored-by: Gapeleon <gapeleon@users.noreply.github.com>
2026-02-04 16:07:18 +02:00
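The dimension validation the commit above enforces on load can be sketched simply: a control vector must match the model's embedding width or it is rejected. This is an illustrative sketch; the names (`n_embd`, `vec`) follow llama.cpp conventions but the function is hypothetical, not the server's actual code.

```python
# Sketch of the load-time check: reject a control vector whose flat size
# is not a multiple of the model's embedding dimension, since it could
# not be applied per-layer to the hidden states.
def validate_control_vector(vec, n_embd):
    if n_embd <= 0 or len(vec) % n_embd != 0:
        raise ValueError(
            f"control vector size {len(vec)} incompatible with n_embd {n_embd}")
    return len(vec) // n_embd  # number of layers the vector covers
```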
firecoperana
d71a3ec315 Server: refactor and rename functions (#1151)
* Server: rename functions and refactor code

rename functions

refactor update slots

rename params_base

rename timings

* change

* Revert kv cache name changes

* Revert 2

* fix test build error

---------

Co-authored-by: firecoperana <firecoperana>
2026-01-18 08:16:57 +02:00
firecoperana
672df48ed1 server: keep logit bias unchanged when client does not set it (#1144)
Co-authored-by: firecoperana <firecoperana>
2026-01-13 18:08:09 +02:00
hksdpc255
e1c4c4a495 Fix Anthropic Messages API (#1136)
* server: stop processing the prompt when client disconnects

implement generator-based API for task results

Update httplib.h to 0.27.0

Fix embedding error

Stop prompt processing when disconnected

* Port upstream https://github.com/ggml-org/llama.cpp/pull/18551

* add back anthropic

* Fix merge issue caused by github webui

---------

Co-authored-by: firecoperana <firecoperana>
2026-01-13 08:37:29 +02:00
firecoperana
1a461525d5 server: stop processing the prompt when client disconnects (#1134)
implement generator-based API for task results

Update httplib.h to 0.27.0

Fix embedding error

Stop prompt processing when disconnected

Co-authored-by: firecoperana <firecoperana>
2026-01-13 07:56:59 +02:00
Kawrakow
d3e3ad40f9 Compiler warning and white space 2026-01-12 19:06:17 +02:00
firecoperana
c03ee1a4d2 server: improve speed of speculative decoding (#1119)
* server: improve speed of speculative decoding

change logs

rpc: add recompute

spec dec fix

* Fix n_batch_size not set to context size for draft model

---------

Co-authored-by: firecoperana <firecoperana>
2026-01-10 08:01:22 +02:00
dungquixote42
52ad1c6421 Implement Adaptive-P Sampler (#1100)
* initial implementation of adaptive-p sampler

* explicitly mark candidates unsorted + cleanup qualifiers

* cosmetic update

* reorg prototypes

* lockstep with mainline

* add _impl for _init + reorg

* add LLAMA_API to prototypes

* update sharpness to 10

* lockstep: rng seed

* delete llama_sampling member in llama_sampler_adaptive_p

* fix LLAMA_API return type

* lockstep: rng seed cont

* actually correct implementation

* lockstep: sorting behavior

* const -> constexpr for known constants

* add missing space

* fix softmax usage in adaptive p sampler

* cosmetic changes

* implement do-not-sort version of softmax

* simplify rng seed, add static to constexpr

* refactor: remove iface + use shared rng + use actually original probabilities

* adaptive-p: add dedicated rng back in

* fix initial max_logit + add float vector to adaptive p sampler context + stochastic sampling

* adaptive-p: fuse first softmax with transformation

* adaptive-p: implement binary search selection

* adaptive-p: update comment
2026-01-10 07:58:53 +02:00
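The "binary search selection" step the adaptive-p commit names is a standard way to draw from a categorical distribution: build the cumulative sum of the (transformed) probabilities, then binary-search it with a uniform random draw. A hedged sketch of that step alone — the adaptive-p transform itself is omitted, and the names here are illustrative:

```python
import bisect
import itertools
import random

# Sketch of CDF binary-search sampling: O(log n) token selection once
# the cumulative distribution is built, instead of a linear scan.
def sample_index(probs, rng):
    cdf = list(itertools.accumulate(probs))
    r = rng.random() * cdf[-1]          # scale in case probs are unnormalized
    return bisect.bisect_right(cdf, r)  # first index whose cdf exceeds r
```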
firecoperana
2a633c4357 server: exclude thinking tokens when finding the slot (#1079)
refactor find slot

enable by default

Fix load prompt

rename variables

Co-authored-by: firecoperana <firecoperana>
2025-12-22 09:46:45 +01:00
firecoperana
0e91b89cd3 Refactor chat and server file (#1062)
* Add alternative log functions

* chat: fix int overflow, prevent size calculation in float/double (#17357)

* chat: fix int overflow, prevent size calculation in float/double

* Update common/chat.cpp

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

* common : move all common_chat_parse_* to chat-parser.cpp. (#17481)

# Conflicts:
#	common/chat.cpp

* server: split server.cpp code into server/common/task/queue/context

* Fix compiler warning

* Clean up code

* common: use native MultiByteToWideChar

* move server prompt to server task

* Clean code

* delete utils.hpp

---------

Co-authored-by: firecoperana <firecoperana>
Co-authored-by: Xuan-Son Nguyen <son@huggingface.co>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Co-authored-by: DAN™ <dranger003@gmail.com>
2025-12-15 08:27:20 +01:00