ik_llama.cpp/examples/server/server-task.h
firecoperana e0596bf614 Autoparser - complete refactoring of parser architecture (#1376)
* Autoparser - complete refactoring of parser architecture

Autoparser: add optional argument reshuffle capability

Autoparser: True streaming (#20177)

* Relax atomicity constraint for nicer, more pleasant True Streaming parsing

* Whitespace

* Remove redundant atomics

Revert to OAI-compatible args (#20213)

* Revert to OAI-compatible args

* Apply workaround::func_args_not_string

Fix structured outputs (#20223)

* Fix structured outputs

* Update common/chat-auto-parser-generator.cpp

Co-authored-by: Aldehir Rojas <hello@alde.dev>

---------

Co-authored-by: Aldehir Rojas <hello@alde.dev>

Fix compile bug (#20203)

* Fix compile bug

* Update common/chat-auto-parser-helpers.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
# Conflicts:
#	common/chat-auto-parser-helpers.cpp

common : gracefully handle incomplete output (#20191)

* common : handle incomplete UTF-8 at end of input in PEG parser

* cont : if reached end prematurely, emit needs_more_input to propagate partial output

* cont: refactor peg parse context to add lenient flag

* cont : remove partial flag, keep lenient flag
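
The incomplete-output handling hinges on spotting a truncated multi-byte sequence at the end of the buffer. A minimal sketch, as a standalone helper rather than the actual PEG parser code:

    #include <cstddef>
    #include <string>

    // Number of bytes at the end of `s` that begin an incomplete UTF-8
    // sequence (0 if the tail is complete). A non-zero result means the
    // parser should hold those bytes back and signal needs_more_input.
    static size_t incomplete_utf8_tail(const std::string & s) {
        const size_t n = s.size();
        for (size_t i = 1; i <= 4 && i <= n; i++) {
            const unsigned char c = s[n - i];
            if ((c & 0xC0) == 0x80) {
                continue; // continuation byte, keep scanning backwards
            }
            const size_t need = (c & 0xE0) == 0xC0 ? 2
                              : (c & 0xF0) == 0xE0 ? 3
                              : (c & 0xF8) == 0xF0 ? 4 : 1;
            return need > i ? i : 0; // lead byte with too few bytes after it
        }
        return 0;
    }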

PEG parser for LFM2 (#20251)

* PEG parser for LFM2

* Simplify using python_value()

common: map developer role to system (#20215)

* Map developer role to system
* Simplify

common: consolidate PEG string parsers (#20263)

* common : consolidate PEG string parsers
* cont : fix json_string_content()

examples : fix empty items in json_schema_to_grammar.py [no ci] (#19968)

* Fix logic for retrieving schema items in `json_schema_to_grammar.py`

If `schema['items']` is `{}` and `prefixItems` is not in `schema`, then, as `{}` is falsy, the original code here will raise an error.

I think if `schema['items']` is `{}`, then items should just be `{}`

* Apply suggestion from @CISC

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Add tests for arrays with empty items

Add two unit tests to `tests/test-json-schema-to-grammar.cpp` that validate handling of arrays when 'items' is an empty schema and when 'prefixItems' is present alongside an empty 'items'. Both tests expect the same generated grammar, ensuring the JSON Schema->grammar conversion treats an empty 'items' schema (and the presence of 'prefixItems') correctly, covering this edge case.

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
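
For illustration, the behavior the new tests pin down; the schemas and the json_schema_to_grammar(schema) call shape are assumed from the common helpers, not copied from the tests:

    #include "json-schema-to-grammar.h" // assumed header for the common helper
    #include <cassert>
    #include <nlohmann/json.hpp>

    int main() {
        using json = nlohmann::ordered_json;
        // 'items' is an empty schema: array items are unconstrained.
        json a = json::parse(R"({"type": "array", "items": {}})");
        // Empty 'items' with 'prefixItems' present alongside it.
        json b = json::parse(R"({"type": "array", "prefixItems": [], "items": {}})");
        // Per the tests, both convert instead of raising, and to the same grammar.
        assert(json_schema_to_grammar(a) == json_schema_to_grammar(b));
        return 0;
    }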

Reduce level of content parser warning message to avoid log spam on non-debug verbosity (#20347)

do not return if template parse failed

add arg to enable parallel tool call

common : fix incorrect uses of stoul (#20313)
# Conflicts:
#	common/arg.cpp
#	src/llama-grammar.cpp

Add support for MiroThinker with new jinja template

common/parser: handle reasoning budget (#20297)

* v1

* Finished!

* Handle CLI

* Reasoning sampler

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* Less explosive terminology :)

* Add utf-8 case and tests

* common : migrate reasoning budget sampler to common

* cont : clean up

* cont : expose state and allow passing as initial state

* cont : remove unused imports

* cont : update state machine doc string

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
Co-authored-by: Alde Rojas <hello@alde.dev>

common/parser: use nlohmann::ordered_json to preserve parameter order (#20385)

common/parser: add GigaChatV3/3.1 models support (#19931)

Co-authored-by: Mishusha <pmv26021975@gmail.com>

common/parser: gracefully handle undetected tool parser, print error message. (#20286)

fix: prevent nullptr dereference (#20552)

common : fix iterator::end() dereference (#20445)
# Conflicts:
#	common/regex-partial.cpp

jinja : add capability check for object args (#20612)

common/parser: add `--skip-chat-parsing` to force a pure content parser. (#20289)

* Add `--force-pure-content` to force a pure content parser.

* Update common/arg.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

common : rework gpt-oss parser (#20393)

* common : rework gpt-oss parser

* cont : fix gpt-oss tests

* cont : add structured output test

* cont : rename final to final_msg

common : fix gpt-oss content removal (#20745)

common/parser: add proper reasoning tag prefill reading (#20424)

* Implement proper prefill extraction

* Refactor cli parameters, update docs, move reasoning budget sampler part to common/reasoning-budget.cpp

* Update tools/server/server-task.cpp

* refactor: move grammars to variant, remove grammar_external, handle exception internally

* Make code less C++y

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>

chat : handle tool calls with no required args in TAG_WITH_TAGGED format (#20764)

* chat : handle tool calls with no required args in TAG_WITH_TAGGED format

* Update tests/test-chat.cpp [no ci]

Co-authored-by: Aldehir Rojas <hello@alde.dev>

---------

Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
Co-authored-by: Aldehir Rojas <hello@alde.dev>

common/parser : fix out_of_range crash in throw path (#20424 regression) (#20777)

* chat : fix out_of_range crash in throw path (#20424 regression)

#20424 introduced effective_input = generation_prompt + input, but the
throw path uses input.substr(result.end) where result.end is a position
within effective_input. Every thinking model with a non-empty
generation_prompt crashes with std::out_of_range instead of the intended
error message.

Test crashes on unpatched master, passes with fix:

  cmake -B build -DLLAMA_BUILD_TESTS=ON -DLLAMA_BUILD_TOOLS=OFF
  cmake --build build --target test-chat
  ./build/bin/test-chat

* Update test-chat.cpp

* Update test-chat.cpp

* Update test-chat.cpp

---------

Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
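
A minimal sketch of the bug shape, with stand-in names for the parser pieces:

    #include <string>

    struct parse_result { size_t end; };

    // stand-in for the chat parser; returns an offset into its argument
    static parse_result parse(const std::string & s) { return { s.size() }; }

    // result.end indexes into effective_input = generation_prompt + input, so
    // the throw path must slice that same string; input.substr(result.end)
    // throws std::out_of_range whenever generation_prompt is non-empty.
    static std::string unparsed_tail(const std::string & generation_prompt, const std::string & input) {
        std::string effective_input = generation_prompt + input;
        parse_result result = parse(effective_input);
        // buggy: return input.substr(result.end);
        return effective_input.substr(result.end);
    }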

jinja : fix heap OOB read in value equality comparison (#20782)

Address GHSA-q9j6-4hhc-rq9p and GHSA-2q4c-9gq5-5vfp.

The three-iterator overload of std::equal in value_array_t::equivalent()
and value_object_t::equivalent() reads past the end of the shorter
container when comparing arrays or objects of different lengths.

Use the four-iterator overload (C++14) which checks both range lengths.

Found-by: Pwno
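
For reference, the difference between the two overloads:

    #include <algorithm>
    #include <vector>

    bool same(const std::vector<int> & a, const std::vector<int> & b) {
        // Three-iterator overload: iterates b in lockstep with a, reading past
        // b's end whenever b is shorter -- the OOB read fixed here.
        // return std::equal(a.begin(), a.end(), b.begin());

        // Four-iterator overload (C++14): checks both range lengths, so
        // different sizes compare unequal without out-of-range reads.
        return std::equal(a.begin(), a.end(), b.begin(), b.end());
    }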

common : fix typo in debug log ('extracft' -> 'extract') (#20807)

common/parser: fix nasty bug causing subtle corruption of generation prompt (#20825)

jinja : refactor token advancement (#20864)

* refactor token advancement

* exercise sub-expressions

common/autoparser : detect reasoning markers when enable_thinking changes system prompt (#20859)

common : replace wrap_for_generation with a prefix convenience function and fix gpt-oss (#20912)

jinja: fix macro with kwargs (#20960)

* jinja: fix macro with kwargs

* Apply suggestions from code review

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* fix newline problem

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

common : inhibit lazy grammar sampler while reasoning is active (#20970)

* common : inhibit grammar while reasoning budget is active

* cont : update force_pos in accept

* cont : fix tests

* cont : tweak should apply logic

* cont : return early not using grammar sampler

* Add tests

* cont : prevent backend sampling when reasoning budget enabled

* cont : fix typo

---------

Co-authored-by: Piotr Wilkin <piotr.wilkin@syndatis.com>
# Conflicts:
#	common/reasoning-budget.h
#	common/sampling.cpp
#	tools/cli/cli.cpp
#	tools/server/server-common.cpp
#	tools/server/server-task.cpp

common/parser: fix reasoning whitespace bugs + extra parser tests (#21085)

* fix whitespace reasoning issues + add reconstruction tests

* Proper fix

* fix Nemotron autoparser test expectations to include newline in marker

common : add reasoning_format = none support to gpt-oss (#21094)

common/json-schema: fix: handle non-capturing groups (?:...) in JSON schema pattern converter (#21124)

The regex-to-grammar converter in _visit_pattern() crashes with SIGSEGV
when a JSON schema "pattern" field contains a non-capturing group (?:...).

Root cause: when the parser sees '(' followed by '?', it pushes a warning
but does not advance past '?:'. The recursive transform() call then
interprets '?' as a quantifier and calls seq.back() on an empty vector,
causing undefined behavior.

This commonly occurs when serving OpenAI-compatible tool calls from
clients that include complex regex patterns in their JSON schemas (e.g.,
date validation patterns like ^(?:(?:\d\d[2468][048]|...)-02-29|...)$).

The fix:
- Skip '?:' after '(' to treat non-capturing groups as regular groups
- For unsupported syntax (?=, ?!, etc.), skip to matching ')' safely,
  handling escaped characters to avoid miscounting parenthesis depth
- Adjust the ')' unbalanced-parentheses check using direct char
  comparisons instead of substr
- Add test cases for non-capturing groups (C++ only, as the JS/Python
  implementations do not yet support this syntax)
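
A minimal sketch of the '?:' skip (cursor handling hypothetical; the real fix lives in the schema pattern converter):

    #include <string>

    // `i` points at the '(' just consumed. Skip a "?:" so the non-capturing
    // group body is parsed like a regular group, instead of letting the '?'
    // be misread as a quantifier on an empty sequence.
    static size_t group_body_start(const std::string & pattern, size_t i) {
        if (i + 2 < pattern.size() && pattern[i + 1] == '?' && pattern[i + 2] == ':') {
            return i + 3;
        }
        return i + 1;
    }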

common/parser: fix handling of tool definition with missing properties key (#21128)

jinja : handle empty expressions correctly (#20913)

* Reject empty computed member expressions before returning slices[0] from parse_member_expression_arguments().

* Treat empty computed member expressions with Jinja2 undefined semantics

Treat empty computed member expressions like `a[]` as undefined instead of
raising a parser error, to match Jinja2 behavior.

- return a noop expression for empty computed member arguments
- return undefined when a computed member key evaluates to undefined
- add Jinja tests covering `a[]|default('fallback')` and `a[] is undefined`

* Handle undefined computed member properties

Move undefined-property handling to the common member access path, and add a test covering `a[undefined] is undefined`.

* Use default undefined value in member access

Initialize val and then return it when property is undefined.

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

* empty statement parses to blank_expression instead of noop_statement

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

common : gpt-oss handle builtin and unsolicited tool calls (#21213)

fix: tool call parsing for LFM2 and LFM2.5 models (#21242)

* fix: tool call parsing for LFM2 and LFM2.5 models

* refactor: add test / break out lfm2 and lfm2.5 parsing logic
# Conflicts:
#	common/chat.cpp

Relax prefill parser to allow space. (#21240)

* Relax prefill parser to allow space.

* Move changes from prefix() to parser generation

* Only allow spaces if the next parser is not a pure content parser

common : add commentary rules for gpt-oss-20b (#21286)

add reasoning budget

model, mtmd: fix gguf conversion for audio/vision mmproj (#21309)

* fix gguf conversion for audio/vision mmproj

* fix test
# Conflicts:
#	convert_hf_to_gguf.py
#	examples/eval-callback/eval-callback.cpp
#	examples/mtmd/CMakeLists.txt
#	examples/mtmd/clip-impl.h
#	examples/mtmd/mtmd.cpp
#	gguf-py/gguf/constants.py
#	gguf-py/gguf/gguf_writer.py
#	gguf-py/gguf/tensor_mapping.py
#	src/CMakeLists.txt
#	src/llama-arch.cpp
#	src/llama-arch.h
#	src/llama-model.cpp
#	src/llama-model.h
#	src/llama-vocab.cpp
#	src/models/models.h
#	tests/test-llama-archs.cpp
#	tools/mtmd/clip-graph.h
#	tools/mtmd/clip-model.h
#	tools/mtmd/clip.cpp
#	tools/mtmd/models/models.h

fix: gemma 4 template (#21326)

chat : avoid including json in chat.h (#21306)

jinja: coerce input for string-specific filters (#21370)

common : fix tool call type detection for nullable and enum schemas (#21327)

* common : fix tool call type detection for nullable and enum schemas

* common, tests : fix grammar delegation for nullable/enum schemas and add tests

Fix enum type inference to scan all enum values (not just index 0) so
schemas like {"enum": [0, "celsius"]} correctly detect string type.

Fix schema_delegates in peg-parser to handle nullable type arrays
(["string", "null"]) and typeless enum schemas in raw mode, allowing
the tagged parser to use raw text instead of JSON-formatted strings.

Add test cases for Qwen3-Coder (TAG_WITH_TAGGED format):
- nullable string ["string", "null"]
- nullable string with null first ["null", "string"]
- nullable integer ["integer", "null"]
- enum without explicit type key
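
A minimal sketch of the scan-all-values inference (helper name and shape hypothetical):

    #include <nlohmann/json.hpp>

    // Scan every enum value, not just index 0, so schemas like
    // {"enum": [0, "celsius"]} are still treated as accepting strings.
    static bool enum_accepts_string(const nlohmann::ordered_json & schema) {
        if (!schema.contains("enum")) {
            return false;
        }
        for (const auto & v : schema.at("enum")) {
            if (v.is_string()) {
                return true; // any string value means raw text is a valid argument
            }
        }
        return false;
    }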

common/parser: fix call ID detection (Mistral parser mostly) + atomicity for tag-json parsers (#21230)

* Fix call ID detection (Mistral parser mostly) + atomicity for tag-json parsers

* Rename

* Update common/chat-auto-parser-generator.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

common : add gemma 4 specialized parser (#21418)

* common : add gemma4 dedicated parser

* cont : add '<|tool_response>' as eog

* cont : emit JSON from Gemma4 tool call AST

* cont : more fixes

* cont : refactor convert function

* cont : refine rules and mapping

* cont : add more tests

* cont : clean up

* cont : remove autoparser gemma4 implementation

* cont : more cleanup

* cont : rename gemma4.jinja to match the others

* cont : add custom template to support interleaved thinking

* cont : preserve reasoning in model turns

* cont : fix initializer error

* cont : fix unused vars

* cont : fix accidental static

* cont : fix specialized_template signature

* fix extra semicolon

* remove debug line and extra space [no ci]

fix reasoning budget

parser: fix MiniMax handling (#21573)

jinja : support ensure_ascii=true, string repetition and int/float self-filtering (#21623)

* feat: jinja engine improvements for reka-edge

Port three Jinja engine improvements needed for the reka-edge model:
1. Python-style string repetition ("ab" * 3 → "ababab")
2. ensure_ascii=true support for tojson filter (escapes non-ASCII to \uXXXX)
3. int() builtin on value_int_t (identity, needed for Reka Edge template)

* fix: escape invalid utf8 bytes when ensure_ascii=true

The json_ensure_ascii_preserving_format function does not correctly
handle an edge case where if UTF-8 parsing fails, it adds the non-ascii
character back to the output as a raw byte.

This commit fixes that by adding the unicode standard replacement
character \\ufffd to the output instead. This is the standard behavior
for various programming languages like Python, Rust, Go, etc.

* chore: address PR comments

1. Add todo comment for supporting string repetition for array/tuples
2. Add support for float identity operation
3. Move invalid ascii test case to test_fuzzing

* chore: accept suggestion for common/jinja/value.cpp

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>

---------

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
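
A self-contained sketch of the escaping behavior described above (not the actual json_ensure_ascii_preserving_format implementation):

    #include <cstdint>
    #include <cstdio>
    #include <string>

    // Escape non-ASCII codepoints as \uXXXX (surrogate pairs above U+FFFF)
    // and replace invalid UTF-8 bytes with U+FFFD, as Python/Rust/Go do.
    static std::string ensure_ascii(const std::string & in) {
        std::string out;
        size_t i = 0;
        while (i < in.size()) {
            const unsigned char c = in[i];
            if (c < 0x80) { out += (char) c; i++; continue; }
            int len = (c & 0xE0) == 0xC0 ? 2 : (c & 0xF0) == 0xE0 ? 3 : (c & 0xF8) == 0xF0 ? 4 : 0;
            uint32_t cp = 0;
            bool ok = len > 0 && i + len <= in.size();
            if (ok) {
                cp = c & (0xFF >> (len + 1)); // payload bits of the lead byte
                for (int k = 1; k < len && ok; k++) {
                    const unsigned char cc = in[i + k];
                    ok = (cc & 0xC0) == 0x80;
                    cp = (cp << 6) | (cc & 0x3F);
                }
            }
            if (!ok) { cp = 0xFFFD; len = 1; } // invalid byte -> replacement char
            char buf[16];
            if (cp >= 0x10000) { // needs a UTF-16 surrogate pair
                const uint32_t v = cp - 0x10000;
                snprintf(buf, sizeof(buf), "\\u%04x\\u%04x", 0xD800 + (v >> 10), 0xDC00 + (v & 0x3FF));
            } else {
                snprintf(buf, sizeof(buf), "\\u%04x", cp);
            }
            out += buf;
            i += len;
        }
        return out;
    }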

common : simplify autoparser tagged parser rules (#21216)

* common : simplify autoparser tagged parser rules

* cont : remove upper limit on optional args

* cont : revert changes to parsing at the end

* cont : undo arbitrary ordering of optional args

* cont : fix uninitialized required parameters

* revert to simplify merge

* re-apply patches

* restore flexible optional arg ordering tests

common : fix ambiguous grammar rule in gemma4 (#21661)

* common : fix ambiguous grammar rule in gemma4

* cont : fix missing comma...

common : enable reasoning budget sampler for gemma4 (#21697)

* fix: enable reasoning budget sampler for gemma4

Add thinking_start_tag and thinking_end_tag to
common_chat_params_init_gemma4(). Without these, the reasoning
budget sampler never activates for gemma4.

Make the newline after "thought" optional in the PEG parser to
handle budget=0 (sampler forces end tag before the newline).

Add test case for empty thinking block.

Fixes #21487

* use p.space() instead of p.optional(p.literal("\n")) in gemma4 thought parser

common : better align to the updated official gemma4 template (#21704)

fix: Fix broken structured output when using $refs in json_schema (#21699)

chat: dedicated DeepSeek v3.2 parser + "official" template (#21785)

Hide render_message_to_json warning

common/gemma4 : handle parsing edge cases (#21760)

common: skip reasoning budget sampler when no budget is requested (#21870)

* common: skip reasoning budget sampler when no budget is requested

After I added thinking_start_tag / thinking_end_tag for gemma4 in #21697, the reasoning budget sampler gets unconditionally created even when no budget is configured (the default -1). The same applies to kimi_k2, lfm2, lfm2_5, and ministral_3 which also set these tags. The budget gets converted to INT_MAX, so the sampler never actually forces any tokens but still runs per-token checks (start tag matching in IDLE state, token-to-piece conversion + UTF-8 checks in COUNTING state).

More importantly, the mere existence of the sampler (non-null rbudget) disables backend sampling. Backend sampling lets the GPU select tokens directly, avoiding a full logits transfer from GPU to CPU every token. This could explain the 30% speed regression reported in #21784 (98 t/s to 70 t/s on Vulkan).

So I added a reasoning_budget_tokens >= 0 check to the sampler creation condition. When the budget is unlimited, the sampler is not created, backend sampling stays enabled, and no per-token overhead is added. When a budget is explicitly set (0, 128, 1024, etc.), the sampler is created and works as before.

* common: preserve rbudget when grammar is lazy

Following up on the review feedback on #21870: keep the reasoning budget sampler when grammar_lazy is true, so the thinking-block grammar suppression from #20970 still works when tools are in use. This way, we only skip the sampler when both no budget is set AND grammar is not lazy.
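
A sketch of the resulting gate, with hypothetical names based on the description above:

    #include <memory>

    struct reasoning_budget_sampler { /* stand-in for the common/reasoning-budget type */ };

    // Create the sampler only when an explicit budget was requested (>= 0) or a
    // lazy grammar needs the thinking-block suppression from #20970. Returning
    // null keeps backend (GPU-side) sampling enabled and avoids the per-token
    // tag-matching and token-to-piece overhead.
    static std::unique_ptr<reasoning_budget_sampler> maybe_make_rbudget(int reasoning_budget_tokens, bool grammar_lazy) {
        if (reasoning_budget_tokens < 0 && !grammar_lazy) {
            return nullptr; // default budget of -1, grammar not lazy: skip entirely
        }
        return std::make_unique<reasoning_budget_sampler>();
    }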

autoparser: support case of JSON_NATIVE with per-call markers (test case: Reka-Edge) (#21892)

* fix grammar

* fix add sampled token

---------

Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
Co-authored-by: firecoperana <firecoperana>
2026-04-22 10:04:13 +02:00

#pragma once

#include "common.h"
#include "llama.h"

#include <list>
#include <memory>
#include <string>
#include <unordered_set>
#include <vector>

// TODO: prevent including the whole server-common.h as we only use server_tokens
#include "server-common.h"

using json = nlohmann::ordered_json;

enum stop_type {
    STOP_TYPE_NONE,
    STOP_TYPE_EOS,
    STOP_TYPE_WORD,
    STOP_TYPE_LIMIT,
};

enum server_task_type {
    SERVER_TASK_TYPE_COMPLETION,
    SERVER_TASK_TYPE_EMBEDDING,
    SERVER_TASK_TYPE_RERANK,
    SERVER_TASK_TYPE_INFILL,
    SERVER_TASK_TYPE_CANCEL,
    SERVER_TASK_TYPE_NEXT_RESPONSE,
    SERVER_TASK_TYPE_METRICS,
    SERVER_TASK_TYPE_SLOT_SAVE,
    SERVER_TASK_TYPE_SLOT_RESTORE,
    SERVER_TASK_TYPE_SLOT_ERASE,
    SERVER_TASK_TYPE_SET_LORA,
    SERVER_TASK_TYPE_LOAD_CONTROL_VECTOR,
    SERVER_TASK_TYPE_UNLOAD_CONTROL_VECTOR,
    SERVER_TASK_TYPE_SET_CONTROL_VECTOR,
};

enum oaicompat_type {
    OAICOMPAT_TYPE_NONE,
    OAICOMPAT_TYPE_CHAT,
    OAICOMPAT_TYPE_COMPLETION,
    OAICOMPAT_TYPE_EMBEDDING,
    OAICOMPAT_TYPE_ANTHROPIC,
    OAICOMPAT_TYPE_RESP,
};

struct slot_params {
    bool stream = true;
    bool include_usage = false;
    bool cache_prompt = true; // remember the prompt to avoid reprocessing all of it

    int32_t n_keep = 0;    // number of tokens to keep from initial prompt
    int32_t n_discard = 0; // number of tokens after n_keep that may be discarded when shifting context, 0 defaults to half
    int32_t n_predict = -1; // new tokens to predict

    thinking_tokens think_tokens;

    std::vector<std::string> antiprompt;

    bool timings_per_token = false;
    bool post_sampling_probs = false;

    json input_prefix;
    json input_suffix;

    // speculative decoding parameters
    struct common_params_speculative speculative;

    // OAI-compat fields
    oaicompat_type oaicompat = OAICOMPAT_TYPE_NONE;
    std::string oaicompat_model;
    std::string oaicompat_cmpl_id;

    common_chat_parser_params chat_parser_params;

    // Embeddings
    int32_t embd_normalize = 2; // (-1=none, 0=max absolute int16, 1=taxicab, 2=Euclidean/L2, >2=p-norm)
};

struct server_task {
    int id = -1; // to be filled by server_queue
    int id_multi = -1;
    int index = -1; // used when there are multiple prompts (batch request)

    // used by SERVER_TASK_TYPE_CANCEL
    int id_target = -1;
    int id_slot = -1;

    // used by SERVER_TASK_TYPE_INFERENCE
    struct slot_params params;
    server_tokens tokens;

    server_task_type type;
    json data;

    bool infill = false;
    bool embedding = false;

    server_task() = default;
    server_task(server_task_type type) : type(type) {}

    int32_t n_tokens() const {
        return tokens.size();
    }

    // utility function
    static std::unordered_set<int> get_list_id(const std::vector<server_task>& tasks) {
        std::unordered_set<int> ids(tasks.size());
        for (size_t i = 0; i < tasks.size(); i++) {
            ids.insert(tasks[i].id);
        }
        return ids;
    }
};

struct result_timings {
    int32_t prompt_n = -1;
    double prompt_ms;
    double prompt_per_token_ms;
    double prompt_per_second;

    int32_t predicted_n = -1;
    double predicted_ms;
    double predicted_per_token_ms;
    double predicted_per_second;

    int32_t n_ctx = 0;
    int32_t n_past = 0;

    // Optional speculative metrics - only included when > 0
    int32_t draft_n = 0;
    int32_t draft_n_accepted = 0;

    json to_json() const;
};

struct server_task_result {
    int id = -1;
    int id_multi = -1;

    json data;

    bool stop;
    bool error;
    bool final_result = false;

    result_timings timings;

    // OAI-compat fields
    //bool verbose = false;
    oaicompat_type oaicompat = OAICOMPAT_TYPE_NONE;
    std::string oaicompat_model;
    std::string oaicompat_cmpl_id;
    common_chat_format oaicompat_chat_format = COMMON_CHAT_FORMAT_CONTENT_ONLY;
    common_chat_msg oaicompat_msg;
    std::vector<common_chat_msg_diff> oaicompat_msg_diffs;

    int index = 0;

    std::string content;
    std::vector<llama_token> tokens;

    bool stream;
    bool include_usage;

    std::string prompt;
    //slot_params generation_params;

    bool truncated;
    int32_t n_decoded;
    int32_t n_prompt_tokens;
    int32_t n_prompt_tokens_cache;
    int32_t n_tokens_cached;

    bool has_new_line;
    std::string stopping_word;

    bool post_sampling_probs = false;
    std::vector<completion_token_output> probs_output;
    std::vector<std::string> response_fields;

    //slot_params generation_params;
    bool verbose = false;

    // needed to delete derived results through server_task_result_ptr
    virtual ~server_task_result() = default;

    virtual bool is_error() {
        // only used by server_task_result_error
        return false;
    }

    virtual bool is_stop() {
        // only used by server_task_result_cmpl_*
        // in stream mode, final responses are considered stop
        return true;
    }

    virtual json to_json() {
        return {};
    }

    int get_index() {
        return index;
    }
};

struct server_task_result_cmpl_partial : server_task_result {
    bool anthropic_has_reasoning = false;
    bool anthropic_thinking_block_started = false;
    bool anthropic_text_block_started = false;

    bool oai_resp_thinking_block_started = false;
    bool oai_resp_text_block_started = false;
    std::string oai_resp_id;
    std::string oai_resp_reasoning_id;
    std::string oai_resp_message_id;
    std::string oai_resp_fc_id;

    virtual bool is_stop() override {
        return false; // in stream mode, partial responses are not considered stop
    }

    json to_json_non_oaicompat_partial();
    json to_json_oaicompat_partial();
    json to_json_anthropic_partial();
    json to_json_oaicompat_chat_partial();
    json to_json_oaicompat_resp_partial();

    virtual json to_json() override {
        switch (oaicompat) {
            case OAICOMPAT_TYPE_NONE:
                return to_json_non_oaicompat_partial();
            case OAICOMPAT_TYPE_COMPLETION:
                return to_json_oaicompat_partial();
            case OAICOMPAT_TYPE_CHAT:
                return to_json_oaicompat_chat_partial();
            case OAICOMPAT_TYPE_ANTHROPIC:
                return to_json_anthropic_partial();
            case OAICOMPAT_TYPE_RESP:
                return to_json_oaicompat_resp_partial();
            default:
                GGML_ASSERT(false && "Invalid oaicompat_type");
        }
    }
};

struct server_task_result_cmpl_final : server_task_result {
    std::string oai_resp_id;
    std::string oai_resp_reasoning_id;
    std::string oai_resp_message_id;

    bool anthropic_thinking_block_started = false;
    bool anthropic_text_block_started = false;

    virtual bool is_stop() override {
        return true;
    }

    json to_json_non_oaicompat_final();
    json usage_json_oaicompat();
    json to_json_oaicompat_final();
    json to_json_oaicompat_chat_final();
    json to_json_anthropic_final();
    json to_json_anthropic_stream();
    json to_json_oaicompat_chat_stream();
    json to_json_oaicompat_resp_final();
    json to_json_oaicompat_resp_stream();

    virtual json to_json() override {
        switch (oaicompat) {
            case OAICOMPAT_TYPE_NONE:
                return to_json_non_oaicompat_final();
            case OAICOMPAT_TYPE_COMPLETION:
                return to_json_oaicompat_final();
            case OAICOMPAT_TYPE_CHAT:
                return stream ? to_json_oaicompat_chat_stream() : to_json_oaicompat_chat_final();
            case OAICOMPAT_TYPE_ANTHROPIC:
                return stream ? to_json_anthropic_stream() : to_json_anthropic_final();
            case OAICOMPAT_TYPE_RESP:
                return stream ? to_json_oaicompat_resp_stream() : to_json_oaicompat_resp_final();
            default:
                GGML_ASSERT(false && "Invalid oaicompat_type");
        }
    }
};

struct server_task_result_error : server_task_result {
    int index = 0;
    error_type err_type = ERROR_TYPE_SERVER;
    std::string err_msg;

    // for ERROR_TYPE_EXCEED_CONTEXT_SIZE
    int32_t n_prompt_tokens = 0;
    int32_t n_ctx = 0;

    virtual bool is_error() override {
        return true;
    }

    virtual json to_json() override {
        json res = format_error_response(err_msg, err_type);
        return res;
    }
};

struct server_task_result_embd : server_task_result {
    int index = 0;
    std::vector<std::vector<float>> embedding;

    int32_t n_tokens;

    // OAI-compat fields
    oaicompat_type oaicompat = OAICOMPAT_TYPE_NONE;

    virtual json to_json() override {
        return oaicompat == OAICOMPAT_TYPE_EMBEDDING
            ? to_json_oaicompat()
            : to_json_non_oaicompat();
    }

    json to_json_non_oaicompat() {
        return json{
            {"index", index},
            {"embedding", embedding},
        };
    }

    json to_json_oaicompat() {
        return json{
            {"index", index},
            {"embedding", embedding[0]},
            {"tokens_evaluated", n_tokens},
        };
    }
};

// using unique_ptr for polymorphism of server_task_result
using server_task_result_ptr = std::unique_ptr<server_task_result>;

struct server_prompt_checkpoint {
    llama_pos pos_min;
    llama_pos pos_max;

    llama_pos pos_min_prompt;
    llama_pos pos_max_prompt;

    std::vector<uint8_t> data;

    size_t size() const {
        return data.size();
    }

    json to_json() {
        json j;
        j["pos_min"] = pos_min;
        j["pos_max"] = pos_max;
        j["pos_min_prompt"] = pos_min_prompt;
        j["pos_max_prompt"] = pos_max_prompt;
        return j;
    }

    void from_json(const json & j) {
        pos_min = j.value<llama_pos>("pos_min", 0);
        pos_max = j.value<llama_pos>("pos_max", 0);
        pos_min_prompt = j.value<llama_pos>("pos_min_prompt", 0);
        pos_max_prompt = j.value<llama_pos>("pos_max_prompt", 0);
    }
};

struct server_prompt {
    server_tokens tokens;

    int n_kept_prompt;
    int n_discarded_prompt;

    thinking_tokens think_tokens;

    std::vector<uint8_t> data;

    std::list<server_prompt_checkpoint> checkpoints;

    size_t size() const;

    int n_tokens() const {
        return tokens.size();
    }

    server_prompt clone() const {
        return server_prompt{
            tokens.clone(),
            n_kept_prompt,
            n_discarded_prompt,
            think_tokens,
            data,
            checkpoints
        };
    }

    json to_json() {
        json j;
        j["tokens"] = tokens.to_json();
        j["n_kept_prompt"] = n_kept_prompt;
        j["n_discarded_prompt"] = n_discarded_prompt;
        return j;
    }

    void from_json(const json & j) {
        tokens.from_json(j.at("tokens"));
        n_kept_prompt = j.value<llama_pos>("n_kept_prompt", 0);
        n_discarded_prompt = j.value<llama_pos>("n_discarded_prompt", 0);
    }
};

struct server_prompt_cache {
    server_prompt_cache(llama_context* ctx, int32_t limit_size_mib, size_t limit_tokens) {
        this->ctx = ctx;
        this->limit_size = 1024ull * 1024ull * (limit_size_mib < 0 ? 0 : limit_size_mib);
        this->limit_tokens = limit_tokens;
    }

    std::list<server_prompt> states;

    // in bytes, 0 = no limit
    size_t limit_size = 0;

    // in tokens, 0 = no limit
    size_t limit_tokens = 0;

    llama_context* ctx;

    size_t size() const;
    size_t n_tokens() const;

    server_prompt* alloc(const server_prompt& prompt, size_t state_size);
    bool load(server_prompt& prompt, const server_tokens& tokens_new, llama_context* ctx, int32_t id_slot);

    void update();
};
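
A brief, illustrative sketch of how the polymorphic result types above are consumed (names outside the header are hypothetical, not taken from the server code):

    #include "server-task.h"

    // Results travel through the queue as server_task_result_ptr;
    // is_error(), is_stop() and to_json() dispatch on the concrete type.
    static json result_to_payload(server_task_result_ptr & res) {
        if (res->is_error()) {
            return res->to_json(); // server_task_result_error::to_json
        }
        json out = res->to_json(); // partial or final, per the oaicompat switch
        if (res->is_stop()) {
            out["timings"] = res->timings.to_json(); // final response in stream mode
        }
        return out;
    }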