Function calling support for Kimi-K2 (#628)

* Implement function calling / tools in ik_llama.cpp for Kimi-K2

* Implement basic tool choice

* Backport llama.cpp tool-call support

* Enhance function calls with improved chat parser and string utilities

- Add new chat.h/chat.cpp and chat-parser.h/chat-parser.cpp for better chat handling
- Improve function-call parsing with a fallback to the llama.cpp builder pattern
- Add string utility functions (starts_with, ends_with, find_partial_stop; sketched after this list)
- Update README with function calls testing instructions
- Enhance Kimi K2 parser and function calls documentation
- Add comprehensive test suite for function calls
- Update CMakeLists.txt and Makefile for new components
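
A minimal sketch of what the added string helpers might look like, assuming
signatures similar to llama.cpp's utilities (the actual definitions in this
commit may differ):

    #include <algorithm>
    #include <string>

    static bool starts_with(const std::string & str, const std::string & prefix) {
        return str.rfind(prefix, 0) == 0;
    }

    static bool ends_with(const std::string & str, const std::string & suffix) {
        return str.size() >= suffix.size()
            && str.compare(str.size() - suffix.size(), suffix.size(), suffix) == 0;
    }

    // Position where a suffix of `text` matches a prefix of `stop` (used to
    // hold back a possibly-starting stop sequence while streaming), or
    // std::string::npos if there is no such overlap.
    static size_t find_partial_stop(const std::string & text, const std::string & stop) {
        for (size_t len = std::min(text.size(), stop.size()); len > 0; --len) {
            if (text.compare(text.size() - len, len, stop, 0, len) == 0) {
                return text.size() - len;
            }
        }
        return std::string::npos;
    }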

* Enhance function calling with unified streaming and parser improvements

- Fix streaming content cleanup so function-call syntax does not leak into the output (see the sketch after this list)
- Unify content extraction patterns with llama.cpp approach
- Improve Kimi K2 parser robustness and partial content handling
- Add comprehensive test coverage for function call scenarios
- Optimize chat message parsing and diff computation
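
Conceptually, the streaming cleanup operates on deltas along these lines
(a sketch with assumed names, reusing the find_partial_stop helper sketched
above; this is not the literal server code):

    // Send only the newly generated suffix, and withhold any trailing text
    // that could be the beginning of tool-call syntax such as "<tool_call>".
    std::string delta = new_content.substr(previous_content.size());
    size_t maybe_tag = find_partial_stop(delta, "<tool_call>");
    if (maybe_tag != std::string::npos) {
        delta = delta.substr(0, maybe_tag); // emit the rest once the tag resolves
    }
    previous_content += delta;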

* Replace hardcoded values in kimi_k2_parser.hpp with named constants

- Add compile-time constants for all token format markers
- Add compile-time constants for XML format markers
- Add compile-time constants for simple format patterns
- Replace all hardcoded string literals with named constants
- Use compile-time length calculation to avoid manual character counting (see the sketch below)
- Improve maintainability and reduce magic numbers throughout parser
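
The pattern looks roughly like this (the marker strings below are
illustrative assumptions, not necessarily the exact Kimi-K2 tokens):

    // Named constants instead of inline literals and hand-counted offsets.
    static constexpr const char TOOL_CALLS_BEGIN[] = "<|tool_calls_section_begin|>";
    static constexpr const char TOOL_CALLS_END[]   = "<|tool_calls_section_end|>";

    // Compile-time length; no manual character counting.
    static constexpr size_t TOOL_CALLS_BEGIN_LEN = sizeof(TOOL_CALLS_BEGIN) - 1;

    // Usage: size_t body_pos = marker_pos + TOOL_CALLS_BEGIN_LEN;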

* Fix duplicate common_chat_parse definition

- Remove duplicate implementation from chat-parser.cpp
- Keep single implementation in chat.cpp following llama.cpp patterns
- Resolves linker error: multiple definition of common_chat_parse

* Fix JSON assertion failure in function call parsing

- Add proper validation that the 'function' field is an object before accessing nested keys (sketched below)
- Handle missing 'arguments' field gracefully with default "{}"
- Prevents crash when parsing malformed tool call JSON structures
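
A sketch of the defensive check, using nlohmann::json as the server does
(variable names are assumed):

    // tc is one element of a parsed "tool_calls" array.
    if (!tc.contains("function") || !tc["function"].is_object()) {
        continue; // skip the malformed entry instead of tripping JSON_ASSERT
    }
    const auto & fn = tc["function"];
    std::string name      = fn.value("name", std::string());
    std::string arguments = fn.value("arguments", std::string("{}")); // default when missing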

* Add comprehensive Qwen3 XML tool calling support with unit tests

- Implement Qwen3 XML parser for the <tool_call>{"name": "func", "arguments": {...}}</tool_call> format (parsing sketch below)
- Add model detection and routing for Qwen3 vs Kimi-K2 formats
- Create 8 comprehensive unit tests covering parsing, streaming, and error handling
- Fix token format cleaning bug in kimi_k2_parser.hpp processing order
- Remove progressive parsing code and related utilities
- Add tool injection support for Qwen3 format in server utils
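
A condensed sketch of the XML extraction loop (helper and variable names are
illustrative, not the parser's actual code; json is nlohmann::json):

    static const std::string TC_OPEN  = "<tool_call>";
    static const std::string TC_CLOSE = "</tool_call>";

    size_t pos = 0;
    while ((pos = content.find(TC_OPEN, pos)) != std::string::npos) {
        size_t begin = pos + TC_OPEN.size();
        size_t end   = content.find(TC_CLOSE, begin);
        if (end == std::string::npos) break; // partial block: wait for more tokens
        json call = json::parse(content.substr(begin, end - begin),
                                nullptr, /*allow_exceptions=*/false);
        if (!call.is_discarded() && call.contains("name")) {
            // emit {name, arguments} as a tool_calls entry
        }
        pos = end + TC_CLOSE.size();
    }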

* Add DeepSeek R1 function calling support with comprehensive unit tests

- Implement complete DeepSeek R1 tool call parsing in common_chat_parser.cpp
- Add DeepSeek R1 model detection and tool injection in deepseek_r1_tools.hpp
- Update function_calls.hpp with DeepSeek R1 integration and content extraction
- Update documentation to reflect support for Kimi-K2, Qwen3, and DeepSeek R1 models
- Add comprehensive unit tests for DeepSeek R1 reasoning, tool calls, and integration
- Port exact implementation patterns from original llama.cpp for compatibility

Key features:
- Native DeepSeek R1 format: <|tool▁calls▁begin|>function<|tool▁sep|>name```json{}```<|tool▁call▁end|><|tool▁calls▁end|> (example below)
- Reasoning content extraction from <think>...</think> tags
- Multiple tool calls support with separate call blocks
- Model detection for deepseek-r1, deepseek_r1 naming patterns
- Integration with incremental parsing and streaming support
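
For reference, a raw completion in this format might look like the following
(an illustrative example assembled from the summary line above, with line
breaks added for readability):

    <think>The user wants the weather in Berlin, so I should call get_weather.</think>
    <|tool▁calls▁begin|>function<|tool▁sep|>get_weather
    ```json
    {"location": "Berlin"}
    ```<|tool▁call▁end|><|tool▁calls▁end|>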

* Add partial parsing support for JSON and regex

- json-partial.h/cpp: JSON partial parsing functionality (toy illustration below)
- regex-partial.h/cpp: Regex partial parsing functionality
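
The idea behind partial JSON parsing, as a toy illustration (this is not the
json-partial API, whose interface is not shown in this excerpt; json is
nlohmann::json):

    // Try to parse `text`; if it is an incomplete prefix such as
    //   {"name": "get_wea
    // close any unbalanced strings/objects/arrays and retry, so the
    // streaming path can surface what has arrived so far.
    static bool try_parse_partial(const std::string & text, json & out) {
        json j = json::parse(text, nullptr, /*allow_exceptions=*/false);
        if (!j.is_discarded()) { out = j; return true; }
        std::string closers;
        bool in_str = false, escape = false;
        for (char c : text) {
            if (escape)            { escape = false; continue; }
            if (in_str) {
                if (c == '\\')     { escape = true; }
                else if (c == '"') { in_str = false; }
                continue;
            }
            if (c == '"')      { in_str = true; }
            else if (c == '{') { closers.insert(0, 1, '}'); }
            else if (c == '[') { closers.insert(0, 1, ']'); }
            else if ((c == '}' || c == ']') && !closers.empty()) { closers.erase(0, 1); }
        }
        json healed = json::parse(text + (in_str ? "\"" : "") + closers,
                                  nullptr, /*allow_exceptions=*/false);
        if (healed.is_discarded()) { return false; }
        out = healed;
        return true;
    }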

* Add format_chat integration tests for Qwen3 tool injection

- Add test_qwen3_format_chat_integration() to validate tool injection pipeline
- Test tool injection conditions and system message enhancement
- Verify JSON formatting and anti-preamble instructions
- Add comprehensive test documentation

Tests confirm tool injection works correctly; the conversational-preamble
issue is not in ik_llama.cpp but most likely in the UI configuration.
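
Roughly what the integration test exercises (the test body here is assumed;
the format_chat signature matches the diff below):

    json tools = json::array({
        { {"type", "function"},
          {"function", { {"name", "get_weather"},
                         {"parameters", json::object()} }} }
    });
    std::vector<json> messages = {
        json{ {"role", "user"}, {"content", "What is the weather in Berlin?"} }
    };
    std::string prompt = format_chat(model, tmpl, messages, tools, "qwen3-32b");
    // Expect: the rendered prompt contains a system section listing the tools
    // in the Hermes-style XML format, with no conversational preamble.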

* Fix Qwen3 tool call parsing - pass model name to parser

The server was not passing the model name to parse_chat_message_incremental(),
so Qwen3 output fell back to the Kimi-K2 parser and tool calls were returned
as content instead of a proper tool_calls array.
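
The shape of the fix, roughly (the exact signature is assumed):

    // Before: model name omitted, so parsing defaulted to the Kimi-K2 format.
    //   auto msg = parse_chat_message_incremental(generated_text, is_partial);
    // After: model name forwarded, so Qwen3 requests reach the XML parser.
    auto msg = parse_chat_message_incremental(generated_text, is_partial, model_name);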

* Fix non-streaming path to use model-specific parsing

Non-streaming responses were hardcoded to the Kimi-K2 format, so Qwen3 XML
tool calls were returned as content instead of a proper tool_calls array.
The non-streaming path now uses the same model detection as the streaming
path for consistency.
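
Illustratively, both paths can now share one detection step along these lines
(the helper name and exact matching patterns are assumptions based on the
commits above):

    #include <algorithm>
    #include <cctype>
    #include <string>

    static std::string detect_tool_format(const std::string & model_name) {
        std::string m = model_name;
        std::transform(m.begin(), m.end(), m.begin(),
                       [](unsigned char c) { return static_cast<char>(std::tolower(c)); });
        if (m.find("qwen3") != std::string::npos)       { return "qwen3"; }
        if (m.find("deepseek-r1") != std::string::npos ||
            m.find("deepseek_r1") != std::string::npos) { return "deepseek_r1"; }
        return "kimi_k2"; // historical default parser
    }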

Author: Anton Sokolchenko (committed by GitHub)
Date: 2025-07-23 18:11:42 +02:00
Parent: 0451f10a42
Commit: 3701fb1686
26 changed files with 6978 additions and 9 deletions

examples/server/utils.hpp (excerpt, one of the 26 changed files):

@@ -6,6 +6,9 @@
 // Change JSON_ASSERT from assert() to GGML_ASSERT:
 #define JSON_ASSERT GGML_ASSERT
 #include "json.hpp"
+#include "kimi_k2_tools.hpp"
+#include "qwen3_tools.hpp"
+#include "deepseek_r1_tools.hpp"
 #include <string>
 #include <vector>
 #include <sstream>
@@ -26,6 +29,12 @@ enum error_type {
     ERROR_TYPE_NOT_SUPPORTED, // custom error
 };
 
+enum tool_choice_type {
+    TOOL_CHOICE_AUTO,
+    TOOL_CHOICE_REQUIRED,
+    TOOL_CHOICE_NONE,
+};
+
 extern bool server_verbose;
 extern bool server_log_json;
@@ -116,9 +125,12 @@ static inline void server_log(const char * level, const char * function, int lin
 //
 // Format given chat. If tmpl is empty, we take the template from model metadata
-inline std::string format_chat(const struct llama_model * model, const std::string & tmpl, const std::vector<json> & messages) {
+inline std::string format_chat(const struct llama_model * model, const std::string & tmpl, const std::vector<json> & messages, const json & tools = json::array(), const std::string & model_name = "") {
     std::vector<llama_chat_msg> chat;
 
+    // Inject tools into the first system message, or create one if none exists
+    bool tools_injected = false;
+
     for (size_t i = 0; i < messages.size(); ++i) {
         const auto & curr_msg = messages[i];
@@ -140,6 +152,48 @@ inline std::string format_chat(const struct llama_model * model, const std::stri
         } else {
             throw std::runtime_error("Missing 'content' (ref: https://github.com/ggerganov/llama.cpp/issues/8367)");
         }
 
+        // Inject tools into the first system message, or create one if none exists
+        // Only applies to Kimi-K2 models (checked by kimi_k2_should_inject_tools)
+        if (kimi_k2_should_inject_tools(tools, model_name) && !tools_injected) {
+            if (role == "system") {
+                // Add tools to existing system message
+                content = kimi_k2_inject_tools_to_system(content, tools);
+                tools_injected = true;
+            } else if (i == 0) {
+                // Create system message with tools if no system message exists
+                std::string tools_prompt = kimi_k2_create_system_with_tools(tools);
+                chat.push_back({"system", tools_prompt});
+                tools_injected = true;
+            }
+        }
+
+        // Inject tools for Qwen3 models (XML Hermes format)
+        if (qwen3_should_inject_tools(tools, model_name) && !tools_injected) {
+            if (role == "system") {
+                // Add tools to existing system message
+                content = qwen3_inject_tools_to_system(content, tools);
+                tools_injected = true;
+            } else if (i == 0) {
+                // Create system message with tools if no system message exists
+                std::string tools_prompt = qwen3_create_system_with_tools(tools);
+                chat.push_back({"system", tools_prompt});
+                tools_injected = true;
+            }
+        }
+
+        // Inject tools for DeepSeek R1 models
+        if (deepseek_r1_should_inject_tools(tools, model_name) && !tools_injected) {
+            if (role == "system") {
+                // Add tools to existing system message
+                content = deepseek_r1_inject_tools_to_system(content, tools);
+                tools_injected = true;
+            } else if (i == 0) {
+                // Create system message with tools if no system message exists
+                std::string tools_prompt = deepseek_r1_create_system_with_tools(tools);
+                chat.push_back({"system", tools_prompt});
+                tools_injected = true;
+            }
+        }
+
         chat.push_back({role, content});
     }
@@ -342,6 +396,28 @@ static json probs_vector_to_json(const llama_context * ctx, const std::vector<co
     return out;
 }
 
+//
+// Function calling support
+//
+#include "function_calls.hpp"
+
+//
+// tool_choice utils
+//
+
+static tool_choice_type tool_choice_parse_oaicompat(const std::string & tool_choice) {
+    if (tool_choice == "auto") {
+        return TOOL_CHOICE_AUTO;
+    }
+    if (tool_choice == "none") {
+        return TOOL_CHOICE_NONE;
+    }
+    if (tool_choice == "required") {
+        return TOOL_CHOICE_REQUIRED;
+    }
+    throw std::runtime_error("Invalid tool_choice: " + tool_choice);
+}
+
 //
 // OAI utils
 //
@@ -354,8 +430,49 @@ static json oaicompat_completion_params_parse(
llama_params["__oaicompat"] = true;
// Apply chat template to the list of messages
llama_params["prompt"] = format_chat(model, chat_template, body.at("messages"));
// Extract tools from the request body
json tools = json_value(body, "tools", json::array());
// Debug: Log system prompt when tools are detected
if (!tools.empty() && server_verbose) {
LOG_VERBOSE("Tool calls detected in request", {
{"tool_count", tools.size()},
{"model", json_value(body, "model", std::string(DEFAULT_OAICOMPAT_MODEL))}
});
// Extract and log system prompt from messages
if (body.contains("messages") && body["messages"].is_array()) {
for (const auto& msg : body["messages"]) {
if (msg.contains("role") && msg["role"] == "system" && msg.contains("content")) {
std::string content_str;
if (msg["content"].is_string()) {
content_str = msg["content"];
} else if (msg["content"].is_array()) {
// Handle content blocks format
for (const auto& block : msg["content"]) {
if (block.contains("type") && block["type"] == "text" && block.contains("text")) {
if (!content_str.empty()) content_str += " ";
content_str += block["text"];
}
}
}
if (!content_str.empty()) {
LOG_VERBOSE("System prompt with tools", {
{"system_prompt", content_str.substr(0, 500) + (content_str.length() > 500 ? "..." : "")}
});
}
break; // Only log first system message
}
}
}
}
// Extract model name from the request body
std::string model_name = json_value(body, "model", std::string(DEFAULT_OAICOMPAT_MODEL));
// Apply chat template to the list of messages with tools
llama_params["prompt"] = format_chat(model, chat_template, body.at("messages"), tools, model_name);
// Handle "stop" field
if (body.contains("stop") && body.at("stop").is_string()) {
@@ -389,8 +506,16 @@ static json oaicompat_completion_params_parse(
throw std::runtime_error("top_logprobs requires logprobs to be set to true");
}
// Params supported by OAI but unsupported by llama.cpp
static const std::vector<std::string> unsupported_params { "tools", "tool_choice" };
// Handle tool_choice parameter
if (body.contains("tool_choice")) {
auto tool_choice_str = json_value(body, "tool_choice", std::string("auto"));
auto tool_choice = tool_choice_parse_oaicompat(tool_choice_str);
llama_params["tool_choice"] = static_cast<int>(tool_choice);
}
// Accept tools and tool_choice parameters for function calling support
// Other unsupported params still rejected
static const std::vector<std::string> unsupported_params { };
for (auto & param : unsupported_params) {
if (body.contains(param)) {
throw std::runtime_error("Unsupported param: " + param);