ik_llama.cpp/examples/server/qwen3_tools.hpp
Anton Sokolchenko 9ee72225dc Function calling support for Kimi-K2 (#628)
* Implement function calling / tools for ik_llama.cpp for Kimi K2

* Implement basic tool choice

* Backport llama.cpp tool calls support

* Enhance function calls with improved chat parser and string utilities

- Add new chat.h/chat.cpp and chat-parser.h/chat-parser.cpp for better chat handling
- Improve function calls parsing with fallback to llama.cpp builder pattern
- Add string utility functions (starts_with, ends_with, find_partial_stop)
- Update README with function calls testing instructions
- Enhance Kimi K2 parser and function calls documentation
- Add comprehensive test suite for function calls
- Update CMakeLists.txt and Makefile for new components

* Enhance function calling with unified streaming and parser improvements

- Fix streaming content cleanup to prevent function syntax in output
- Unify content extraction patterns with llama.cpp approach
- Improve Kimi K2 parser robustness and partial content handling
- Add comprehensive test coverage for function call scenarios
- Optimize chat message parsing and diff computation

* Replace hardcoded values in kimi_k2_parser.hpp with named constants

- Add compile-time constants for all token format markers
- Add compile-time constants for XML format markers
- Add compile-time constants for simple format patterns
- Replace all hardcoded string literals with named constants
- Use compile-time length calculation to avoid manual counting
- Improve maintainability and reduce magic numbers throughout parser

* Fix duplicate common_chat_parse definition

- Remove duplicate implementation from chat-parser.cpp
- Keep single implementation in chat.cpp following llama.cpp patterns
- Resolves linker error: multiple definition of common_chat_parse

* Fix JSON assertion failure in function call parsing

- Add proper validation that 'function' field is an object before accessing nested keys
- Handle missing 'arguments' field gracefully with default "{}"
- Prevents crash when parsing malformed tool call JSON structures

* Add comprehensive Qwen3 XML tool calling support with unit tests

- Implement Qwen3 XML parser with <tool_call>{"name": "func", "arguments": {...}}</tool_call> format
- Add model detection and routing for Qwen3 vs Kimi-K2 formats
- Create 8 comprehensive unit tests covering parsing, streaming, error handling
- Fix token format cleaning bug in kimi_k2_parser.hpp processing order
- Remove progressive parsing code and related utilities
- Add tool injection support for Qwen3 format in server utils

* Add DeepSeek R1 function calling support with comprehensive unit tests

- Implement complete DeepSeek R1 tool call parsing in common_chat_parser.cpp
- Add DeepSeek R1 model detection and tool injection in deepseek_r1_tools.hpp
- Update function_calls.hpp with DeepSeek R1 integration and content extraction
- Update documentation to reflect support for Kimi-K2, Qwen3, and DeepSeek R1 models
- Add comprehensive unit tests for DeepSeek R1 reasoning, tool calls, and integration
- Port exact implementation patterns from original llama.cpp for compatibility

Key features:
- Native DeepSeek R1 format: <|tool▁calls▁begin|>function<|tool▁sep|>name```json{}```<|tool▁call▁end|><|tool▁calls▁end|>
- Reasoning content extraction from <think>...</think> tags
- Multiple tool calls support with separate call blocks
- Model detection for deepseek-r1, deepseek_r1 naming patterns
- Integration with incremental parsing and streaming support

* Add partial parsing support for JSON and regex

- json-partial.h/cpp: JSON partial parsing functionality
- regex-partial.h/cpp: Regex partial parsing functionality

* Add format_chat integration tests for Qwen3 tool injection

- Add test_qwen3_format_chat_integration() to validate tool injection pipeline
- Test tool injection conditions and system message enhancement
- Verify JSON formatting and anti-preamble instructions
- Add comprehensive test documentation

Tests confirm tool injection works correctly - conversational preamble
issue is not in ik_llama.cpp but likely in UI configuration.

* Fix Qwen3 tool call parsing - pass model name to parser

Server was not passing model name to parse_chat_message_incremental(),
causing Qwen3 to fall back to Kimi-K2 parser and return tool calls
as content instead of proper tool_calls array.

* Fix non-streaming path to use model-specific parsing

Non-streaming responses were hardcoded to use Kimi-K2 format,
causing Qwen3 XML tool calls to be returned as content instead
of proper tool_calls array. Now uses same model detection as
streaming path for consistency.
2025-07-23 18:11:42 +02:00


#pragma once

#include "json.hpp"

#include <string>
#include <vector>
#include <algorithm>
#include <cctype>

using json = nlohmann::ordered_json;

//
// Qwen3-specific tool handling (using the Hermes XML format)
// Based on the original llama.cpp Qwen-Qwen3-0.6B.jinja template
//

// Check if the model is Qwen3
inline bool is_qwen3_model(const std::string & model_name) {
    if (model_name.empty()) {
        return false;
    }
    // Convert to lowercase for case-insensitive comparison
    std::string lower_model = model_name;
    std::transform(lower_model.begin(), lower_model.end(), lower_model.begin(), ::tolower);
    // Check if the model name contains "qwen3", "qwen-3", or "qwen_3"
    return lower_model.find("qwen3") != std::string::npos ||
           lower_model.find("qwen-3") != std::string::npos ||
           lower_model.find("qwen_3") != std::string::npos;
}

// Generate the Qwen3 tool format instructions (XML format like Hermes)
inline std::string qwen3_tool_format_instructions() {
    return "\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n"
           "<tool_call>\n"
           "{\"name\": <function-name>, \"arguments\": <args-json-object>}\n"
           "</tool_call>";
}

// Generate the tools description for Qwen3 (XML format matching the original template)
inline std::string qwen3_tools_description(const json & tools) {
    std::string tools_desc = "# Tools\n\n"
                             "You may call one or more functions to assist with the user query.\n\n"
                             "You are provided with function signatures within <tools></tools> XML tags:\n"
                             "<tools>";
    for (const auto & tool : tools) {
        tools_desc += "\n" + tool.dump();
    }
    tools_desc += "\n</tools>";
    return tools_desc;
}

// Inject tools into existing system message content
inline std::string qwen3_inject_tools_to_system(const std::string & content, const json & tools) {
    return content + "\n\n" + qwen3_tools_description(tools) + qwen3_tool_format_instructions();
}

// Create a new system message with tools for Qwen3
inline std::string qwen3_create_system_with_tools(const json & tools) {
    std::string tools_prompt = qwen3_tools_description(tools);
    tools_prompt += qwen3_tool_format_instructions();
    return tools_prompt;
}

// Check if tool injection is needed for Qwen3
inline bool qwen3_should_inject_tools(const json & tools, const std::string & model_name) {
    return tools.is_array() && !tools.empty() && is_qwen3_model(model_name);
}