[docs] Update Native Kimi-K2-Thinking documentation and kt-kernel parameters (#1671)

Jiaqi Liao
2025-12-05 22:46:16 +08:00
committed by GitHub
parent 47da806cde
commit 721b6c4c94
6 changed files with 292 additions and 238 deletions


@@ -17,6 +17,7 @@ KTransformers is a research project focused on efficient inference and fine-tuni
## 🔥 Updates
* **Dec 5, 2025**: Support Native Kimi-K2-Thinking inference ([Tutorial](./doc/en/Kimi-K2-Thinking-Native.md))
* **Nov 6, 2025**: Support Kimi-K2-Thinking inference ([Tutorial](./doc/en/Kimi-K2-Thinking.md)) and fine-tune ([Tutorial](./doc/en/SFT_Installation_Guide_KimiK2.md))
* **Nov 4, 2025**: KTransformers Fine-Tuning × LLaMA-Factory Integration. ([Tutorial](./doc/en/KTransformers-Fine-Tuning_User-Guide.md))
* **Oct 27, 2025**: Support Ascend NPU. ([Tutorial](./doc/zh/DeepseekR1_V3_tutorial_zh_for_Ascend_NPU.md))


@@ -17,6 +17,7 @@ KTransformers is a research project focused on efficient large language model inference via CPU-GPU heterogeneous computing
## 🔥 Updates
* **Dec 5, 2025**: Support native Kimi-K2-Thinking inference ([Tutorial](./doc/en/Kimi-K2-Thinking-Native.md))
* **Nov 6, 2025**: Support Kimi-K2-Thinking inference ([Tutorial](./doc/en/Kimi-K2-Thinking.md)) and fine-tuning ([Tutorial](./doc/en/SFT_Installation_Guide_KimiK2.md))
* **Nov 4, 2025**: KTransformers Fine-Tuning × LLaMA-Factory integration ([Tutorial](./doc/en/KTransformers-Fine-Tuning_User-Guide.md))
* **Oct 27, 2025**: Support Ascend NPU ([Tutorial](./doc/zh/DeepseekR1_V3_tutorial_zh_for_Ascend_NPU.md))


@@ -1 +1,216 @@
# Running Kimi-K2-Thinking with SGLang and KT-Kernel
This tutorial demonstrates how to run Kimi-K2 model inference using SGLang integrated with KT-Kernel for CPU-GPU heterogeneous inference. This setup enables efficient deployment of large MoE models by offloading experts to CPU.
## Table of Contents
- [Hardware Requirements](#hardware-requirements)
- [Prerequisites](#prerequisites)
- [Step 1: Download Model Weights](#step-1-download-model-weights)
- [Step 2: Launch SGLang Server](#step-2-launch-sglang-server)
- [Step 3: Send Inference Requests](#step-3-send-inference-requests)
## Hardware Requirements
**Minimum Configuration:**
- **GPU**: NVIDIA RTX 4090 48GB (or equivalent with at least 48GB VRAM available)
- **RAM**: At least 650GB system memory
- **Storage**: ~600GB for model weights (native INT4 weights; the same weight directory serves both CPU and GPU)
**Tested Configuration:**
- **GPU**: 1/2/4/8x NVIDIA RTX 4090/L20 48GB
- **CPU**: 2x Intel(R) Xeon(R) Platinum 8488C
- **RAM**: 2TB DDR5 4800MHz
- **OS**: Linux (Ubuntu 20.04+ recommended)
## Prerequisites
Before starting, ensure you have:
1. **KT-Kernel installed** - Follow the [installation guide](./kt-kernel_intro.md#installation)
2. **SGLang installed** - Follow [SGLang integration steps](./kt-kernel_intro.md#integration-with-sglang)
Note: Currently, please clone our custom SGLang fork and install it from the `kimi_k2` branch:
```bash
git clone https://github.com/kvcache-ai/sglang.git
cd sglang
git checkout kimi_k2
pip install -e "python[all]"
```
3. **CUDA toolkit** - Compatible with your GPU (CUDA 11.8+ recommended)
4. **Hugging Face CLI** - For downloading models:
```bash
pip install huggingface-hub
```
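Before moving on, a quick sanity check that the SGLang fork is importable and that CUDA is visible can save a failed launch later; a minimal sketch:
```bash
# Optional sanity check for the prerequisites above
python -c "import sglang; print('sglang', sglang.__version__)"
python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
nvcc --version   # CUDA toolkit on PATH
```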
## Step 1: Download Model Weights
```bash
# Create a directory for models
mkdir -p /path/to/models
cd /path/to/models
# Download Kimi-K2-Thinking (INT4 for both CPU and GPU)
huggingface-cli download moonshotai/Kimi-K2-Thinking \
--local-dir /path/to/kimi-k2-thinking
```
**Note:** Replace `/path/to/models` with your actual storage path throughout this tutorial.
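Before launching the server, it can help to confirm the download completed; a rough check (the exact shard count varies by release, so treat the numbers as indicative):
```bash
# Rough completeness check for the downloaded weights
ls /path/to/kimi-k2-thinking/config.json
ls /path/to/kimi-k2-thinking/*.safetensors | wc -l   # number of weight shards
du -sh /path/to/kimi-k2-thinking                      # should be on the order of 600GB
```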
## Step 2: Launch SGLang Server
Start the SGLang server with KT-Kernel integration for CPU-GPU heterogeneous inference.
### Launch Command (2x RTX 4090 Example)
```bash
python -m sglang.launch_server \
--host 0.0.0.0 \
--port 30001 \
--model /path/to/kimi-k2-thinking \
--kt-weight-path /path/to/kimi-k2-thinking \
--kt-cpuinfer 96 \
--kt-threadpool-count 2 \
--kt-num-gpu-experts 8 \
--kt-method RAWINT4 \
--kt-gpu-prefill-token-threshold 400 \
--kt-max-deferred-experts-per-token 1 \
--trust-remote-code \
--mem-fraction-static 0.94 \
--served-model-name Kimi-K2-Thinking \
--enable-mixed-chunk \
--tensor-parallel-size 2 \
--enable-p2p-check \
--disable-shared-experts-fusion \
--chunked-prefill-size 65536 \
--max-total-tokens 65536 \
--attention-backend flashinfer
```
It takes about 2~3 minutes to start the server.
See [KT-Kernel Parameters](https://github.com/kvcache-ai/ktransformers/tree/main/kt-kernel#kt-kernel-parameters) for detailed parameter tuning guidelines.
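Because loading takes a few minutes, it is convenient to poll the server until it responds before sending requests. The snippet below assumes the standard SGLang `/health` and `/get_model_info` endpoints; if your build differs, check the server log for the exact routes:
```bash
# Wait until the server answers, then print basic model info
until curl -sf http://localhost:30001/health > /dev/null; do sleep 10; done
curl -s http://localhost:30001/get_model_info
```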
### Key Parameters
| Parameter | Description |
|-----------|-------------|
| `--kt-method RAWINT4` | CPU and GPU use the same INT4 weight. Set `--model` and `--kt-weight-path` to the same directory. |
| `--kt-num-gpu-experts` | Number of experts kept on GPU for decoding. |
| `--kt-gpu-prefill-token-threshold` | Token count threshold for prefill strategy. Below: hybrid CPU+GPU. Above: layerwise GPU prefill. |
| `--chunked-prefill-size` | Maximum tokens per prefill batch. |
| `--max-total-tokens` | Maximum total tokens in KV cache. |
### About `--kt-gpu-prefill-token-threshold`
This parameter controls the prefill strategy:
- **≤ threshold**: Uses hybrid CPU+GPU prefill. No extra VRAM is needed, but performance degrades gradually as the token count increases.
- **> threshold**: Uses layerwise GPU prefill. Throughput scales much better with longer prompts (up to `chunked-prefill-size` tokens per batch), but requires roughly 9GB+ of extra VRAM.
### Troubleshooting OOM
Layerwise prefill requires extra VRAM (roughly 9GB, plus an incremental cost that grows with prefill length). If you encounter OOM, adjust the following parameters to fit your use case and hardware (see the recommended parameters table below):
| Parameter | VRAM Impact |
|-----------|-------------|
| `--kt-num-gpu-experts` | Reduces expert weight VRAM usage |
| `--chunked-prefill-size` | Reduces prefill extra VRAM allocation |
| `--max-total-tokens` | Reduces KV cache VRAM usage |
**Tip:** Test with an input of length `chunked-prefill-size` to verify your configuration won't OOM during prefill.
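One way to follow this tip, assuming the server from the command above is running: build a prompt of roughly the target length and send it once with `max_tokens: 1` so only prefill is exercised. Token counts from repeated words are approximate, so this is a sketch rather than an exact test:
```bash
# Write a request whose prompt is roughly N tokens ("word " is about one token each),
# then send it once; pick N near your chunked-prefill-size but within max-total-tokens.
python3 - <<'EOF'
import json
N = 30000  # adjust toward your chunked-prefill-size
payload = {
    "model": "Kimi-K2-Thinking",
    "max_tokens": 1,
    "messages": [{"role": "user", "content": "word " * N}],
}
with open("prefill_test.json", "w") as f:
    json.dump(payload, f)
EOF
curl -s http://localhost:30001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d @prefill_test.json
```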
### Recommended Parameters
| GPU Config | `kt-num-gpu-experts` | `max-total-tokens` | `chunked-prefill-size` |
|------------|----------------------|---------------------|------------------------|
| 1x RTX 4090 (48GB) | 1 | 32768 | 32768 |
| 2x RTX 4090 (48GB) | 8 | 65536 | 65536 |
| 4x RTX 4090 (48GB) | 30 | 80000 | 65536 |
| 8x RTX 4090 (48GB) | 80 | 100000 | 65536 |
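For reference, the 2x RTX 4090 command above adapted to the single-GPU row of this table would look roughly like the following. Only `--tensor-parallel-size`, `--kt-num-gpu-experts`, `--chunked-prefill-size`, and `--max-total-tokens` change; using `--tensor-parallel-size 1` for one GPU is an assumption and the remaining flags are copied unmodified, so treat this as a starting point rather than a tuned configuration:
```bash
python -m sglang.launch_server \
--host 0.0.0.0 \
--port 30001 \
--model /path/to/kimi-k2-thinking \
--kt-weight-path /path/to/kimi-k2-thinking \
--kt-cpuinfer 96 \
--kt-threadpool-count 2 \
--kt-num-gpu-experts 1 \
--kt-method RAWINT4 \
--kt-gpu-prefill-token-threshold 400 \
--kt-max-deferred-experts-per-token 1 \
--trust-remote-code \
--mem-fraction-static 0.94 \
--served-model-name Kimi-K2-Thinking \
--enable-mixed-chunk \
--tensor-parallel-size 1 \
--enable-p2p-check \
--disable-shared-experts-fusion \
--chunked-prefill-size 32768 \
--max-total-tokens 32768 \
--attention-backend flashinfer
```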
### Performance
The following performance benchmarks were measured with single concurrency at maximum prefill length (32768 tokens):
| GPU Config | Prefill Throughput |
|------------|-------------------|
| 1x RTX 4090 (48GB) | 290 tokens/s |
| 2x RTX 4090 (48GB) | 529 tokens/s |
| 4x RTX 4090 (48GB) | 775 tokens/s |
| 8x RTX 4090 (48GB) | 1060 tokens/s |
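To get a comparable single-request number on your own hardware, one rough approach is to time a long prompt with `max_tokens: 1` and divide the reported `prompt_tokens` by the wall-clock time. This reuses the `prefill_test.json` payload sketched in the OOM section above and includes HTTP and scheduling overhead, so it slightly underestimates the raw kernel throughput:
```bash
# Approximate prefill throughput = prompt_tokens / elapsed seconds for a max_tokens=1 request
START=$(date +%s.%N)
RESP=$(curl -s http://localhost:30001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d @prefill_test.json)
END=$(date +%s.%N)
echo "$RESP" | python3 -c "
import json, sys
resp = json.load(sys.stdin)
tokens = resp['usage']['prompt_tokens']
elapsed = $END - $START
print(f'{tokens} prompt tokens in {elapsed:.1f}s -> {tokens/elapsed:.1f} tokens/s prefill')
"
```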
## Step 3: Send Inference Requests
Once the server is running, you can send inference requests using the OpenAI-compatible API.
### Basic Chat Completion Request
```bash
curl -s http://localhost:30001/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Kimi-K2-Thinking",
"stream": false,
"messages": [
{"role": "user", "content": "hi"}
]
}'
```
### Example Response
```json
{
"id": "cd0905562bf44513947284f80cc5634b",
"object": "chat.completion",
"created": 1764921457,
"model": "Kimi-K2-Thinking",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": " <think> The user says \"hi\". This is a very simple greeting. I should respond in a friendly and helpful manner. Since I'm an AI assistant, I should be professional but approachable.\n\nPossible responses:\n1. \"Hello! How can I help you today?\"\n2. \"Hi there! What can I do for you?\"\n3. \"Hello! It's nice to hear from you. What would you like to talk about?\"\n4. \"Hi! I'm here to assist you with any questions you might have.\"\n\nI think option 1 is the most standard and professional. It's direct, friendly, and opens the door for the user to ask their question. I should keep it concise.\n\nLet me go with: \"Hello! How can I help you today?\" </think> Hello! How can I help you today?",
"reasoning_content": null,
"tool_calls": null
},
"logprobs": null,
"finish_reason": "stop",
"matched_stop": 163586
}
],
"usage": {
"prompt_tokens": 26,
"total_tokens": 189,
"completion_tokens": 163,
"prompt_tokens_details": null,
"reasoning_tokens": 0
},
"metadata": {
"weight_version": "default"
}
}
```
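The endpoint also supports standard OpenAI-style streaming; setting `"stream": true` returns server-sent events so you can watch tokens arrive incrementally:
```bash
curl -sN http://localhost:30001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Kimi-K2-Thinking",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Explain MoE expert offloading in one paragraph."}
    ]
  }'
```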
## Advanced Use Case: Running Claude Code with a Native Kimi-K2-Thinking Local Backend
Add the following flags to the SGLang launch command above to enable tool-calling support:
```bash
--tool-call-parser kimi_k2 --reasoning-parser kimi_k2
```
With these parameters enabled, you can use [claude-code-router](https://github.com/musistudio/claude-code-router) to connect Kimi-K2-Thinking as a local backend for [Claude Code](https://github.com/anthropics/claude-code).
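Before wiring up claude-code-router, you can smoke-test tool calling directly with a standard OpenAI-style `tools` request; the `get_weather` function below is just a placeholder schema. With the parsers enabled, the response's `tool_calls` field should come back populated instead of `null`:
```bash
curl -s http://localhost:30001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Kimi-K2-Thinking",
    "messages": [{"role": "user", "content": "What is the weather in Beijing today?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```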
## Additional Resources
- [KT-Kernel Documentation](../../../kt-kernel/README.md)
- [SGLang GitHub](https://github.com/sgl-project/sglang)
- [Claude Code Router](https://github.com/musistudio/claude-code-router) - Route Claude Code to custom backends


@@ -1,195 +0,0 @@
# Running Kimi-K2-Thinking with SGLang and KT-Kernel
This tutorial demonstrates how to run Kimi-K2 model inference using SGLang integrated with KT-Kernel for CPU-GPU heterogeneous inference. This setup enables efficient deployment of large MoE models by offloading experts to CPU.
## Table of Contents
- [Hardware Requirements](#hardware-requirements)
- [Prerequisites](#prerequisites)
- [Step 1: Download Model Weights](#step-1-download-model-weights)
- [Step 2: Launch SGLang Server](#step-2-launch-sglang-server)
- [Step 3: Send Inference Requests](#step-3-send-inference-requests)
## Hardware Requirements
**Minimum Configuration:**
- **GPU**: NVIDIA RTX 4090 48GB (or equivalent with at least 48GB VRAM available)
- **RAM**: At least 650GB system memory
- **Storage**: ~600GB for model weights (native INT4 weight, same weight dir for CPU and GPU)
**Tested Configuration:**
- **GPU**: 1/2/4/8x NVIDIA RTX 4090/L20 48GB
- **CPU**: 2x Intel(R) Xeon(R) Platinum 8488C
- **RAM**: 2TB DDR5 4800MHz
- **OS**: Linux (Ubuntu 20.04+ recommended)
## Prerequisites
Before starting, ensure you have:
1. **KT-Kernel installed** - Follow the [installation guide](./kt-kernel_intro.md#installation)
2. **SGLang installed** - Follow [SGLang integration steps](./kt-kernel_intro.md#integration-with-sglang)
Note: Currently, please clone our custom SGLang repository:
```
git clone https://github.com/kvcache-ai/sglang.git
git checkout kimi_k2
cd sglang && pip install -e "python[all]"
```
1. **CUDA toolkit** - Compatible with your GPU (CUDA 11.8+ recommended)
2. **Hugging Face CLI** - For downloading models:
```bash
pip install huggingface-hub
```
## Step 1: Download Model Weights
```bash
# Create a directory for models
mkdir -p /path/to/models
cd /path/to/models
# Download Kimi-K2-Thinking (INT4 for both CPU and GPU)
huggingface-cli download moonshotai/Kimi-K2-Thinking \
--local-dir /path/to/kimi-k2-thinking
```
**Note:** Replace `/path/to/models` with your actual storage path throughout this tutorial.
## Step 2: Launch SGLang Server
Start the SGLang server with KT-Kernel integration for CPU-GPU heterogeneous inference.
### Launch Command (2x RTX 4090 Example)
```bash
python -m sglang.launch_server \
--host 0.0.0.0 \
--port 30001 \
--model /path/to/kimi-k2-thinking \
--kt-weight-path /path/to/kimi-k2-thinking \
--kt-cpuinfer 96 \
--kt-threadpool-count 2 \
--kt-num-gpu-experts 8 \
--kt-method RAWINT4 \
--kt-gpu-prefill-token-threshold 400 \
--kt-max-deferred-experts-per-token 1 \
--trust-remote-code \
--mem-fraction-static 0.94 \
--served-model-name Kimi-K2-Thinking \
--enable-mixed-chunk \
--tensor-parallel-size 2 \
--enable-p2p-check \
--disable-shared-experts-fusion \
--chunked-prefill-size 65536 \
--max-total-tokens 65536 \
--attention-backend flashinfer
```
It takes about 2~3 minutes to start the server.
See [KT-Kernel Parameters](https://github.com/kvcache-ai/ktransformers/tree/main/kt-kernel#kt-kernel-parameters) for detailed parameter tuning guidelines.
### Key Parameters
| Parameter | Description |
|-----------|-------------|
| `--kt-method RAWINT4` | CPU and GPU use the same INT4 weight. Set `--model` and `--kt-weight-path` to the same directory. |
| `--kt-num-gpu-experts` | Number of experts kept on GPU for decoding. |
| `--kt-gpu-prefill-token-threshold` | Token count threshold for prefill strategy. Below: hybrid CPU+GPU. Above: layerwise GPU prefill. |
| `--chunked-prefill-size` | Maximum tokens per prefill batch. |
| `--max-total-tokens` | Maximum total tokens in KV cache. |
### About `--kt-gpu-prefill-token-threshold`
This parameter controls the prefill strategy:
- **$\leq$ threshold**: Uses hybrid CPU+GPU prefill. No extra VRAM needed, but performance degrades slowly as token count increases.
- **> threshold**: Uses layerwise GPU prefill. Performance scales exponentially up to `chunked-prefill-size`, but requires 9GB+ extra VRAM.
### Troubleshooting OOM
Layerwise prefill requires extra VRAM (~9GB + incremental cost with prefill length). If you encounter OOM, adjust these parameters based on your use case and hardware (refer to the recommended parameters table below):
| Parameter | VRAM Impact |
|-----------|-------------|
| `--kt-num-gpu-experts` | Reduces expert weight VRAM usage |
| `--chunked-prefill-size` | Reduces prefill extra VRAM allocation |
| `--max-total-tokens` | Reduces KV cache VRAM usage |
**Tip:** Test with an input of length `chunked-prefill-size` to verify your configuration won't OOM during prefill.
### Recommended Parameters
| GPU Config | `kt-num-gpu-experts` | `max-total-tokens` | `chunked-prefill-size` |
|------------|----------------------|---------------------|------------------------|
| 1x RTX 4090 (48GB) | 1 | 32768 | 32768 |
| 2x RTX 4090 (48GB) | 8 | 65536 | 65536 |
| 4x RTX 4090 (48GB) | 30 | 80000 | 65536 |
| 8x RTX 4090 (48GB) | 80 | 100000 | 65536 |
## Step 3: Send Inference Requests
Once the server is running, you can send inference requests using the OpenAI-compatible API.
### Basic Chat Completion Request
```bash
curl -s http://localhost:30001/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Kimi-K2-Thinking",
"stream": false,
"messages": [
{"role": "user", "content": "hi"}
]
}'
```
### Example Response
```json
{
"id": "cd0905562bf44513947284f80cc5634b",
"object": "chat.completion",
"created": 1764921457,
"model": "Kimi-K2-Thinking",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": " <think> The user says \"hi\". This is a very simple greeting. I should respond in a friendly and helpful manner. Since I'm an AI assistant, I should be professional but approachable.\n\nPossible responses:\n1. \"Hello! How can I help you today?\"\n2. \"Hi there! What can I do for you?\"\n3. \"Hello! It's nice to hear from you. What would you like to talk about?\"\n4. \"Hi! I'm here to assist you with any questions you might have.\"\n\nI think option 1 is the most standard and professional. It's direct, friendly, and opens the door for the user to ask their question. I should keep it concise.\n\nLet me go with: \"Hello! How can I help you today?\" </think> Hello! How can I help you today?",
"reasoning_content": null,
"tool_calls": null
},
"logprobs": null,
"finish_reason": "stop",
"matched_stop": 163586
}
],
"usage": {
"prompt_tokens": 26,
"total_tokens": 189,
"completion_tokens": 163,
"prompt_tokens_details": null,
"reasoning_tokens": 0
},
"metadata": {
"weight_version": "default"
}
}
```
## Additional Resources
- [Layerwise Prefill Internals](./layerwise-prefill-internals.md) - Technical details on prefill strategies
- [KT-Kernel Documentation](../../../kt-kernel/README.md)
- [SGLang GitHub](https://github.com/sgl-project/sglang)


@@ -2,26 +2,35 @@
High-performance kernel operations for KTransformers, featuring CPU-optimized MoE inference with AMX, AVX, KML, and BLIS (AMD library) support.
- [Note](#note)
- [Features](#features)
- [Installation](#installation)
- [Prerequisites](#prerequisites)
- [Quick Installation (Recommended)](#quick-installation-recommended)
- [Manual Configuration (Advanced)](#manual-configuration-advanced)
- [Verification](#verification)
- [Integration with SGLang](#integration-with-sglang)
- [Installation Steps](#installation-steps)
- [Complete Example: Qwen3-30B-A3B](#complete-example-qwen3-30b-a3b)
- [KT-Kernel Parameters](#kt-kernel-parameters)
- [Direct Python API Usage](#direct-python-api-usage)
- [Advanced Options](#advanced-options)
- [Build Configuration](#build-configuration)
- [Manual Installation](#manual-installation)
- [Error Troubleshooting](#error-troubleshooting)
- [CUDA Not Found](#cuda-not-found)
- [hwloc Not Found](#hwloc-not-found)
- [Weight Quantization](#weight-quantization)
- [Before Commit!](#before-commit)
- [KT-Kernel](#kt-kernel)
- [Note](#note)
- [Features](#features)
- [Installation](#installation)
- [Prerequisites](#prerequisites)
- [Quick Installation (Recommended)](#quick-installation-recommended)
- [Manual Configuration (Advanced)](#manual-configuration-advanced)
- [Verification](#verification)
- [Integration with SGLang](#integration-with-sglang)
- [Installation Steps](#installation-steps)
- [1. Install SGLang](#1-install-sglang)
- [2. Prepare Weights](#2-prepare-weights)
- [3. Launch SGLang Server](#3-launch-sglang-server)
- [Complete Example: Qwen3-30B-A3B](#complete-example-qwen3-30b-a3b)
- [Option A: AMX Backend (AMXINT8)](#option-a-amx-backend-amxint8)
- [Option B: LLAMAFILE Backend (GGUF)](#option-b-llamafile-backend-gguf)
- [KT-Kernel Parameters](#kt-kernel-parameters)
- [Direct Python API Usage](#direct-python-api-usage)
- [Advanced Options](#advanced-options)
- [Build Configuration](#build-configuration)
- [Manual Installation](#manual-installation)
- [1. Install System Dependencies](#1-install-system-dependencies)
- [2. Set Build Configuration](#2-set-build-configuration)
- [3. Build and Install](#3-build-and-install)
- [Error Troubleshooting](#error-troubleshooting)
- [CUDA Not Found](#cuda-not-found)
- [hwloc Not Found](#hwloc-not-found)
- [Weight Quantization](#weight-quantization)
- [Before Commit!](#before-commit)
## Note
**Current Support Status:**
@@ -301,18 +310,20 @@ python -m sglang.launch_server \
| Parameter | Description | Example Value |
|-----------|-------------|---------------|
| `--kt-method` | CPU inference backend method | `AMXINT4`, `AMXINT8`, or `LLAMAFILE` |
| `--kt-method` | CPU inference backend method | `AMXINT4`, `AMXINT8`, `RAWINT4`, or `LLAMAFILE` |
| `--kt-weight-path` | Path to quantized CPU weights | `/path/to/cpu-weights` |
| `--kt-cpuinfer` | Number of CPU inference threads | `64` (adjust based on CPU cores) |
| `--kt-threadpool-count` | Number of thread pools for parallel execution | `2` (typically 1-4) |
| `--kt-num-gpu-experts` | Number of experts to keep on GPU | `32` (remaining experts go to CPU) |
| `--kt-max-deferred-experts-per-token` | Number of experts per token to defer for pipelined execution | `2` (0 to disable, 1-4 recommended) |
| `--kt-gpu-prefill-token-threshold` | Token count threshold for prefill strategy (RAWINT4 only) | ~`400` |
**Parameter Guidelines:**
- **`kt-method`**: Choose based on your CPU and weight format:
- `AMXINT4`: Best performance on AMX CPUs with INT4 quantized weights (may cause a large accuracy drop for some models, e.g., Qwen3-30B-A3B)
- `AMXINT8`: Higher accuracy with INT8 quantized weights on AMX CPUs
- `RAWINT4`: Native INT4 weights shared by CPU and GPU (AMX backend only, currently supports Kimi-K2-Thinking model). See [Kimi-K2-Thinking Native Tutorial](../doc/en/Kimi-K2-Thinking-Native.md) for details.
- `LLAMAFILE`: GGUF-based backend
- **`kt-cpuinfer`**: Set to the number of **physical CPU cores** (not hyperthreads); one way to read this from `lscpu` is sketched after this list.
@@ -338,6 +349,11 @@ python -m sglang.launch_server \
- `1-4`: Deferred execution (recommended range; good latency/quality balance, requires tuning)
- `5-7`: Highest latency reduction but may introduce noticeable accuracy loss; use with care
- **`kt-gpu-prefill-token-threshold`** (RAWINT4 only): Controls prefill strategy for native INT4 inference:
- **≤ threshold**: Uses hybrid CPU+GPU prefill. No extra VRAM needed, but performance degrades slowly as token count increases.
- **> threshold**: Uses layerwise GPU prefill. Performance scales better with longer sequences, but requires ~9GB+ extra VRAM.
- Only applicable when `--kt-method RAWINT4` is used. Currently supports Kimi-K2-Thinking model only.
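To apply the `kt-cpuinfer` and `kt-threadpool-count` guidelines above, the sketch below reads physical core and socket counts from `lscpu`. Mapping one thread pool per socket is an assumption (it matches the two-socket examples in this commit), not a documented rule:
```bash
# Suggest starting values for --kt-cpuinfer (physical cores) and --kt-threadpool-count (sockets)
CORES_PER_SOCKET=$(lscpu | awk -F: '/^Core\(s\) per socket/ {gsub(/ /, "", $2); print $2}')
SOCKETS=$(lscpu | awk -F: '/^Socket\(s\)/ {gsub(/ /, "", $2); print $2}')
echo "--kt-cpuinfer candidate:         $((CORES_PER_SOCKET * SOCKETS))"
echo "--kt-threadpool-count candidate: $SOCKETS"
```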
## Direct Python API Usage
For standalone usage without SGLang, you can use KT-Kernel directly via Python API:


@@ -2,26 +2,35 @@
A high-performance kernel library for KTransformers, providing efficient CPU MoE inference kernels with support for AMX, AVX, and other backends.
- [Note](#说明)
- [Features](#特性)
- [Installation](#安装)
- [Prerequisites](#先决条件)
- [Quick Installation (Recommended)](#快速安装推荐)
- [Manual Configuration (Advanced)](#手动配置进阶)
- [Verification](#验证安装)
- [Integration with SGLang](#与-sglang-集成)
- [Installation Steps](#安装步骤)
- [Complete Example: Qwen3-30B-A3B](#完整示例qwen3-30b-a3b)
- [KT-Kernel Parameters](#kt-kernel-参数)
- [Direct Python API Usage](#直接使用-python-api)
- [Advanced Options](#高级选项)
- [Build Configuration](#构建配置)
- [Manual Installation](#手动安装)
- [Error Troubleshooting](#错误排查)
- [CUDA Not Found](#找不到-cuda)
- [hwloc Not Found](#找不到-hwloc)
- [Weight Quantization](#权重量化)
- [Before Commit!](#提交前必读)
- [KT-Kernel](#kt-kernel)
- [Note](#说明)
- [Features](#特性)
- [Installation](#安装)
- [Prerequisites](#先决条件)
- [Quick Installation (Recommended)](#快速安装推荐)
- [Manual Configuration (Advanced)](#手动配置进阶)
- [Verification](#验证安装)
- [Integration with SGLang](#与-sglang-集成)
- [Installation Steps](#安装步骤)
- [1. Install SGLang](#1-安装-sglang)
- [2. Prepare Weights](#2-准备权重)
- [3. Launch SGLang Server](#3-启动-sglang-server)
- [Complete Example: Qwen3-30B-A3B](#完整示例qwen3-30b-a3b)
- [Option A: AMX Backend (AMXINT8)](#方案-aamx-后端amxint8)
- [Option B: LLAMAFILE Backend (GGUF)](#方案-bllamafile-后端gguf)
- [KT-Kernel Parameters](#kt-kernel-参数)
- [Direct Python API Usage](#直接使用-python-api)
- [Advanced Options](#高级选项)
- [Build Configuration](#构建配置)
- [Manual Installation](#手动安装)
- [1. Install System Dependencies](#1-安装系统依赖)
- [2. Set Build Configuration](#2-配置构建参数)
- [3. Build and Install](#3-构建并安装)
- [Error Troubleshooting](#错误排查)
- [CUDA Not Found](#找不到-cuda)
- [hwloc Not Found](#找不到-hwloc)
- [Weight Quantization](#权重量化)
- [Before Commit!](#提交前必读)
## Note
@@ -301,18 +310,20 @@ python -m sglang.launch_server \
| Parameter | Description | Example Value |
|------|------|--------|
| `--kt-method` | CPU inference backend type | `AMXINT4`, `AMXINT8`, or `LLAMAFILE` |
| `--kt-method` | CPU inference backend type | `AMXINT4`, `AMXINT8`, `RAWINT4`, or `LLAMAFILE` |
| `--kt-weight-path` | Path to the quantized CPU weights | `/path/to/cpu-weights` |
| `--kt-cpuinfer` | Number of CPU inference threads | `64` (adjust to the CPU core count) |
| `--kt-threadpool-count` | Number of thread pools for parallel execution | `2` (typically 1-4) |
| `--kt-num-gpu-experts` | Number of experts kept on the GPU | `32` (the remaining experts run on the CPU) |
| `--kt-max-deferred-experts-per-token` | Number of experts per token deferred to the CPU (for pipelined execution) | `2` (0 disables; 1-4 recommended) |
| `--kt-gpu-prefill-token-threshold` | Token count threshold for the prefill strategy (RAWINT4 only) | ~`400` |
**Parameter Guidelines:**
- **`kt-method`**: Choose based on CPU capability and weight format:
- `AMXINT4`: Best performance with INT4 quantized weights on AMX CPUs (may cause a large accuracy drop for some models, e.g., Qwen3-30B-A3B)
- `AMXINT8`: Higher-accuracy INT8 quantization on AMX CPUs
- `RAWINT4`: CPU and GPU share native INT4 weights (AMX backend only; currently only the Kimi-K2-Thinking model is supported). See the [Kimi-K2-Thinking Native Tutorial](../doc/en/Kimi-K2-Thinking-Native.md) for details.
- `LLAMAFILE`: General-purpose CPU backend based on AVX2/AVX512; somewhat slower than AMX but more widely applicable
- **`kt-cpuinfer`**: Set to the number of **physical CPU cores** (not hyperthreads).
@@ -338,6 +349,11 @@ python -m sglang.launch_server \
- `1-4`: Recommended range; some experts are deferred to the CPU, giving a good balance between latency and quality (requires per-model tuning)
- `5-7`: Can reduce latency further, but carries a noticeable risk of accuracy loss; use with caution
- **`kt-gpu-prefill-token-threshold`** (RAWINT4 only): Controls the prefill strategy for native INT4 inference:
- **≤ threshold**: Uses hybrid CPU+GPU prefill. No extra VRAM is needed, but performance degrades slowly as the token count grows.
- **> threshold**: Uses layerwise GPU prefill. Better performance on long sequences, but requires about 9GB+ of extra VRAM.
- Only takes effect with `--kt-method RAWINT4`. Currently only the Kimi-K2-Thinking model is supported.
## Direct Python API Usage
For standalone use without SGLang, you can also call KT-Kernel directly through its Python API: