* [feat]: simplify sglang installation with submodule, auto-sync CI, and version alignment
- Add kvcache-ai/sglang as git submodule at third_party/sglang (branch = main)
- Add top-level install.sh for one-click source installation (sglang + kt-kernel)
- Add sglang-kt as hard dependency in kt-kernel/pyproject.toml
- Add CI workflow to auto-sync sglang submodule daily and create PR
- Add CI workflow to build and publish sglang-kt to PyPI
- Integrate sglang-kt build into release-pypi.yml (version.py bump publishes both packages)
- Align sglang-kt version with ktransformers via SGLANG_KT_VERSION env var injection
- Update Dockerfile to use submodule and inject aligned version
- Update all 13 doc files, CLI hints, and i18n strings to reference new install methods
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* [build]: bump version to 0.5.2
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* [build]: rename PyPI package from kt-kernel to ktransformers
Users can now `pip install ktransformers` to get everything
(sglang-kt is auto-installed as a dependency).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* Revert "[build]: rename PyPI package from kt-kernel to ktransformers"
This reverts commit e0cbbf6364.
* [build]: add ktransformers meta-package for PyPI
`pip install ktransformers` now works as a single install command.
It pulls kt-kernel (which in turn pulls sglang-kt).
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* [fix]: show sglang-kt package version in kt version command
- Prioritize the sglang-kt package version (aligned with ktransformers)
over sglang's internal __version__
- Update display name from "sglang" to "sglang-kt"
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* [fix]: improve sglang-kt detection in kt doctor and kt version
Recognize the sglang-kt package name as proof that the kvcache-ai fork is installed.
Previously both commands fell through to "PyPI (not recommended)" for
non-editable local source installs. Now version.py reuses the centralized
check_sglang_installation() logic.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* [build]: bump version to 0.5.2.post1
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Running MiniMax-M2.5 with SGLang and KT-Kernel
This tutorial demonstrates how to run MiniMax-M2.5 inference using SGLang integrated with KT-Kernel for CPU-GPU heterogeneous inference. This setup enables efficient deployment of large MoE models by offloading experts to the CPU.
Table of Contents
- Hardware Requirements
- Prerequisites
- Step 1: Download Model Weights
- Step 2: Launch SGLang Server
- Step 3: Send Inference Requests
Hardware Requirements
Minimum Configuration:
- GPU: 2x NVIDIA RTX 4090 (48GB total), or an equivalent setup with at least 48GB of total VRAM
- CPU: x86 CPU with AVX512BF16 support (e.g., Intel Sapphire Rapids)
- RAM: At least 200GB system memory
- Storage: ~200GB for model weights (FP8 weights; the same weight folder is used for both CPU and GPU)
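A quick sanity check that your machine meets these requirements (on Linux, AVX512BF16 support shows up as the avx512_bf16 flag in /proc/cpuinfo):

```bash
# Check for AVX512BF16 support (prints the flag if present)
grep -o 'avx512_bf16' /proc/cpuinfo | head -1

# Check per-GPU VRAM and available system memory
nvidia-smi --query-gpu=name,memory.total --format=csv
free -h
```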
Prerequisites
Before starting, ensure you have:
- KT-Kernel installed:

  ```bash
  git clone https://github.com/kvcache-ai/ktransformers.git
  cd ktransformers
  git submodule update --init --recursive
  cd kt-kernel && ./install.sh
  ```

- SGLang installed - install the kvcache-ai fork of SGLang (one of):

  ```bash
  # Option A: One-click install (from the ktransformers root)
  ./install.sh

  # Option B: pip install
  pip install sglang-kt
  ```

  Note: You may need to reinstall cuDNN:

  ```bash
  pip install nvidia-cudnn-cu12==9.16.0.29
  ```
- CUDA toolkit - Compatible with your GPU (CUDA 12.8+ recommended)
- Hugging Face CLI - For downloading models:

  ```bash
  pip install huggingface-hub
  ```
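To confirm the prerequisites are in place before continuing, a quick check is to verify that the sglang-kt distribution is visible to pip and that the server module used in Step 2 imports:

```bash
# Confirm the kvcache-ai fork is installed (not the upstream sglang PyPI package)
pip show sglang-kt

# Confirm the server entry point used in Step 2 is importable
python -m sglang.launch_server --help | head -5
```

If the kt CLI is on your PATH, the kt doctor and kt version commands mentioned in the changelog above report the same information.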
Step 1: Download Model Weights
```bash
# Create a directory for models
mkdir -p /path/to/models
cd /path/to/models

# Download MiniMax-M2.5 (FP8 for both CPU and GPU)
huggingface-cli download MiniMaxAI/MiniMax-M2.5 \
  --local-dir /path/to/minimax-m2.5
```
Note: Replace /path/to/models with your actual storage path throughout this tutorial.
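Downloads of this size are occasionally interrupted. A minimal sanity check that the weight folder is complete (file names assumed from the standard Hugging Face layout):

```bash
# The folder should contain the model config plus a set of FP8 safetensors shards
ls /path/to/minimax-m2.5/config.json
ls /path/to/minimax-m2.5/*.safetensors | wc -l

# Re-running the download resumes and fetches any missing files
huggingface-cli download MiniMaxAI/MiniMax-M2.5 --local-dir /path/to/minimax-m2.5
```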
Step 2: Launch SGLang Server
Start the SGLang server with KT-Kernel integration for CPU-GPU heterogeneous inference.
Launch Command (4x RTX 4090 Example)
```bash
python -m sglang.launch_server \
  --host 0.0.0.0 \
  --port 30005 \
  --model /path/to/minimax-m2.5 \
  --kt-weight-path /path/to/minimax-m2.5 \
  --kt-cpuinfer 96 \
  --kt-threadpool-count 2 \
  --kt-num-gpu-experts 30 \
  --kt-method FP8 \
  --kt-gpu-prefill-token-threshold 400 \
  --trust-remote-code \
  --mem-fraction-static 0.94 \
  --served-model-name MiniMax-M2.5 \
  --enable-mixed-chunk \
  --tensor-parallel-size 4 \
  --enable-p2p-check \
  --disable-shared-experts-fusion \
  --chunked-prefill-size 32658 \
  --max-total-tokens 50000 \
  --attention-backend flashinfer
```
The server takes about 2-3 minutes to start.
See KT-Kernel Parameters for detailed parameter tuning guidelines.
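Since startup takes a few minutes, it can be convenient to poll the server before sending requests. A simple wait loop against SGLang's /health endpoint:

```bash
# Block until the server answers on /health (Ctrl+C to abort)
until curl -sf http://localhost:30005/health > /dev/null; do
  echo "waiting for server..."
  sleep 10
done
echo "server is up"
```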
Step 3: Send Inference Requests
Once the server is running, you can send inference requests using the OpenAI-compatible API.
Basic Chat Completion Request
```bash
curl -s http://localhost:30005/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "MiniMax-M2.5",
    "stream": false,
    "messages": [
      {"role": "user", "content": "hi, who are you?"}
    ]
  }'
```
Example Response
```json
{
  "id": "e82360a51dd4465281a2b954d5237a06",
  "object": "chat.completion",
  "created": 1770980318,
  "model": "MiniMax-M2.5",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The user is asking who I am. I should give a brief, friendly introduction about myself.\n</think>\n\nHi there! I'm MiniMax-M2.5, an AI assistant created by MiniMax. I'm here to help you with a wide range of tasks, including:\n\n- Answering questions\n- Writing and editing code\n- Explaining concepts\n- Brainstorming ideas\n- And much more!\n\nHow can I help you today?",
        "reasoning_content": null,
        "tool_calls": null
      },
      "logprobs": null,
      "finish_reason": "stop",
      "matched_stop": 200020
    }
  ],
  "usage": {
    "prompt_tokens": 44,
    "total_tokens": 138,
    "completion_tokens": 94,
    "prompt_tokens_details": null,
    "reasoning_tokens": 0
  },
  "metadata": {
    "weight_version": "default"
  }
}
```
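Streaming Request
Because the API is OpenAI-compatible, streaming also works by setting "stream": true, which returns server-sent events (incremental data: chunks) instead of a single JSON body:

```bash
curl -s http://localhost:30005/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "MiniMax-M2.5",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Write a haiku about MoE models."}
    ]
  }'
```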