# Running MiniMax-M2.5 with SGLang and KT-Kernel

This tutorial demonstrates how to run MiniMax-M2.5 model inference using SGLang integrated with KT-Kernel for CPU-GPU heterogeneous inference. This setup enables efficient deployment of large MoE models by offloading experts to the CPU.

## Table of Contents

- [Hardware Requirements](#hardware-requirements)
- [Prerequisites](#prerequisites)
- [Step 1: Download Model Weights](#step-1-download-model-weights)
- [Step 2: Launch SGLang Server](#step-2-launch-sglang-server)
- [Step 3: Send Inference Requests](#step-3-send-inference-requests)

## Hardware Requirements

**Minimum Configuration:**

- **GPU**: 2x NVIDIA RTX 4090 (or an equivalent setup with at least 48GB of total VRAM)
- **CPU**: x86 CPU with AVX512BF16 support (e.g., Intel Sapphire Rapids)
- **RAM**: At least 200GB of system memory
- **Storage**: ~200GB for model weights (FP8 weights; the same weight folder is used for both CPU and GPU)

## Prerequisites

Before starting, ensure you have:

1. **KT-Kernel installed**:

   ```bash
   git clone https://github.com/kvcache-ai/ktransformers.git
   cd ktransformers
   git submodule update --init --recursive
   cd kt-kernel && ./install.sh
   ```

2. **SGLang installed** - Install the kvcache-ai fork of SGLang (one of):

   ```bash
   # Option A: One-click install (from the ktransformers root)
   ./install.sh

   # Option B: pip install
   pip install sglang-kt
   ```

   > Note: You may need to reinstall cuDNN: `pip install nvidia-cudnn-cu12==9.16.0.29`

3. **CUDA toolkit** - Compatible with your GPU (CUDA 12.8+ recommended)

4. **Hugging Face CLI** - For downloading models:

   ```bash
   pip install huggingface-hub
   ```

## Step 1: Download Model Weights

```bash
# Create a directory for models
mkdir -p /path/to/models
cd /path/to/models

# Download MiniMax-M2.5 (FP8, used for both CPU and GPU)
huggingface-cli download MiniMaxAI/MiniMax-M2.5 \
  --local-dir /path/to/minimax-m2.5
```

**Note:** Replace `/path/to/models` with your actual storage path throughout this tutorial.

## Step 2: Launch SGLang Server

Start the SGLang server with KT-Kernel integration for CPU-GPU heterogeneous inference.

### Launch Command (4x RTX 4090 Example)

```bash
python -m sglang.launch_server \
  --host 0.0.0.0 \
  --port 30005 \
  --model /path/to/minimax-m2.5 \
  --kt-weight-path /path/to/minimax-m2.5 \
  --kt-cpuinfer 96 \
  --kt-threadpool-count 2 \
  --kt-num-gpu-experts 30 \
  --kt-method FP8 \
  --kt-gpu-prefill-token-threshold 400 \
  --trust-remote-code \
  --mem-fraction-static 0.94 \
  --served-model-name MiniMax-M2.5 \
  --enable-mixed-chunk \
  --tensor-parallel-size 4 \
  --enable-p2p-check \
  --disable-shared-experts-fusion \
  --chunked-prefill-size 32658 \
  --max-total-tokens 50000 \
  --attention-backend flashinfer
```

The server takes about 2-3 minutes to start. See [KT-Kernel Parameters](https://github.com/kvcache-ai/ktransformers/tree/main/kt-kernel#kt-kernel-parameters) for detailed parameter tuning guidelines.

## Step 3: Send Inference Requests

Once the server is running, you can send inference requests using the OpenAI-compatible API.

### Basic Chat Completion Request

```bash
curl -s http://localhost:30005/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "MiniMax-M2.5",
    "stream": false,
    "messages": [
      {"role": "user", "content": "hi, who are you?"}
    ]
  }'
```
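Because the endpoint is OpenAI-compatible, the same request can also be sent from Python with the `openai` client library. The following is a minimal sketch, assuming the server from Step 2 is running on `localhost:30005` and was launched without an `--api-key`, so any placeholder key is accepted:

```python
# Minimal sketch: query the SGLang server via its OpenAI-compatible API.
# Assumes the launch command from Step 2 (port 30005, served model name
# "MiniMax-M2.5"); the API key is a placeholder since no --api-key was set.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30005/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="MiniMax-M2.5",
    messages=[{"role": "user", "content": "hi, who are you?"}],
)
print(response.choices[0].message.content)
```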
### Example Response

```json
{
  "id": "e82360a51dd4465281a2b954d5237a06",
  "object": "chat.completion",
  "created": 1770980318,
  "model": "MiniMax-M2.5",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The user is asking who I am. I should give a brief, friendly introduction about myself.\n\n\nHi there! I'm MiniMax-M2.5, an AI assistant created by MiniMax. I'm here to help you with a wide range of tasks, including:\n\n- Answering questions\n- Writing and editing code\n- Explaining concepts\n- Brainstorming ideas\n- And much more!\n\nHow can I help you today?",
        "reasoning_content": null,
        "tool_calls": null
      },
      "logprobs": null,
      "finish_reason": "stop",
      "matched_stop": 200020
    }
  ],
  "usage": {
    "prompt_tokens": 44,
    "total_tokens": 138,
    "completion_tokens": 94,
    "prompt_tokens_details": null,
    "reasoning_tokens": 0
  },
  "metadata": {
    "weight_version": "default"
  }
}
```
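### Streaming Request

For interactive use, tokens can be streamed as they are generated by setting `stream` to true. Below is a minimal Python sketch under the same assumptions as the example above (server on `localhost:30005`, no API key configured):

```python
# Minimal streaming sketch: print tokens as the server generates them.
# Same assumptions as the non-streaming example (port 30005, placeholder key).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30005/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="MiniMax-M2.5",
    messages=[{"role": "user", "content": "hi, who are you?"}],
    stream=True,
)
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```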