Mirror of https://github.com/kvcache-ai/ktransformers.git (synced 2026-05-02 04:01:40 +00:00)
[feat]: simplify sglang installation with submodule, auto-sync CI, and version alignment

- Add kvcache-ai/sglang as git submodule at third_party/sglang (branch = main)
- Add top-level install.sh for one-click source installation (sglang + kt-kernel)
- Add sglang-kt as a hard dependency in kt-kernel/pyproject.toml
- Add CI workflow to auto-sync the sglang submodule daily and create a PR
- Add CI workflow to build and publish sglang-kt to PyPI
- Integrate the sglang-kt build into release-pypi.yml (a version.py bump publishes both packages)
- Align the sglang-kt version with ktransformers via SGLANG_KT_VERSION env var injection
- Update Dockerfile to use the submodule and inject the aligned version
- Update all 13 doc files, CLI hints, and i18n strings to reference the new install methods

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
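The version-alignment mechanism described above can be sketched roughly as follows. This is a minimal illustration, not the repository's actual CI logic: the file path, the `__version__` string format, and the temporary-directory setup are all assumptions made for the sake of a self-contained example; only the `SGLANG_KT_VERSION` variable name comes from the commit message.

```shell
# Hedged sketch of SGLANG_KT_VERSION injection: read the version from a
# single source of truth (a version.py) and export it so the sglang-kt
# build picks up the same number as ktransformers.
# The path and version string below are stand-ins, not the real repo layout.
tmp=$(mktemp -d)
printf '__version__ = "0.3.1"\n' > "$tmp/version.py"

# Extract the quoted version string and export it for the build step.
SGLANG_KT_VERSION=$(sed -n 's/__version__ = "\(.*\)"/\1/p' "$tmp/version.py")
export SGLANG_KT_VERSION
echo "sglang-kt will be published as version $SGLANG_KT_VERSION"
```

With this shape, bumping version.py and re-running the release workflow publishes both packages under one matching version, which is the stated goal of the release-pypi.yml integration.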
@@ -42,17 +42,17 @@ This tutorial demonstrates how to run MiniMax-M2.1 model inference using SGLang
 Before starting, ensure you have:
 
 1. **SGLang installed**
 
-   Note: Currently, please clone our custom SGLang repository:
+   Install the kvcache-ai fork of SGLang (one of):
 
-   ```bash
-   git clone https://github.com/kvcache-ai/sglang.git
-   cd sglang
-   pip install -e "python[all]"
-   ```
-
-   You can follow [SGLang integration steps](https://docs.sglang.io/get_started/install.html)
+   ```bash
+   # Option A: One-click install (from ktransformers root)
+   ./install.sh
+
+   # Option B: pip install
+   pip install sglang-kt
+   ```
 
 2. **KT-Kernel installed**
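Whichever install path is used, a quick import check confirms SGLang is available before moving on to the KT-Kernel step. This is a hedged sketch: it assumes the sglang-kt wheel installs the importable `sglang` Python package, which the diff above does not state explicitly.

```shell
# Check whether the sglang Python package is importable (assumption:
# sglang-kt provides it). On failure, fall back to either install option.
if python -c "import sglang" >/dev/null 2>&1; then
  SGLANG_STATUS="installed"
else
  SGLANG_STATUS="missing"   # run ./install.sh, or: pip install sglang-kt
fi
echo "sglang: $SGLANG_STATUS"
```

A check like this is cheap to run and avoids discovering a broken install only at server-launch time.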