1222 Commits

Author SHA1 Message Date
github-actions[bot]
7a4b9b0e87 [build]: sync sglang submodule to f6adb4f473ba9a767cd60237ef8325cfcd97eba9 (#1876)
Co-authored-by: ovowei <80044717+ovowei@users.noreply.github.com>
2026-03-04 17:18:08 +08:00
Jianwei Dong
15c624dcae Fix/sglang kt detection (#1875)
* [feat]: simplify sglang installation with submodule, auto-sync CI, and version alignment

- Add kvcache-ai/sglang as git submodule at third_party/sglang (branch = main)
- Add top-level install.sh for one-click source installation (sglang + kt-kernel)
- Add sglang-kt as hard dependency in kt-kernel/pyproject.toml
- Add CI workflow to auto-sync sglang submodule daily and create PR
- Add CI workflow to build and publish sglang-kt to PyPI
- Integrate sglang-kt build into release-pypi.yml (version.py bump publishes both packages)
- Align sglang-kt version with ktransformers via SGLANG_KT_VERSION env var injection
- Update Dockerfile to use submodule and inject aligned version
- Update all 13 doc files, CLI hints, and i18n strings to reference new install methods

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [build]: bump version to 0.5.2

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [build]: rename PyPI package from kt-kernel to ktransformers

Users can now `pip install ktransformers` to get everything
(sglang-kt is auto-installed as a dependency).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Revert "[build]: rename PyPI package from kt-kernel to ktransformers"

This reverts commit e0cbbf6364.

* [build]: add ktransformers meta-package for PyPI

`pip install ktransformers` now works as a single install command.
It pulls kt-kernel (which in turn pulls sglang-kt).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [fix]: show sglang-kt package version in kt version command

- Prioritize the sglang-kt package version (aligned with ktransformers)
  over sglang's internal __version__
- Update the display name from "sglang" to "sglang-kt"

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* [fix]: improve sglang-kt detection in kt doctor and kt version

Recognize the sglang-kt package name as proof that the kvcache-ai fork is
installed. Previously, both commands fell through to "PyPI (not recommended)"
for non-editable local source installs. Now version.py reuses the centralized
check_sglang_installation() logic.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
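The detection order described above could be sketched as follows. `detect_sglang_version` is a hypothetical illustration, not the project's actual `check_sglang_installation()` helper: it prefers the installed `sglang-kt` distribution version (kept aligned with ktransformers) and only falls back to the module's internal `__version__`:

```python
from importlib import metadata


def detect_sglang_version() -> str:
    # Prefer the sglang-kt distribution version, which is aligned with
    # the ktransformers release (hypothetical helper, for illustration).
    try:
        return metadata.version("sglang-kt")
    except metadata.PackageNotFoundError:
        pass
    # Fall back to the module's internal __version__ if importable.
    try:
        import sglang
        return getattr(sglang, "__version__", "unknown")
    except ImportError:
        return "not installed"


print(detect_sglang_version())
```

Checking the distribution metadata first is what lets a non-editable local source install still be recognized as the kvcache-ai fork.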

* [build]: bump version to 0.5.2.post1

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
v0.5.2.post1
2026-03-04 16:54:48 +08:00
Chen Hongtao
9e69fccb02 [feat]: add mistral moe loader compatibility (#1873)
Co-authored-by: chenht2022 <chenht2022@users.noreply.github.com>
2026-02-28 17:50:23 +08:00
Jianwei Dong
19887e4363 update docker build (#1872) 2026-02-28 10:34:35 +08:00
VYSE V.E.O
20262b2743 Fix Qwen3.5 FP8 load for VL detection (#1857)
* Fix Qwen3.5 FP8 load for VL detection

1. For VL models (Qwen3.5), remap base_key: model.layers.{N} -> model.language_model.layers.{N}

2. Remove the duplicated class BF16SafeTensorLoader(SafeTensorLoader), keeping only the first overridden one.
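The key remap in point 1 could look like the following sketch. `remap_vl_key` is a hypothetical helper, not the loader's actual code; it assumes a simple prefix rewrite on layer keys:

```python
import re


def remap_vl_key(key: str) -> str:
    # For VL models the text decoder lives under model.language_model.*,
    # so plain model.layers.{N} keys are remapped before weight lookup.
    return re.sub(r"^model\.layers\.(\d+)",
                  r"model.language_model.layers.\1", key)


print(remap_vl_key("model.layers.3.mlp.gate.weight"))
```

Keys that already carry the `model.language_model.` prefix are left untouched, since the pattern is anchored at the start of the key.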

* Fix indentation

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
2026-02-26 15:47:22 +08:00
Rin
786987a95f Handle unquoted paths and special characters in model scanner (#1840)
* Handle unquoted paths and special characters in model scanner

* Fix ValueError: capture_output cannot be used with stderr

`capture_output=True` internally sets `stderr=PIPE`, which conflicts
with `stderr=subprocess.DEVNULL`. Replace `capture_output=True` with
explicit `stdout=subprocess.PIPE` to keep stderr suppressed correctly.
Also remove redundant `shell=False` (already the default).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
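A minimal sketch of the conflict and the fix described above (the `["true"]` and `["echo", "hello"]` commands are placeholders for the scanner's real invocation):

```python
import subprocess

# capture_output=True is shorthand for stdout=PIPE and stderr=PIPE, so
# combining it with an explicit stderr raises ValueError before the
# command is even spawned:
try:
    subprocess.run(["true"], capture_output=True, stderr=subprocess.DEVNULL)
except ValueError as exc:
    print("conflict:", exc)

# The fix: request only stdout explicitly, keeping stderr suppressed.
result = subprocess.run(
    ["echo", "hello"],
    stdout=subprocess.PIPE,
    stderr=subprocess.DEVNULL,
)
print(result.stdout.decode().strip())
```

Note that `shell=False` is already the default for `subprocess.run`, which is why the commit could drop it without changing behavior.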

---------

Co-authored-by: ErvinXie <ervinxie@foxmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-26 15:44:45 +08:00
Jianwei Dong
16a8b98f3e support qwen3.5 (#1846) 2026-02-16 15:48:14 +08:00
Jiaqi Liao
411b69bec0 Add GLM-5 Day0 Support update to README (#1851) 2026-02-15 11:37:12 +08:00
Jiaqi Liao
7d9943365a Add MiniMax-M2.5 Day0 support update (#1850)
Added update for MiniMax-M2.5 Day0 support in the README.
2026-02-13 22:49:10 +08:00
Jiaqi Liao
a3d5d53605 Update MiniMax-M2.5.md (#1849) 2026-02-13 22:35:36 +08:00
Jianwei Dong
f0e4fc612b support minimax-m2.5 (#1848) 2026-02-13 19:15:44 +08:00
Oql
1c72b3f5bd fix glm5 docs (#1845) 2026-02-12 02:33:37 +08:00
Oql
7f7aeaeff6 support glm 5 (#1844) 2026-02-12 02:03:32 +08:00
Jiaqi Liao
061fb56382 Update Kimi-K2.5.md (#1838) 2026-02-07 16:38:39 +08:00
Chen Hongtao
d342fb1df6 [docs]: add maintainers list (#1837)
* [docs]: add maintainers list

* Apply suggestion from @Copilot

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: ErvinXie <ervinxie@qq.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2026-02-06 18:28:06 +08:00
Oql
56cbd69ac4 kt-cli enhancement (#1834)
* [feat]: redesign kt run interactive configuration with i18n support

- Redesign kt run with 8-step interactive flow (model selection, inference method, NUMA/CPU, GPU experts, KV cache, GPU/TP selection, parsers, host/port)
- Add configuration save/load system (~/.ktransformers/run_configs.yaml)
- Add i18n support for kt chat (en/zh translations)
- Add universal input validators with auto-retry and Chinese comma support
- Add port availability checker with auto-suggestion
- Add parser configuration (--tool-call-parser, --reasoning-parser)
- Remove tuna command and clean up redundant files
- Fix: variable reference bug in run.py, filter to show only MoE models
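The port availability checker with auto-suggestion could work along these lines. `find_free_port` is a hypothetical illustration, not the CLI's actual implementation; it assumes a simple linear probe upward from the preferred port:

```python
import socket


def find_free_port(preferred: int, max_tries: int = 20) -> int:
    # Try the preferred port first; if it is taken, suggest the next
    # free port by probing upward (illustrative helper).
    for port in range(preferred, preferred + max_tries):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind(("127.0.0.1", port))
                return port
            except OSError:
                continue
    raise RuntimeError(f"no free port in [{preferred}, {preferred + max_tries})")


print(find_free_port(8000))
```

Binding (rather than connecting) is the reliable way to test availability: a successful `bind` proves the port is free for the server to claim.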

* [feat]: unify model selection UI and enable shared experts fusion by default

- Unify kt run model selection table with kt model list display
  * Add Total size, MoE Size, Repo, and SHA256 status columns
  * Use consistent formatting and styling
  * Improve user decision-making with more information

- Enable --disable-shared-experts-fusion by default
  * Change default value from False to True
  * Users can still override with --enable-shared-experts-fusion

* [feat]: improve kt chat with performance metrics and better CJK support

- Add performance metrics display after each response
  * Total time, TTFT (Time To First Token), TPOT (Time Per Output Token)
  * Accurate input/output token counts using model tokenizer
  * Fallback to estimation if tokenizer unavailable
  * Metrics shown in dim style (not prominent)

- Fix Chinese character input issues
  * Replace Prompt.ask() with console.input() for better CJK support
  * Fixes backspace deletion showing half-characters

- Suppress NumPy subnormal warnings
  * Filter "The value of the smallest subnormal" warnings
  * Cleaner CLI output on certain hardware environments
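As a reference for the metrics above: TPOT is conventionally computed over the decode phase only, excluding the first token (whose latency is already captured by TTFT). A sketch, assuming that convention:

```python
def tpot(total_time_s: float, ttft_s: float, output_tokens: int) -> float:
    # Time Per Output Token: decode time divided by the tokens after the
    # first (the first token's latency is accounted for by TTFT).
    return (total_time_s - ttft_s) / max(output_tokens - 1, 1)


# 2.0 s total, 0.5 s TTFT, 16 output tokens -> 1.5 s / 15 tokens
print(round(tpot(2.0, 0.5, 16) * 1000, 1), "ms/token")
```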

* [fix]: correct TTFT measurement in kt chat

- Move start_time initialization before API call
- Previously start_time was set when receiving first chunk, causing TTFT ≈ 0ms
- Now correctly measures time from request sent to first token received
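The fix can be illustrated with a small timing sketch. `measure_ttft` and `fake_stream` are hypothetical stand-ins for the chat client and the streaming API; the point is that the clock starts before the request, not at the first chunk:

```python
import time


def measure_ttft(send_request):
    """Return (first_chunk, ttft_seconds) for a streaming request."""
    start = time.perf_counter()   # clock starts BEFORE the request is sent
    stream = send_request()       # simulated API call returning a generator
    first = next(stream)          # block until the first token arrives
    ttft = time.perf_counter() - start
    return first, ttft


def fake_stream():
    time.sleep(0.05)              # simulated network + prefill latency
    yield "hello"
    yield " world"


chunk, ttft = measure_ttft(fake_stream)
print(chunk, f"{ttft:.3f}s")
```

Starting the clock at the first received chunk instead would collapse TTFT to roughly zero, which is exactly the bug the commit describes.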

* [docs]: add Clawdbot integration guide - an enterprise AI assistant deployment solution for KTransformers

* [docs]: emphasize Kimi K2.5 as the recommended core model, highlighting enterprise-grade inference capability

* [docs]: add a link to the Clawdbot Feishu integration tutorial

* [feat]: improve CLI table display, model verification, and chat experience

- Add sequence number (#) column to all model tables by default
- Filter kt edit to show only MoE GPU models (exclude AMX)
- Extend kt model verify to check *.json and *.py files in addition to weights
- Fix re-verification bug where repaired files caused false failures
- Suppress tokenizer debug output in kt chat token counting

* [fix]: fix CPU core handling.

---------

Co-authored-by: skqliao <skqliao@gmail.com>
2026-02-04 16:44:54 +08:00
Oql
4f64665758 [docs]: add Qwen3 Coder Next Tutorial (#1833) 2026-02-04 16:27:10 +08:00
Oql
c28cfcb26e [fix]: fix k2-moe.hpp load weight (#1830) 2026-02-03 11:28:49 +08:00
Jiaqi Liao
794c04fae4 Revert "[doc]: update kimi_k2.5 doc (#1823)" (#1825)
This reverts commit 2e6506535b.
2026-01-30 16:10:01 +08:00
Oql
ccbb5b1cf8 [docs]: add Clawdbot docs
2026-01-30 15:51:30 +08:00
Jiaqi Liao
2e6506535b [doc]: update kimi_k2.5 doc (#1823) 2026-01-30 15:43:18 +08:00
Jiaqi Liao
db82d99fa6 feat: add fallback expert prefix lookup in loader.py from kimi_k2.5 (#1822) 2026-01-30 14:09:38 +08:00
Jiaqi Liao
edc48aba37 [fix]: fix wrapper import issue (#1819) 2026-01-28 16:31:56 +08:00
Peilin Li
8321d00cc5 Add files via upload (#1814) 2026-01-27 17:44:50 +08:00
Jiaqi Liao
2f6f7f1921 Kimi k2.5 doc (#1812)
* [doc]: add Kimi-K2.5 deploy&sft guide

* [doc]: add Kimi-K2.5 deploy&sft guide
2026-01-27 13:33:25 +08:00
Jiaqi Liao
1da075a3fa Revert "[doc]: add Kimi-K2.5 deploy&sft guide (#1810)" (#1811)
This reverts commit a368140d76.
2026-01-27 10:05:13 +08:00
Jiaqi Liao
a368140d76 [doc]: add Kimi-K2.5 deploy&sft guide (#1810) 2026-01-27 10:02:59 +08:00
Oql
5bd5c8f750 [fix]: fix experts-sched-Tutorial.md (#1808) 2026-01-23 18:06:24 +08:00
Oql
bf4c8a690b Add Native Precision Tutorial, update worker strategy and README.md (#1807) 2026-01-23 18:00:13 +08:00
Jianwei Dong
8652346e69 [fix]: doc (#1805) 2026-01-23 11:17:08 +08:00
SCDESPERTATE
b0f827d2a9 [chore](cuda): explicitly use ele_per_blk var for better readability (#1784) 2026-01-23 11:11:08 +08:00
Jianwei Dong
779bf14556 [doc]: add Experts sched tutorial (#1802)
* Change num gpu experts to gpu expert masks and add eplb statistics

* [feat]: update examples

* [fix]: fix fp8 perchannel

* Delete useless tests

* Delete useless tests

* add experts_sched tutorial

---------

Co-authored-by: ouqingliang <1692110604@qq.com>
v0.5.1
2026-01-22 15:40:07 +08:00
Peilin Li
a4de664e62 Add AutoDL Tutorial (#1801) 2026-01-22 14:52:47 +08:00
ErvinXie
d2305538f7 Modify installation steps in Kimi-K2-Thinking-Native.md (#1800)
Updated installation instructions for sglang repository.
2026-01-21 15:46:21 +08:00
mrhaoxx
b27de4068b [fix]: fix exp_avx512 for act_fn (#1797) 2026-01-20 11:07:22 +08:00
Jianwei Dong
027832c590 [feat](kt-kernel): CPU-GPU experts sched (#1796) 2026-01-16 17:01:15 +08:00
Oql
6277da4c2b support GLM 4.7 (#1791)
2026-01-13 17:36:25 +08:00
watamario15
667030d6e6 [kt-kernel]: Fix ignored build configurations in install.sh and CMakeLists.txt (#1789)
* Correct variable defaults

* Remove CMAKE_BUILD_TYPE setting in CMakeLists
2026-01-12 22:16:19 +08:00
Oql
5edc456749 support Native BF16 format MoE. (#1788)
2026-01-12 14:43:28 +08:00
Oql
ddb957596f Fix moe bug. (#1783)
* [fix]: fix moe.hpp load from file bug.

* [fix]: fix all moe hpp init bug.

* [fix]: fix moe & awq-moe bug.
2026-01-05 17:02:24 +08:00
Oql
dc6394e501 [fix]: fix moe hpp bug. (#1780)
fix moe hpp init bug.
2026-01-04 19:32:56 +08:00
ZiWei Yuan
ad7674a6d5 [ci]: Patch ci (#1772)
* [docs]: add kt-cli doc and update corresponding website

* [feat]: update issue template
2025-12-31 12:10:20 +08:00
Jianwei Dong
6d2d7cb057 bump to 0.5.0.post1 (#1771) v0.5.0.post1 2025-12-30 11:09:54 +08:00
Jianwei Dong
47b1bfcff6 Update release-pypi.yml (#1770) 2025-12-30 10:47:28 +08:00
Jianwei Dong
9adc91714f Remove kt-kernel-cuda, kt-kernel uses the version with cuda (#1769) 2025-12-30 10:23:58 +08:00
ZiWei Yuan
b096b01fbc [docs]: add kt-cli doc and update corresponding website (#1768) 2025-12-29 23:06:22 +08:00
ErvinXie
9539ab91eb Cli (#1765)
* [feat]: add custom option for kt run

* [feat]: depth 3
2025-12-29 15:18:42 +08:00
Jianwei Dong
4b235cdaa4 fix cuda wheel build (#1766) 2025-12-29 12:42:06 +08:00
Jianwei Dong
7c127d9fd0 Update release-pypi.yml (#1764) 2025-12-29 11:48:55 +08:00
Jianwei Dong
559a3ad4ac fix pypi cuda install (#1763) 2025-12-29 11:19:43 +08:00