Jianwei Dong
027832c590
[feat](kt-kernel): CPU-GPU experts sched ( #1796 )
2026-01-16 17:01:15 +08:00
mrhaoxx
503295fc88
[feat](kt-kernel): refactor convert_cpu_weights.py to support conversion for GLM-4.6V ( #1687 )
Signed-off-by: mrhaoxx <mr.haoxx@gmail.com>
2025-12-09 14:24:41 +08:00
mrhaoxx
637c49c83f
[feat](kt-kernel): support qwen3-vl weight conversion ( #1648 )
2025-11-27 22:29:09 +08:00
ZiWei Yuan
1374b98ee5
[feat](moe_kernel): add amd blis support (int8) ( #1600 )
* [feat]: init amd adaption
* [feat]: add blis support
* [fix]: fix setup and moe kernel wrapper
* [fix](setup.py): support rebuild with cache so that importing kt_kernel works fine
* [feat]: add moe_kernel converter for amd and implement the load method (not yet tested)
* [feat](moe_kernel/moe.hpp): delete unused memory when using save
* [fix](moe_kernel): update PLAIN for pack
* [fix](moe_kernel): rm printf debug
* [fix](moe_kernel): skip gpu experts
* [fix](moe_kernel/moe.hpp): update include memory path
* [feat](moe_kernel/moe.hpp): support expert deferral
* [feat]: finish amd
---------
Co-authored-by: mrhaoxx <mr.haoxx@gmail.com>
2025-11-27 12:08:53 +08:00
DocShotgun
e72a4fb880
[feat](kt-kernel): Add resume arg to CPU weight conversion ( #1630 )
* [feat]: kt-kernel: Add resume arg to CPU weight conversion
* [docs]: kt-kernel: Document resume arg for CPU weight conversion
* [fix]: kt-kernel: Only print resume layer if in use
* [fix]: kt-kernel: Don't log skipped layers when using resume_layer
2025-11-22 12:00:15 +08:00
ZiWei Yuan
aef6672dd8
[docs]: add contributing guide and hooks install ( #1613 )
* [feat]: update kt-kernel hooks and add contribution guide
* [docs]: add contributing guide
* [style]: format the python file and cpp file in kt-kernel
2025-11-15 18:26:49 +08:00
Jiaqi Liao
13b8ddecd9
AMXMoEWrapper -> KTMoEWrapper ( #1604 )
fix import of KTMoEWrapper
2025-11-12 16:34:54 +08:00
ovowei
f854d03bd7
update kt-kernel
2025-11-03 15:19:52 +08:00