ZiWei Yuan
1374b98ee5
[feat](moe_kernel): add amd blis support (int8) (#1600)
...
* [feat]: init amd adaptation
* [feat]: add blis support
* [fix]: fix setup and moe kernel wrapper
* [fix](setup.py): support rebuild with cache; import kt_kernel works fine
* [feat]: add moe_kernel converter for amd and implement the load method (not yet tested)
* [feat](moe_kernel/moe.hpp): free unused memory when saving
* [fix](moe_kernel): update PLAIN for pack
* [fix](moe_kernel): remove printf debug statements
* [fix](moe_kernel): skip gpu experts
* [fix](moe_kernel/moe.hpp): update include memory path
* [feat](moe_kernel/moe.hpp): support expert deferral
* [feat]: finish amd
---------
Co-authored-by: mrhaoxx <mr.haoxx@gmail.com>
2025-11-27 12:08:53 +08:00
Jiaqi Liao
e7d1c1de09
fix(llamafile): resolve deferred experts data race and update README (#1646)
2025-11-26 23:19:37 +08:00
Jiaqi Liao
d483147307
Fix kt-kernel compile issue (#1595)
...
* update install.sh
* fix import issue
* update README
2025-11-11 19:30:27 +08:00
Jiaqi Liao
94c25626dc
Fix kt-kernel for new wrapper (#1588)
...
* update README for kt-kernel
* style: format C++ and Python code in kt-kernel
- Format C++ files: task_queue, ext_bindings, and MoE operators
- Format Python utility modules: amx, llamafile, and loader
- Improve code readability and consistency
2025-11-10 21:47:34 +08:00
Jiaqi Liao
9bc00e587b
Refactor KTMoEWrapper backend (#1587)
...
* universal backend for cpu inference
* expert defer
2025-11-10 20:26:15 +08:00
chenht2022
6fe30af50d
Merge branch 'main' into develop-cht
2025-11-03 14:35:44 +00:00
ovowei
f854d03bd7
update kt-kernel
2025-11-03 15:19:52 +08:00
chenht2022
dd4377b60b
feat: add deferred expert scheduling support
2025-10-31 08:03:37 +00:00
ovowei
28d8663374
fix
2025-10-22 18:14:34 +08:00
Atream
4c5fcf9774
add kt-kernel
2025-10-12 05:13:00 +00:00