mirror of https://github.com/kvcache-ai/ktransformers.git, synced 2026-03-15 02:47:22 +00:00
Add roadmap link to README (#1585)
@@ -8,7 +8,7 @@
</p>
<h3>A Flexible Framework for Experiencing Cutting-edge LLM Inference/Fine-tune Optimizations</h3>
-<strong><a href="#-overview">🎯 Overview</a> | <a href="#-kt-kernel---high-performance-inference-kernels">🚀 kt-kernel</a> | <a href="#-kt-sft---fine-tuning-framework">🎓 KT-SFT</a> | <a href="#-citation">🔥 Citation</a> | <a href="https://github.com/kvcache-ai/ktransformers/discussions">💬 Discussion</a> </strong>
+<strong><a href="#-overview">🎯 Overview</a> | <a href="#-kt-kernel---high-performance-inference-kernels">🚀 kt-kernel</a> | <a href="#-kt-sft---fine-tuning-framework">🎓 KT-SFT</a> | <a href="#-citation">🔥 Citation</a> | <a href="https://github.com/kvcache-ai/ktransformers/discussions">💬 Discussion</a> | <a href="https://github.com/kvcache-ai/ktransformers/issues/1582">🚀 Roadmap(2025Q4)</a> </strong>
</div>
## 🎯 Overview
@@ -144,4 +144,4 @@ The original integrated KTransformers framework has been archived to the [`archi
For the original documentation with full quick-start guides and examples, see:
- [archive/README.md](./archive/README.md) (English)
- [archive/README_ZH.md](./archive/README_ZH.md) (中文)