Mirror of https://github.com/kvcache-ai/ktransformers.git (synced 2026-03-14 18:37:23 +00:00)
[docs]: refine README for dpo updates (#1740)
* [docs]: refine dpo tutorial
* [docs]: refine README for dpo updates
* Update doc/en/DPO_tutorial.md
* [docs]: update website doc & refine location

Co-authored-by: ErvinXie <ervinxie@foxmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: ZiWei Yuan <yzwliam@126.com>
@@ -16,7 +16,7 @@
KTransformers is a research project focused on efficient inference and fine-tuning of large language models through CPU-GPU heterogeneous computing. The project has evolved into **two core modules**: [kt-kernel](./kt-kernel/) and [kt-sft](./kt-sft/).

## 🔥 Updates

* **Dec 22, 2025**: Support RL-DPO fine-tuning with LLaMA-Factory. ([Tutorial](./doc/en/SFT/DPO_tutorial.md))
* **Dec 5, 2025**: Support native Kimi-K2-Thinking inference. ([Tutorial](./doc/en/Kimi-K2-Thinking-Native.md))
* **Nov 6, 2025**: Support Kimi-K2-Thinking inference ([Tutorial](./doc/en/Kimi-K2-Thinking.md)) and fine-tuning ([Tutorial](./doc/en/SFT_Installation_Guide_KimiK2.md))
* **Nov 4, 2025**: KTransformers fine-tuning × LLaMA-Factory integration. ([Tutorial](./doc/en/KTransformers-Fine-Tuning_User-Guide.md))
@@ -7,8 +7,9 @@
# Tutorial

- [kt-sft part](en/SFT/README.md)
- [Injection Tutorial](en/SFT/injection_tutorial.md)
- [kt-sft developer tech notes](en/SFT/KTransformers-Fine-Tuning_Developer-Technical-Notes.md)
- [DPO tutorial](en/SFT/DPO_tutorial.md)

<!-- - [Multi-GPU Tutorial](en/multi-gpu-tutorial.md) -->
<!-- - [Use FP8 GPU Kernel](en/fp8_kernel.md) -->
<!-- - [Use AMD GPU](en/ROCm.md) -->
@@ -61,7 +61,7 @@ pip install custom_flashinfer/
## Prepare Models

We use `deepseek-ai/DeepSeek-V2-Lite` as an example here. You can replace it with other models such as Kimi K2.

## How to start
@@ -191,6 +191,8 @@ cpu_infer: 32
chunk_size: 8192
```

We now also support RL-DPO training using the KTransformers backend. See the [DPO Tutorial](../doc/en/SFT/DPO_tutorial.md) for details.
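For orientation, a DPO run with LLaMA-Factory is typically driven by a YAML config along these lines. This is a minimal sketch only: the dataset name, template, and any KTransformers-specific switches are assumptions here and should be taken from the linked tutorial.

```yaml
### Hypothetical LLaMA-Factory DPO config sketch -- verify every key against the DPO tutorial.
model_name_or_path: deepseek-ai/DeepSeek-V2-Lite

stage: dpo                 # preference optimization instead of plain SFT
do_train: true
finetuning_type: lora

dataset: dpo_en_demo       # assumed: a paired preference dataset registered in LLaMA-Factory
template: deepseek         # assumed chat template name

pref_beta: 0.1             # DPO beta: how strongly to stay near the reference model
output_dir: saves/deepseek-v2-lite/dpo
```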
`kt_optimize_rule` controls **placement strategy**. See also [ktransformers/optimize_rules](https://github.com/kvcache-ai/ktransformers/tree/main/ktransformers/optimize/optimize_rules). Naming hints (`*` = wildcard):
| Pattern | Meaning |
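To give a sense of what such a placement rule looks like, here is a sketch in the style of the files in that directory. The regex, operator class, and kwargs are illustrative assumptions, not copied from a shipped rule; consult the linked optimize_rules directory for real ones.

```yaml
# Hypothetical placement rule sketch: match module names by regex,
# then swap in a CPU-side expert operator for generation.
- match:
    name: "^model\\.layers\\..*\\.mlp\\.experts$"   # '*'-style wildcard over all layer indices
  replace:
    class: ktransformers.operators.experts.KTransformersExperts  # assumed operator class
    kwargs:
      generate_device: "cpu"   # keep expert weights on CPU during generation
```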