mirror of
https://github.com/kvcache-ai/ktransformers.git
synced 2026-04-20 06:18:59 +00:00
[docs]: update web doc (#1625)
@@ -1,26 +1,13 @@
- [KTransformers Fine-Tuning × LLaMA-Factory Integration – Developer Technical Notes](#ktransformers-fine-tuning-x-llama-factory-integration-–-developer-technical-notes)
- [Introduction](#introduction)
- [Overall View of the KT Fine-Tuning Framework](#overall-view-of-the-kt-fine-tuning-framework)
- [Attention (LoRA + KT coexist)](#attention-lora--kt-coexist)
- [MoE (operator encapsulation + backward)](#moe-operator-encapsulation--backward)
- [Encapsulation](#encapsulation)
- [Backward (CPU)](#backward-cpu)
- [Multi-GPU Loading/Training: Placement strategy instead of DataParallel](#multi-gpu-loadingtraining-placement-strategy-instead-of-dataparallel)
- [KT-LoRA Fine-Tuning Evaluation](#kt-lora-fine-tuning-evaluation)
- [Setup](#setup)
- [Results](#results)
- [Stylized Dialogue (CatGirl tone)](#stylized-dialogue-catgirl-tone)
- [Translational-Style benchmark (generative)](#translational-style-benchmark-generative)
- [Medical Vertical Benchmark (AfriMed-SAQ/MCQ)](#medical-vertical-benchmark-afrimed-saqmcq)
- [Limitations](#limitations)
- [Speed Tests](#speed-tests)
- [End-to-End Performance](#end-to-end-performance)
- [MoE Compute (DeepSeek-V3-671B)](#moe-compute-deepseek-v3-671b)
- [Speed Tests](#speed-tests)
- [Memory Footprint](#memory-footprint)
- [Conclusion](#conclusion)
@@ -36,7 +23,7 @@ This architecture bridges resource gaps, enabling **local fine-tuning of ultra-l

Architecturally, LLaMA-Factory orchestrates data/config/training, LoRA insertion, and inference; KTransformers is a pluggable, high-performance operator backend that takes over Attention and MoE under the same training code, enabling **GPU+CPU heterogeneity** to accelerate training and reduce GPU memory.



We evaluated LoRA fine-tuning with HuggingFace default, Unsloth, and KTransformers backends (same settings and data). **KTransformers** is currently the only solution feasible on **2–4×24GB 4090s** for **671B-scale MoE**, and also shows higher throughput and lower GPU memory for 14B MoEs.

@@ -51,7 +38,7 @@ We evaluated LoRA fine-tuning with HuggingFace default, Unsloth, and KTransforme

The table above shows that, for the 14B model, the KTransformers backend achieves roughly 75% higher throughput than the default HuggingFace solution while using only about one fifth of the GPU memory. For the 671B model, both HuggingFace and Unsloth fail to run on a single 4090 GPU, whereas KTransformers performs LoRA fine-tuning at 40 tokens/s while keeping GPU memory usage within 70 GB.



@@ -68,11 +55,11 @@ KTransformers provides operator injection (`BaseInjectedModule`), and PEFT provi
- **Inheritance:** `KTransformersLinearLora` retains KT’s high-performance paths (`prefill_linear`/`generate_linear`) while accepting LoRA parameters (`lora_A`/`lora_B`).
- **Replacement:** During preparation, we replace original `KTransformersLinear` layers (Q/K/V/O) with `KTransformersLinearLora`, preserving KT optimizations while enabling LoRA trainability.
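The coexistence described above can be sketched framework-agnostically (NumPy here; `LinearWithLoRA` and its attribute names are illustrative stand-ins, not KT's actual classes): the frozen fast path is a plain projection, and only the low-rank pair is trainable.

```python
import numpy as np

class LinearWithLoRA:
    """Minimal sketch of the KTransformersLinearLora idea: a frozen
    fast-path projection plus trainable low-rank lora_A/lora_B.
    The plain matmul stands in for KT's prefill_linear/generate_linear
    kernels; names are illustrative."""

    def __init__(self, weight, rank=8, alpha=32):
        out_f, in_f = weight.shape
        self.weight = weight                              # frozen base weight (out, in)
        self.lora_A = 0.01 * np.random.randn(rank, in_f)  # trainable down-projection
        self.lora_B = np.zeros((out_f, rank))             # zero init -> no-op at start
        self.scale = alpha / rank

    def forward(self, x):
        base = x @ self.weight.T                          # KT fast-path stand-in
        delta = (x @ self.lora_A.T) @ self.lora_B.T       # low-rank LoRA update
        return base + self.scale * delta
```

With `lora_B` zero-initialized, the layer starts as an exact no-op over the base path, which is the standard LoRA warm start.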



After replacement, LoRA is inserted at Q/K/V/O linear transforms (left), and `KTransformersLinearLora` contains both KT fast paths and LoRA matrices (right).


### MoE (operator encapsulation + backward)
@@ -83,13 +70,13 @@ Given large parameters and sparse compute, we encapsulate the expert computation
- **Upstream (PyTorch graph):** we register a custom Autograd Function so the MoE layer appears as **a single node**. In the left figure (red box), only `KSFTExpertsCPU` is visible; on the right, the unencapsulated graph expands routing, dispatch, and FFN experts. Encapsulation makes the MoE layer behave like a standard `nn.Module` with gradients.
- **Downstream (backend):** inside the Autograd Function, pybind11 calls C++ extensions for forward/backward. Multiple **pluggable backends** exist (AMX BF16/INT8; **llamafile**). The backend can be switched via YAML (e.g., `"backend": "AMXBF16"` vs. `"llamafile"`).
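A minimal, framework-agnostic sketch of the single-node view (NumPy; the function name and the reduced one-matmul "FFN" are illustrative only — the real `KSFTExpertsCPU` dispatches to C++ backends via pybind11):

```python
import numpy as np

def moe_forward(x, gate_w, expert_ws, top_k=2):
    """Routing, dispatch, and expert FFNs hidden behind one call,
    the way the custom Autograd Function presents the MoE layer as
    a single graph node. Illustrative sketch only."""
    logits = x @ gate_w.T                            # router scores (tokens, experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]    # top-k expert ids per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        weights = np.exp(logits[t, top[t]])
        weights /= weights.sum()                     # softmax over selected experts
        for w, e in zip(weights, top[t]):
            out[t] += w * np.maximum(x[t] @ expert_ws[e].T, 0.0)  # tiny ReLU "FFN"
    return out
```

In KT itself, which backend sits behind this node is a YAML choice (e.g. `"backend": "AMXBF16"` vs. `"llamafile"`) rather than a Python one.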



#### Backward (CPU)
MoE backward frequently needs the transposed weights $W^\top$. To avoid repeated runtime transposes, we **precompute/cache** $W^\top$ at load time (blue box). We also **cache necessary intermediate activations** (e.g., expert projections, red box) to reuse in backward and reduce recomputation. We provide backward implementations for **llamafile** and **AMX (INT8/BF16)**, with NUMA-aware optimizations.
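A small sketch of the two caching tricks (NumPy; class and attribute names are illustrative, not KT's C++ symbols): the transpose is materialized once at construction, and the forward input is kept for reuse in backward.

```python
import numpy as np

class ExpertLinearWithBackward:
    """Sketch of a backward-ready expert projection: W^T is precomputed
    once at load time instead of being transposed on every backward call,
    and the forward activation is cached for the weight gradient."""

    def __init__(self, weight):
        self.W = weight              # (out, in)
        self.W_T = weight.T.copy()   # precomputed at load, reused every step
        self._cached_x = None

    def forward(self, x):
        self._cached_x = x           # cache activation for backward
        return x @ self.W_T          # x @ W^T without a runtime transpose

    def backward(self, grad_out):
        # dL/dx = grad_out @ W ; dL/dW = grad_out^T @ x (cached activation)
        grad_x = grad_out @ self.W
        grad_W = grad_out.T @ self._cached_x
        return grad_x, grad_W
```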

<img src="../../assets/image-20251016182942726.png" alt="image-20251016182942726" style="zoom:33%;" />

### Multi-GPU Loading/Training: Placement strategy instead of DataParallel
@@ -117,7 +104,7 @@ LLaMA-Factory orchestration, KTransformers backend, LoRA (rank=8, α=32, dropout
Dataset: [NekoQA-10K](https://zhuanlan.zhihu.com/p/1934983798233231689). The fine-tuned model consistently exhibits the target style (red boxes) versus neutral/rational base (blue). This shows **KT-LoRA injects style features** into the generation distribution with low GPU cost.



#### Translational-Style benchmark (generative)
@@ -1,23 +1,13 @@

- [KTransformers Fine-Tuning × LLaMA-Factory Integration – User Guide](#ktransformers-fine-tuning-x-llama-factory-integration-–-user-guide)
- [Introduction](#introduction)
- [Fine-Tuning Results (Examples)](#fine-tuning-results-examples)
- [Stylized Dialogue (CatGirl tone)](#stylized-dialogue-catgirl-tone)
- [Benchmarks](#benchmarks)
- [Translational-Style dataset](#translational-style-dataset)
- [AfriMed-QA (short answer)](#afrimed-qa-short-answer)
- [AfriMed-QA (multiple choice)](#afrimed-qa-multiple-choice)
- [Quick to Start](#quick-to-start)
- [Environment Setup](#environment-setup)
- [Core Feature 1: Use KTransformers backend to fine-tune ultra-large MoE models](#core-feature-1-use-ktransformers-backend-to-fine-tune-ultra-large-moe-models)
- [Core Feature 2: Chat with the fine-tuned model (base + LoRA adapter)](#core-feature-2-chat-with-the-fine-tuned-model-base--lora-adapter)
- [Core Feature 3: Batch inference + metrics (base + LoRA adapter)](#core-feature-3-batch-inference--metrics-base--lora-adapter)
- [KT Fine-Tuning Speed (User-Side View)](#kt-fine-tuning-speed-user-side-view)
- [End-to-End Performance](#end-to-end-performance)
- [GPU/CPU Memory Footprint](#gpucpu-memory-footprint)
- [Conclusion](#conclusion)

@@ -33,7 +23,7 @@ Our goal is to give resource-constrained researchers a **local path to explore f
As shown below, LLaMA-Factory is the unified orchestration/configuration layer for the whole fine-tuning workflow—handling data, training scheduling, LoRA injection, and inference interfaces. **KTransformers** acts as a pluggable high-performance backend that takes over core operators like Attention/MoE under the same training configs, enabling efficient **GPU+CPU heterogeneous cooperation**.



Within LLaMA-Factory, we compared LoRA fine-tuning with **HuggingFace**, **Unsloth**, and **KTransformers** backends. KTransformers is the **only workable 4090-class solution** for ultra-large MoE models (e.g., 671B) and also delivers higher throughput and lower GPU memory on smaller MoE models (e.g., DeepSeek-14B).
@@ -46,7 +36,7 @@ Within LLaMA-Factory, we compared LoRA fine-tuning with **HuggingFace**, **Unslo
† **1400 GB** is a **theoretical** FP16 full-parameter resident footprint (not runnable). **70 GB** is the **measured peak** with KT strategy (Attention on GPU + layered MoE offload).
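The theoretical figure follows from simple arithmetic: 671B parameters at 2 bytes each in FP16 is about 1.34 TB of resident weights before any optimizer state or activation buffers, which lands near the ~1400 GB cited (the parameter count below is assumed from the model name):

```python
params = 671e9            # assumed from the model name DeepSeek-V3-671B
bytes_per_param = 2       # FP16
fp16_gb = params * bytes_per_param / 1e9
# ~1342 GB of weights alone; optimizer state and buffers push the
# theoretical resident footprint toward the ~1400 GB figure above.
```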



### Fine-Tuning Results (Examples)
@@ -56,7 +46,7 @@ Dataset: [NekoQA-10K](https://zhuanlan.zhihu.com/p/1934983798233231689). Goal: i
The figure compares responses from the base vs. fine-tuned models. The fine-tuned model maintains the target tone and address terms more consistently (red boxes), validating the effectiveness of **style-transfer fine-tuning**.



#### Benchmarks
@@ -219,7 +209,7 @@ We recommend **AMX acceleration** where available (`lscpu | grep amx`). AMX supp
Outputs go to `output_dir` in safetensors format plus adapter metadata for later loading.



### Core Feature 2: Chat with the fine-tuned model (base + LoRA adapter)
@@ -244,7 +234,7 @@ We also support **GGUF** adapters: for safetensors, set the **directory**; for G
During loading, LLaMA-Factory maps layer names to KT’s naming. You’ll see logs like `Loaded adapter weight: XXX -> XXX`:



### Core Feature 3: Batch inference + metrics (base + LoRA adapter)
@@ -4,10 +4,16 @@
## TL;DR
This tutorial walks through injecting custom operators into a model with the KTransformers framework, using the DeepSeekV2-Chat model as the running example. It covers the following topics:
- [TL;DR](#tldr)
- [How to Write Injection Rules](#how-to-write-injection-rules)
- [Understanding Model Structure](#understanding-model-structure)
- [Matrix Absorption-based MLA Injection](#matrix-absorption-based-mla-injection)
- [Injection of Routed Experts](#injection-of-routed-experts)
- [Injection of Linear Layers](#injection-of-linear-layers)
- [Injection of Modules with Pre-calculated Buffers](#injection-of-modules-with-pre-calculated-buffers)
- [Specifying Running Devices for Modules](#specifying-running-devices-for-modules)
- [Multi-GPU](#muti-gpu)
- [How to Write a New Operator and Inject into the Model](#how-to-write-a-new-operator-and-inject-into-the-model)

## How to Write Injection Rules
The basic form of the injection rules for the Inject framework is as follows:
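In outline, a rule has a `match` block (a regex over module names, optionally plus a module class) and a `replace` block naming the injected operator and its keyword arguments. The sketch below is illustrative only; the class path and keys are assumptions, not the exact schema:

```yaml
- match:
    name: "^model\\.layers\\..*\\.self_attn$"   # regex over module paths
  replace:
    class: ktransformers.operators.attention.KDeepseekV2Attention  # injected operator
    kwargs:
      generate_device: "cuda"
      prefill_device: "cuda"
```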
@@ -38,7 +44,7 @@ Using [deepseek-ai/DeepSeek-V2-Lite-Chat](https://huggingface.co/deepseek-ai/Dee
Fortunately, learning a model's structure is simple. Open the file list on the [deepseek-ai/DeepSeek-V2-Lite](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite-Chat/tree/main) homepage, and you can see the following files:

<p align="center">
<picture>
<img alt="Inject-Struction" src="../../assets/model_structure_guild.png" width=60%>
</picture>
</p>

@@ -48,7 +54,7 @@ From the `modeling_deepseek.py` file, we can see the specific implementation of
The structure of the DeepSeekV2 model from the `.safetensors` and `modeling_deepseek.py` files is as follows:
<p align="center">
<picture>
<img alt="Inject-Struction" src="../../assets/deepseekv2_structure.png" width=60%>
</picture>
</p>

@@ -171,7 +177,7 @@ DeepseekV2-Chat got 60 layers, if we got 2 GPUs, we can allocate 30 layers to ea
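Such a split can be expressed as two match rules keyed on the layer index, each pinning its range of layers to one device. The regexes and keys below are illustrative, not the exact rules from the repository:

```yaml
# layers 0-29 on GPU 0
- match:
    name: "^model\\.layers\\.([0-9]|[12][0-9])\\."
  replace:
    class: "default"
    kwargs:
      generate_device: "cuda:0"
      prefill_device: "cuda:0"
# layers 30-59 on GPU 1
- match:
    name: "^model\\.layers\\.(3[0-9]|4[0-9]|5[0-9])\\."
  replace:
    class: "default"
    kwargs:
      generate_device: "cuda:1"
      prefill_device: "cuda:1"
```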
<p align="center">
<picture>
<img alt="Inject-Struction" src="../../assets/multi_gpu.png" width=60%>
</picture>
</p>
