diff --git a/README.md b/README.md
index 9c5047d..2ac1580 100644
--- a/README.md
+++ b/README.md
@@ -13,7 +13,7 @@
 ## 🎯 Overview
 
-KTransformers is a research project focused on efficient inference and fine-tuning of large language models through CPU-GPU heterogeneous computing. The project has evolved into **two core modules**: [kt-kernel](./kt-kernel/) and [kt-sft](./kt-sft/).
+KTransformers is a research project focused on efficient inference and fine-tuning of large language models through CPU-GPU heterogeneous computing. The project has evolved into **two core modules**: [kt-kernel](https://github.com/kvcache-ai/ktransformers/tree/main/kt-kernel/) and [kt-sft](https://github.com/kvcache-ai/ktransformers/tree/main/kt-sft).
 
 ## 🔥 Updates
diff --git a/kt-kernel/README.md b/kt-kernel/README.md
index 4aafb7c..82ce00b 100644
--- a/kt-kernel/README.md
+++ b/kt-kernel/README.md
@@ -28,7 +28,7 @@ High-performance kernel operations for KTransformers, featuring CPU-optimized Mo
 **Current Support Status:**
 - ✅ **Intel CPUs with AMX**: Fully supported (using weights converted to INT4/INT8 format)
 - ✅ **Universal CPU (llamafile backend)**: Supported (using GGUF-format weights)
-- ✅ **AMD CPUs with BLIS**: Supported (for int8 prefill & decode) - [Guide](https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/kt-kernel/amd_blis)
+- ✅ **AMD CPUs with BLIS**: Supported (for int8 prefill & decode) - [Guide](https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/kt-kernel/amd_blis.md)
 - ✅ **Kimi-K2 Native INT4 (RAWINT4)**: Supported on AVX512 CPUs (CPU-GPU shared INT4 weights) - [Guide](https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/kt-kernel/Kimi-K2-Thinking-Native.md)
 - ✅ **FP8 weights (e.g., MiniMax-M2.1)**: Supported on AVX512 CPUs (CPU-GPU shared FP8 weights) - [Guide](https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/kt-kernel/MiniMax-M2.1-Tutorial.md)