From eefc8cf98df48466ded3345c3d4a09de2e82836f Mon Sep 17 00:00:00 2001
From: ErvinXie
Date: Mon, 8 Dec 2025 19:58:20 +0800
Subject: [PATCH] Update Kimi-K2-Thinking-Native.md (#1684)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 doc/en/Kimi-K2-Thinking-Native.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/en/Kimi-K2-Thinking-Native.md b/doc/en/Kimi-K2-Thinking-Native.md
index ab52bee..b880ab2 100644
--- a/doc/en/Kimi-K2-Thinking-Native.md
+++ b/doc/en/Kimi-K2-Thinking-Native.md
@@ -14,7 +14,7 @@ This tutorial demonstrates how to run Kimi-K2 model inference using SGLang integ
 
 **Minimum Configuration:**
 
 - **GPU**: NVIDIA RTX 4090 48GB (or equivalent with at least 48GB VRAM available)
-- **CPU**: Intel Xeon with AMX support (e.g., Sapphire Rapids)
+- **CPU**: x86 CPU with AVX512 support (e.g., Sapphire Rapids)
 - **RAM**: At least 650GB system memory
 - **Storage**: ~600GB for model weights (native INT4 weight, same weight dir for CPU and GPU)
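
The patch above relaxes the CPU requirement from AMX-capable Intel Xeon to any x86 CPU with AVX512. A minimal sketch of how one might verify the host meets the new requirement, by parsing the `flags` line of Linux `/proc/cpuinfo` for the AVX-512 Foundation flag (`avx512f`); the helper name `has_avx512` is illustrative, not part of the patched doc or SGLang:

```python
def has_avx512(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line lists avx512f (AVX-512 Foundation).

    Illustrative helper; parses text in the /proc/cpuinfo format.
    """
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # Flags are space-separated after the colon, e.g.
            # "flags : fpu vme ... avx2 avx512f avx512dq"
            flags = line.split(":", 1)[1].split()
            if "avx512f" in flags:
                return True
    return False


if __name__ == "__main__":
    # On Linux, check the real machine; elsewhere this file won't exist.
    try:
        with open("/proc/cpuinfo") as f:
            print("AVX512 supported" if has_avx512(f.read())
                  else "AVX512 not supported")
    except FileNotFoundError:
        print("/proc/cpuinfo not available on this platform")
```

AMX (`amx_tile` etc.) appears only on Sapphire Rapids and newer Xeons, while `avx512f` is present on a much wider range of x86 server CPUs, which is why the patched wording broadens the supported hardware.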