From a3d5d53605207785908fb952779400bb11036073 Mon Sep 17 00:00:00 2001
From: Jiaqi Liao <30439460+SkqLiao@users.noreply.github.com>
Date: Fri, 13 Feb 2026 22:35:36 +0800
Subject: [PATCH] Update MiniMax-M2.5.md (#1849)

---
 doc/en/MiniMax-M2.5.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/en/MiniMax-M2.5.md b/doc/en/MiniMax-M2.5.md
index 049a9c3..c378f8d 100644
--- a/doc/en/MiniMax-M2.5.md
+++ b/doc/en/MiniMax-M2.5.md
@@ -14,7 +14,7 @@ This tutorial demonstrates how to run MiniMax-M2.5 model inference using SGLang
 
 **Minimum Configuration:**
 
 - **GPU**: NVIDIA RTX 2x4090 48GB (or equivalent with at least total 48GB VRAM available)
-- **CPU**: x86 CPU with AVX512F support (e.g., Intel Sapphire Rapids)
+- **CPU**: x86 CPU with AVX512BF16 support (e.g., Intel Sapphire Rapids)
 - **RAM**: At least 200GB system memory
 - **Storage**: ~200GB for model weights (FP8 weight, same weight folder for CPU and GPU)
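The patch tightens the CPU requirement from AVX512F to AVX512BF16. A minimal sketch of how a reader could verify the requirement on Linux (not part of the patch; it assumes the kernel exposes the extension under the `avx512_bf16` flag in `/proc/cpuinfo`, which is the standard flag name on recent kernels):

```shell
# Report whether this x86 CPU advertises the AVX512BF16 extension.
# Linux-only: reads the feature flags the kernel lists in /proc/cpuinfo.
if grep -qm1 avx512_bf16 /proc/cpuinfo; then
    echo "AVX512BF16: supported"
else
    echo "AVX512BF16: NOT supported"
fi
```

On an Intel Sapphire Rapids host (the example platform named in the doc) this should print the "supported" line; older AVX512F-only parts such as Skylake-SP will not.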