diff --git a/README.md b/README.md
index c76cc39c..ea98f291 100644
--- a/README.md
+++ b/README.md
@@ -6,6 +6,9 @@
 This repository is a fork of [llama.cpp](https://github.com/ggerganov/llama.cpp) with better CPU and hybrid GPU/CPU performance, new SOTA quantization types, first-class Bitnet support, better DeepSeek performance via MLA, FlashMLA, fused MoE operations and tensor overrides for hybrid GPU/CPU inference, row-interleaved quant packing, etc.
 
+>[!IMPORTANT]
+>Do not use quantized models from Unsloth that have `_XL` in their name. These are likely to not work with `ik_llama.cpp`.
+
 ## Quickstart
 
 ### Prerequisites