From c3cd543d77792042493973d6a4e438e4c0800a6a Mon Sep 17 00:00:00 2001
From: Kawrakow
Date: Tue, 22 Jul 2025 09:01:59 +0200
Subject: [PATCH] Update README.md

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index f4f0ecbe..7b52edb5 100644
--- a/README.md
+++ b/README.md
@@ -6,6 +6,8 @@
 This repository is a fork of [llama.cpp](https://github.com/ggerganov/llama.cpp) with better CPU and hybrid GPU/CPU performance, new SOTA quantization types, first-class Bitnet support,
 better DeepSeek performance via MLA, FlashMLA, fused MoE operations and tensor overrides for hybrid GPU/CPU inference, row-interleaved quant packing, etc.
 
+**NOTE:** Under construction. All links below are broken as they refer to the now suspended `ik_llama.cpp` repository on GitHub.
+
 ## Latest News
 
 ### Model Support