From 957a6e79119ccb4c41ea4fb346db17ade733c2d8 Mon Sep 17 00:00:00 2001
From: Kawrakow
Date: Fri, 9 May 2025 10:13:25 +0300
Subject: [PATCH] Update README.md

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 46638dd7..ea4144bb 100644
--- a/README.md
+++ b/README.md
@@ -14,6 +14,7 @@ This repository is a fork of [llama.cpp](https://github.com/ggerganov/llama.cpp)
 
 ## Latest News
 
+* May 9 2025: Support for LlaMA-3-Nemotron models added, see [PR 377](https://github.com/ikawrakow/ik_llama.cpp/pull/377)
 * May 7 2025: 🚀 Faster TG for DeepSeek models with GPU or hybrid GPU/CPU inference. See [PR 386](https://github.com/ikawrakow/ik_llama.cpp/pull/386) for details. Caveat: Ampere or newer Nvidia GPU required
 * May 4 2025: 🚀 Significant token generation performance improvement on CUDA with Flash Attention for GQA models. For details and benchmarks see [PR #370](https://github.com/ikawrakow/ik_llama.cpp/pull/370)
 * April 29 2025: Qwen3 support added