From 666ea0e9831efb714f0ca0b4848dec9bc706d075 Mon Sep 17 00:00:00 2001
From: Kawrakow
Date: Mon, 9 Mar 2026 11:23:39 +0100
Subject: [PATCH] Revise build instructions for ik_llama.cpp

Updated documentation to reflect the change from 'llama.cpp' to
'ik_llama.cpp' and clarified the build instructions.
---
 docs/build.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/docs/build.md b/docs/build.md
index ca7ec83b..fd6b466b 100644
--- a/docs/build.md
+++ b/docs/build.md
@@ -1,5 +1,6 @@
-# Build llama.cpp locally
-Typical build is aimed at CPU + GPU split and requires pre-installation of numerous tools which can bring mess to the configuration of your main OS if you're on Windows. To avoid this, one may make their builds in a virtual machine with Windows 10. For such cases, make sure you have a way to copy files from the VM to the host OS, e.g. via RDP. So, Windows users, consider doing the following actions in a VM.
+# Build ik_llama.cpp locally
+
+`ik_llama.cpp` has a very minimal set of dependencies: `cmake`, a functional C++17 compiler, and, if building with Nvidia GPU support, the CUDA toolkit. All of these are available from the system package manager on Linux. If you are building on Windows and are worried about messing up your main OS, you may consider building in a virtual machine (VM). In that case, make sure you can copy files between the host OS and the VM.
 
 **To get the Code:**
 
@@ -8,7 +9,7 @@ git clone https://github.com/ggerganov/llama.cpp
 cd llama.cpp
 ```
 
-In order to build llama.cpp you have four different options.
+In order to build `ik_llama.cpp` you have four different options.
 
 - Using `make`:
   - On Linux or MacOS:
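
For reviewers, a minimal sketch of the `cmake` build path the revised docs describe. The repository URL and the `-DGGML_CUDA=ON` flag are assumptions not stated in this patch; consult `docs/build.md` in the repository for the authoritative steps.

```shell
# Sketch only: repo URL and CUDA flag are assumptions, not taken from this patch.
git clone https://github.com/ikawrakow/ik_llama.cpp
cd ik_llama.cpp

# CPU-only configure and build in the build/ directory.
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j "$(nproc)"

# With Nvidia GPU support, configure instead with (assumed flag):
#   cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
```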