Install a pre-built version of llama.cpp
Homebrew
On Mac and Linux, the Homebrew package manager can be used via
brew install llama.cpp
The formula is automatically updated with new llama.cpp releases. More info: https://github.com/ggerganov/llama.cpp/discussions/7668
Nix
On Mac and Linux, the Nix package manager can be used via
nix profile install nixpkgs#llama-cpp
for flake-enabled installs, or
nix-env --file '<nixpkgs>' --install --attr llama-cpp
for non-flake-enabled installs.
This expression is automatically updated within the nixpkgs repo.
Flox
On Mac and Linux, Flox can be used to install llama.cpp within a Flox environment via
flox install llama-cpp
Flox follows the nixpkgs build of llama.cpp.
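The three package-manager routes above can be combined into a single helper that picks whichever manager is available on the current system. This is a minimal sketch, not part of llama.cpp itself; the `pick_install_cmd` function name is hypothetical, and only the three install commands quoted above are taken from this page.

```shell
#!/bin/sh
# Sketch: print the install command for whichever of the three package
# managers from this page (Homebrew, Nix, Flox) is present on the system.
# The helper name pick_install_cmd is hypothetical, not part of llama.cpp.
pick_install_cmd() {
  if command -v brew >/dev/null 2>&1; then
    echo "brew install llama.cpp"
  elif command -v nix >/dev/null 2>&1; then
    echo "nix profile install nixpkgs#llama-cpp"
  elif command -v flox >/dev/null 2>&1; then
    echo "flox install llama-cpp"
  else
    # Fall back to a hint rather than failing outright.
    echo "no supported package manager found; build llama.cpp from source"
  fi
}

pick_install_cmd
```

Running the script prints one line: the matching install command, or the build-from-source hint when none of the three managers is installed.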