Mirror of https://github.com/ikawrakow/ik_llama.cpp.git (synced 2026-02-27 08:34:09 +00:00)
* Merging mainline - WIP
* Merging mainline - WIP. AVX2 and CUDA appear to work. CUDA performance seems slightly (~1-2%) lower, as is so often the case with llama.cpp/ggml after some "improvements" have been made.
* Merging mainline - fix Metal
* Remove check
---------
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
5 lines | 143 B | Plaintext
-r ../../requirements/requirements-convert_legacy_llama.txt
--extra-index-url https://download.pytorch.org/whl/cpu
pillow~=10.2.0
torch~=2.2.1
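
A quick usage sketch (the file's location within the repo is an assumption inferred from the ../../ prefix of its -r include, which suggests it sits two directories below the repository root; substitute the real path):

    pip install -r examples/llava/requirements.txt  # hypothetical path to this file

The -r line first pulls in the shared requirements-convert_legacy_llama.txt, and --extra-index-url adds PyTorch's CPU-only wheel index so pip can resolve torch~=2.2.1 against CPU builds instead of the much larger CUDA wheels.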