* Merging mainline - WIP

* Merging mainline - WIP

  AVX2 and CUDA appear to work. CUDA performance seems slightly (~1-2%) lower, as is so often the case with llama.cpp/ggml after some "improvements" have been made.

* Merging mainline - fix Metal

* Remove check

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
# These requirements include all dependencies for all top-level python scripts
# for llama.cpp. Avoid adding packages here directly.
#
# Package versions must stay compatible across all top-level python scripts.
#

-r ./requirements/requirements-convert_legacy_llama.txt

-r ./requirements/requirements-convert_hf_to_gguf.txt
-r ./requirements/requirements-convert_hf_to_gguf_update.txt
-r ./requirements/requirements-convert_llama_ggml_to_gguf.txt
-r ./requirements/requirements-convert_lora_to_gguf.txt
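
# Each "-r" line above recursively includes another requirements file, so
# installing this single file pulls in the pinned dependencies for every
# top-level conversion script. A minimal sketch of a typical setup, assuming
# the commands are run from the repository root (where the ./requirements/
# directory lives):
#
#   python3 -m venv venv               # create an isolated environment
#   . venv/bin/activate                # activate it (POSIX shells)
#   pip install -r requirements.txt    # resolves all included files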