Mirror of https://github.com/ikawrakow/ik_llama.cpp.git, synced 2026-02-21 21:54:10 +00:00
* Merging mainline - WIP

* Merging mainline - WIP

  AVX2 and CUDA appear to work. CUDA performance seems slightly (~1-2%) lower, as is so often the case with llama.cpp/ggml after some "improvements" have been made.

* Merging mainline - fix Metal

* Remove check

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
7 lines | 108 B | Plaintext
aiohttp~=3.9.3
behave~=1.2.6
huggingface_hub~=0.20.3
numpy~=1.26.4
openai~=1.30.3
prometheus-client~=0.20.0
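The pins above are a Python dependency file for the repository's test tooling (the exact path is not shown here; it is an assumption that it is used like an ordinary requirements.txt). A minimal sketch, assuming the packages were installed into the current environment, that reports the installed version next to each ~= pin so drift is easy to spot; the PINS mapping is transcribed from the file above and only the standard library is used:

# Report installed versions of the pinned packages (sketch, not part of the repo).
from importlib.metadata import version, PackageNotFoundError

PINS = {
    "aiohttp": "~=3.9.3",
    "behave": "~=1.2.6",
    "huggingface_hub": "~=0.20.3",
    "numpy": "~=1.26.4",
    "openai": "~=1.30.3",
    "prometheus-client": "~=0.20.0",
}

for name, pin in PINS.items():
    try:
        # importlib.metadata normalizes '-' vs '_' in distribution names.
        print(f"{name:20s} pinned {pin:10s} installed {version(name)}")
    except PackageNotFoundError:
        print(f"{name:20s} pinned {pin:10s} NOT INSTALLED")

In normal use the packages would simply be installed with pip install -r requirements.txt; the sketch is only a convenience for checking an existing environment against these pins.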