ik_llama.cpp/tests
Kawrakow 0ceeb11721 Merge mainline llama.cpp (#3)
* Merging mainline - WIP

* Merging mainline - WIP

AVX2 and CUDA appear to work.
CUDA performance seems slightly (~1-2%) lower, as is so often
the case with llama.cpp/ggml after some "improvements" have been made.

* Merging mainline - fix Metal

* Remove check

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2024-07-27 07:55:01 +02:00