Merge mainline llama.cpp (#3)

* Merging mainline - WIP

* Merging mainline - WIP

AVX2 and CUDA appear to work.
CUDA performance seems slightly (~1-2%) lower, as is so often the case
with llama.cpp/ggml after some "improvements" have been made.

* Merging mainline - fix Metal

* Remove check

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Author:    Kawrakow
Date:      2024-07-27 07:55:01 +02:00
Committed: GitHub
Parent:    afd9fd274e
Commit:    0ceeb11721

612 changed files with 50817 additions and 165936 deletions


spm-headers/ggml-alloc.h Symbolic link
@@ -1 +1 @@
-../ggml-alloc.h
+../ggml/include/ggml-alloc.h


spm-headers/ggml-backend.h Symbolic link
@@ -1 +1 @@
-../ggml-backend.h
+../ggml/include/ggml-backend.h

spm-headers/ggml-metal.h Symbolic link

@@ -0,0 +1 @@
+../ggml/include/ggml-metal.h


spm-headers/ggml.h Symbolic link
@@ -1 +1 @@
-../ggml.h
+../ggml/include/ggml.h


spm-headers/llama.h Symbolic link
@@ -1 +1 @@
-../llama.h
+../include/llama.h