Mirror of https://github.com/ikawrakow/ik_llama.cpp.git (synced 2026-01-26 17:20:01 +00:00)
Merge mainline llama.cpp (#3)
* Merging mainline - WIP

* Merging mainline - WIP

AVX2 and CUDA appear to work. CUDA performance seems slightly (~1-2%) lower, as is so often the case with llama.cpp/ggml after some "improvements" have been made.

* Merging mainline - fix Metal

* Remove check

---------

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
spm-headers/ggml-alloc.h (symbolic link)
@@ -1 +1 @@
-../ggml-alloc.h
+../ggml/include/ggml-alloc.h
spm-headers/ggml-backend.h (symbolic link)
@@ -1 +1 @@
-../ggml-backend.h
+../ggml/include/ggml-backend.h
spm-headers/ggml-metal.h (new symbolic link)
@@ -0,0 +1 @@
+../ggml/include/ggml-metal.h
spm-headers/ggml.h (symbolic link)
@@ -1 +1 @@
-../ggml.h
+../ggml/include/ggml.h
spm-headers/llama.h (symbolic link)
@@ -1 +1 @@
-../llama.h
+../include/llama.h
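
The spm-headers directory exists because a Swift Package Manager target can expose only a single public headers directory; the symlinks let headers that physically live elsewhere in the tree appear there. After the mainline merge moved the real headers into ggml/include/ and include/, only the link targets needed retargeting. Below is a minimal Package.swift sketch of how such a layout is typically consumed; the target name, source list, and compiler flags are illustrative assumptions, not taken from this commit.

// swift-tools-version:5.5
// Minimal sketch, assuming a layout like mainline llama.cpp's after the
// merge; sources and flags here are illustrative, not from this repository.
import PackageDescription

let package = Package(
    name: "llama",
    products: [
        .library(name: "llama", targets: ["llama"]),
    ],
    targets: [
        .target(
            name: "llama",
            path: ".",
            sources: [
                "src/llama.cpp",
                "ggml/src/ggml.c",
                "ggml/src/ggml-alloc.c",
                "ggml/src/ggml-backend.c",
            ],
            // SPM takes one public headers directory per target. The
            // spm-headers symlinks make ggml.h, ggml-alloc.h,
            // ggml-backend.h, ggml-metal.h, and llama.h visible here while
            // the real files live under ggml/include/ and include/.
            publicHeadersPath: "spm-headers",
            cSettings: [
                .unsafeFlags(["-O3"]),
            ]
        )
    ],
    cxxLanguageStandard: .cxx11
)

With this arrangement, retargeting the symlinks (as this commit does) keeps the manifest unchanged when headers move during a merge.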