ikawrakow / ik_llama.cpp
Mirror of https://github.com/ikawrakow/ik_llama.cpp.git (synced 2026-01-26 17:20:01 +00:00)
ik_llama.cpp / ggml at commit 540a26514fb15366b28a02f78e3cafa9ad1db370
Latest commit: 540a26514f by Kawrakow, 2025-09-05 21:31:02 +02:00
This is very slightly better (#762)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
| Name           | Last commit message                                   | Last commit date          |
|----------------|-------------------------------------------------------|---------------------------|
| cmake          | Merge mainline llama.cpp (#3)                         | 2024-07-27 07:55:01 +02:00 |
| include        | Offload only activated experts to the GPU (#698)      | 2025-09-04 12:22:30 +02:00 |
| src            | This is very slightly better (#762)                   | 2025-09-05 21:31:02 +02:00 |
| .gitignore     | Merge mainline llama.cpp (#3)                         | 2024-07-27 07:55:01 +02:00 |
| CMakeLists.txt | Set default value of GGML_SCHED_MAX_COPIES to 1 (#751) | 2025-09-02 07:04:39 +02:00 |