ikawrakow / ik_llama.cpp
Mirror of https://github.com/ikawrakow/ik_llama.cpp.git, last synced 2026-01-26 17:20:01 +00:00
ik_llama.cpp / ggml at commit 92607d44c4af2618412e0a742f127d5ae2c16f02
Latest commit: 92607d44c4 by Kawrakow
Much better CPU TG performance at long context for GLM-4.5 (#899) ...
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
2025-11-05 10:20:26 +02:00
Name            Last commit                                                          Last updated
..
cmake           Merge mainline llama.cpp (#3)                                        2024-07-27 07:55:01 +02:00
include         Port of Qwen3-VL support from mainline (#883)                        2025-11-04 19:20:54 +02:00
src             Much better CPU TG performance at long context for GLM-4.5 (#899)   2025-11-05 10:20:26 +02:00
.gitignore      Merge mainline llama.cpp (#3)                                        2024-07-27 07:55:01 +02:00
CMakeLists.txt  Adding cmake option to disable CUDA fusion (#902)                    2025-11-05 07:09:27 +02:00