ikawrakow/ik_llama.cpp
Mirror of https://github.com/ikawrakow/ik_llama.cpp.git (last synced 2026-02-21 05:34:08 +00:00)
ik_llama.cpp/ggml at commit 6db8dc86caaecea8f84f1ad22600ef71f3830fad

Latest commit e64b43392f (Yurko, 2026-02-06 14:46:59 +00:00): cuda: reduce qwen3next moe/ssm sync overhead and refresh eval
Name            Last commit                                                      Last modified
cmake           Merge mainline llama.cpp (#3)                                    2024-07-27 07:55:01 +02:00
include         qwen3next: add architecture support and recurrent-state fixes    2026-02-06 12:13:09 +00:00
src             cuda: reduce qwen3next moe/ssm sync overhead and refresh eval    2026-02-06 14:46:59 +00:00
.gitignore      Merge mainline llama.cpp (#3)                                    2024-07-27 07:55:01 +02:00
CMakeLists.txt  Remove llamafile remnants (#1179)                                2026-01-22 13:20:23 +02:00