ikawrakow/ik_llama.cpp
Mirror of https://github.com/ikawrakow/ik_llama.cpp.git (synced 2026-02-24 07:04:11 +00:00)
ik_llama.cpp/ggml at commit f747cbca081c08ce4ec6811b3f5df1b71f129b2a
Latest commit: f747cbca08 "Fix MMQ when running with quantized K cache without FA" by Iwan Kawrakow, 2025-08-08 13:42:03 +03:00
Name            Last commit                                                            Date
cmake           Merge mainline llama.cpp (#3)                                          2024-07-27 07:55:01 +02:00
include         Adding IQ1_KT - 1.75 bpw SOTA quants (#616)                            2025-07-20 10:05:23 +02:00
src             Fix MMQ when running with quantized K cache without FA                 2025-08-08 13:42:03 +03:00
.gitignore      Merge mainline llama.cpp (#3)                                          2024-07-27 07:55:01 +02:00
CMakeLists.txt  Vulkan: add cmake options to build without coopmat(2) support (#674)  2025-08-07 17:26:21 +03:00