ikawrakow/ik_llama.cpp
Mirror of https://github.com/ikawrakow/ik_llama.cpp.git (synced 2026-03-04 19:10:03 +00:00)
Path: ik_llama.cpp/ggml/include
Commit: fd2a70913cbe6ef84694c1b73b66501233983546
Latest change: Kawrakow, "Make sure we pick the reduced tensor from the right GPU" (2026-02-23 13:46:34 +00:00)
File            Last commit                                                     Date
ggml-alloc.h    Merge mainline llama.cpp (#3)                                   2024-07-27
ggml-backend.h  Make sure we pick the reduced tensor from the right GPU         2026-02-23
ggml-blas.h     Merge mainline llama.cpp (#3)                                   2024-07-27
ggml-cann.h     Merge mainline llama.cpp (#3)                                   2024-07-27
ggml-cpp.h      Port mdmd from mainline + Qwen2/2.5-VL support (#798)           2025-09-27
ggml-cuda.h     CUDA: set compute parameters via command line arguments (#910)  2025-11-07
ggml-kompute.h  Merge mainline llama.cpp (#3)                                   2024-07-27
ggml-metal.h    Merge mainline - Aug 12 2024 (#17)                              2024-08-12
ggml-rpc.h      server: improve speed of speculative decoding (#1119)           2026-01-10
ggml-sycl.h     Merge mainline llama.cpp (#3)                                   2024-07-27
ggml-vulkan.h   Vulkan: a fresh start (#608)                                    2025-07-15
ggml.h          Feat - add kimi 2.5 Vision (#1280)                              2026-02-19
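These headers form ggml's public C API surface: ggml.h declares the core tensor and graph types, ggml-alloc.h and ggml-backend.h handle buffer allocation and backend dispatch, and the remaining per-backend headers (ggml-cuda.h, ggml-metal.h, ggml-vulkan.h, and so on) expose their respective device backends. For orientation, below is a minimal sketch of a CPU-only consumer of ggml.h, assuming the standard ggml C API as carried over from the mainline llama.cpp merge; the compile command, arena size, and file name are illustrative assumptions, not taken from this listing.

    /* demo.c - minimal CPU-only ggml sketch (assumed standard ggml C API,
     * not taken from this repository's docs). Builds a tiny graph that
     * adds two vectors. Hypothetical build line:
     *   cc demo.c -Iggml/include -lggml -lm
     */
    #include <stdio.h>
    #include "ggml.h"

    int main(void) {
        /* Fixed-size arena for tensor metadata and data. */
        struct ggml_init_params params = {
            /* .mem_size   = */ 16 * 1024 * 1024,
            /* .mem_buffer = */ NULL,
            /* .no_alloc   = */ false,
        };
        struct ggml_context * ctx = ggml_init(params);

        /* Two 4-element f32 tensors, filled with constants. */
        struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
        struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
        ggml_set_f32(a, 1.5f);
        ggml_set_f32(b, 2.0f);

        /* Build and run a forward graph computing c = a + b on the CPU. */
        struct ggml_tensor * c  = ggml_add(ctx, a, b);
        struct ggml_cgraph * gf = ggml_new_graph(ctx);
        ggml_build_forward_expand(gf, c);
        ggml_graph_compute_with_ctx(ctx, gf, /* n_threads = */ 1);

        printf("c[0] = %f\n", ggml_get_f32_1d(c, 0)); /* expect 3.5 */
        ggml_free(ctx);
        return 0;
    }

Offloading to one of the listed backends would additionally pull in ggml-backend.h (and a device header such as ggml-cuda.h) to allocate buffers and schedule the graph on the device instead of calling ggml_graph_compute_with_ctx directly.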