ikawrakow/ik_llama.cpp
Mirror of https://github.com/ikawrakow/ik_llama.cpp.git (synced 2026-02-10 00:10:13 +00:00)
ik_llama.cpp/ggml/include at commit ec2ba592b5bafefbce39f7821a68a45f84d7db21

Latest commit: ec2ba592b5 "Command line option to set max. extra VRAM that the scheduler can use" (Iwan Kawrakow, 2025-12-16 18:48:42 +00:00)
File            Last commit                                                             Date
..
ggml-alloc.h    Merge mainline llama.cpp (#3)                                           2024-07-27 07:55:01 +02:00
ggml-backend.h  Command line option to set max. extra VRAM that the scheduler can use  2025-12-16 18:48:42 +00:00
ggml-blas.h     Merge mainline llama.cpp (#3)                                           2024-07-27 07:55:01 +02:00
ggml-cann.h     Merge mainline llama.cpp (#3)                                           2024-07-27 07:55:01 +02:00
ggml-cpp.h      Port mdmd from mainline + Qwen2/2.5-VL support (#798)                   2025-09-27 08:45:29 +02:00
ggml-cuda.h     CUDA: set compute parameters via command line arguments (#910)         2025-11-07 07:11:23 +02:00
ggml-kompute.h  Merge mainline llama.cpp (#3)                                           2024-07-27 07:55:01 +02:00
ggml-metal.h    Merge mainline - Aug 12 2024 (#17)                                      2024-08-12 15:14:32 +02:00
ggml-rpc.h      RPC: support multiple devices including cpu (#1024)                     2025-11-30 18:48:02 +01:00
ggml-sycl.h     Merge mainline llama.cpp (#3)                                           2024-07-27 07:55:01 +02:00
ggml-vulkan.h   Vulkan: a fresh start (#608)                                            2025-07-15 08:03:13 +02:00
ggml.h          Hadamard transforms for K-cache - CPU only (#1033)                      2025-12-04 06:51:11 +01:00
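
These headers form ggml's public C API: ggml.h is the core, and the ggml-*.h files expose backend-specific interfaces (CUDA, Metal, Vulkan, SYCL, BLAS, RPC, and so on). As a quick orientation, below is a minimal sketch of the core ggml.h API; it is illustrative only, assumes the mainline ggml function names (ggml_init, ggml_new_tensor_1d, ggml_add, ggml_new_graph, ggml_build_forward_expand, ggml_graph_compute_with_ctx), and is not taken from this repository; the fork's header may differ in detail.

/*
 * Minimal sketch (assumed mainline ggml API, not from this repo):
 * allocate a context, build a tiny graph c = a + b, compute it on
 * the CPU, and read back the result.
 */
#include <stdio.h>
#include "ggml.h"

int main(void) {
    /* Small arena for tensor data plus graph metadata. */
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    /* Two f32 vectors of length 4, filled with constants. */
    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    ggml_set_f32(a, 1.0f);
    ggml_set_f32(b, 2.0f);

    /* Record the op in a compute graph and evaluate it. */
    struct ggml_tensor * c = ggml_add(ctx, a, b);
    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/1);

    printf("c[0] = %.1f\n", ggml_get_f32_1d(c, 0)); /* expect 3.0 */
    ggml_free(ctx);
    return 0;
}

Compile and link this against the ggml library built from this tree; the backend headers listed above come into play only when building with the corresponding backend enabled.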