ikawrakow / ik_llama.cpp
Mirror of https://github.com/ikawrakow/ik_llama.cpp.git (synced 2026-03-12 06:50:08 +00:00)
Files (branch: ik/k_cache_hadamard)
Path: ik_llama.cpp / ggml / include
Latest commit: 9c17d5f176 by Kawrakow, "WIP: Hadamard transforms for K-cache", 2025-12-03 14:26:46 +00:00
File             Last commit                                                      Date
ggml-alloc.h     …
ggml-backend.h   Offload only activated experts to the GPU (#698)                 2025-09-04 12:22:30 +02:00
ggml-blas.h      …
ggml-cann.h      …
ggml-cpp.h       Port mdmd from mainline + Qwen2/2.5-VL support (#798)            2025-09-27 08:45:29 +02:00
ggml-cuda.h      CUDA: set compute parameters via command line arguments (#910)   2025-11-07 07:11:23 +02:00
ggml-kompute.h   …
ggml-metal.h     …
ggml-rpc.h       RPC: support multiple devices including cpu (#1024)              2025-11-30 18:48:02 +01:00
ggml-sycl.h      …
ggml-vulkan.h    Vulkan: a fresh start (#608)                                     2025-07-15 08:03:13 +02:00
ggml.h           WIP: Hadamard transforms for K-cache                             2025-12-03 14:26:46 +00:00