ikawrakow / ik_llama.cpp
Mirror of https://github.com/ikawrakow/ik_llama.cpp.git (synced 2026-02-22 06:04:24 +00:00)
Files: ik_llama.cpp/ggml at commit 68a5b60408b1085d2b2ed5de75e004ee23f8ddb9
Latest commit: 68a5b60408 by Kawrakow: Make Q8_0 KV cache work with mla=2,fa on CUDA (#264)
Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>
Date: 2025-03-18 15:40:47 +01:00
Name            Last commit                                                              Date
cmake           Merge mainline llama.cpp (#3)                                            2024-07-27 07:55:01 +02:00
include         SER - Smart Expert Reduction (#239)                                      2025-03-02 13:47:38 +02:00
src             Make Q8_0 KV cache work with mla=2,fa on CUDA (#264)                     2025-03-18 15:40:47 +01:00
.gitignore      Merge mainline llama.cpp (#3)                                            2024-07-27 07:55:01 +02:00
CMakeLists.txt  Compile time option to use bf16 for quants without MMQ kernels (#261)    2025-03-18 07:37:10 +01:00
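
Note on the latest commit (#264): it makes a Q8_0-quantized KV cache usable together with MLA mode 2 and flash attention on CUDA. A minimal sketch of how one might exercise that combination, assuming the -mla, -fa, and -ctk flags documented in the ik_llama.cpp README; the model path, prompt, and -ngl value are placeholders, not taken from this page, so verify flag spellings against the binary's --help output:

    # Hypothetical invocation: Q8_0 K cache with MLA mode 2 and flash attention on CUDA.
    # /path/to/model.gguf and -ngl 99 are placeholder values for illustration only.
    ./llama-cli -m /path/to/model.gguf -ngl 99 -mla 2 -fa -ctk q8_0 -p "Hello"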