ikawrakow / ik_llama.cpp
mirror of https://github.com/ikawrakow/ik_llama.cpp.git (synced 2026-02-03 13:04:59 +00:00)
Browsing ik_llama.cpp/src at commit 5c9e48041e0126fd07f039f304ea8521ea5dc48c

Latest commit: 45bbc08e53 by Kawrakow, "WIP: absorb adding input into std_attn and std_ffn", 2025-12-21 06:47:23 +00:00
| File | Last commit | Date |
|------|-------------|------|
| CMakeLists.txt | … | |
| llama-arch.cpp | … | |
| llama-arch.h | … | |
| llama-build-context.cpp | WIP: absorb adding input into std_attn and std_ffn | 2025-12-21 06:47:23 +00:00 |
| llama-build-context.h | WIP: absorb adding input into std_attn and std_ffn | 2025-12-21 06:47:23 +00:00 |
| llama-context.h | … | |
| llama-cparams.h | add split-mode-graph-scheduling parameter (#1068) | 2025-12-17 07:58:19 +01:00 |
| llama-grammar.cpp | … | |
| llama-grammar.h | … | |
| llama-hparams.cpp | … | |
| llama-hparams.h | … | |
| llama-impl.h | … | |
| llama-load-tensors.cpp | Use actual active number of layers when preparing splits (#1065) | 2025-12-14 07:44:13 +01:00 |
| llama-mmap.cpp | … | |
| llama-mmap.h | … | |
| llama-model-loader.cpp | … | |
| llama-model-loader.h | … | |
| llama-model.cpp | … | |
| llama-model.h | Be able to set a max. number of GPUs to be used in split mode graph (#1051) | 2025-12-11 07:22:53 +01:00 |
| llama-quantize.cpp | … | |
| llama-sampling.cpp | … | |
| llama-sampling.h | … | |
| llama-vocab.cpp | … | |
| llama-vocab.h | … | |
| llama.cpp | add split-mode-graph-scheduling parameter (#1068) | 2025-12-17 07:58:19 +01:00 |
| unicode-data.cpp | … | |
| unicode-data.h | … | |
| unicode.cpp | … | |
| unicode.h | … | |