ikawrakow / ik_llama.cpp
Mirror of https://github.com/ikawrakow/ik_llama.cpp.git
ik_llama.cpp / tests (at commit e6ec947c280cbcbad00710162915c7fcdb5da897)

Latest commit: e6ec947c28 by Georgi Gerganov, "ggml : sync (abort callback, mul / add broadcast, fix alibi)" (#2183), 2023-07-11 22:53:34 +03:00

File                    Last commit                                                            Date
CMakeLists.txt          ggml : change ggml_graph_compute() API to not require context (#1999)  2023-07-07 19:24:01 +03:00
test-double-float.c     all : be more strict about converting float to double (#458)           2023-03-28 19:48:20 +03:00
test-grad0.c            ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183)   2023-07-11 22:53:34 +03:00
test-opt.c              ggml : sync (abort callback, mul / add broadcast, fix alibi) (#2183)   2023-07-11 22:53:34 +03:00
test-quantize-fns.cpp   ggml : generalize quantize_fns for simpler FP16 handling (#1237)       2023-07-05 19:13:06 +03:00
test-quantize-perf.cpp  ggml : generalize quantize_fns for simpler FP16 handling (#1237)       2023-07-05 19:13:06 +03:00
test-sampling.cpp       llama : fix top-p sampling to match the canonical definition (#1953)   2023-06-24 13:15:01 +03:00
test-tokenizer-0.cpp    mpi : add support for distributed inference via MPI (#2099)            2023-07-10 18:49:56 +03:00