ikawrakow/ik_llama.cpp
Mirror of https://github.com/ikawrakow/ik_llama.cpp.git
ik_llama.cpp/tests at commit 8e9801d5b0debd80bb2fe9bda4cffefe0e2c8983
Latest commit: 7458f1729d by katsu560 (2023-06-26 19:47:02 +03:00)
tests : fix quantize perf (#1990)
* fix test quantize perf
* avoid the global state
File                    Date                        Last commit
CMakeLists.txt          2023-05-13 15:56:40 +03:00  ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360)
test-double-float.c     2023-03-28 19:48:20 +03:00  all : be more strict about converting float to double (#458)
test-grad0.c            2023-06-24 19:40:18 +03:00  tests : sync test-grad0 from ggml
test-opt.c              2023-05-13 15:56:40 +03:00  ggml : implement backward pass for llama + small training-llama-from-scratch example (#1360)
test-quantize-fns.cpp   2023-06-16 21:23:53 +03:00  build : fix and ignore MSVC warnings (#1889)
test-quantize-perf.cpp  2023-06-26 19:47:02 +03:00  tests : fix quantize perf (#1990)
test-sampling.cpp       2023-06-24 13:15:01 +03:00  llama : fix top-p sampling to match the canonical definition (#1953)
test-tokenizer-0.cpp    2023-06-24 11:47:58 +03:00  llama : make model stateless and context stateful (llama_state) (#1797)