llama.cpp/example/passkey

A passkey retrieval task is an evaluation method used to measure a language model's ability to recall information from long contexts.

See the following PRs for more info:

Usage

make -j && ./llama-passkey -m ./models/llama-7b-v2/ggml-model-f16.gguf --junk 250
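As a rough illustration of what the example evaluates, the sketch below builds a passkey-style prompt: a long block of repetitive filler text with a secret number inserted at a chosen position, followed by a question asking the model to recall that number. The paragraph wording, variable names, and insertion logic are illustrative assumptions, not the example's actual implementation; the --junk flag in the command above presumably controls how much filler text is generated.

// Minimal sketch of a passkey-style prompt, assuming illustrative wording.
#include <cstdio>
#include <string>

int main() {
    const int n_junk   = 250;  // number of junk paragraphs (mirrors --junk 250)
    const int i_insert = 125;  // paragraph index at which the passkey is placed
    const int passkey  = 42;   // the secret number the model must recall

    std::string prompt =
        "There is a pass key hidden in the text below. Remember it.\n\n";

    for (int i = 0; i < n_junk; ++i) {
        if (i == i_insert) {
            prompt += "The pass key is " + std::to_string(passkey) +
                      ". Remember it. " + std::to_string(passkey) +
                      " is the pass key.\n";
        }
        // Junk filler that carries no information about the passkey.
        prompt += "The grass is green. The sky is blue. The sun is yellow. "
                  "Here we go. There and back again.\n";
    }

    prompt += "\nWhat is the pass key?";

    // In the real example the prompt is tokenized and fed to the model;
    // here we only print its size as a sanity check.
    printf("prompt length: %zu characters\n", prompt.size());
    return 0;
}

Increasing the number of junk paragraphs pushes the passkey deeper into the context, so the task becomes a direct test of how well the model retains information across a long window.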