# llama.cpp/examples/lookup
Demonstration of Prompt Lookup Decoding
https://github.com/apoorvumang/prompt-lookup-decoding
The key parameters for lookup decoding are `ngram_min`, `ngram_max`, and `n_draft`. The first two set the minimum and maximum sizes of the n-grams to search for in the prompt; the latter specifies how many subsequent tokens to draft once a match is found.
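For illustration, here is a minimal C++ sketch of that matching step, assuming a flat token context; `draft_from_lookup` and the other names are illustrative, not the actual example's API. It tries the longest n-gram first and drafts the tokens that followed its most recent earlier occurrence:

```cpp
// Minimal sketch of the lookup step, assuming tokens are plain int32_t.
// All names here are illustrative, not the example's actual API.
#include <algorithm>
#include <cstdint>
#include <vector>

using llama_token = int32_t;

// Try n-gram sizes from ngram_max down to ngram_min. If the n-gram ending at
// the current position also occurs earlier in the context, draft up to
// n_draft of the tokens that followed that earlier occurrence.
std::vector<llama_token> draft_from_lookup(
        const std::vector<llama_token> & ctx,
        int ngram_min, int ngram_max, int n_draft) {
    const int n = (int) ctx.size();
    for (int len = ngram_max; len >= ngram_min; --len) {
        if (len > n) {
            continue;
        }
        const llama_token * tail = ctx.data() + n - len; // n-gram ending "now"
        // scan backwards so the most recent earlier occurrence wins
        for (int i = n - len - 1; i >= 0; --i) {
            if (std::equal(tail, tail + len, ctx.data() + i)) {
                const int begin = i + len;                      // first drafted token
                const int end   = std::min(begin + n_draft, n); // stay inside the context
                return { ctx.begin() + begin, ctx.begin() + end };
            }
        }
    }
    return {}; // no match: fall back to regular decoding for this step
}
```

The drafted tokens are then verified in a single batch against the full model, so an incorrect draft costs little while a correct one yields several accepted tokens in one decoding step.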
More info:
https://github.com/ggerganov/llama.cpp/pull/4484
https://github.com/ggerganov/llama.cpp/issues/4226