ik_llama.cpp/examples
g2mt b6bc5eedad Port speculative decoding from upstream to llama-server (#645)
* server : integrate speculative decoding

* server: Fix field names

* server: fix include, whitespace

* fix compile errors in speculative.cpp

* add llama_sampling_sample_and_accept_n to sampling

* finish porting speculative decoding in server

* port functions from common/speculative, common/sampling

* remove arg

* fix function names

* init params_dft to none

* correct value for n_ctx

* prefix kv cache tensors with model name to avoid conflict

* fix call arguments

* fix spec decoding args

* correct slot.id

* use n_max

* port the rest of sampling funcs

* fix func arguments

* slot.id starts at 1?

* Revert "prefix kv cache tensors with model name to avoid conflict"

This reverts commit fbd5dfd866.

* disable draft logging

* disable logging in speculative.cpp

in mainline, these would be LOG_DEBUG, but since ik_llama doesn't support
that, logging is disabled entirely

* add more draft model parameters

* fix

* pass flash_attn

* add speculative params for parity

* set speculative params in launch_slot_with_task instead
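The loop these commits port can be summarized as: a cheap draft model proposes a short run of tokens, the target model verifies them, and the longest prefix the target agrees with is accepted. The sketch below is a toy illustration of that control flow with plain functions standing in for the two models; none of the names are llama.cpp APIs.

```python
# Toy sketch of the speculative decoding loop (greedy acceptance).
# draft_model / target_model are stand-in functions mapping a token
# context to the next token, not llama.cpp calls.

def speculative_decode(draft_model, target_model, prompt, n_draft=4, n_tokens=8):
    out = list(prompt)
    while len(out) - len(prompt) < n_tokens:
        # 1. Draft model proposes n_draft tokens autoregressively (cheap).
        draft, ctx = [], list(out)
        for _ in range(n_draft):
            t = draft_model(ctx)
            draft.append(t)
            ctx.append(t)
        # 2. Target model checks each drafted position; accept the longest
        #    agreeing prefix. On the first mismatch, keep the target's own
        #    token instead (so output always matches greedy target decoding).
        accepted, ctx = [], list(out)
        for t in draft:
            target_t = target_model(ctx)
            if target_t != t:
                accepted.append(target_t)
                break
            accepted.append(t)
            ctx.append(t)
        out.extend(accepted)
    return out[len(prompt):][:n_tokens]

# Toy models: the draft agrees with the target except at every 3rd position.
target = lambda ctx: (len(ctx) * 7) % 5
draft = lambda ctx: target(ctx) if len(ctx) % 3 else (target(ctx) + 1) % 5

print(speculative_decode(draft, target, [0, 1], n_draft=4, n_tokens=8))
```

The key property, preserved by the acceptance rule, is that the output is identical to plain greedy decoding with the target model alone; the draft model only changes how many target evaluations are amortized per step.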
2025-08-16 07:26:44 +03:00