Fix truncated logprobs when streaming is off (#998)

The logic to skip the logprobs of the stop token was originally from
ggml-org/llama.cpp#2849, and was later modified as part of
ggml-org/llama.cpp#10643 to be applied only to STOP_TYPE_WORD.

The latter change wasn't included in #723. Then, after #958 was merged,
the skip logic was inadvertently applied to GLM-4.5/4.6 and Kimi K2,
resulting in truncated logprobs when streaming is off.

This commit reverts the logic from ggml-org/llama.cpp#2849, so that the
logprobs of the stop token are always included in the response when
logprobs is enabled. From testing, this matches the behavior of the
Fireworks inference server for both the chat completions and text
completions endpoints.

Also fix logprobs param handling for the text completion endpoint.
Author: Yap Sok Ann
Date: 2025-11-24 12:52:15 +07:00
Committed by: GitHub
Parent: 15695a0617
Commit: 7505165dee

2 changed files with 9 additions and 12 deletions


@@ -2646,18 +2646,9 @@ struct server_context {
     // populate res.probs_output
     if (slot.sparams.n_probs > 0) {
-        if (!slot.params.stream && slot.stopped_word) {
-            const std::vector<llama_token> stop_word_toks = llama_tokenize(ctx, slot.stopping_word, false);
-            size_t safe_offset = std::min(slot.generated_token_probs.size(), stop_word_toks.size());
-            res.probs_output = std::vector<completion_token_output>(
-                slot.generated_token_probs.begin(),
-                slot.generated_token_probs.end() - safe_offset);
-        } else {
-            res.probs_output = std::vector<completion_token_output>(
-                slot.generated_token_probs.begin(),
-                slot.generated_token_probs.end());
-        }
+        res.probs_output = std::vector<completion_token_output>(
+            slot.generated_token_probs.begin(),
+            slot.generated_token_probs.end());
         res.data["completion_probabilities"] = probs_vector_to_json(ctx, res.probs_output);
     }
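For illustration, a minimal standalone sketch of this hunk's effect. TokenProb, probs_output_old and probs_output_new are hypothetical stand-ins for completion_token_output and the server code above, not the actual implementation: with a stop word that tokenizes to 2 tokens and 8 generated tokens, the pre-revert path returned only 6 probability entries, while the reverted path returns all 8.

#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for the server's completion_token_output.
struct TokenProb {
    int   token;
    float logprob;
};

// Pre-revert behavior: drop the trailing entries that correspond to the
// tokenized stopping word (safe_offset), so the stop token's logprobs are
// missing from the non-streaming response.
static std::vector<TokenProb> probs_output_old(const std::vector<TokenProb>& generated,
                                               std::size_t n_stop_word_tokens) {
    const std::size_t safe_offset = std::min(generated.size(), n_stop_word_tokens);
    return std::vector<TokenProb>(generated.begin(), generated.end() - safe_offset);
}

// Post-revert behavior: keep every entry, so the stop token is always
// included when logprobs is enabled.
static std::vector<TokenProb> probs_output_new(const std::vector<TokenProb>& generated) {
    return std::vector<TokenProb>(generated.begin(), generated.end());
}

int main() {
    const std::vector<TokenProb> generated(8, TokenProb{0, -0.1f});
    assert(probs_output_old(generated, 2).size() == 6);  // truncated
    assert(probs_output_new(generated).size() == 8);     // complete
    return 0;
}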


@@ -466,6 +466,12 @@ static json oaicompat_chat_params_parse(const json& body) {
         throw std::runtime_error("Only no echo is supported");
     }
+    // Handle "logprobs" field
+    int n_probs = json_value(body, "logprobs", 0);
+    if (n_probs > 0) {
+        llama_params["n_probs"] = n_probs;
+    }
+
     // Params supported by OAI but unsupported by llama.cpp
     static const std::vector<std::string> unsupported_params{ "best_of", "suffix" };
     for (const auto& param : unsupported_params) {
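
As a rough, self-contained usage sketch of the new parsing (nlohmann::json is used directly here in place of the json_value helper from the diff, and the request body values are made up): the OpenAI-style text completions endpoint sends logprobs as an integer count, which is now forwarded to the server as n_probs.

#include <nlohmann/json.hpp>
#include <iostream>

int main() {
    using json = nlohmann::json;

    // Hypothetical text completion request body; the OpenAI-style endpoint
    // passes "logprobs" as an integer count of top alternatives per token.
    const json body = json::parse(R"({"prompt": "Hello", "max_tokens": 4, "logprobs": 5})");

    json llama_params;

    // Mirrors the parsing added in this commit: a positive "logprobs" count
    // is forwarded to the server as "n_probs".
    const int n_probs = body.value("logprobs", 0);
    if (n_probs > 0) {
        llama_params["n_probs"] = n_probs;
    }

    std::cout << llama_params.dump() << std::endl;  // prints {"n_probs":5}
    return 0;
}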