API: Fix finish_reason returns

OAI expects finish_reason to be "stop" or "length" (there are others,
but they're not in the current scope of this project).

Make all completion and chat completion responses take this value
from the model generation itself rather than returning a placeholder.
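For context, a minimal sketch of where `finish_reason` lives in an OpenAI-style chat completion response (field names follow the OpenAI API shape; the concrete values shown are hypothetical):

```python
# Hypothetical chat completion response fragment. Per the OpenAI API,
# "finish_reason" is "stop" when generation ended on an EOS/stop
# condition and "length" when the max token budget was exhausted.
response = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }
    ]
}
```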

Signed-off-by: kingbri <bdashore3@proton.me>
kingbri
2024-03-18 15:59:28 -04:00
parent 25f5d4a690
commit 5c7fc69ded
5 changed files with 35 additions and 17 deletions


@@ -605,6 +605,9 @@ class ExllamaV2Container:
         joined_generation["generation_tokens"] = unwrap(
             generations[-1].get("generated_tokens"), 0
         )
+        joined_generation["finish_reason"] = unwrap(
+            generations[-1].get("finish_reason"), "stop"
+        )
         return joined_generation
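The hunk above relies on `unwrap` to fall back to `"stop"` when the generation dict carries no `finish_reason`. A minimal sketch of that pattern, assuming `unwrap(value, default)` simply substitutes the default for `None` (the sample `generations` data here is hypothetical):

```python
# Assumed semantics of the project's unwrap helper: return the value
# unless it is None, in which case return the default.
def unwrap(value, default):
    return value if value is not None else default

# Hypothetical final generation that reports a token count but no
# finish_reason, so the "stop" default is used.
generations = [{"generated_tokens": 42}]

joined_generation = {}
joined_generation["generation_tokens"] = unwrap(
    generations[-1].get("generated_tokens"), 0
)
joined_generation["finish_reason"] = unwrap(
    generations[-1].get("finish_reason"), "stop"
)
```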
@@ -1004,6 +1007,10 @@ class ExllamaV2Container:
                 last_chunk_time = now

             if eos or generated_tokens == max_tokens:
+                finish_reason = "length" if generated_tokens == max_tokens else "stop"
+                generation = {"finish_reason": finish_reason}
+                yield generation
                 break

             # Print response
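The selection logic in the second hunk can be isolated as a small function. A sketch (the helper name `pick_finish_reason` is hypothetical, not from the codebase):

```python
def pick_finish_reason(eos: bool, generated_tokens: int, max_tokens: int) -> str:
    """Mirror the diff's branch: this code path is only reached when
    eos is True or the token budget is exhausted."""
    # "length" when the max token budget was hit, otherwise "stop" (EOS).
    return "length" if generated_tokens == max_tokens else "stop"
```

Note that hitting `max_tokens` wins even if an EOS token arrived on the same step, matching the order of the conditional in the diff.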