Tree: Use unwrap and coalesce for optional handling

Python doesn't have first-class handling of optionals. The only ways to
handle a possibly-None value are an explicit "if value is None" check or
the "or" keyword as a makeshift unwrap.

Previously, I used the "or" approach to unwrap, but this caused issues
because falsy values fell back to the default. This is especially
problematic with booleans, where "False" changed to "True".

Instead, add two new functions, unwrap and coalesce, which properly
implement None coalescing in a functional style.
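A minimal sketch of what the two helpers likely look like (the actual
implementations live in the repo's utils module; the exact signatures
here are assumptions):

```python
def unwrap(value, default=None):
    """Return value unless it is None, otherwise return default.

    Unlike `value or default`, falsy values such as False, 0, and ""
    are preserved instead of falling back to the default.
    """
    return value if value is not None else default


def coalesce(*values):
    """Return the first argument that is not None, or None if all are."""
    return next((v for v in values if v is not None), None)
```

This is exactly the behavior the commit message calls out: with the old
`model_name or ""` pattern, a falsy-but-valid value would be replaced,
while `unwrap(model_name, "")` only substitutes on None.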

Signed-off-by: kingbri <bdashore3@proton.me>
Author: kingbri
Date: 2023-12-09 21:52:17 -05:00
parent 7380a3b79a
commit 5ae2a91c04
5 changed files with 83 additions and 68 deletions

@@ -12,6 +12,7 @@ from OAI.types.lora import LoraList, LoraCard
from OAI.types.model import ModelList, ModelCard
from packaging import version
from typing import Optional, List, Dict
+from utils import unwrap
# Check fastchat
try:
@@ -30,7 +31,7 @@ def create_completion_response(text: str, prompt_tokens: int, completion_tokens:
response = CompletionResponse(
choices = [choice],
-        model = model_name or "",
+        model = unwrap(model_name, ""),
usage = UsageStats(prompt_tokens = prompt_tokens,
completion_tokens = completion_tokens,
total_tokens = prompt_tokens + completion_tokens)
@@ -51,7 +52,7 @@ def create_chat_completion_response(text: str, prompt_tokens: int, completion_to
response = ChatCompletionResponse(
choices = [choice],
-        model = model_name or "",
+        model = unwrap(model_name, ""),
usage = UsageStats(prompt_tokens = prompt_tokens,
completion_tokens = completion_tokens,
total_tokens = prompt_tokens + completion_tokens)
@@ -80,7 +81,7 @@ def create_chat_completion_stream_chunk(const_id: str,
chunk = ChatCompletionStreamChunk(
id = const_id,
choices = [choice],
-        model = model_name or ""
+        model = unwrap(model_name, "")
)
return chunk