sse_starlette kept firing ping responses whenever setting an event
took too long. Rather than using a hacky workaround, switch to
FastAPI's built-in streaming response and construct SSE payloads with
a utility function.
This helps the API become more robust and removes an extra requirement.
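A minimal sketch of such a utility, assuming a helper named
`get_sse_packet` (the name is illustrative, not the project's actual
function):

```python
def get_sse_packet(json_data: str) -> str:
    """Wrap a JSON payload in the server-sent event wire format."""
    return f"data: {json_data}\n\n"
```

The generator then yields these packets through FastAPI's
`StreamingResponse` with `media_type="text/event-stream"`.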
Signed-off-by: kingbri <bdashore3@proton.me>
Some APIs require an OAI model to be sent against the models endpoint.
Fix this by adding a GPT 3.5 turbo entry as the first item in the list
to cover as many APIs as possible.
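A sketch of building that list with plain dict entries (the function
name is illustrative; the field names follow the OAI models schema):

```python
def build_model_list(local_models: list[str]) -> list[dict]:
    # Put a dummy gpt-3.5-turbo entry first so clients that validate
    # against known OpenAI model names still accept the endpoint.
    cards = [{"id": "gpt-3.5-turbo", "object": "model"}]
    cards += [{"id": name, "object": "model"} for name in local_models]
    return cards
```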
Signed-off-by: kingbri <bdashore3@proton.me>
The OAI spec requires chat completions to provide a finish reason
once streaming completes. This differs from a non-streaming chat
completion response.
Also fix some errors that were raised from the endpoint.
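The final streamed chunk might look like this sketch: an empty delta
with `finish_reason` set, per the OAI streaming spec (the helper name
is illustrative):

```python
import json

def finish_chunk(model_name: str) -> str:
    payload = {
        "object": "chat.completion.chunk",
        "model": model_name,
        # Streaming responses signal completion via finish_reason on
        # the last chunk, unlike the single non-streaming response.
        "choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}],
    }
    return f"data: {json.dumps(payload)}\n\n"
```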
References #15
Signed-off-by: kingbri <bdashore3@proton.me>
If the generator errors, there's no proper handling to send an error
packet and close the connection.
This is especially important for unloading models when a load fails
at any stage, to reclaim the user's VRAM. Raising an exception caused
the model_container object to lock and never get freed by the GC.
It made sense to propagate SSE errors across all generator functions
rather than relying on abort signals.
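One way to sketch the propagation, assuming an async token generator
(all names here are illustrative):

```python
import json

async def stream_with_error_handling(token_generator):
    """Yield SSE packets; on failure, emit an error packet instead of
    raising, so the connection closes cleanly and state can be freed."""
    try:
        async for text in token_generator:
            yield f"data: {json.dumps({'text': text})}\n\n"
    except Exception as exc:
        yield f"data: {json.dumps({'error': {'message': str(exc)}})}\n\n"
```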
Signed-off-by: kingbri <bdashore3@proton.me>
The default encoding when opening files on Windows is cp1252,
which doesn't support all of Unicode and can cause issues.
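The fix is to pass `encoding="utf-8"` explicitly wherever files are
opened; a sketch:

```python
from pathlib import Path

def read_text_utf8(path: str) -> str:
    # Windows defaults to a locale codepage (often cp1252), which
    # cannot decode many Unicode characters; force UTF-8 instead.
    return Path(path).read_text(encoding="utf-8")
```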
Signed-off-by: kingbri <bdashore3@proton.me>
On /v1/model/load, some internal server errors weren't being sent,
so move directory checking out of the loader and add a check that
the proposed model path actually exists.
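A sketch of the check, using a generic exception in place of the
API's actual error response (names are illustrative):

```python
from pathlib import Path

def resolve_model_path(model_dir: str, model_name: str) -> Path:
    model_path = Path(model_dir) / model_name
    # Fail early with a clear message instead of an opaque internal
    # server error deep inside the loader.
    if not model_path.exists():
        raise FileNotFoundError(f"Model path {model_path} does not exist")
    return model_path
```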
Signed-off-by: kingbri <bdashore3@proton.me>
Models can be loaded with a child object called "draft" in the POST
request. As with the main model, draft models must be located within
the draft model directory to be loaded.
Signed-off-by: kingbri <bdashore3@proton.me>
Speculative decoding makes use of draft models that ingest the prompt
before forwarding it to the main model.
Add options in the config to support this. API options will occur
in a different commit.
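The config section might look like this (key names are assumptions,
not the project's exact schema):

```yaml
# Draft model settings for speculative decoding
draft:
  draft_model_dir: models
  draft_model_name: my-draft-model
```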
Signed-off-by: kingbri <bdashore3@proton.me>
Responses were not being properly serialized as JSON. Only run
pydantic's JSON function on streamed responses; FastAPI handles
serialization for static responses.
Signed-off-by: kingbri <bdashore3@proton.me>
Add safe fallbacks if any part of the config tree doesn't exist. This
prevents random internal server errors from showing up.
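A minimal sketch of the fallback pattern, assuming a small `unwrap`
helper (the name is illustrative):

```python
def unwrap(value, default=None):
    """Return value, or the default when the config key was absent."""
    return value if value is not None else default

# Usage: every lookup gets a safe fallback instead of raising.
config = {}  # e.g. a YAML section that may be missing entirely
model_config = unwrap(config.get("model"), {})
model_name = unwrap(model_config.get("model_name"), "")
```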
Signed-off-by: kingbri <bdashore3@proton.me>
Chat completions is the endpoint OAI will focus on in the future,
so it makes sense to support it even though the completions
endpoint will be used more often.
Also unify common parameters between the chat completion and completion
requests since they're very similar.
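The unification could be sketched with a shared base class (field
names are illustrative, shown with dataclasses rather than the API's
actual request models):

```python
from dataclasses import dataclass, field

@dataclass
class CommonCompletionRequest:
    # Sampling options shared by both endpoints
    model: str = ""
    max_tokens: int = 150
    temperature: float = 1.0
    stream: bool = False

@dataclass
class ChatCompletionRequest(CommonCompletionRequest):
    # Chat adds a message list on top of the shared parameters
    messages: list = field(default_factory=list)

@dataclass
class CompletionRequest(CommonCompletionRequest):
    prompt: str = ""
```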
Signed-off-by: kingbri <bdashore3@proton.me>
Models can be loaded and unloaded via the API. Also add authentication
to use the API and for administrator tasks.
Both types of authorization use different keys.
Also fix the unload function to properly free all used VRAM.
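A sketch of the two-tier check, where the admin key also grants
normal API access (names are illustrative):

```python
def check_auth(provided_key: str, api_key: str, admin_key: str) -> str:
    """Return the permission level for a request key, or raise."""
    if provided_key == admin_key:
        return "admin"  # admin can also hit regular endpoints
    if provided_key == api_key:
        return "api"
    raise PermissionError("Invalid API key")
```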
Signed-off-by: kingbri <bdashore3@proton.me>
The models endpoint fetches all the models that OAI has to offer.
However, since this is an OAI clone, just list the models inside
the user's configured model directory instead.
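Listing the local directory could look like this sketch, where each
subdirectory is treated as one model:

```python
from pathlib import Path

def list_local_models(model_dir: str) -> list[str]:
    # Each subdirectory of the configured model dir is one model ID.
    return sorted(p.name for p in Path(model_dir).iterdir() if p.is_dir())
```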
Signed-off-by: kingbri <bdashore3@proton.me>
Add support for /v1/completions with optional streaming. Also
rewrite API endpoints to use async where possible, since that
improves request performance.
Model container parameter names also needed rewrites, and fallback
cases were set to their disabled values.
Signed-off-by: kingbri <bdashore3@proton.me>
YAML is a more flexible format for configuration. Command-line
arguments are difficult to remember and configure, especially for
an API with complicated flag names. Rather than using half-baked
text files, implement a proper config solution.
Also add a progress bar when loading models on the command line.
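An illustrative config file under these changes (keys are
assumptions, not the exact schema):

```yaml
network:
  host: 127.0.0.1
  port: 5000
model:
  model_dir: models
  model_name: my-model
```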
Signed-off-by: kingbri <bdashore3@proton.me>
Use command-line arguments to load an initial model if necessary.
API routes are broken, but from now on the container should be the
primary interface to the exllama2 library.
These args should also be turned into a YAML configuration file in
the future.
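The interim arguments could be sketched with `argparse` (flag names
are illustrative):

```python
import argparse

parser = argparse.ArgumentParser(description="Load an initial model on startup")
parser.add_argument("--model-name", help="Model to load when the server starts")
parser.add_argument("--max-seq-len", type=int, default=4096,
                    help="Maximum sequence length for the loaded model")

# Parse a fixed argv here so the sketch is self-contained
args = parser.parse_args(["--model-name", "my-model"])
```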
Signed-off-by: kingbri <bdashore3@proton.me>