FA2 v2.5.7 and up is not supported on GPUs below Ampere or on AMD GPUs.
Clarify the error message and explain what happens as a result.
Signed-off-by: kingbri <bdashore3@proton.me>
Add a sequential lock and wait until jobs are completed before executing
any loading requests that directly alter the model. However, we also
need to block any new requests that come in until the load is finished,
so add a condition that triggers once the lock is free.
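A rough sketch of the intended pattern using asyncio primitives; the lock/condition names and job tracking here are illustrative, not the actual module's:

    import asyncio

    # Illustrative names; the real module tracks in-flight jobs differently
    load_lock = asyncio.Lock()            # serializes requests that alter the model
    load_condition = asyncio.Condition()  # gates new requests during a load

    async def wait_for_loads():
        """Called at the start of a generation request: block while a load runs."""
        if load_lock.locked():
            async with load_condition:
                await load_condition.wait()

    async def load_model(model_path: str):
        """Acquire the lock, perform the load, then wake any blocked requests."""
        async with load_lock:
            ...  # wait for running jobs to finish, then swap the model
        async with load_condition:
            load_condition.notify_all()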
Signed-off-by: kingbri <bdashore3@proton.me>
The dynamic generator requires Flash Attention 2.5.7 or higher to
be installed. This is only supported on Nvidia's 30 series and higher.
If a card is AMD or older than the 30 series, switch to compatibility
mode, which functions the same way as the older generator, except
without parallel batching and any features that depend on it, such as
CFG.
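A loose sketch of the kind of check involved (the function name and exact thresholds here are illustrative):

    from importlib.metadata import version

    import torch
    from packaging.version import parse

    def supports_dynamic_generator() -> bool:
        """Illustrative check: needs flash-attn >= 2.5.7 and an Ampere-class
        (compute capability 8.x) or newer Nvidia GPU; AMD falls back."""
        if torch.version.hip:  # ROCm/AMD builds expose a hip version
            return False
        try:
            flash_ok = parse(version("flash_attn")) >= parse("2.5.7")
        except Exception:
            return False
        major, _ = torch.cuda.get_device_capability()
        return flash_ok and major >= 8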
Signed-off-by: kingbri <bdashore3@proton.me>
The dynamic generator needs multiple prompts to be tokenized and sent
so they can be sampled serially but generated in parallel.
Signed-off-by: kingbri <bdashore3@proton.me>
The new async dynamic job allows for native async support without the
need for threading. Also add logprobs and metrics back to responses.
Signed-off-by: kingbri <bdashore3@proton.me>
Dynamic gen takes in filters differently. Adjust to set the filter list
per class rather than in the generation function.
Signed-off-by: kingbri <bdashore3@proton.me>
Adds basic support for ExllamaV2's dynamic generator. Can generate
both streaming and non-streaming completions.
Signed-off-by: kingbri <bdashore3@proton.me>
skip_special_tokens is in stable exl2. Also default the parameters
if they are not present in the function signature.
Signed-off-by: kingbri <bdashore3@proton.me>
From exllamav2: List of strings that the generator will refuse to output. As soon as a partial match happens, a checkpoint is saved that the generator can rewind to if need be. Subsequent tokens are then held until the full string is resolved (match or no match) and either emitted or discarded, accordingly.
Bans the EOS token until the generation reaches a minimum length. This will not prevent the model from otherwise ending the generation early by outputting other stop conditions.
Adds banned token support to generation. This is the equivalent of
setting a logit bias of -100 on a specific set of tokens.
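For the banned tokens part, a minimal sketch of the equivalence being described (names are illustrative):

    import torch

    def apply_banned_tokens(logits: torch.Tensor, banned_ids: list[int]) -> torch.Tensor:
        """A -100 logit bias on the banned ids effectively removes them from
        sampling, the same outcome as banning them outright."""
        biased = logits.clone()
        biased[..., banned_ids] += -100.0
        return biased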
Signed-off-by: kingbri <bdashore3@proton.me>
response_prefix is used to add a prefix before generating the next
message. This is used in many cases such as continuing a prompt
(see #96).
Also, if a template has a BOS token specified, add_bos_token will
result in two BOS tokens. Add a check that strips a leading BOS token
from the prompt if one exists.
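A small sketch of the BOS check (the helper name is illustrative):

    def strip_leading_bos(prompt: str, bos_token: str) -> str:
        """If the rendered template already starts with the BOS token, drop it
        so add_bos_token doesn't end up producing a duplicate."""
        if bos_token and prompt.startswith(bos_token):
            return prompt[len(bos_token):]
        return prompt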
Signed-off-by: kingbri <bdashore3@proton.me>
Torch errors if float values are passed (because bytes are not float
types). Therefore, overestimate and cast to an int type.
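A sketch of the overestimate-and-cast approach (names are illustrative):

    import math

    def to_byte_count(megabytes: float) -> int:
        """Torch expects byte counts as ints, so round up rather than truncate."""
        return math.ceil(megabytes * 1024 ** 2)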
Resolves #97
Signed-off-by: kingbri <bdashore3@proton.me>
Having many utility functions for initialization doesn't make much sense.
Instead, handle anything regarding template creation inside the
class, which reduces the number of function imports.
Signed-off-by: kingbri <bdashore3@proton.me>
GenerationConfig is meant to override various parts of the model
on generation within the transformers lib. Rather than implementing
the entire GenerationConfig framework (since it's pretty redundant),
add in multi eos_token support like VLLM.
The GenerationConfig is currently used only for generation, but it
can be extended for other uses if needed.
If more parameters become necessary in the future, add those in
as well.
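A sketch of pulling multi-EOS support out of generation_config.json (the helper name is illustrative):

    import json
    import pathlib

    def read_eos_token_ids(model_dir: str) -> list[int]:
        """HF allows eos_token_id to be a single id or a list; normalize to a list."""
        config_path = pathlib.Path(model_dir) / "generation_config.json"
        if not config_path.exists():
            return []
        eos = json.loads(config_path.read_text()).get("eos_token_id", [])
        return eos if isinstance(eos, list) else [eos]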
Signed-off-by: kingbri <bdashore3@proton.me>
Fix the wrong class attribute name used for max_attention_size and
the declaration of the draft model's chunk_size.
Also expose the parameter to the end user in both the config and
model load.
Signed-off-by: kingbri <bdashore3@proton.me>
HuggingFace updated transformers to provide templates in a list for
tokenizers. Update to support this new format. Providing the name
of a template for the "prompt_template" value in config.yml will also
look inside the template list.
In addition, log if there's a template exception, but continue model
loading since it shouldn't shut down the application.
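A sketch of resolving a named template from the new list format, assuming entries shaped like {"name": ..., "template": ...}:

    def find_prompt_template(chat_template, name: str):
        """Handle both the legacy single-string form and the newer list form."""
        if isinstance(chat_template, str):
            return chat_template
        for entry in chat_template or []:
            if entry.get("name") == name:
                return entry.get("template")
        return None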
Signed-off-by: kingbri <bdashore3@proton.me>
This is used for some models and isn't too big in size (compared to
other huggingface dependencies), so include it by default.
Signed-off-by: kingbri <bdashore3@proton.me>
Some tensors were being taken out of inference mode during each
iteration of exllama's load_autosplit_gen, which causes errors since
autograd is off.
Therefore, give the shared load_gen_sync function an overarching
inference_mode context to prevent forward issues. This should allow
the generator to iterate across each thread call.
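Roughly the shape of the fix (argument names are illustrative):

    import torch

    def load_gen_sync(model, **load_args):
        """Keep the whole autosplit load loop inside one inference_mode context
        so intermediate tensors never leave it between iterations."""
        with torch.inference_mode():
            for progress in model.load_autosplit_gen(**load_args):
                pass  # report/stream progress here if needed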
Signed-off-by: kingbri <bdashore3@proton.me>
Finish_reason was giving an empty offset. Fix this by grabbing the
finish reason first and then handling the static generation as normal.
Signed-off-by: kingbri <bdashore3@proton.me>
Some tokenizer variables don't get cleaned up on init, so these can
persist. Clean these up manually before creating a new tokenizer for
now.
Signed-off-by: kingbri <bdashore3@proton.me>
When the model is processing a prompt, add the ability to abort
on request cancellation. This also acts as a catch for a SIGINT.
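A minimal sketch of the abort path, assuming a Starlette-style request object:

    import asyncio

    async def stream_with_abort(request, token_stream):
        """Stop generating if the client disconnects or the task is cancelled."""
        try:
            async for chunk in token_stream:
                if await request.is_disconnected():
                    break
                yield chunk
        except asyncio.CancelledError:
            # Task cancellation also fires on SIGINT-driven shutdown
            raise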
Signed-off-by: kingbri <bdashore3@proton.me>
Yielding the finish reason before the logging causes the function to
terminate early. Instead, log before yielding and breaking out of the
generation loop.
Signed-off-by: kingbri <bdashore3@proton.me>
If max_tokens is None, it automatically scales to fill up the context.
This does not mean the generation will fill that context, since
EOS stops also exist.
Originally suggested in #86
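A sketch of the scaling (names are illustrative):

    def resolve_max_tokens(max_tokens, max_seq_len: int, prompt_len: int) -> int:
        """When max_tokens is None, allow whatever context remains after the
        prompt; EOS or other stop conditions can still end generation earlier."""
        if max_tokens is None:
            return max(max_seq_len - prompt_len, 0)
        return max_tokens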
Signed-off-by: kingbri <bdashore3@proton.me>
Max output len should be hardcoded to 16 since it's the number of
tokens to predict per forward pass. 16 is a good value for both
normal inference and speculative decoding, and it also helps save
VRAM compared to the previous default of 2048.
Signed-off-by: kingbri <bdashore3@proton.me>
OAI expects finish_reason to be "stop" or "length" (there are others,
but they're not in the current scope of this project).
Make all completion and chat completion responses return this
from the model generation itself rather than using a placeholder.
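A sketch of the mapping (the helper name is illustrative):

    def to_finish_reason(hit_stop_condition: bool) -> str:
        """OAI semantics: "stop" when an EOS/stop condition fired, "length"
        when the token limit was reached."""
        return "stop" if hit_stop_condition else "length"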
Signed-off-by: kingbri <bdashore3@proton.me>
Run these iterators on a background thread. On startup, the API
spawns a background thread as needed to run sync code without
blocking the event loop.
Use asyncio's to_thread function since it allows errors to be
propagated.
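A sketch of driving a blocking iterator from async code with asyncio.to_thread (the wrapper name is illustrative):

    import asyncio

    async def iterate_in_thread(sync_iterator):
        """Pull items from a blocking iterator off the event loop; to_thread
        propagates any exception raised in the worker back to the awaiter."""
        sentinel = object()
        while True:
            item = await asyncio.to_thread(next, sync_iterator, sentinel)
            if item is sentinel:
                break
            yield item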
Signed-off-by: kingbri <bdashore3@proton.me>
Async generation helps remove many roadblocks to managing tasks
with threads. It should allow for abortables and modern-day paradigms.
NOTE: Exllamav2 itself is not an asynchronous library. It's just
been added into tabby's async nature to allow for a fast and concurrent
API server. It's still being debated whether to run stream_ex in a
separate thread or to manually manage it using asyncio.sleep(0).
Signed-off-by: kingbri <bdashore3@proton.me>
Speculative ngram decoding is like speculative decoding without the
draft model. It's not as useful because it only decodes on predictable
sequences, but it depends on the use case.
Signed-off-by: kingbri <bdashore3@proton.me>
This is a shared module which manages the model container and provides
extra utility functions around it to help slim down the API.
Signed-off-by: kingbri <bdashore3@proton.me>