TabbyAPI
Important
In addition to the README, please read the Wiki page for information about getting started!
Note
Need help? Join the Discord Server and get the Tabby role. Please be nice when asking questions.
A FastAPI-based application for generating text with an LLM (large language model) via the Exllamav2 backend
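Since TabbyAPI exposes an OpenAI-compatible HTTP API, a completion request can be sketched roughly as below. This is a hypothetical example, not official usage: the host, port, endpoint path, and authorization header are assumptions — consult the Wiki for the actual endpoints and authentication details.

```python
# Hypothetical sketch of calling a running TabbyAPI instance through an
# OpenAI-style /v1/completions endpoint. Host, port, endpoint path, and
# auth header are assumptions; see the Wiki for real values.
import json
import urllib.request


def build_completion_request(prompt: str, max_tokens: int = 100) -> dict:
    """Build the JSON body for a completion call."""
    return {"prompt": prompt, "max_tokens": max_tokens}


def complete(prompt: str,
             base_url: str = "http://localhost:5000",
             api_key: str = "your-api-key") -> str:
    """POST a completion request and return the generated text."""
    body = json.dumps(build_completion_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```

For installation, configuration, and the full API surface, the Wiki linked above is the authoritative reference.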
Disclaimer
This project is marked rolling release. There may be bugs and changes down the line. Be aware that you may need to reinstall dependencies as the project evolves.
TabbyAPI is a hobby project built for a small number of users. It is not meant to run on production servers; for those workloads, please look at other backends that support them.
Getting Started
Important
This README is not for getting started. Please read the Wiki.
Read the Wiki for more information. It contains user-facing documentation for installation, configuration, sampling, API usage, and so much more.
Supported Model Types
TabbyAPI uses Exllamav2 as a powerful and fast backend for model inference, loading, etc. Therefore, the following types of models are supported:
- Exl2 (Highly recommended)
- GPTQ
- FP16 (using Exllamav2's loader)
In addition, TabbyAPI supports parallel batching using paged attention on NVIDIA Ampere GPUs and newer.
Alternative Loaders/Backends
If you want to use a different model type or quantization method than the ones listed above, here are some alternative backends with their own APIs:
- GGUF + GGML - KoboldCPP
- Production ready + many other quants + batching - Aphrodite Engine
- Production ready + batching - vLLM
Contributing
Use the template when creating issues or pull requests; otherwise, the developers may not look at your post.
If you have issues with the project:
- Describe the issue in detail
- If you have a feature request, please indicate it as such.
If you have a pull request:
- Describe the pull request in detail: what you are changing and why
Developers and Permissions
Creators/Developers: