mirror of
https://github.com/theroyallab/tabbyAPI.git
synced 2026-05-11 00:10:02 +00:00
The default `pip install .[cu12,extras]` lets pip resolve xformers
transitively (via infinity-emb / sentence-transformers in the extras
group), which can pull a cu130-aligned wheel that requires
libcudart.so.13. On hosts with NVIDIA driver 590.x (cu128-only), this
fails at import time with:
ImportError: libcudart.so.13: cannot open shared object file
Reproduced on K3s clusters running 12 exllamav2/exllamav3 deployment
pods across 6 hosts; all crash-looped on the published `:latest` image,
which had transitively resolved xformers to a cu130 wheel.
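A quick way to confirm a mis-aligned wheel is to check which CUDA runtime its compiled extension links against. A minimal sketch, with illustrative ldd output standing in for a real crash-looping pod (the site-packages path in the comment is an assumption about the venv layout):

```shell
# Hedged diagnostic sketch. On a live host you would inspect the real
# extension, e.g.:
#   ldd /opt/venv/lib/python3.12/site-packages/xformers/_C*.so
# The output below is illustrative of what a cu130-aligned wheel shows
# on a cu128-only host.
ldd_output='libcudart.so.13 => not found
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6'
needed=$(echo "$ldd_output" | grep -o 'libcudart\.so\.[0-9]*')
echo "wheel needs: $needed"
```

A `=> not found` next to libcudart.so.13 is exactly the condition that surfaces as the ImportError above.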
Fix: split the install into two pip invocations. Install the cu12 group
first to lock torch + cu128 wheels for exllamav2 / exllamav3 / flash_attn,
then install the extras group with --no-deps so pip cannot resolve
xformers (or any other transitive dep) outside the cu128 lock.
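After the split install it is worth asserting that every CUDA wheel stayed on the cu128 series. A small sketch of that check, using the torch version string from the tested image (on a live venv the versions would come from `pip list --format=freeze`):

```shell
# Hedged post-install check: with the cu12 group installed first and the
# extras installed with --no-deps, the torch-style local version tag
# should stay on the cu128 series. The version below is from the tested
# image, not queried live.
ver="2.9.0+cu128"
case "$ver" in
  *+cu128) echo "ok: ${ver%%+*} is on the cu128 series" ;;
  *)       echo "mismatch: $ver escaped the cu128 lock" ;;
esac
```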
Also align the Windows py3.12 flash_attn wheel version to v0.7.13 to
match the other Windows variants (py3.10, py3.11, py3.13). The py3.12
variant was pinned to v0.7.6 while the rest were on v0.7.13, leaving
py3.12 Windows users on an older flash_attn release with no semantic
reason for the divergence.
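A check along these lines would have caught the py3.12 divergence early; the variant/version pairs below are illustrative of the aligned post-fix state, not verbatim pin lines from pyproject.toml:

```shell
# Hedged sketch: flag divergent flash_attn pins across the Windows
# variants. The pairs mirror the aligned state after this change.
pins='py3.10 v0.7.13
py3.11 v0.7.13
py3.12 v0.7.13
py3.13 v0.7.13'
count=$(echo "$pins" | awk '{print $2}' | sort -u | wc -l)
[ "$count" -eq 1 ] && echo "flash_attn pins aligned" \
                   || echo "divergent pins: $count versions"
```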
Tested on Hydra K3s cluster (NVIDIA 590.48.01-open + cu128 base image
nvidia/cuda:12.8.1-runtime-ubuntu24.04 + torch 2.9.0+cu128). All 12
exllamav2/v3 deployments now import cleanly and serve /v1/models.
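The post-rollout verification can be scripted as a smoke test. The JSON below is an illustrative /v1/models response, not captured output; against a live pod it would come from curl on port 5000 (per the Dockerfile's EXPOSE):

```shell
# Hedged smoke check. On a live pod:
#   curl -sf http://localhost:5000/v1/models
# The response below is illustrative, not captured output.
resp='{"object":"list","data":[{"id":"exl2-model","object":"model"}]}'
echo "$resp" | grep -q '"object":"list"' && echo "models endpoint healthy"
```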
Co-authored-by: Josh Jones <scoobydont-666@users.noreply.github.com>
50 lines
1.4 KiB
Docker
# Use an official CUDA runtime with Ubuntu as a parent image
FROM nvidia/cuda:12.8.1-runtime-ubuntu24.04

# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    curl \
    ca-certificates \
    python3.12 \
    python3-pip \
    python3.12-venv \
    && rm -rf /var/lib/apt/lists/*

# Create a virtual environment
RUN python3 -m venv /opt/venv

# Activate the venv and set the PATH
ENV PATH="/opt/venv/bin:$PATH"

# Upgrade pip
RUN pip install --no-cache-dir --upgrade pip

# Set the working directory in the container
WORKDIR /app

# Get requirements
COPY pyproject.toml .

# Install cu12 group first — pins torch+cu128, exllamav2/v3+cu128, flash_attn+cu128.
# The 'extras' group (infinity-emb, sentence-transformers) is installed separately
# with --no-deps so pip cannot resolve xformers transitively and pull a cu130 wheel,
# which would cause libcudart.so.13 ImportError on driver 590.x (cu128-only hosts).
# See: https://github.com/theroyallab/tabbyAPI/issues/414
RUN pip install --no-cache-dir .[cu12]
RUN pip install --no-cache-dir --no-deps .[extras]

RUN rm pyproject.toml

# Copy the current directory contents into the container
COPY . .

# Make port 5000 available to the world outside this container
EXPOSE 5000

# Set the entry point
ENTRYPOINT ["python3"]

# Run main.py when the container launches
CMD ["main.py"]