From 09f36f9c052c5a3cbe144e2c92531639c81171d5 Mon Sep 17 00:00:00 2001
From: Josh <262070388+scoobydont-666@users.noreply.github.com>
Date: Sat, 9 May 2026 12:21:17 -0700
Subject: [PATCH] fix: prevent xformers from pulling cu130 wheels on cu128
 hosts (#420)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The default `pip install .[cu12,extras]` lets pip resolve xformers
transitively (via infinity-emb / sentence-transformers in the extras
group), which can pull a cu130-aligned wheel that requires
libcudart.so.13. On hosts with NVIDIA driver 590.x (cu128-only), this
fails at import time with:

    ImportError: libcudart.so.13: cannot open shared object file

Reproduced on K3s clusters running 12 exllamav2/exllamav3 deployments
across 6 hosts; every pod crash-looped on the published `:latest`
image, which had transitively resolved xformers to a cu130 wheel.

Fix: split the install into two pip invocations. Install the cu12
group first to lock torch + cu128 wheels for exllamav2 / exllamav3 /
flash_attn, then install the extras group with --no-deps so pip cannot
resolve xformers (or any other transitive dependency) outside the
cu128 lock.

Also align the Windows py3.12 flash_attn wheel version to v0.7.13 to
match the other Windows variants (py3.10, py3.11, py3.13). The py3.12
variant was pinned to v0.7.6 while the rest were on v0.7.13, leaving
py3.12 Windows users on an older flash_attn release with no semantic
reason for the divergence.

Tested on the Hydra K3s cluster (NVIDIA driver 590.48.01-open, cu128
base image nvidia/cuda:12.8.1-runtime-ubuntu24.04, torch 2.9.0+cu128).
All 12 exllamav2/v3 deployments now import cleanly and serve
/v1/models.

Co-authored-by: Josh Jones
---
 docker/Dockerfile | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/docker/Dockerfile b/docker/Dockerfile
index 58aa61f..189e26f 100644
--- a/docker/Dockerfile
+++ b/docker/Dockerfile
@@ -26,8 +26,13 @@ WORKDIR /app
 # Get requirements
 COPY pyproject.toml .
 
-# Install packages specified in pyproject.toml cu12, extras
-RUN pip install --no-cache-dir .[cu12,extras]
+# Install the cu12 group first: pins torch+cu128, exllamav2/v3+cu128, flash_attn+cu128.
+# The 'extras' group (infinity-emb, sentence-transformers) is installed separately
+# with --no-deps so pip cannot resolve xformers transitively and pull a cu130 wheel,
+# which would cause a libcudart.so.13 ImportError on driver 590.x (cu128-only hosts).
+# See: https://github.com/theroyallab/tabbyAPI/issues/414
+RUN pip install --no-cache-dir .[cu12]
+RUN pip install --no-cache-dir --no-deps .[extras]
 
 RUN rm pyproject.toml
 
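--
Reviewer note, not part of the commit: a minimal sketch of how to
confirm a built image stayed on the cu128 toolchain. The image tag is
hypothetical and the dist-packages path assumes the Ubuntu base
image's system Python; adjust both to your build.

    # hypothetical tag for a local build of this Dockerfile
    docker build -t tabbyapi:cu128-test -f docker/Dockerfile .

    # print the CUDA toolkit torch was built against (expect 12.8)
    docker run --rm tabbyapi:cu128-test \
        python -c "import torch; print(torch.__version__, torch.version.cuda)"

    # list any installed library referencing libcudart.so.13 (expect none)
    docker run --rm tabbyapi:cu128-test \
        sh -c "grep -rl libcudart.so.13 /usr/local/lib/python3*/dist-packages || echo clean"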
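One caveat with --no-deps: pip installs nothing the extras group needs
beyond what the cu12 group already provides, so a genuinely missing
transitive dependency would only surface at import time. `pip check`
can flag that class of breakage at build or review time (using the
hypothetical tag from the sketch above):

    # reports unsatisfied or conflicting requirements in the image
    docker run --rm tabbyapi:cu128-test pip check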