commit 09f36f9c05
Author: Josh
Date:   2026-05-09 21:21:17 +02:00

fix: prevent xformers from pulling cu130 wheels on cu128 hosts (#420)
The default `pip install .[cu12,extras]` lets pip resolve xformers
transitively (via infinity-emb / sentence-transformers in the extras
group), which can pull a cu130-aligned wheel that requires
libcudart.so.13. On hosts with NVIDIA driver 590.x (cu128-only), this
fails at import time with:

    ImportError: libcudart.so.13: cannot open shared object file

Reproduced on K3s clusters running 12 exllamav2/exllamav3 deployment
pods across 6 hosts; all crash-looped on the published `:latest` image,
which had transitively resolved xformers to a cu130 wheel.
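
For diagnosing this on other base images, pip's installation report can
show what the resolver would pick before anything is installed (needs
pip >= 22.2 for --dry-run/--report; a diagnostic sketch, not part of
the fix):

    # dry-run the default install and print the xformers wheel pip would pull
    pip install --dry-run --report /tmp/resolve.json ".[cu12,extras]"
    python -c "import json; print([i['download_info']['url'] for i in json.load(open('/tmp/resolve.json'))['install'] if i['metadata']['name'] == 'xformers'])"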

Fix: split the install into two pip invocations. Install the cu12 group
first to lock torch + cu128 wheels for exllamav2 / exllamav3 / flash_attn,
then install the extras group with --no-deps so pip cannot resolve
xformers (or any other transitive dep) outside the cu128 lock.
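
A minimal sketch of the resulting Dockerfile steps (the extras package
names are illustrative, lifted from the extras group above; because
--no-deps skips resolution entirely, including the extras' own
transitive deps, the direct packages have to be named explicitly):

    # 1) cu12 group alone: locks torch and the cu128-aligned wheels
    RUN pip install --no-cache-dir ".[cu12]"
    # 2) extras afterwards with --no-deps: pip performs no resolution,
    #    so it cannot swap in a cu130-aligned xformers wheel
    #    (package list illustrative; the actual extras group is defined
    #    in the project metadata)
    RUN pip install --no-cache-dir --no-deps \
        infinity-emb sentence-transformers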

Also align the Windows py3.12 flash_attn wheel version to v0.7.13 to
match the other Windows variants (py3.10, py3.11, py3.13). The py3.12
variant was pinned to v0.7.6 while the rest were on v0.7.13, leaving
py3.12 Windows users on an older flash_attn release with no apparent
reason for the divergence.
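
One way to keep the four Windows pins from drifting again is a single
build argument for the wheel tag; a sketch, assuming the wheels are
fetched per Python version in the Dockerfile (argument name and URL are
hypothetical placeholders):

    # single source of truth for all Windows flash_attn wheel pins
    ARG FLASH_ATTN_TAG=v0.7.13
    # every python variant interpolates the same tag, e.g. py3.12:
    RUN pip install "https://example.com/flash_attn/${FLASH_ATTN_TAG}/flash_attn-cp312-cp312-win_amd64.whl"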

Tested on Hydra K3s cluster (NVIDIA 590.48.01-open + cu128 base image
nvidia/cuda:12.8.1-runtime-ubuntu24.04 + torch 2.9.0+cu128). All 12
exllamav2/v3 deployments now import cleanly and serve /v1/models.
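
Equivalent smoke tests from inside a pod, for reference (the port is an
assumption; 5000 is tabbyAPI's default):

    python -c "import torch; print(torch.version.cuda)"   # expect: 12.8
    python -c "import exllamav2, exllamav3"               # imports that crash-looped before
    curl -s http://localhost:5000/v1/models               # OpenAI-compatible model list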

Co-authored-by: Josh Jones <scoobydont-666@users.noreply.github.com>