tabbyAPI/backends/exllamav2
kingbri 849179df17 Model: Make loading use less VRAM
The model loader was using more VRAM on a single GPU than base
exllamav2's loader. This was because single-GPU setups were still
running with the autosplit config, which allocates an extra VRAM
buffer for safe loading. Turn this off for single-GPU setups (and
turn it off by default).
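
The behavior described above can be mirrored from the user side as a config toggle. The sketch below is illustrative only: the `gpu_split_auto` key name and surrounding layout are assumptions based on this commit description, not verified against the current tabbyAPI config schema.

```yaml
# Hypothetical config.yml fragment (key names assumed, not verified).
model:
  # Autosplit reserves an extra VRAM buffer per GPU for safe loading.
  # Leaving it off (the new default) on a single GPU reclaims that
  # headroom; enable it only when splitting across multiple GPUs.
  gpu_split_auto: false
```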

This change should allow users to run models that need the entire
card, hopefully with faster T/s. For example, Mixtral at 3.75bpw
went from ~30 T/s to ~50 T/s on Windows thanks to the extra VRAM
headroom.

Signed-off-by: kingbri <bdashore3@proton.me>
2024-02-06 22:29:56 -05:00