stable-diffusion-webui-forge/backend
layerdiffusion 4c9380c46a (2024-08-30): Speed up quant model loading and inference, based on three observations:
1. torch.Tensor.view on one big tensor is slightly faster than calling torch.Tensor.to on many small tensors.
2. torch.Tensor.to with a dtype change, however, is significantly slower than torch.Tensor.view, which only reinterprets the existing bytes.
3. “baking” the model on the GPU is significantly faster than doing that computation on the CPU at model load time.

These changes mainly affect inference for Q8_0 and Q4_0/1/K, and loading for all quants (see the sketch below).
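
A minimal timing sketch (not the Forge code itself) that reproduces the three comparisons; the tensor shapes, iteration counts, and the scale-multiply standing in for real dequantization arithmetic are all illustrative assumptions:

```python
import time
import torch

def bench(label, fn, iters=50):
    # Warm up once, then time the hot loop; sync so GPU work is counted.
    fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    print(f"{label}: {(time.perf_counter() - t0) / iters * 1e3:.3f} ms")

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1. One .view() over a single large buffer vs. per-tensor .to() calls.
big = torch.empty(1024 * 1024, dtype=torch.uint8, device=device)
chunks = [torch.empty(4096, dtype=torch.uint8, device=device) for _ in range(256)]
bench("one big view     ", lambda: big.view(torch.float16))
bench("many small .to() ", lambda: [c.to(torch.float16) for c in chunks])

# 2. .to(dtype) copies and converts every element; .view(dtype) merely
#    reinterprets the existing bytes, so it is near-free.
bench(".to(float16)     ", lambda: big.to(torch.float16))
bench(".view(float16)   ", lambda: big.view(torch.float16))

# 3. Load-time "baking" on the GPU vs. on the CPU. A scale multiply
#    stands in for the real dequantization math.
w = torch.rand(4096, 4096).to(torch.float16)
bench("bake on CPU, then move", lambda: (w * 0.5).to(device))
bench("move, then bake on GPU", lambda: w.to(device) * 0.5)
```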

WIP Backend for Forge