layerdiffusion
5a1c711e80
add info
2024-08-30 10:04:04 -07:00
Serick
11a2c0629a
Apply settings in ui-config.json to "All" presets (#1541)
...
* Add preset to load settings from ui-config.json
* Remove added presets and apply ui-config.json to all
2024-08-28 09:15:07 -07:00
DenOfEquity
8c6b64e6e9
main_entry.py: add support for new text_encoder_dir cmd arg
...
copies the method already used for the VAE directory
2024-08-25 15:01:30 +01:00
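The commit above adds a text-encoder directory argument by mirroring the existing VAE-directory handling. A minimal sketch of that pattern follows; the argument names and helper are hypothetical and not Forge's actual wiring.

```python
# Sketch only: a directory-style CLI argument added alongside an existing
# --vae-dir style option. Names below are hypothetical, not Forge's API.
import argparse
import glob
import os

parser = argparse.ArgumentParser()
parser.add_argument("--vae-dir", type=str, default=None,
                    help="directory to scan for VAE files")
parser.add_argument("--text-encoder-dir", type=str, default=None,
                    help="directory to scan for text encoder files (e.g. T5/CLIP)")

def list_models(directory, exts=(".safetensors", ".ckpt", ".pt")):
    """Return model files in a directory, mirroring how the VAE dir is scanned."""
    if not directory or not os.path.isdir(directory):
        return []
    return sorted(
        f for ext in exts for f in glob.glob(os.path.join(directory, "*" + ext))
    )

args = parser.parse_args()
text_encoders = list_models(args.text_encoder_dir)
```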
lllyasviel
f82029c5cf
support more T5 quants (#1482)
...
let's hope this is the last time people randomly invent new state-dict key formats
2024-08-24 12:47:49 -07:00
layerdiffusion
56740824e2
add hints and reduce prints to only release slider
2024-08-20 19:02:30 -07:00
layerdiffusion
0252ad86be
missing print
2024-08-20 08:47:06 -07:00
layerdiffusion
74aacc5d4b
make "GPU weights" also available to SDXL
2024-08-20 08:19:44 -07:00
layerdiffusion
6f411a4940
fix LoRAs on NF4 models when "LoRAs in fp16" is activated
2024-08-20 01:29:52 -07:00
layerdiffusion
d38e560e42
Rework the LoRA system
...
1. Add an option that lets users keep the UNet in fp8/gguf while running LoRAs in fp16.
2. FP16 LoRAs no longer need patching; other LoRAs are re-patched only when the LoRA weight changes.
3. FP8 UNet + fp16 LoRA is now available in Forge (and, for the moment, more or less only in Forge). This also fixes some “LoRA too subtle” problems.
4. Significantly speed up all GGUF models (in async mode) by using an independent thread (CUDA stream) to compute and dequantize at the same time, even when the low-bit weights are already on the GPU.
5. Treat the “online LoRA” as a module similar to a ControlLoRA, so it is moved to the GPU together with the model during sampling, giving a significant speedup and clean low-VRAM management at the same time (see the sketch after this list).
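A minimal sketch of the "online LoRA" idea behind points 1-3 and 5: the quantized base weight is never patched; the fp16 LoRA delta is computed on a side path and added to the output, so only the small A/B matrices need to follow the model. All names are illustrative, not Forge's internals.

```python
# Illustrative sketch only (not Forge's code): an "online" LoRA applied as a side
# module, so the quantized (fp8/gguf/nf4) base weight is never patched in place.
import torch
import torch.nn as nn

class OnlineLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 16, scale: float = 1.0):
        super().__init__()
        self.base = base                    # base layer; in Forge this would be the quantized op
        self.scale = scale
        # The LoRA factors stay in fp16, so changing the LoRA strength never
        # requires re-patching the base weight.
        self.lora_a = nn.Linear(base.in_features, rank, bias=False).half()
        self.lora_b = nn.Linear(rank, base.out_features, bias=False).half()
        nn.init.zeros_(self.lora_b.weight)  # zero init: no effect until LoRA weights are loaded

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.base(x)                                        # quantized/base path
        delta = self.lora_b(self.lora_a(x.half())).to(y.dtype)  # fp16 LoRA path
        return y + self.scale * delta                           # added "online", no patching
```

The speedup in point 4 is orthogonal: dequantization can be issued on a separate torch.cuda.Stream so it overlaps with compute running on the default stream.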
2024-08-19 04:31:59 -07:00
layerdiffusion
a5f3a50d3f
Not all AUTOMATIC have beard
2024-08-15 01:25:47 -07:00
layerdiffusion
32fab6e30d
use a better name and switch to a dropdown list
2024-08-15 01:06:17 -07:00
layerdiffusion
df0fee9396
maybe solve --vae-path
2024-08-14 17:58:12 -07:00
lllyasviel
61f83dd610
support all flux models
2024-08-13 05:42:17 -07:00
layerdiffusion
b1f0d8c6d1
default img2img back to square
2024-08-11 18:20:18 -07:00
layerdiffusion
f10359989f
fix
2024-08-10 19:40:56 -07:00
lllyasviel
cfa5242a75
forge 2.0.0
...
see also discussions
2024-08-10 19:24:19 -07:00
layerdiffusion
593455c4de
global unload after env var change
2024-08-08 22:34:55 -07:00
layerdiffusion
3f3cb12f76
more tests
2024-08-08 22:12:14 -07:00
layerdiffusion
02ffb04649
revise stream
2024-08-08 19:23:23 -07:00
layerdiffusion
20e1ba4a82
fix
2024-08-08 15:08:20 -07:00
lllyasviel
6921420b3f
Load model only when Generate is clicked
...
#964
2024-08-08 14:51:13 -07:00
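A sketch of the deferred-load pattern this commit describes: selecting a checkpoint only records the choice, and the model is loaded the first time Generate actually runs. Function names are hypothetical.

```python
# Sketch of lazy checkpoint loading (names hypothetical, not Forge's API).
_selected_name = None   # checkpoint chosen in the UI
_loaded_name = None
_loaded_model = None

def select_checkpoint(name: str) -> None:
    """UI callback: remember the selection, load nothing yet."""
    global _selected_name
    _selected_name = name

def _load_from_disk(name: str):
    """Stand-in for the real checkpoint loader."""
    return {"name": name}

def get_model_for_generate():
    """Called from Generate: load or swap the model only when it is needed."""
    global _loaded_name, _loaded_model
    if _loaded_model is None or _loaded_name != _selected_name:
        _loaded_model = _load_from_disk(_selected_name)
        _loaded_name = _selected_name
    return _loaded_model
```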
lllyasviel
71c94799d1
diffusion in fp8 landed
2024-08-06 16:47:39 -07:00
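A hedged sketch of the usual "fp8 storage, higher-precision compute" scheme the commit refers to; this is not necessarily how Forge implements it, and it requires a PyTorch build with float8 dtypes (>= 2.1).

```python
# Sketch: store weights in float8_e4m3fn to halve memory vs fp16, and upcast
# per-layer at compute time. Illustrative only.
import torch

def to_fp8_storage(module: torch.nn.Module) -> torch.nn.Module:
    """Cast floating-point parameters to fp8 for storage only."""
    for p in module.parameters():
        if p.dtype in (torch.float16, torch.bfloat16, torch.float32):
            p.data = p.data.to(torch.float8_e4m3fn)
    return module

def fp8_linear(x: torch.Tensor, weight_fp8: torch.Tensor) -> torch.Tensor:
    """Upcast the fp8 weight to the activation dtype right before the matmul."""
    return x @ weight_fp8.to(x.dtype).t()
```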
layerdiffusion
dd8997ee2e
use binder
2024-08-06 15:31:19 -07:00
layerdiffusion
b4ca5d7420
add a single model-load entry point for all model loads from the WebUI
...
including startup, checkpoint selection, and the HTML tab that shows checkpoint preview images
2024-08-06 14:38:40 -07:00
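A sketch of the single-entry-point idea: every code path that needs the model (startup, the checkpoint dropdown, the preview tab) goes through one function, so the load and refresh policy lives in one place. Names are hypothetical.

```python
# Sketch: one entry point for all model loads (names hypothetical).
import threading

_lock = threading.Lock()
_model_cache = {}

def load_model(checkpoint: str):
    """Single entry point: every caller goes through here."""
    with _lock:
        model = _model_cache.get(checkpoint)
        if model is None:
            model = _read_and_build(checkpoint)   # stand-in for the real loader
            _model_cache.clear()                  # keep at most one checkpoint resident
            _model_cache[checkpoint] = model
        return model

def _read_and_build(checkpoint: str):
    # Placeholder for reading the state dict and constructing the model.
    return object()
```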
layerdiffusion
6d789653b9
better model load logic
2024-08-06 14:34:57 -07:00
layerdiffusion
1e8c0f3436
change name
2024-08-06 14:05:56 -07:00
lllyasviel
71eaa5ca12
rework UI so that the toolbar is managed by Forge
2024-08-06 13:54:06 -07:00