layerdiffusion
df598c4df5
add "Diffusion in Low Bits" to info text
2024-08-22 23:23:14 -07:00
layerdiffusion
08f7487590
revise "send distilled_cfg_scale when generating hires conds"
2024-08-22 06:35:13 -07:00
DenOfEquity
bfd7fb1d9f
send distilled_cfg_scale when generating hires conds (#1403)
2024-08-22 06:33:48 -07:00
layerdiffusion
14ac95f908
fix
2024-08-20 01:37:01 -07:00
layerdiffusion
d38e560e42
Implement some rethinking about LoRA system
...
1. Add an option that lets users run the UNet in fp8/gguf while keeping LoRAs in fp16.
2. FP16 LoRAs never need patching; other precisions are re-patched only when the LoRA weights change.
3. FP8 UNet + fp16 LoRA is now available (and, for the moment, essentially only available) in Forge. This also solves some “LoRA too subtle” problems.
4. Significantly speed up all gguf models (in Async mode) by using an independent thread (CUDA stream) to compute and dequantize at the same time, even when the low-bit weights are already on the GPU.
5. Treat “online LoRA” as a module similar to ControlLoRA, so it moves to the GPU together with the model when sampling, achieving a significant speedup and clean low-VRAM management at the same time.
2024-08-19 04:31:59 -07:00
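Point 4 of the commit above overlaps dequantization with compute. A rough pure-Python illustration of that pipelining pattern (the names `forward_pipelined`, `dequantize`, and the toy scale-based math are hypothetical stand-ins, not Forge's actual code; in Forge the worker role is played by a separate CUDA stream):

```python
import threading
import queue

def dequantize(q_weights, scale):
    # Toy stand-in for low-bit -> float dequantization (e.g. int8 * per-tensor scale).
    return [w * scale for w in q_weights]

def apply_layer(x, weights):
    # Toy stand-in for a layer's compute (elementwise multiply).
    return [a * b for a, b in zip(x, weights)]

def forward_pipelined(x, quantized_layers):
    """Dequantize layer N+1 on a worker thread while layer N computes."""
    ready = queue.Queue(maxsize=1)  # at most one layer's weights in flight

    def dequant_worker():
        for q_w, scale in quantized_layers:
            ready.put(dequantize(q_w, scale))

    worker = threading.Thread(target=dequant_worker)
    worker.start()

    out = x
    for _ in quantized_layers:
        weights = ready.get()            # dequantized weights for this layer
        out = apply_layer(out, weights)  # meanwhile the worker dequants the next one
    worker.join()
    return out
```

The payoff is that dequantization cost hides behind compute instead of adding to it, which is why the speedup applies even when the low-bit weights already sit on the GPU.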
DenOfEquity
93bfd7f85b
invalidate cond cache if distilled CFG changed (#1240)
...
* Update processing.py
add distilled_cfg_scale to params that invalidate cond cache
* Update ui.py
set distilled CFG and CFG slider step size to 0.1 (from 0.5)
2024-08-17 19:34:11 -07:00
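The commit above adds `distilled_cfg_scale` to the set of parameters whose change invalidates the cached conditioning. A minimal sketch of that cache-key pattern (the `CondCache` class and its fields are hypothetical, not the actual processing.py code):

```python
class CondCache:
    """Cache a conditioning result, recomputing when any key parameter changes.

    Including distilled_cfg_scale in the key is the point of the fix:
    without it, changing only the distilled CFG reused a stale cached cond.
    """
    def __init__(self):
        self._key = None
        self._value = None

    def get_or_compute(self, prompt, steps, distilled_cfg_scale, compute_fn):
        key = (prompt, steps, distilled_cfg_scale)
        if key != self._key:        # any parameter changed -> cache is invalid
            self._value = compute_fn()
            self._key = key
        return self._value
```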
layerdiffusion
0f266c48bd
make clear_prompt_cache a function
2024-08-17 19:00:13 -07:00
lllyasviel
61f83dd610
support all flux models
2024-08-13 05:42:17 -07:00
layerdiffusion
19b41b9438
Add option to experiment with results from other impl
...
Setting -> Compatibility -> Try to reproduce the results from external software
2024-08-11 17:02:50 -07:00
lllyasviel
cfa5242a75
forge 2.0.0
...
see also discussions
2024-08-10 19:24:19 -07:00
layerdiffusion
593455c4de
global unload after env var change
2024-08-08 22:34:55 -07:00
layerdiffusion
20e1ba4a82
fix
2024-08-08 15:08:20 -07:00
lllyasviel
6921420b3f
Load Model only when click Generate
...
#964
2024-08-08 14:51:13 -07:00
layerdiffusion
a189f3e53e
distilled cfg scale info text
2024-08-08 01:22:27 -07:00
layerdiffusion
572a3c3d8b
distilled cfg scale
2024-08-08 00:57:40 -07:00
layerdiffusion
653be0a3ad
comments
2024-08-08 00:20:15 -07:00
layerdiffusion
396d9f378d
we do not need to waste 10 seconds on T5 when CFG=1
2024-08-08 00:19:16 -07:00
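The commit above follows from the classifier-free guidance formula: the final prediction is uncond + s * (cond - uncond), which collapses to cond when s = 1, so the unconditional (negative-prompt) branch, including its slow T5 encoding pass, contributes nothing and can be skipped. A minimal numeric sketch (function name hypothetical):

```python
def cfg_combine(cond_out, uncond_out, cfg_scale):
    # classifier-free guidance: uncond + s * (cond - uncond)
    return uncond_out + cfg_scale * (cond_out - uncond_out)
```

At cfg_scale == 1 the uncond term cancels exactly, so the result is cond_out regardless of uncond_out; computing uncond_out at all is wasted work in that case.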
lllyasviel
a6baf4a4b5
revise kernel
...
and add unused files
2024-08-07 16:51:24 -07:00
lllyasviel
71eaa5ca12
rework UI so that the toolbar is managed by Forge
2024-08-06 13:54:06 -07:00
layerdiffusion
ae1d995d0d
Finally removed model_hijack
...
finally
2024-08-05 21:05:25 -07:00
layerdiffusion
27641043ee
remove dirty cond_stage_key
2024-08-05 11:39:20 -07:00
layerdiffusion
d77582aa5a
fix img2img inpaint model
2024-08-05 11:36:08 -07:00
layerdiffusion
bccf9fb23a
Free WebUI from its Prison
...
Congratulations WebUI. Say Hello to freedom.
2024-08-05 04:21:35 -07:00
layerdiffusion
0863765173
rework sd1.5 and sdxl from scratch
2024-08-05 03:08:17 -07:00
layerdiffusion
9679232d29
fix TI embedding info text
...
the original WebUI is also broken here, but Forge is fixed now
2024-08-04 18:49:35 -07:00
layerdiffusion
a72154405e
Text Processing Engine is Finished
...
100% reproduces all previous results, including TI embeddings, LoRAs in CLIP, emphasis settings, BREAK, timestep swap scheduling, A/B mixture, advanced uncond, etc.
Backend is 85% finished
2024-08-04 18:42:51 -07:00
layerdiffusion
8c087f920e
rename files
2024-08-03 15:54:39 -07:00
layerdiffusion
f6a0c69f0f
fix ADetailer
2024-07-26 15:11:12 -07:00
layerdiffusion
e26abf87ec
Gradio 4 + WebUI 1.10
2024-07-26 12:02:46 -07:00
lllyasviel
10b5ca2541
vae already sliced in inner loop
2024-03-08 00:40:33 -08:00
Chengsong Zhang
b9705c58f6
fix alphas cumprod (#478)
...
* fix alphas cumprod
* indentation
2024-03-04 01:03:32 -06:00
lllyasviel
95bcea72b1
Revert "fix alphas cumprod (#475)"
...
This reverts commit 72139b000c.
2024-03-03 22:40:56 -08:00
Chengsong Zhang
72139b000c
fix alphas cumprod (#475)
2024-03-03 20:09:04 -06:00
lllyasviel
5166a723c2
bring back tiling
...
close #215
2024-02-25 21:29:35 -08:00
lllyasviel
16caff3d14
Revert "try solve #381"
...
This reverts commit 9d4c88912d.
2024-02-23 16:13:40 -08:00
lllyasviel
9d4c88912d
try solve #381
...
caused by some potential historical webui problems
2024-02-23 16:01:40 -08:00
lllyasviel
bde779a526
apply_token_merging
2024-02-23 15:43:27 -08:00
lllyasviel
95ddac3117
Merge upstream
...
upstream
2024-02-21 23:56:45 -08:00
Andray
33c8fe1221
avoid double upscaling in inpaint
2024-02-19 16:57:49 +04:00
lllyasviel
30c8d742b3
Merge branch 'main' into upt
2024-02-11 16:42:36 -08:00
AUTOMATIC1111
e2b19900ec
add infotext entry for emphasis; put emphasis into a separate file, add an option to parse but still ignore emphasis
2024-02-11 09:39:51 +03:00
Chenlei Hu
388ca351f4
Revert "Fix ruff linter (#137)" (#143)
...
This reverts commit 6b3ad64388.
2024-02-08 21:24:04 -05:00
Chenlei Hu
6b3ad64388
Fix ruff linter (#137)
...
* Fix ruff linter
* Remove unused imports
* Remove unused imports
2024-02-08 20:35:20 -05:00
lllyasviel
760f727eb9
use better context manager to fix potential problems
2024-02-08 01:51:18 -08:00
lllyasviel
c11490d560
add_alphas_cumprod_modifier
2024-02-03 22:33:18 -08:00
lllyasviel
905a027fc1
Update processing.py
2024-02-02 23:01:03 -08:00
lllyasviel
1edd626f7c
Update processing.py
2024-02-02 22:30:55 -08:00
lllyasviel
66e563fe03
inpaint done!
2024-01-30 15:13:26 -08:00
lllyasviel
d04765cf3d
Update processing.py
2024-01-27 22:47:04 -08:00
lllyasviel
60bd01e378
i
2024-01-27 20:04:10 -08:00