layerdiffusion
7d99a193e9
space: use better delete logic
2024-08-20 22:24:05 -07:00
layerdiffusion
69f238ea38
revise hints
2024-08-20 22:01:31 -07:00
layerdiffusion
8752bfc1b0
revise space
2024-08-20 21:53:02 -07:00
layerdiffusion
250ae27749
The first IDM-VTON that passes 4GB VRAM
...
100% reproduces official results
2024-08-20 21:30:47 -07:00
layerdiffusion
25e97a8895
revise space
2024-08-20 21:27:16 -07:00
layerdiffusion
1096c708cc
revise swap module name
2024-08-20 21:18:53 -07:00
layerdiffusion
4ef29f9546
revise space
2024-08-20 20:40:56 -07:00
DenOfEquity
a46cfa6a1d
minor fixes related to Extras tab (#1312)
...
* update ui.js - correct the index for the Extras tab
a one-character change
* Update postprocessing.py
fix the missing attribute orig_name by using name instead
avoid duplicating the postprocessing text, which was previously written twice, to the PNG info sections postprocessing and extras
* Update postprocessing.py
remove an unnecessary line
2024-08-20 20:28:42 -07:00
altoiddealer
8bf98cee93
Update README.md (#1346)
2024-08-20 20:28:22 -07:00
DenOfEquity
b2353a4911
fix hires-fix button (#1360)
...
the underlying gallery object changed with a Gradio update; the old code broke, and the new code is simpler
added a check for attempts to upscale a grid
removed a redundant check already covered by the second assert
2024-08-20 20:28:07 -07:00
layerdiffusion
cb783405bb
revise space
2024-08-20 20:23:33 -07:00
layerdiffusion
7d9f1350f2
revise space
2024-08-20 19:03:17 -07:00
layerdiffusion
56740824e2
add hints and reduce prints to only release slider
2024-08-20 19:02:30 -07:00
layerdiffusion
e750407053
Update README.md
2024-08-20 18:23:12 -07:00
layerdiffusion
0252ad86be
missing print
2024-08-20 08:47:06 -07:00
layerdiffusion
74aacc5d4b
make "GPU weights" also available to SDXL
2024-08-20 08:19:44 -07:00
layerdiffusion
8fd889dcad
fix #1336
2024-08-20 08:04:09 -07:00
layerdiffusion
5452bc6ac3
All Forge Spaces Now Pass 4GB VRAM
...
and they all 100% reproduce the authors' results
2024-08-20 08:01:10 -07:00
Panchovix
f136f86fee
Merge pull request #1340 from DenOfEquity/fix-for-new-samplers
...
Fix for new samplers
2024-08-20 10:12:37 -04:00
DenOfEquity
c127e60cf0
Update sd_samplers_kdiffusion.py
...
add new samplers here
2024-08-20 15:01:58 +01:00
DenOfEquity
8c7db614ba
Update alter_samplers.py
...
move new samplers from here
2024-08-20 15:00:56 +01:00
layerdiffusion
14ac95f908
fix
2024-08-20 01:37:01 -07:00
layerdiffusion
6f411a4940
fix LoRAs on NF4 models when "loras in fp16" is activated
2024-08-20 01:29:52 -07:00
layerdiffusion
65ec461f8a
revise space
2024-08-19 22:43:09 -07:00
layerdiffusion
fef6df29d9
Update README.md
2024-08-19 22:33:20 -07:00
layerdiffusion
6c7c85628e
change News to Quick List
2024-08-19 22:30:56 -07:00
layerdiffusion
5ecc525664
fix #1322
2024-08-19 20:19:13 -07:00
layerdiffusion
475524496d
revise
2024-08-19 18:54:54 -07:00
Panchovix
8eeeace725
Merge pull request #1316 from lllyasviel/more_samplers1
...
Add samplers: HeunPP2, IPNDM, IPNDM_V, DEIS
2024-08-19 20:49:37 -04:00
Panchovix
2fc1708a59
Add samplers: HeunPP2, IPNDM, IPNDM_V, DEIS
...
Pending: CFG++ samplers, ODE samplers
The latter are probably easy to implement; the former need modifications in sd_samplers_cfg_denoiser.py
2024-08-19 20:48:41 -04:00
Panchovix
9bc2d04ca9
Merge pull request #1310 from lllyasviel/more_schedulers
...
Add Align Your Steps GITS, AYS 11 Steps and AYS 32 Steps Schedulers.
2024-08-19 16:58:52 -04:00
Panchovix
9f5a27ca4e
Add Align Your Steps GITS, AYS 11 Steps and AYS 32 Steps Schedulers.
2024-08-19 16:57:58 -04:00
layerdiffusion
d7151b4dcd
add low vram warning
2024-08-19 11:08:01 -07:00
layerdiffusion
2f1d04759f
avoid some mysterious problems when using lots of Python local delegations
2024-08-19 09:47:04 -07:00
layerdiffusion
0b70b7287c
gradio
2024-08-19 09:12:38 -07:00
layerdiffusion
584b6c998e
#1294
2024-08-19 09:09:22 -07:00
layerdiffusion
054a3416f1
revise space logic
2024-08-19 08:06:24 -07:00
layerdiffusion
96f264ec6a
add a way to save models
2024-08-19 06:30:49 -07:00
layerdiffusion
4e8ba14dd0
info
2024-08-19 05:13:28 -07:00
layerdiffusion
d03fc5c2b1
speed up a bit
2024-08-19 05:06:46 -07:00
layerdiffusion
d38e560e42
Implement some rethinking about the LoRA system
...
1. Add an option that lets users run the UNet in fp8/gguf but LoRAs in fp16.
2. FP16 LoRAs do not need patching at all. Others are only re-patched when the LoRA weight changes.
3. FP8 UNet + fp16 LoRA is now available in Forge (and, so far, mostly only in Forge). This also solves some "LoRA too subtle" problems.
4. Significantly speed up all gguf models (in Async mode) by using an independent thread (CUDA stream) to compute and dequantize at the same time, even when low-bit weights are already on the GPU.
5. Treat "online lora" as a module similar to ControlLoRA, so that it is moved to the GPU together with the model when sampling, achieving a significant speedup and perfect low-VRAM management simultaneously.
2024-08-19 04:31:59 -07:00
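Point 2 of the commit above (re-patch only when the LoRA weight changes) can be sketched roughly as follows. This is a minimal illustration, not Forge's actual code: the class and attribute names (`PatchedLinear`, `lora_delta`, `applied_scale`) are hypothetical, and scalar floats stand in for real weight tensors.

```python
# Hypothetical sketch of LoRA patch caching: keep the base weight
# untouched, cache the last-applied (delta, scale) pair, and only
# recompose the effective weight when either one actually changes.

class PatchedLinear:
    def __init__(self, base_weight):
        self.base_weight = base_weight       # stored weight (fp8/gguf in Forge)
        self.lora_delta = None               # fp16 LoRA delta, if any
        self.applied_scale = None            # scale used for the cached patch
        self.effective_weight = base_weight  # weight actually used at runtime

    def set_lora(self, delta, scale):
        # Cache hit: the same delta and scale are already applied,
        # so no re-patching work is needed.
        if self.lora_delta == delta and self.applied_scale == scale:
            return False
        self.lora_delta, self.applied_scale = delta, scale
        self.effective_weight = self.base_weight + scale * delta
        return True                          # re-patched

layer = PatchedLinear(base_weight=1.0)
assert layer.set_lora(0.5, 2.0) is True      # first application patches
assert layer.effective_weight == 2.0
assert layer.set_lora(0.5, 2.0) is False     # unchanged weight: cache hit
assert layer.set_lora(0.5, 3.0) is True      # changed scale: re-patch
assert layer.effective_weight == 2.5
```

The same idea extends per-layer: changing a LoRA's strength slider touches only the recompose step, while an unchanged LoRA costs nothing between generations.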
layerdiffusion
e5f213c21e
upload some GGUF supports
2024-08-19 01:09:50 -07:00
layerdiffusion
95bc586547
geowizard
2024-08-19 00:27:21 -07:00
layerdiffusion
00115ae02a
revise space
2024-08-19 00:05:09 -07:00
layerdiffusion
3bef4e331a
change space path order
2024-08-18 23:55:28 -07:00
layerdiffusion
deca20551e
pydantic==2.8.2
2024-08-18 22:58:36 -07:00
lllyasviel
0024e41107
Update install
2024-08-18 22:55:48 -07:00
layerdiffusion
631a097d0b
change Florence-2 default to base
2024-08-18 21:31:24 -07:00
layerdiffusion
ca2db770a9
ic light
2024-08-18 21:16:58 -07:00
layerdiffusion
4751d6646d
Florence-2
2024-08-18 20:51:44 -07:00