layerdiffusion
ae9380bde2
update
2024-09-07 05:06:17 -07:00
layerdiffusion
f527b64ead
update Google Blockly version
2024-09-07 02:07:11 -07:00
layerdiffusion
70db164318
revise space
2024-09-07 01:55:28 -07:00
layerdiffusion
accf133a38
Upload "Google Blockly" prototyping tools
2024-09-07 01:43:40 -07:00
Haoming
68559eb180
space requirement (#1721)
2024-09-06 23:45:51 -04:00
layerdiffusion
447a4a7fba
expose more parameters to space
2024-08-30 17:00:31 -07:00
layerdiffusion
5a1c711e80
add info
2024-08-30 10:04:04 -07:00
Serick
11a2c0629a
Apply settings in ui-config.json to "All" presets (#1541)
...
* Add preset to load settings from ui-config.json
* Remove added presets and apply ui-config.json to all
2024-08-28 09:15:07 -07:00
DenOfEquity
8c6b64e6e9
main_entry.py: add support for new text_encoder_dir cmd arg
...
copying the method used for the VAE (see the sketch after this entry)
2024-08-25 15:01:30 +01:00
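As a rough illustration of the pattern mentioned above, the sketch below shows how a directory-override command-line argument can be defined alongside an existing VAE directory flag. The flag names, defaults, and the list_models helper are assumptions for illustration only, not Forge's actual cmd_args plumbing.

```python
# Hypothetical sketch (not the actual Forge code): adding a directory-override
# command-line argument by copying the pattern of an existing VAE directory flag.
import argparse
import os

parser = argparse.ArgumentParser()
# existing pattern used for the VAE (flag name assumed for illustration)
parser.add_argument("--vae-dir", type=str, default=None,
                    help="Path to a directory containing VAE checkpoints")
# new argument added by the commit, copying that pattern (exact name assumed)
parser.add_argument("--text-encoder-dir", type=str, default=None,
                    help="Path to a directory containing text encoder checkpoints")

args = parser.parse_args()

def list_models(override_dir, fallback_dir):
    """Return model filenames from the overridden directory if it exists, else the default."""
    chosen = override_dir if override_dir and os.path.isdir(override_dir) else fallback_dir
    return sorted(os.listdir(chosen)) if os.path.isdir(chosen) else []
```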
lllyasviel
f82029c5cf
support more t5 quants (#1482)
...
let's hope this is the last time that people randomly invent new state dict key formats
2024-08-24 12:47:49 -07:00
layerdiffusion
4e3c78178a
[revised] change some dtype behaviors based on community feedback
...
Only influences old devices like the 1080/70/60/50.
Please remove cmd flags if you are on a 1080/70/60/50 and previously used many cmd flags to tune performance (see the sketch after this entry).
2024-08-21 10:23:38 -07:00
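For context, a minimal sketch of how a dtype default might be keyed to GPU generation (this is not the commit's actual code; the function name and capability threshold are assumptions): Pascal cards such as the 1080/1070/1060/1050 report compute capability 6.x and lack the fast FP16 paths of newer GPUs.

```python
# Minimal sketch (not the actual commit): one way dtype defaults might be chosen
# per device generation, since Pascal cards (GTX 10xx, compute capability 6.x)
# lack the fast FP16 paths that newer GPUs have.
import torch

def pick_compute_dtype(device_index: int = 0) -> torch.dtype:
    if not torch.cuda.is_available():
        return torch.float32
    major, minor = torch.cuda.get_device_capability(device_index)
    # Assumption for illustration: treat capability >= 7.0 (Volta/Turing and newer)
    # as "fast fp16"; older cards such as the 1080/1070/1060/1050 fall back to fp32.
    return torch.float16 if (major, minor) >= (7, 0) else torch.float32

print(pick_compute_dtype())
```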
layerdiffusion
1419ef29aa
Revert "change some dtype behaviors based on community feedbacks"
...
This reverts commit 31bed671ac.
2024-08-21 10:10:49 -07:00
layerdiffusion
31bed671ac
change some dtype behaviors based on community feedback
...
Only influences old devices like the 1080/70/60/50.
Please remove cmd flags if you are on a 1080/70/60/50 and previously used many cmd flags to tune performance.
2024-08-21 08:46:52 -07:00
lllyasviel
c73ff3724c
update some code related to win32
2024-08-21 03:23:34 -07:00
layerdiffusion
49435de094
add removal hint for space
2024-08-20 23:23:43 -07:00
layerdiffusion
579ff49225
revise space
2024-08-20 23:05:25 -07:00
layerdiffusion
389d011fee
revise space
2024-08-20 23:01:34 -07:00
layerdiffusion
7d99a193e9
space: use better delete logic
2024-08-20 22:24:05 -07:00
layerdiffusion
69f238ea38
revise hints
2024-08-20 22:01:31 -07:00
layerdiffusion
8752bfc1b0
revise space
2024-08-20 21:53:02 -07:00
layerdiffusion
56740824e2
add hints and reduce prints to only release slider
2024-08-20 19:02:30 -07:00
layerdiffusion
0252ad86be
missing print
2024-08-20 08:47:06 -07:00
layerdiffusion
74aacc5d4b
make "GPU weights" also available to SDXL
2024-08-20 08:19:44 -07:00
DenOfEquity
8c7db614ba
Update alter_samplers.py
...
move new samplers out of this file
2024-08-20 15:00:56 +01:00
layerdiffusion
6f411a4940
fix loras on nf4 models when activating "loras in fp16"
2024-08-20 01:29:52 -07:00
layerdiffusion
65ec461f8a
revise space
2024-08-19 22:43:09 -07:00
Panchovix
2fc1708a59
Add samplers: HeunPP2, IPNDM, IPNDM_V, DEIS
...
Pending: CFG++ Samplers, ODE Samplers.
The latter are probably easy to implement; the former need modifications in sd_samplers_cfg_denoiser.py.
2024-08-19 20:48:41 -04:00
layerdiffusion
054a3416f1
revise space logic
2024-08-19 08:06:24 -07:00
layerdiffusion
d38e560e42
Implement some rethinking about LoRA system
...
1. Add an option to allow users to run the UNet in fp8/gguf but LoRAs in fp16.
2. FP16 LoRAs do not need patching. Others are only re-patched when LoRA weights change.
3. FP8 UNet + FP16 LoRA is now available in Forge (more or less only in Forge). This also solves some "LoRA too subtle" problems.
4. Significantly speed up all gguf models (in Async mode) by using an independent thread (CUDA stream) to compute and dequantize at the same time, even when low-bit weights are already on the GPU (see the sketch after this entry).
5. Treat "online lora" as a module similar to ControlLoRA so that it is moved to the GPU together with the model when sampling, achieving a significant speedup and perfect low-VRAM management at the same time.
2024-08-19 04:31:59 -07:00
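A minimal sketch of the overlap idea in point 4 above (not Forge's actual implementation; dequantize and the layer math are placeholders): a side CUDA stream dequantizes the next layer's weights while the current stream computes the current layer.

```python
# Minimal sketch: overlapping weight dequantization with computation by running
# the dequant step on a separate CUDA stream. Requires a CUDA device to run.
import torch

def dequantize(qweight: torch.Tensor) -> torch.Tensor:
    # Placeholder: a real GGUF dequant kernel would go here.
    return qweight.to(torch.float16)

def forward_layers(x, quantized_weights):
    side_stream = torch.cuda.Stream()
    main_stream = torch.cuda.current_stream()

    # Pre-dequantize the first layer on the side stream.
    with torch.cuda.stream(side_stream):
        next_w = dequantize(quantized_weights[0])

    for i in range(len(quantized_weights)):
        main_stream.wait_stream(side_stream)   # make sure layer i's weights are ready
        w = next_w
        w.record_stream(main_stream)           # keep the buffer alive while the main stream uses it

        # Start dequantizing layer i+1 while layer i is being computed.
        if i + 1 < len(quantized_weights):
            with torch.cuda.stream(side_stream):
                next_w = dequantize(quantized_weights[i + 1])

        x = x @ w                              # stand-in for the layer's actual math
    return x
```

In practice the real code also has to handle LoRA patching of the dequantized weights and memory pressure, which this sketch ignores.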
layerdiffusion
3bef4e331a
change space path order
2024-08-18 23:55:28 -07:00
layerdiffusion
60dfcd0464
revise space
2024-08-18 19:25:39 -07:00
layerdiffusion
128a793265
gradio
2024-08-18 03:57:25 -07:00
layerdiffusion
72ab92f83e
upload meta files
2024-08-18 00:12:53 -07:00
layerdiffusion
4bb5613916
remove space path after invoke
2024-08-17 08:52:41 -07:00
layerdiffusion
fcf71fd9ae
fix space logic
2024-08-17 08:42:49 -07:00
lllyasviel
93b40f355e
Forge Space and BiRefNet
2024-08-17 08:29:08 -07:00
layerdiffusion
447f261154
fix
2024-08-15 01:56:21 -07:00
layerdiffusion
a5f3a50d3f
Not all AUTOMATIC have beard
2024-08-15 01:25:47 -07:00
layerdiffusion
32fab6e30d
use a better name and change to a dropdown list
2024-08-15 01:06:17 -07:00
layerdiffusion
df0fee9396
maybe solve --vae-path
2024-08-14 17:58:12 -07:00
layerdiffusion
a985afd857
thread safety
2024-08-13 15:45:45 -07:00
layerdiffusion
bb58520a4c
completely solve the "'NoneType' object is not iterable" error
2024-08-13 15:36:18 -07:00
lllyasviel
61f83dd610
support all flux models
2024-08-13 05:42:17 -07:00
layerdiffusion
b1f0d8c6d1
default img2img back to square
2024-08-11 18:20:18 -07:00
layerdiffusion
643a485d1a
Update forge_version.py
2024-08-10 19:58:04 -07:00
layerdiffusion
f10359989f
fix
2024-08-10 19:40:56 -07:00
lllyasviel
cfa5242a75
forge 2.0.0
...
see also discussions
2024-08-10 19:24:19 -07:00
layerdiffusion
593455c4de
global unload after env var change
2024-08-08 22:34:55 -07:00
layerdiffusion
3f3cb12f76
more tests
2024-08-08 22:12:14 -07:00
layerdiffusion
02ffb04649
revise stream
2024-08-08 19:23:23 -07:00