DenOfEquity
f23bc80d2f
SD3+ (#2688)
...
Co-authored-by: graemeniedermayer <graemeniedermayer@users.noreply.github.com>
2025-02-27 17:54:44 +00:00
DenOfEquity
8dd92501e6
Add SDXL refiner model (#2686)
...
add sdxlrefiner
adjust some settings
custom CLIP-G support
2025-02-25 10:49:47 +00:00
DenOfEquity
184bb04f8d
increased support for custom CLIPs (#2642)
...
increased support for custom CLIPs
more forms recognised
can now be applied to sd1.5, sdxl, (sd3)
2025-02-21 12:01:39 +00:00
catboxanon
6e1a7908b4
Fall back to estimated prediction if .yaml prediction not suitable (#2273)
2024-11-06 23:49:18 -05:00
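The fallback order described by the two commits above can be sketched as follows. This is a minimal illustration, not the actual code: the function and set names are hypothetical, assuming the loader prefers a .yaml-declared prediction type only when it is actually set and recognised, and otherwise uses a value estimated from the weights.

```python
# Hypothetical sketch of the fallback order described in these commits.
# Assumed set of prediction types the loader recognises:
KNOWN_TYPES = {"epsilon", "v_prediction"}

def resolve_prediction(yaml_parameterization, estimated):
    # Use the .yaml value only if it is set and actually suitable;
    # otherwise fall back to the prediction estimated from the model.
    if yaml_parameterization in KNOWN_TYPES:
        return yaml_parameterization
    return estimated

# The .yaml value wins when it is valid:
assert resolve_prediction("v_prediction", "epsilon") == "v_prediction"
# An unset or unrecognised .yaml value falls back to the estimate:
assert resolve_prediction(None, "epsilon") == "epsilon"
```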
catboxanon
f4afbaff45
Only use .yaml config prediction if actually set (#2272)
2024-11-06 23:44:12 -05:00
catboxanon
05b01da01f
Fix ldm .yaml loading typo (#2248)
2024-11-02 10:29:13 -04:00
catboxanon
90a6970fb7
Compatibility for ldm .yaml configs (#2247)
2024-11-02 10:16:51 -04:00
catboxanon
6f4350d65f
Fix typo in .yaml config load (#2228)
2024-10-31 07:09:27 -04:00
catboxanon
b691b1e755
Fix .yaml config loading (#2224)
2024-10-30 16:18:44 -04:00
catboxanon
edeb2b883f
Fix loading diffusers-format VAEs (#2171)
2024-10-24 14:27:57 -04:00
catboxanon
edc46380cc
Automatically enable ztSNR when applicable (#2122)
2024-10-19 20:33:34 -04:00
catboxanon
f620f55e56
Fall back to the already-detected prediction type if no applicable one is found
2024-10-19 07:47:56 -04:00
catboxanon
5ec47a6b93
Fix model prediction detection
...
Closes #1109
2024-10-19 06:18:02 -04:00
layerdiffusion
70a555906a
use safer code
2024-08-31 10:55:19 -07:00
layerdiffusion
4c9380c46a
Speed up quant model loading and inference ...
...
... based on three observations:
1. torch.Tensor.view on one big tensor is slightly faster than calling torch.Tensor.to on multiple small tensors.
2. torch.Tensor.to with a dtype change is significantly slower than torch.Tensor.view.
3. “baking” the model on the GPU is significantly faster than computing on the CPU at model-load time.
mainly influences inference of Q8_0 and Q4_0/1/K, and the loading of all quants
2024-08-30 00:49:05 -07:00
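The first observation above, that one bulk operation over a single large buffer beats many small per-tensor operations, can be sketched with the standard library alone. This is a generic illustration of the design choice, not the torch implementation; the function names and chunk sizes are made up for the example.

```python
import struct

# Hypothetical illustration: per-call overhead dominates when converting
# many small buffers, so joining them and converting once is cheaper.

def convert_each(chunks):
    # one struct.unpack call per small chunk (many Python-level calls)
    return [struct.unpack(f"<{len(c) // 2}H", c) for c in chunks]

def convert_bulk(chunks):
    # join into one big buffer, then do a single unpack call
    big = b"".join(chunks)
    return struct.unpack(f"<{len(big) // 2}H", big)

chunks = [bytes(range(16)) for _ in range(1000)]
per_tensor = [v for t in convert_each(chunks) for v in t]
bulk = list(convert_bulk(chunks))
assert per_tensor == bulk  # same values; the bulk path just makes fewer calls
```

The same trade-off motivates the commit: reinterpreting one concatenated tensor (a view) avoids repeating per-tensor dispatch overhead, while a true dtype conversion remains the expensive step.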
layerdiffusion
b25b62da96
fix T5 not baked
2024-08-25 17:31:50 -07:00
layerdiffusion
13d6f8ed90
revise GGUF by precomputing some parameters
...
rather than computing them in each diffusion iteration
2024-08-25 14:30:09 -07:00
lllyasviel
f82029c5cf
support more t5 quants (#1482)
...
let's hope this is the last time that people randomly invent new state-dict key formats
2024-08-24 12:47:49 -07:00
layerdiffusion
4e3c78178a
[revised] change some dtype behaviors based on community feedbacks
...
only influences old devices like the 1080/70/60/50.
please remove cmd flags if you are on a 1080/70/60/50 and previously used many cmd flags to tune performance
2024-08-21 10:23:38 -07:00
layerdiffusion
1419ef29aa
Revert "change some dtype behaviors based on community feedbacks"
...
This reverts commit 31bed671ac.
2024-08-21 10:10:49 -07:00
layerdiffusion
31bed671ac
change some dtype behaviors based on community feedbacks
...
only influences old devices like the 1080/70/60/50.
please remove cmd flags if you are on a 1080/70/60/50 and previously used many cmd flags to tune performance
2024-08-21 08:46:52 -07:00
layerdiffusion
d0518b7249
make prints beautiful
2024-08-15 00:20:03 -07:00
layerdiffusion
d8b83a9501
gguf preview
2024-08-15 00:03:32 -07:00
lllyasviel
61f83dd610
support all flux models
2024-08-13 05:42:17 -07:00
layerdiffusion
2d17e8df8d
better print
2024-08-11 22:56:51 -07:00
layerdiffusion
a8d7cac503
make transformers less verbose
2024-08-11 18:55:36 -07:00
lllyasviel
cfa5242a75
forge 2.0.0
...
see also discussions
2024-08-10 19:24:19 -07:00
layerdiffusion
4014013d05
fix text encoder dtype
2024-08-09 15:11:07 -07:00
lllyasviel
6921420b3f
Load model only when Generate is clicked
...
#964
2024-08-08 14:51:13 -07:00
layerdiffusion
a05a06b337
make results more consistent with A1111
2024-08-08 01:53:03 -07:00
lllyasviel
a6baf4a4b5
revise kernel
...
and add unused files
2024-08-07 16:51:24 -07:00
lllyasviel
14a759b5ca
revise kernel
2024-08-07 13:28:12 -07:00
layerdiffusion
f743fbff83
revise kernel
2024-08-06 21:39:06 -07:00
layerdiffusion
b57573c8da
Implement many kernels from scratch
2024-08-06 20:19:03 -07:00
lllyasviel
71c94799d1
diffusion in fp8 landed
2024-08-06 16:47:39 -07:00
layerdiffusion
24cfce26dc
add sd2 template
2024-08-05 11:49:45 -07:00
layerdiffusion
46442f90a2
Update loader.py
2024-08-05 03:17:35 -07:00
layerdiffusion
0863765173
rework sd1.5 and sdxl from scratch
2024-08-05 03:08:17 -07:00
layerdiffusion
6dd8cd8820
force config integrity
2024-08-03 17:09:09 -07:00
layerdiffusion
fb3052350b
rework model loader
2024-08-03 17:01:40 -07:00
layerdiffusion
bc9977a305
UNet from Scratch
...
The backend rewrite is now about 50% finished.
Estimated finish is in 72 hours.
After that, many newer features will land.
2024-08-01 21:19:41 -07:00
layerdiffusion
4d1be42975
Integrate CLIP
2024-08-01 12:27:20 -07:00
layerdiffusion
0d079a846d
Integrate Native AutoEncoderKL
2024-07-31 21:10:19 -07:00
layerdiffusion
f052fabd4d
make model guess a function that can be patched
2024-07-30 17:26:49 -06:00
layerdiffusion
c8156fcf41
rework model loader and configs
2024-07-30 13:27:26 -06:00