altoiddealer
862c7a589e
API Improvements: Modules Change AND Restore override_settings ( #2027 )
...
* Improve API modules change
* Restore override_settings and make it work
* Simplify some memory management
2024-10-13 12:29:02 +01:00
DenOfEquity
82eb756617
checks for GPU weights from Settings ( #2019 )
...
for the case when the GPU has changed and the saved settings are no longer valid
2024-10-09 22:10:06 +01:00
DenOfEquity
7eb824a0da
fix for control images ( #2011 )
2024-10-08 13:37:46 +01:00
DenOfEquity
4b695514de
preserve infotext with Extras > Single Image ( #2006 )
...
credit to @PumpkinHat
https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/1779#issuecomment-2364974588
2024-10-08 12:18:39 +01:00
layerdiffusion
4baf8ffb9c
fewer prints
2024-10-03 21:23:56 -07:00
layerdiffusion
dc4f5e4119
remove some assets and dependencies from repo
2024-10-03 21:17:51 -07:00
DenOfEquity
b1256b5fc9
More UI settings ( #1903 )
...
adds options for user-set defaults for Sampler and Scheduler to the sd, xl, and flux UI settings;
adds options for user-set defaults for GPU Weights to the xl and flux UI settings;
requires switching the event listener for ui_forge_inference_memory from .release to .input, which may be more correct anyway.
2024-09-24 11:40:44 +01:00
DenOfEquity
a82d5d177c
restore lora version filtering ( #1885 )
...
Added Flux to the lora types in the extra networks UI, so the user can set it.
Loras are versioned first by user-set type, if any, falling back to heuristics - these are much more reliable than the removed old A1111 tests and, when nothing matches, default to Unknown (always displayed).
Filtering is based on a UI setting; the 'all' setting does not filter. Lora lists are re-filtered on change.
Removed the unused 'lora_hide_unknown_for_versions' setting.
2024-09-23 14:53:58 +01:00
DenOfEquity
95b54a27f1
Settings for UI defaults ( #1890 )
...
Three new settings pages, used by UI presets sd, xl, flux. Defaults unchanged.
2024-09-22 15:14:17 +01:00
DenOfEquity
cd244d29d7
add UI defaults for HiRes CFG ( #1849 )
2024-09-17 18:07:43 +01:00
Haoming
210af4f804
[Space] Add the ability to reinstall requirements specifically ( #1783 )
...
* reinstall
* force_download=False
2024-09-14 12:54:52 +01:00
DenOfEquity
cb412b290b
CFG and Distilled CFG for hiresfix ( #1810 )
2024-09-13 15:01:40 +01:00
layerdiffusion
9439319007
better way to load google files
2024-09-08 18:03:14 -07:00
layerdiffusion
c3366a7689
exclude torch jit objects from space memory management
...
todo: fix a bug where torch jit module offload does not work on some versions
2024-09-07 19:08:17 -07:00
layerdiffusion
3fbb8ebe30
Remove unnecessary base64 encoding
2024-09-07 16:53:59 -07:00
layerdiffusion
ae9380bde2
update
2024-09-07 05:06:17 -07:00
layerdiffusion
f527b64ead
update google blockly version
2024-09-07 02:07:11 -07:00
layerdiffusion
70db164318
revise space
2024-09-07 01:55:28 -07:00
layerdiffusion
accf133a38
Upload "Google Blockly" prototyping tools
2024-09-07 01:43:40 -07:00
Haoming
68559eb180
space requirement ( #1721 )
2024-09-06 23:45:51 -04:00
layerdiffusion
447a4a7fba
expose more parameters to space
2024-08-30 17:00:31 -07:00
layerdiffusion
5a1c711e80
add info
2024-08-30 10:04:04 -07:00
Serick
11a2c0629a
Apply settings in ui-config.json to "All" presets ( #1541 )
...
* Add preset to load settings from ui-config.json
* Remove added presets and apply ui-config.json to all
2024-08-28 09:15:07 -07:00
DenOfEquity
8c6b64e6e9
main_entry.py: add support for new text_encoder_dir cmd arg
...
copying the method used for VAE
2024-08-25 15:01:30 +01:00
lllyasviel
f82029c5cf
support more t5 quants ( #1482 )
...
let's hope this is the last time that people randomly invent new state dict key formats
2024-08-24 12:47:49 -07:00
layerdiffusion
4e3c78178a
[revised] change some dtype behaviors based on community feedbacks
...
only influences old devices like the 1080/70/60/50.
please remove your cmd flags if you are on a 1080/70/60/50 and previously used many cmd flags to tune performance
2024-08-21 10:23:38 -07:00
layerdiffusion
1419ef29aa
Revert "change some dtype behaviors based on community feedbacks"
...
This reverts commit 31bed671ac.
2024-08-21 10:10:49 -07:00
layerdiffusion
31bed671ac
change some dtype behaviors based on community feedbacks
...
only influences old devices like the 1080/70/60/50.
please remove your cmd flags if you are on a 1080/70/60/50 and previously used many cmd flags to tune performance
2024-08-21 08:46:52 -07:00
lllyasviel
c73ff3724c
update some codes related to win32
2024-08-21 03:23:34 -07:00
layerdiffusion
49435de094
add removal hint for space
2024-08-20 23:23:43 -07:00
layerdiffusion
579ff49225
revise space
2024-08-20 23:05:25 -07:00
layerdiffusion
389d011fee
revise space
2024-08-20 23:01:34 -07:00
layerdiffusion
7d99a193e9
space uses better delete logic
2024-08-20 22:24:05 -07:00
layerdiffusion
69f238ea38
revise hints
2024-08-20 22:01:31 -07:00
layerdiffusion
8752bfc1b0
revise space
2024-08-20 21:53:02 -07:00
layerdiffusion
56740824e2
add hints and reduce prints to slider release only
2024-08-20 19:02:30 -07:00
layerdiffusion
0252ad86be
missing print
2024-08-20 08:47:06 -07:00
layerdiffusion
74aacc5d4b
make "GPU weights" also available to SDXL
2024-08-20 08:19:44 -07:00
DenOfEquity
8c7db614ba
Update alter_samplers.py
...
move new samplers from here
2024-08-20 15:00:56 +01:00
layerdiffusion
6f411a4940
fix loras on nf4 models when "loras in fp16" is activated
2024-08-20 01:29:52 -07:00
layerdiffusion
65ec461f8a
revise space
2024-08-19 22:43:09 -07:00
Panchovix
2fc1708a59
Add samplers: HeunPP2, IPNDM, IPNDM_V, DEIS
...
Pending: CFG++ samplers, ODE samplers.
The latter are probably easy to implement; the former need modifications in sd_samplers_cfg_denoiser.py.
2024-08-19 20:48:41 -04:00
layerdiffusion
054a3416f1
revise space logic
2024-08-19 08:06:24 -07:00
layerdiffusion
d38e560e42
Implement some rethinking about LoRA system
...
1. Add an option to allow users to run the UNet in fp8/gguf but loras in fp16.
2. FP16 loras do not need patching; all others are re-patched only when a lora weight changes.
3. FP8 UNet + FP16 lora is now available in Forge (arguably only in Forge). This also solves some "LoRA too subtle" problems.
4. Significantly speed up all gguf models (in Async mode) by using an independent thread (CUDA stream) to compute and dequantize at the same time, even when low-bit weights are already on the GPU.
5. Treat "online lora" as a module similar to ControlLoRA, so that it is moved to the GPU together with the model when sampling, achieving a significant speedup and perfect low-VRAM management simultaneously.
2024-08-19 04:31:59 -07:00
layerdiffusion
3bef4e331a
change space path order
2024-08-18 23:55:28 -07:00
layerdiffusion
60dfcd0464
revise space
2024-08-18 19:25:39 -07:00
layerdiffusion
128a793265
gradio
2024-08-18 03:57:25 -07:00
layerdiffusion
72ab92f83e
upload meta files
2024-08-18 00:12:53 -07:00
layerdiffusion
4bb5613916
remove space path after invoke
2024-08-17 08:52:41 -07:00
layerdiffusion
fcf71fd9ae
fix space logic
2024-08-17 08:42:49 -07:00