- `/sdapi/v1/options` GET now calls `get_config()` from the **sysinfo** module, instead of maintaining its own duplicate of the function.
- Defined a new, flexible and more robust `set_config()` function in the **sysinfo** module (a rough sketch follows this list), which:
- obsoletes redundant code
- skips updating values that are unchanged
- has flexible args for both API and UI use
- `/sdapi/v1/options` POST and `override_settings` now use the new `set_config()` function. `set_config()` could possibly obsolete additional functions, but I'm not going to get into that just yet.
- Options for `forge_additional_modules` can now be provided either as the file path or just the module name.
- Most importantly, `refresh_model_loading_parameters()` is now only called ONCE per request, and **only** if necessary.
- It is now much easier to call `shared.opts.save()` as needed
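
A rough sketch of the shape such a `set_config()` might take; the argument names, the set of model-related keys, and the import location of `refresh_model_loading_parameters()` are assumptions, not the actual implementation:

```python
from modules import shared
from modules_forge.main_entry import refresh_model_loading_parameters  # assumed location

def set_config(settings: dict, is_api: bool = False, save: bool = True):
    """Apply a batch of option changes coming from either the API or the UI."""
    model_keys = {'sd_model_checkpoint', 'forge_additional_modules'}  # assumed set
    needs_refresh = False

    for key, value in settings.items():
        if shared.opts.data.get(key) == value:
            continue  # skip updating values that are unchanged
        shared.opts.set(key, value, is_api=is_api)
        needs_refresh = needs_refresh or key in model_keys

    if needs_refresh:
        # called at most ONCE per request, and only if a model key changed
        refresh_model_loading_parameters()
    if save:
        shared.opts.save(shared.config_filename)
```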
* Do refresh load params for modules
* Adjust call order for model mgmt/prompt cache
* new function `manage_model_and_prompt_cache()` to improve code clarity
* make unload_model_weights use the new manage_model_and_prompt_cache() function
* rename Settings > Actions > unload checkpoint button to 'Unload all models'
* remove (comment out) reload button, as it does nothing and is unlikely to ever do anything since models are loaded on demand
(#1862)
A fix for the 0.01% who use comments in prompts. Previously, styles could be considered part of a comment.
Strips comments from the prompt first, then from each applied style, before merging.
The same process applies when extracting styles from prompts.
Updated tooltips for the style-apply toolbuttons.
Removed code in `modules.processing_scripts.comments` made redundant by this change.
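
A minimal sketch of the new order of operations; the `#`-to-end-of-line comment syntax and the `{prompt}` placeholder convention are assumptions based on the existing comments script and style handling:

```python
import re

def strip_comments(text: str) -> str:
    # assumed syntax: '#' comments out the rest of the line
    return re.sub(r'#[^\n]*', '', text)

def merge_style(prompt: str, style: str) -> str:
    # strip comments from both sides *before* merging, so a trailing
    # comment in the prompt can no longer swallow the appended style
    prompt, style = strip_comments(prompt), strip_comments(style)
    if '{prompt}' in style:
        return style.replace('{prompt}', prompt)
    return f'{prompt}, {style}' if prompt else style
```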
Allows setting of preferred VAE and Text encoder(s) for checkpoints when selected via Checkpoint cards. If no selection is saved, the current toprow setting is unchanged. The 'Built in' option, if it is the only choice, clears the toprow selection (so the VAE/TE built into the checkpoint is used).
Also allows setting model type for checkpoints (SD1/SD2/SDXL/Flux/Unknown) (user set only, no attempt at autodetection), enabling filtering of the cards based on UI preset.
adds options for user-set defaults for Sampler and Scheduler to UI settings sd, xl, flux;
adds options for user-set defaults for GPU Weights to UI settings xl, flux;
necessitates switching ui_forge_inference_memory from a .release to an .input event listener, which may be more correct anyway.
Added Flux to lora types in extra networks UI, so user can set.
LoRAs are versioned first by user-set type, if any, falling back to heuristics; these are much more reliable than the removed old A1111 tests, and default to Unknown (always displayed) when nothing matches.
Filtering is done based on UI setting. 'all' setting does not filter. Filters lora lists on change.
Removed unused 'lora_hide_unknown_for_versions' setting.
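
The versioning and filtering rules above reduce to something like this sketch (function and setting names are illustrative, not the actual code):

```python
def lora_version(user_type: str | None, heuristic_guess: str | None) -> str:
    # user-set type wins; otherwise the heuristic; otherwise Unknown
    return user_type or heuristic_guess or 'Unknown'

def lora_visible(version: str, ui_setting: str) -> bool:
    # 'all' disables filtering; Unknown loras are always displayed
    return ui_setting == 'all' or version in (ui_setting, 'Unknown')
```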
Previously these were Number inputs, limited to integers, with no range check.
Some schedulers don't respect these values anyway, but that's a different issue.
* autoset width and height
When loading into img2img, added an option to autoset Width and Height from the image (Settings -> img2img -> After loading into Img2img, automatically update Width and Height).
Also fixed the scale_by display, now correctly rounded to a multiple of 8.
Also fixed the img2img new width/height calculation the same way (affects infotext).
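
Both rounding fixes come down to snapping a computed dimension onto the 8-pixel grid. A minimal sketch, with a hypothetical helper name (whether a given spot rounds to nearest or down varies):

```python
def snap_to_multiple(value: float, step: int = 8) -> int:
    # snap a computed dimension to the nearest multiple of `step`
    return int(round(value / step)) * step

# e.g. scaling an 853x480 source by 1.5 for img2img:
width, height = snap_to_multiple(853 * 1.5), snap_to_multiple(480 * 1.5)  # 1280, 720
```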
* Update ui.py: hide progress animation on image size change
#1760, original solution by cmdr2; extended to two other locations where the setting is read. I could replicate the issue only by manually entering a decimal value.
Possibly also fixes #1764, though I don't know how the clipskip setting became a string.
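
Given that the value can evidently arrive as a decimal (or even a string), the fix amounts to reading the setting defensively; a hypothetical sketch of that coercion, not the actual patch:

```python
def read_int_setting(value, default: int = 1) -> int:
    # tolerate values stored as a float (2.0) or a string ("2")
    try:
        return int(float(value))
    except (TypeError, ValueError):
        return default
```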
Re-enables the *hires_fix_show_sampler* and *hires_fix_show_prompts* user-configurable settings.
Also corrects the reported result width x height, now rounded down to a multiple of 8.
#1247
* fix GFPGAN to work with visibility < 1
* fix codeformer to work with visibility < 1
* try harder to download the GFPGAN model. The old method downloaded only if there were no .pth models in the GFPGAN directory; but if codeformer was used before GFPGAN, its supporting models are already downloaded into the GFPGAN directory, so the check wrongly skipped the download.
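
Supporting visibility < 1 means blending the restored face back over the original image instead of replacing it outright. A minimal sketch of that blend, assuming uint8 image arrays (not the actual restoration code):

```python
import numpy as np

def blend_restored(original: np.ndarray, restored: np.ndarray, visibility: float) -> np.ndarray:
    # visibility 1.0 -> fully restored image; 0.5 -> 50/50 mix with the original
    if visibility >= 1.0:
        return restored
    mixed = original.astype(np.float32) * (1.0 - visibility) + restored.astype(np.float32) * visibility
    return mixed.astype(np.uint8)
```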
* Fix Checkpoint Merging #1359, #1095
- checkpoint_list[] contains the CheckpointInfo.title, which is "checkpointname.safetensors [hash]";
when a checkpoint is selected to be loaded during the merge, we try to match it with just "checkpointname.safetensors".
-> use checkpoint_aliases[], which already contains the checkpoint key in all possible variants.
- replaced removed sd_models.read_state_dict() with sd_models.load_torch_file()
- replaced removed sd_vae.load_vae_dict() with sd_vae.load_torch_file()
- uncommented create_config() for now, since it calls a removed method: sd_models_config.find_checkpoint_config_near_filename()
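
The lookup fix reduces to resolving the name through the alias table rather than the title list. A minimal sketch; the helper is illustrative, though `checkpoint_aliases` is the real dict:

```python
from modules import sd_models

def resolve_checkpoint(name: str):
    # checkpoint_aliases maps every variant of a checkpoint's name
    # (title with "[hash]", bare filename, short name, ...) to its
    # CheckpointInfo, so the match works with or without the hash
    return sd_models.checkpoint_aliases.get(name)
```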
* Follow up merge fix for #1359, #1095
- read_state_dict() does nothing; replaced 2 occurrences with load_torch_file()
- now merging actually merges again
Hide the color/opacity/softness controls on the inpaint tab:
- consistent with the high contrast option
- they don't offer real control anyway (the color is irrelevant to the mask, and the mask gets thresholded to binary)
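
Why the controls are cosmetic: whatever color or soft edge the user paints, the mask is ultimately binarized along these lines (a sketch, not the actual code):

```python
import numpy as np

def binarize_mask(mask_alpha: np.ndarray) -> np.ndarray:
    # any brush color or soft edge is discarded at this step:
    # every pixel becomes fully masked or fully unmasked
    return (mask_alpha > 127).astype(np.uint8) * 255
```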
1. fix several problems related to layerdiffuse not being unloaded
2. fix several problems related to Fooocus inpaint
3. Slightly speed up on-the-fly LoRAs by precomputing them to the computation dtype
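
A minimal sketch of the precomputation idea, with illustrative names: cast the LoRA tensors to the computation dtype once when they are patched in, instead of converting on every forward pass:

```python
import torch

def precompute_lora(weights: dict[str, torch.Tensor],
                    compute_dtype: torch.dtype = torch.float16) -> dict[str, torch.Tensor]:
    # one-time cast at load/patch time; forward passes then use the
    # tensors directly, with no per-step dtype conversion
    return {name: w.to(compute_dtype) for name, w in weights.items()}
```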
In *Settings -> Disregard fields from pasted infotext* there is a very long list of things that can optionally be ignored when parsing infotext. Now it is a slightly longer list, and includes `Lora hashes`.
main issue: Upscaling would fail on a single image when controlnet was used.
minor issues: On the way to fixing my oversight of not accounting for control images in the gallery, I found that attempting to upscale a control image would fail by trying to access infotext that doesn't exist. I also handled a case previously caught by an assert more gracefully. Unhandled, these minor issues would lose the current gallery, so the extra fixes are good QoL.
Then I found another related minor issue when grids are not displayed.