In *Settings -> Disregard fields from pasted infotext* there is a very long list of things that can optionally be ignored when parsing infotext. That list is now slightly longer and includes `Lora hashes`.
Main issue: upscaling would fail on a single image when ControlNet was used.
Minor issues: on the way to fixing my oversight of not accounting for control images in the gallery, I found that attempting to upscale a control image would fail because it tried to access infotext that doesn't exist. I also handled a case previously caught by an assert more gracefully. Left unhandled, these minor issues would lose the current gallery, so the extra fixes are good QoL improvements.
I then found another related minor issue when grids are not displayed.
- `checkpoint_list[]` contains `CheckpointInfo.title`, which is `"checkpointname.safetensor [hash]"`.
  When a checkpoint is selected to be loaded during a merge, we try to match it with just `"checkpointname.safetensor"`.
  -> Use `checkpoint_aliases[]` instead, which already contains the checkpoint key in all possible variants (a sketch follows this list).
- Replaced the removed `sd_models.read_state_dict()` with `sd_models.load_torch_file()`.
- Replaced the removed `sd_vae.load_vae_dict()` with `sd_vae.load_torch_file()` (see the before/after sketch after this list).
- Commented out `create_config()` for now, since it calls a removed method: `sd_models_config.find_checkpoint_config_near_filename()`.
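A minimal sketch of the lookup difference, assuming the key shapes described above; the string values stand in for real `CheckpointInfo` objects, which live in the webui's `sd_models` module.

```python
checkpoints_list = {
    # keyed by CheckpointInfo.title, which embeds the hash
    "checkpointname.safetensor [abcd1234]": "<CheckpointInfo>",
}

checkpoint_aliases = {
    # one entry per naming variant, all mapping to the same CheckpointInfo
    "checkpointname.safetensor [abcd1234]": "<CheckpointInfo>",
    "checkpointname.safetensor": "<CheckpointInfo>",
    "checkpointname": "<CheckpointInfo>",
}

selected = "checkpointname.safetensor"   # what the merge UI passes in
print(selected in checkpoints_list)      # False -> the old lookup failed
print(selected in checkpoint_aliases)    # True  -> the fixed lookup succeeds
```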
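And a hedged before/after sketch of the API swaps; the call signatures here are assumptions based only on the descriptions above, so the calls are kept as comments rather than live code.

```python
# Before (both helpers were removed upstream):
#   state_dict = sd_models.read_state_dict(checkpoint_path)
#   vae_dict = sd_vae.load_vae_dict(vae_path)
#
# After (assumed signatures; adjust to the actual module versions):
#   state_dict = sd_models.load_torch_file(checkpoint_path)
#   vae_dict = sd_vae.load_torch_file(vae_path)
```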
By precomputing all possible 4-bit dequantization results into a lookup table and using PyTorch indexing to fetch them, rather than actually computing the bit operations.
This should give performance very similar to native CUDA kernels while being LoRA-friendly and more flexible.
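A minimal sketch of the idea, assuming 4-bit codes packed two per byte; the 16 level values below are uniform placeholders, where a real quantizer (e.g. NF4) would supply its own codebook and per-block scales.

```python
import torch

# 16 dequantized values, one per 4-bit code (placeholder uniform levels).
levels = torch.linspace(-1.0, 1.0, 16)

# Precompute dequantization for all 256 possible packed bytes: each byte
# holds two 4-bit codes, so the table maps byte -> (high nibble, low nibble).
byte = torch.arange(256)
lut = torch.stack((levels[byte >> 4], levels[byte & 0x0F]), dim=1)  # [256, 2]

def dequant_4bit(packed: torch.Tensor) -> torch.Tensor:
    # One indexing op replaces per-element shift/mask bit arithmetic.
    return lut[packed.long()].reshape(-1)

packed = torch.randint(0, 256, (8,), dtype=torch.uint8)
print(dequant_4bit(packed))  # 16 dequantized values
```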
Generating from the browser works with ControlNet.
Generating via the API didn't.
This is because `from_dict()` was turning `image` and `mask_image` into a dictionary.
In `controlnet.py`, this would cause the following check in the function `get_input_data` to throw an exception, because the value had an incorrect type:
```python
elif (unit_image < 5).all() and (unit_image_fg > 5).any():
```
Now they are never a dictionary, and the check passes.
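A hedged sketch of the kind of normalization that makes the check safe; the field names `image` and `mask_image` come from the description above, while the helper name and the dict keys are hypothetical.

```python
import numpy as np

def normalize_unit_image(value):
    # Hypothetical helper: payloads deserialized by from_dict() could arrive
    # as a dict wrapping the actual pixel data (the exact keys are an
    # assumption). Unwrap to an ndarray so numeric checks like
    # (unit_image < 5).all() operate on array data, not on a dict.
    if isinstance(value, dict):
        inner = value.get("image")
        if inner is None:
            inner = value.get("mask_image")
        value = inner
    return np.asarray(value)
```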