Fix model loading during Checkpoint Merging #1359, #1095 (#1639)

* Fix Checkpoint Merging #1359, #1095

- checkpoint_list[] contains CheckpointInfo.title, which looks like "checkpointname.safetensors [hash]";
  when a checkpoint is selected to be loaded during a merge, it is matched against just "checkpointname.safetensors", so the lookup fails.
  -> use checkpoint_aliases[] instead, which already contains the checkpoint key in all its variants (see the sketch after this list).
- replaced the removed sd_models.read_state_dict() with sd_models.load_torch_file()
- replaced the removed sd_vae.load_vae_dict() with sd_vae.load_torch_file()
- commented out create_config() for now, since it calls a removed method: sd_models_config.find_checkpoint_config_near_filename()
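
A minimal sketch of the failing lookup, using a hypothetical checkpoint name
and hash (in the real sd_models module, checkpoint_aliases is populated
alongside checkpoint_list when checkpoints are enumerated):

class CheckpointInfo:
    def __init__(self, title):
        self.title = title

info = CheckpointInfo("checkpointname.safetensors [abcd1234]")

# checkpoint_list is keyed by the full title (filename plus hash) ...
checkpoint_list = {info.title: info}
# ... while checkpoint_aliases also carries the hash-less variants.
checkpoint_aliases = {
    "checkpointname": info,
    "checkpointname.safetensors": info,
    "checkpointname.safetensors [abcd1234]": info,
}

selected = "checkpointname.safetensors"      # what the merge tab passes in
assert selected not in checkpoint_list       # old lookup: no match
assert checkpoint_aliases[selected] is info  # alias lookup: match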

* Follow-up merge fix for #1359, #1095

- read_state_dict() now does nothing; replaced the two remaining occurrences with load_torch_file() (a rough sketch of such a loader appears after the diff below)
- merging now actually merges again
Author: cluder
Committer: GitHub
Date: 2024-09-01 02:09:06 +02:00
Commit: 4f64f6daa4 (parent: 69ffe37f14)


@@ -150,14 +150,14 @@ def run_modelmerger(id_task, primary_model_name, secondary_model_name, tertiary_
     if theta_func2:
         shared.state.textinfo = "Loading B"
         print(f"Loading {secondary_model_info.filename}...")
-        theta_1 = sd_models.read_state_dict(secondary_model_info.filename, map_location='cpu')
+        theta_1 = sd_models.load_torch_file(secondary_model_info.filename)
     else:
         theta_1 = None
 
     if theta_func1:
         shared.state.textinfo = "Loading C"
         print(f"Loading {tertiary_model_info.filename}...")
-        theta_2 = sd_models.read_state_dict(tertiary_model_info.filename, map_location='cpu')
+        theta_2 = sd_models.load_torch_file(tertiary_model_info.filename)
 
         shared.state.textinfo = 'Merging B and C'
         shared.state.sampling_steps = len(theta_1.keys())
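
For context, a rough sketch of what a loader like load_torch_file typically
does; this is an assumption about its behavior, not the repo's actual
implementation, but it illustrates why the explicit map_location='cpu'
argument can be dropped at the call sites above:

import torch
import safetensors.torch

def load_torch_file(filename, device="cpu"):
    # Sketch only: return the checkpoint as a flat state dict of tensors.
    # Safetensors files are loaded directly onto `device`; anything else
    # falls back to torch.load with tensors mapped onto `device`.
    if filename.endswith(".safetensors"):
        return safetensors.torch.load_file(filename, device=device)
    return torch.load(filename, map_location=device)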