Jaret Burkett
5b5aadadb8
Add LTX-2 Support (#644)
* WIP, adding support for LTX2
* Training on images working
* Fix loading comfy models
* Handle converting and deconverting lora so it matches original format
* Reworked ui to handle ltx and proper dataset default overwriting.
* Update the way lokr saves so it is more compatible with comfy
* Audio loading and synchronization/resampling is working
* Add audio to training. Does it work? Maybe, still testing.
* Fixed fps default issue for sound
* Have ui set fps for accurate audio mapping on ltx
* Added audio processing options to the ui for ltx
* Clean up requirements
2026-01-13 04:55:30 -07:00
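The audio synchronization and fps mapping mentioned in the LTX-2 entry above comes down to aligning the audio sample rate with the video frame rate. A minimal sketch of that arithmetic, using plain linear interpolation (the repository presumably uses a proper resampler; these function names are illustrative, not taken from the codebase):

```python
import numpy as np

def resample_audio(audio: np.ndarray, src_rate: int, dst_rate: int) -> np.ndarray:
    """Linear-interpolation resample of a mono waveform (illustration only)."""
    n_src = len(audio)
    n_dst = int(round(n_src * dst_rate / src_rate))
    src_t = np.arange(n_src) / src_rate
    dst_t = np.arange(n_dst) / dst_rate
    return np.interp(dst_t, src_t, audio)

def samples_per_frame(sample_rate: int, fps: float) -> float:
    """Number of audio samples that span one video frame; this is why an
    accurate fps value matters for audio/video alignment."""
    return sample_rate / fps
```

With a wrong fps default, `samples_per_frame` drifts and audio slides out of sync over the clip, which matches the fps-related fixes listed in the commit body.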
Jaret Burkett
4e62c38df5
Add support for training Z-Image Turbo with a de-distill training adapter
2025-11-28 08:08:53 -07:00
Jaret Burkett
0d8a33dc16
Offload ARA with the layer if doing layer offloading. Add support to offload the LoRA. Still needs optimizer support
2025-10-21 06:03:27 -06:00
Jaret Burkett
af6fdaaaf9
Add ability to train a full rank LoRA. (experimental)
2025-09-09 07:36:25 -06:00
Jaret Burkett
f48d21caee
Upgrade a LoRA rank if the new one is larger so users can increase the rank on an existing training job and continue training at a higher rank.
2025-08-24 13:40:25 -06:00
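One way such a rank upgrade can work (a sketch, not necessarily this repository's exact implementation): zero-pad the down-projection with extra rows and the up-projection with extra columns, so the higher-rank LoRA initially computes exactly the same output as the old one and training simply continues from there.

```python
import numpy as np

def upgrade_lora_rank(lora_down: np.ndarray, lora_up: np.ndarray, new_rank: int):
    """Zero-pad LoRA factors to a higher rank without changing their product.

    lora_down: (rank, in_features), lora_up: (out_features, rank).
    The padded dimensions contribute nothing until the optimizer updates them.
    """
    old_rank = lora_down.shape[0]
    if new_rank <= old_rank:
        return lora_down, lora_up
    down = np.zeros((new_rank, lora_down.shape[1]), dtype=lora_down.dtype)
    up = np.zeros((lora_up.shape[0], new_rank), dtype=lora_up.dtype)
    down[:old_rank] = lora_down
    up[:, :old_rank] = lora_up
    return down, up
```

Because `up @ down` is unchanged by the zero padding, resuming a checkpoint at the higher rank is lossless.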
Jaret Burkett
3413fa537f
Wan22 14b training is working, still need tons of testing and some bug fixes
2025-08-14 13:03:27 -06:00
Jaret Burkett
2b4c525489
Reworked automagic optimizer and did more testing. Starting to really like it. Working well.
2025-04-28 08:01:10 -06:00
Jaret Burkett
bbfd6ef0fe
Fixed bug that prevented using schnell training adapter
2025-03-19 10:25:12 -06:00
Jaret Burkett
3812957bc9
Added ability to train control loras. Other important bug fixes thrown in
2025-03-14 18:03:00 -06:00
Jaret Burkett
e6739f7eb2
Convert wan lora weights on save to be something comfy can handle
2025-03-08 12:55:11 -07:00
Jaret Burkett
1f3f45a48d
Bugfixes
2025-03-03 08:22:15 -07:00
Jaret Burkett
b16819f8e7
Added LoKr support
2025-03-02 06:57:50 -07:00
Jaret Burkett
894374b2e9
Various bug fixes and optimizations for quantized training. Added untested custom adam8bit optimizer. Did some work on LoRM (don't use)
2024-11-20 09:16:55 -07:00
Jaret Burkett
40f5c59da0
Fixes for training ilora on flux
2024-08-31 16:55:26 -06:00
Jaret Burkett
a939cf3730
WIP - adding support for flux DoRA and ip adapter training
2024-08-22 04:36:39 -06:00
Jaret Burkett
7fed4ea761
Fixed huge flux training bug. Added ability to use an assistant lora
2024-08-14 10:14:13 -06:00
Jaret Burkett
c2424087d6
8 bit training working on flux
2024-08-06 11:53:27 -06:00
Jaret Burkett
187663ab55
Use peft format for flux loras so they are compatible with diffusers. allow loading an assistant lora
2024-08-05 14:34:37 -06:00
Jaret Burkett
0bc4d555c7
A lot of pixart sigma training tweaks
2024-07-28 11:23:18 -06:00
Jaret Burkett
cab8a1c7b8
WIP to add the caption_proj weight to pixart sigma TE adapter
2024-07-06 13:00:21 -06:00
Jaret Burkett
3072d20f17
Add ability to include conv_in and conv_out to full train when doing a lora
2024-06-29 14:54:50 -06:00
Jaret Burkett
f965a1299f
Fixed Dora implementation. Still highly experimental
2024-02-24 10:26:01 -07:00
Jaret Burkett
1bd94f0f01
Added early DoRA support, but it will change shortly. Don't use right now.
2024-02-23 05:55:41 -07:00
Jaret Burkett
e074058faa
Work on additional image embedding methods. Finalized zipper resampler. It works amazingly well
2024-02-10 09:00:05 -07:00
Jaret Burkett
e18e0cb5f8
Added comparative loss when training clip encoder. Allow selecting clip layer on ip adapter. Improvements to prior prediction
2024-02-05 07:40:03 -07:00
Jaret Burkett
39870411d8
More guidance work. Improved LoRA module resolver for unet. Added vega mappings and LoRA training for it. Various other bugfixes and changes
2023-12-15 06:02:10 -07:00
Jaret Burkett
eaa0fb6253
Tons of bug fixes and improvements to special training. Fixed slider training.
2023-12-09 16:38:10 -07:00
Jaret Burkett
d7e55b6ad4
Bug fixes, negative prompting during training, hardened error catching
2023-11-24 07:25:11 -07:00
Jaret Burkett
6f3e0d5af2
Improved lorm extraction and training
2023-10-28 08:21:59 -06:00
Jaret Burkett
002279cec3
Allow short and long caption combinations like from the new captioning system. Merge the network into the model before inference and re-extract when done; doubles inference speed on locon models. Allow splitting a batch into individual components and running them through alone, basically gradient accumulation with single batch size.
2023-10-24 16:02:07 -06:00
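The batch-splitting trick in the entry above works because the mean-reduced gradient over a batch equals the sum of per-sample gradients each scaled by 1/n. A sketch of that equivalence for a linear least-squares model (illustrative, not the repository's code):

```python
import numpy as np

def full_batch_grad(w: np.ndarray, X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Gradient of 0.5 * mean((X @ w - y)**2) w.r.t. w, over the whole batch."""
    return X.T @ (X @ w - y) / len(y)

def accumulated_grad(w: np.ndarray, X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Same gradient built one sample at a time, each scaled by 1/n:
    gradient accumulation with single batch size."""
    g = np.zeros_like(w)
    n = len(y)
    for xi, yi in zip(X, y):
        g += xi * (xi @ w - yi) / n
    return g
```

The payoff is peak-memory: only one sample's activations are live at a time, while the optimizer sees the same gradient as a full batch.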
Jaret Burkett
63ceffae24
Massive speed increases and ram optimizations
2023-10-10 06:07:55 -06:00
Jaret Burkett
830e87cb87
Added IP adapter training. Not functioning correctly yet
2023-09-24 02:39:43 -06:00
Jaret Burkett
0f105690cc
Added some further extensibility for plugins
2023-09-19 05:41:44 -06:00
Jaret Burkett
27f343fc08
Added base setup for training t2i adapters. Currently untested; saw something else shiny I wanted to finish first. Added content_or_style to the training config. It defaults to balanced, which is standard uniform timestep sampling. If style or content is passed, it will use cubic sampling for timesteps to favor the timesteps beneficial for each: later timesteps for style, earlier timesteps for content.
2023-09-16 08:30:38 -06:00
Jaret Burkett
569d7464d5
Implemented device placement preset system in more places. Vastly improved speed when setting network multiplier and activating network. Fixed timing issues on progress bar
2023-09-14 08:31:54 -06:00
Jaret Burkett
ae70200d3c
Bug fixes, speed improvements, compatibility adjustments with diffusers updates
2023-09-13 07:03:53 -06:00
Jaret Burkett
e8583860ad
Upgraded to dev for t2i on diffusers. Minor migrations to make it work.
2023-09-11 14:46:06 -06:00
Jaret Burkett
083cefa78c
Bugfixes for slider reference
2023-09-10 18:36:23 -06:00
Jaret Burkett
708b07adb7
Fixed issue with interleaving when doing cfg
2023-09-10 10:26:58 -06:00
Jaret Burkett
be804c9cf5
Save embeddings as their trigger to match auto and comfy style loading. Also, FINALLY found why gradients were wonky and fixed it. The root problem was dropping out of network state before the backward pass.
2023-09-09 12:02:07 -06:00
Jaret Burkett
408c50ead1
Actually got gradient checkpointing working, again, again, maybe
2023-09-09 11:27:42 -06:00
Jaret Burkett
b01ab5d375
FINALLY fixed gradient checkpointing issue. Big batches baby.
2023-09-08 15:21:46 -06:00
Jaret Burkett
3feb663a51
Switched to new bucket system that matches sdxl trained buckets. Fixed requirements. Updated embeddings to work with sdxl. Added method to train a lora with an embedding as the trigger. Still testing, but it works amazingly well from what I can see
2023-09-07 13:06:18 -06:00
Jaret Burkett
64a5441832
Fully tested and now supporting locon on sdxl, if you have the RAM
2023-09-04 14:05:10 -06:00
Jaret Burkett
a4c3507a62
Added LoCON from LyCORIS
2023-09-04 08:48:07 -06:00