9 Commits

Author SHA1 Message Date
Jaret Burkett
8ef07a9c36 Added training for an experimental decorator embedding. Allow for turning off guidance embedding on flux (for unreleased model). Various bug fixes and modifications 2024-12-15 08:59:27 -07:00
Jaret Burkett
22cd40d7b9 Improvements for full tuning flux. Added debugging launch config for vscode 2024-10-29 04:54:08 -06:00
Jaret Burkett
2776221497 Added option to cache empty prompt or trigger and unload text encoders while training 2024-09-21 20:54:09 -06:00
Jaret Burkett
39870411d8 More guidance work. Improved LoRA module resolver for unet. Added vega mappings and LoRA training for it. Various other bugfixes and changes 2023-12-15 06:02:10 -07:00
Jaret Burkett
dc8448d958 Added a way to pass the refiner ratio to the sample config 2023-11-06 09:22:58 -07:00
Jaret Burkett
93ea955d7c Added refiner fine tuning. Works, but needs some polish. 2023-11-05 17:15:03 -07:00
Jaret Burkett
cac8754399 Allow for training LoRAs on only one text encoder for SDXL 2023-10-06 08:11:56 -06:00
Jaret Burkett
27f343fc08 Added base setup for training t2i adapters. Currently untested; saw something else shiny I wanted to finish first. Added content_or_style to the training config. It defaults to balanced, which is standard uniform timestep sampling. If style or content is passed, it will use cubic sampling for timesteps to favor timesteps that are beneficial for training them: for style, favor later timesteps; for content, favor earlier timesteps. 2023-09-16 08:30:38 -06:00
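The content_or_style behavior described above can be sketched as follows. This is a hypothetical illustration of cubic timestep sampling, not the repository's actual implementation: the function name `sample_timestep` and the exact cubic transforms are assumptions, chosen only to match the stated behavior (uniform for balanced, skewed late for style, skewed early for content).

```python
import random

def sample_timestep(num_timesteps: int, mode: str = "balanced") -> int:
    """Sample a diffusion timestep index in [0, num_timesteps).

    Hypothetical sketch: "balanced" samples uniformly; "style" uses a cubic
    transform to favor later (noisier) timesteps; "content" favors earlier ones.
    """
    u = random.random()  # uniform draw in [0, 1)
    if mode == "style":
        # 1 - (1 - u)^3 pushes mass toward 1, i.e. later timesteps
        u = 1.0 - (1.0 - u) ** 3
    elif mode == "content":
        # u^3 pushes mass toward 0, i.e. earlier timesteps
        u = u ** 3
    # "balanced" leaves u uniform
    return min(int(u * num_timesteps), num_timesteps - 1)
```

Under this sketch, the mean sampled timestep is roughly 3/4 of the schedule for style, 1/4 for content, and 1/2 for balanced.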
Jaret Burkett
569d7464d5 Implemented the device placement preset system in more places. Vastly improved speed when setting the network multiplier and activating the network. Fixed timing issues on the progress bar 2023-09-14 08:31:54 -06:00