Commit Graph

180 Commits

Author SHA1 Message Date
Jaret Burkett
ca3ce0f34c Make it easier to designate lora blocks for new models. Improve i2v adapter speed. Fix issue with i2v adapter where cached torch tensor was in the wrong range. 2025-04-13 13:49:13 -06:00
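A minimal sketch of what "designating lora blocks" can look like: pick target Linear modules by name pattern. The config key and toy model below are illustrative, not this repo's actual plumbing.

```python
import torch

model = torch.nn.ModuleDict({
    "blocks": torch.nn.ModuleList([torch.nn.Linear(64, 64) for _ in range(4)])
})

include_patterns = ["blocks.1", "blocks.2"]   # hypothetical block designations
targets = [
    name for name, mod in model.named_modules()
    if isinstance(mod, torch.nn.Linear)
    and any(p in name for p in include_patterns)
]
print(targets)  # ['blocks.1', 'blocks.2']
```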
Jaret Burkett
96ba2fd129 Added methods to the dataloader to automatically generate controls for line, mask, inpainting, depth, and pose. 2025-04-09 13:35:04 -06:00
Jaret Burkett
a8680c75eb Added initial support for finetuning wan i2v WIP 2025-04-07 20:34:38 -06:00
Jaret Burkett
77763a3e5c Update divisibility of SD3 2025-04-02 06:49:06 -06:00
Jaret Burkett
5ea19b6292 Small bug fixes 2025-03-30 20:09:40 -06:00
Jaret Burkett
860d892214 Pixel shuffle adapter. Some bug fixes thrown in 2025-03-29 21:15:01 -06:00
Jaret Burkett
5365200da1 Added ability to add finetunable models as plugins. Also added the new flux2 arch via that method. 2025-03-27 16:07:00 -06:00
Jaret Burkett
45be82d5d6 Handle inpainting training for control_lora adapter 2025-03-24 13:17:47 -06:00
Jaret Burkett
f10937e6da Handle multi control inputs for control lora training 2025-03-23 07:37:08 -06:00
Jaret Burkett
f5aa4232fa Added ability to quantize with torchao 2025-03-20 16:28:54 -06:00
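For reference, a minimal sketch of weight-only quantization with torchao; the toy model and the int8 choice are illustrative, not this repo's actual config.

```python
import torch
from torchao.quantization import quantize_, int8_weight_only

# Toy model standing in for a diffusion transformer.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
)

# Swaps eligible Linear weights to int8 weight-only quantized tensors in place.
quantize_(model, int8_weight_only())
```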
Jaret Burkett
3812957bc9 Added ability to train control loras. Other important bug fixes thrown in 2025-03-14 18:03:00 -06:00
Jaret Burkett
e6739f7eb2 Convert wan lora weights on save to be something comfy can handle 2025-03-08 12:55:11 -07:00
Jaret Burkett
4fe33f51c1 Fix issue with picking layers for quantization, adjust layers for better quantization of cogview4 2025-03-05 13:44:40 -07:00
Jaret Burkett
6f6fb90812 Added cogview4. Loss still needs work. 2025-03-04 18:43:52 -07:00
Jaret Burkett
8bb47d1bfe Merge branch 'main' into wan21 2025-03-04 00:31:57 -07:00
Jaret Burkett
c5e0c2bbe2 Fixes to allow for redux assisted training 2025-03-03 16:27:19 -07:00
Jaret Burkett
acc79956aa WIP create new class to add new models more easily 2025-03-01 13:49:02 -07:00
Jaret Burkett
56d8d6bd81 Capture speed from the timer for the ui 2025-02-23 14:38:46 -07:00
Jaret Burkett
60f848a877 Send more data when loading the model to the ui 2025-02-23 12:49:54 -07:00
Jaret Burkett
adcf884c0f Built out the ui trainer plugin with db communication 2025-02-21 05:53:35 -07:00
Jaret Burkett
4af6c5cf30 Work on supporting a potential flex.2 arch 2025-02-17 14:10:25 -07:00
Jaret Burkett
7679105d52 Added llm text encoder adapter 2025-02-13 08:28:32 -07:00
Jaret Burkett
2622de1e01 DFE tweaks. Adding support for more llms as text encoders 2025-02-13 04:31:49 -07:00
Jaret Burkett
9a7266275d Work on lumina2 2025-02-08 14:52:39 -07:00
Jaret Burkett
d138f07365 Initial lumina2 support 2025-02-08 10:59:53 -07:00
Jaret Burkett
216ab164ce Experimental features and bug fixes 2025-02-04 13:36:34 -07:00
Jaret Burkett
e6180d1e1d Bug fixes 2025-01-31 13:23:01 -07:00
Jaret Burkett
15a57bc89f Add new version of DFE. Kitchen sink 2025-01-31 11:42:27 -07:00
Jaret Burkett
34a1c6947a Added flux_shift as timestep type 2025-01-27 07:35:00 -07:00
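As a rough sketch of what a flux_shift timestep type does: interpolate a shift exponent from the image token count, then remap t so larger images train at noisier timesteps. The constants below mirror the resolution-dependent shift commonly used for Flux and are assumptions here, not this repo's exact values.

```python
import math

def flux_shift(t: float, image_seq_len: int,
               base_len: int = 256, max_len: int = 4096,
               base_shift: float = 0.5, max_shift: float = 1.15) -> float:
    m = (max_shift - base_shift) / (max_len - base_len)
    mu = image_seq_len * m + (base_shift - m * base_len)
    return math.exp(mu) / (math.exp(mu) + (1 / t - 1))

print(flux_shift(0.5, image_seq_len=1024))  # ~0.65: shifted toward more noise
```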
Jaret Burkett
5e663746b8 Working multi gpu training. Still need a lot of tweaks and testing. 2025-01-25 16:46:20 -07:00
Jaret Burkett
fadb2f3a76 Allow quantizing the TE independently on flux. Added lognorm_blend timestep schedule 2025-01-18 18:02:31 -07:00
Jaret Burkett
4723f23c0d Added ability to split up flux across gpus (experimental). Changed the way timestep scheduling works to prep for more specific schedules. 2024-12-31 07:06:55 -07:00
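A hedged sketch of the idea behind splitting a model across GPUs: park the first half of the block stack on one device and the rest on another, hopping activations across midway. Toy blocks; assumes two CUDA devices are present.

```python
import torch

blocks = torch.nn.ModuleList([torch.nn.Linear(64, 64) for _ in range(8)])
split = len(blocks) // 2
for i, blk in enumerate(blocks):
    blk.to("cuda:0" if i < split else "cuda:1")

def forward(x: torch.Tensor) -> torch.Tensor:
    x = x.to("cuda:0")
    for i, blk in enumerate(blocks):
        if i == split:
            x = x.to("cuda:1")  # hand activations to the second GPU
        x = blk(x)
    return x
```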
Jaret Burkett
8ef07a9c36 Added training for an experimental decorator embedding. Allow for turning off guidance embedding on flux (for unreleased model). Various bug fixes and modifications 2024-12-15 08:59:27 -07:00
Jaret Burkett
6509ba4484 Fix seed generation to make it deterministic so it is consistent from gpu to gpu 2024-11-15 12:11:13 -07:00
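A minimal sketch of one way to get rank-consistent randomness: derive a torch.Generator from a fixed base seed per step instead of relying on each GPU's global RNG state. Names are illustrative.

```python
import torch

def make_generator(base_seed: int, step: int, device: str = "cpu") -> torch.Generator:
    gen = torch.Generator(device=device)
    gen.manual_seed(base_seed + step)  # same value on every GPU for this step
    return gen

# Two "ranks" now sample identical noise for the same step.
a = torch.randn(4, generator=make_generator(42, step=3))
b = torch.randn(4, generator=make_generator(42, step=3))
assert torch.equal(a, b)
```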
Jaret Burkett
4aa19b5c1d Only quantize flux T5 if also quantizing the model. Load TE from original name and path if fine tuning. 2024-10-29 14:25:31 -06:00
Jaret Burkett
22cd40d7b9 Improvements for full tuning flux. Added debugging launch config for vscode 2024-10-29 04:54:08 -06:00
Jaret Burkett
3400882a80 Added preliminary support for SD3.5-large lora training 2024-10-22 12:21:36 -06:00
Jaret Burkett
9452929300 Apply a mask to the embeds for SD if using T5 encoder 2024-10-04 10:55:20 -06:00
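A minimal sketch, assuming "apply a mask to the embeds" means zeroing the padded token positions of the T5 prompt embeddings with the tokenizer's attention mask; shapes are illustrative.

```python
import torch

prompt_embeds = torch.randn(2, 77, 4096)           # (batch, tokens, dim)
attention_mask = torch.ones(2, 77, dtype=torch.long)
attention_mask[:, 60:] = 0                          # pretend the tail is padding

# Zero embeddings at padded positions so they contribute nothing downstream.
prompt_embeds = prompt_embeds * attention_mask.unsqueeze(-1).to(prompt_embeds.dtype)
```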
Jaret Burkett
a800c9d19e Add a method to have an inference only lora 2024-10-04 10:06:53 -06:00
Jaret Burkett
58537fc92b Added initial direct vision pixtral support 2024-09-28 10:47:51 -06:00
Jaret Burkett
40a8ff5731 Load local hugging face packages for assistant adapter 2024-09-23 10:37:12 -06:00
Jaret Burkett
2776221497 Added option to cache empty prompt or trigger and unload text encoders while training 2024-09-21 20:54:09 -06:00
Plat
79b4e04b80 Feat: Wandb logging (#95)
* wandb logging

* fix: start logging before train loop

* chore: add wandb dir to gitignore

* fix: wrap wandb functions

* fix: forget to send last samples

* chore: use valid type

* chore: use None when not type-checking

* chore: resolved complicated logic

* fix: follow log_every

---------

Co-authored-by: Plat <github@p1at.dev>
Co-authored-by: Jaret Burkett <jaretburkett@gmail.com>
2024-09-19 20:01:01 -06:00
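The logging flow the PR above describes might look roughly like this; a sketch where the project name and training stub are illustrative, and offline mode avoids needing a login.

```python
import random
import wandb

def train_step() -> float:      # stand-in for the real training step
    return random.random()

wandb.init(project="ai-toolkit", mode="offline")

log_every = 10
for step in range(100):
    loss = train_step()
    if step % log_every == 0:   # "fix: follow log_every"
        wandb.log({"loss": loss}, step=step)

wandb.finish()                  # flushes anything pending, incl. final samples
```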
Jaret Burkett
fc34a69bec Ignore guidance embed when full tuning flux. Adjust block scaler to decay to 1.0. Add MLP resampler for reducing vision adapter tokens 2024-09-09 16:24:46 -06:00
Jaret Burkett
e5fadddd45 Added ability to do prompt attn masking for flux 2024-09-02 17:29:36 -06:00
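A hedged sketch of prompt attention masking: build a boolean mask over the joint text+image sequence so padded prompt tokens cannot be attended to. Sequence lengths are illustrative.

```python
import torch
import torch.nn.functional as F

txt_len, img_len, valid_txt = 512, 1024, 77
seq_len = txt_len + img_len

keep = torch.ones(seq_len, dtype=torch.bool)
keep[valid_txt:txt_len] = False            # mask out padded text tokens
attn_mask = keep[None, None, None, :]      # broadcast over batch/heads/queries

q = k = v = torch.randn(1, 8, seq_len, 64)
out = F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
```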
Jaret Burkett
60232def91 Made preliminary arch for flux ip adapter training 2024-08-28 08:55:39 -06:00
Jaret Burkett
338c77d677 Fixed breaking change with diffusers. Allow flowmatch on normal stable diffusion models. 2024-08-22 14:36:22 -06:00
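Roughly, the flowmatch objective being enabled here interpolates latents and noise at t and regresses the velocity; a sketch with a stand-in for the model call.

```python
import torch
import torch.nn.functional as F

latents = torch.randn(4, 4, 64, 64)
noise = torch.randn_like(latents)
t = torch.rand(4).view(-1, 1, 1, 1)        # t in [0, 1)

noisy = (1.0 - t) * latents + t * noise    # flowmatch forward process
target = noise - latents                   # velocity target
model_pred = torch.zeros_like(latents)     # stand-in for unet(noisy, t, cond)
loss = F.mse_loss(model_pred, target)
```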
Jaret Burkett
c45887192a Unload interim weights when doing multi lora fuse 2024-08-18 09:35:10 -06:00
Jaret Burkett
13a965a26c Fixed bad key naming on lora fuse I just pushed 2024-08-18 09:33:31 -06:00
Jaret Burkett
f944eeaa4d Fuse flux schnell assistant adapter in pieces when doing lowvram to drastically speed it up from minutes to seconds. 2024-08-18 09:09:11 -06:00