Jaret Burkett
22cd40d7b9
Improvements for full tuning flux. Added debugging launch config for vscode
2024-10-29 04:54:08 -06:00
Jaret Burkett
3400882a80
Added preliminary support for SD3.5-large lora training
2024-10-22 12:21:36 -06:00
Jaret Burkett
9f94c7b61e
Added experimental param multiplier to the ema module
2024-10-22 09:25:52 -06:00
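A minimal sketch of what an EMA update with a parameter multiplier could look like; the `multiplier` knob and the update rule below are illustrative assumptions, not the toolkit's actual code:

```python
import torch

@torch.no_grad()
def ema_update(ema_params, model_params, decay=0.999, multiplier=1.0):
    # Blend live weights into the EMA copy; `multiplier` (hypothetical)
    # scales the live weights' contribution before blending.
    for ema_p, p in zip(ema_params, model_params):
        ema_p.mul_(decay).add_(p * multiplier, alpha=1.0 - decay)
```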
Jaret Burkett
bedb8197a2
Fixed issue where sizes for some images were read sideways on load, resulting in squished images.
2024-10-20 11:51:29 -06:00
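Loading "sideways" usually means the EXIF orientation tag was ignored when the dimensions were read; a minimal sketch of the standard PIL fix (not necessarily the exact change made in this commit):

```python
from PIL import Image, ImageOps

def load_image_upright(path):
    # Apply the EXIF orientation tag before width/height are read,
    # so portrait photos are not treated as landscape and squished.
    img = Image.open(path)
    return ImageOps.exif_transpose(img)
```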
Jaret Burkett
e3ebd73610
Add a projection layer on vision direct when doing image embeds
2024-10-20 10:48:23 -06:00
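A hedged sketch of what a projection layer over vision-encoder image embeds might look like; the module name and dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

class VisionProjection(nn.Module):
    # Map raw vision-encoder embeddings into the adapter's hidden size.
    def __init__(self, vision_dim: int = 1024, adapter_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, adapter_dim)

    def forward(self, image_embeds: torch.Tensor) -> torch.Tensor:
        return self.proj(image_embeds)
```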
Jaret Burkett
0640cdf569
Handle errors in loading size database
2024-10-20 07:04:19 -06:00
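Presumably the size database is a cached file mapping image paths to dimensions; a sketch of defensive loading under that assumption (the file name and format here are hypothetical):

```python
import json
import os

def load_size_db(path: str = "size_db.json") -> dict:
    # Fall back to an empty dict if the cache is missing or corrupt,
    # so a bad file is rebuilt on the next scan instead of crashing.
    if not os.path.exists(path):
        return {}
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except (json.JSONDecodeError, OSError):
        return {}
```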
Jaret Burkett
ce759ebd8c
Normalize the image embeddings on vd adapter forward
2024-10-12 15:09:48 +00:00
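A minimal sketch of normalizing image embeddings in an adapter's forward pass; whether the toolkit uses L2 normalization or another norm is not stated here:

```python
import torch
import torch.nn.functional as F

def forward_normalized(image_embeds: torch.Tensor) -> torch.Tensor:
    # L2-normalize along the feature dimension so downstream layers
    # see unit-scale embeddings regardless of encoder output magnitude.
    return F.normalize(image_embeds, dim=-1)
```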
Jaret Burkett
628a7923a3
Remove norm on image embeds on custom adapter
2024-10-12 00:43:18 +00:00
Jaret Burkett
3922981996
Added some additional experimental things to the vision direct encoder
2024-10-10 19:42:26 +00:00
Jaret Burkett
ab22674980
Allow for a default caption file in the folder. Minor bug fixes.
2024-10-10 07:31:33 -06:00
Jaret Burkett
9452929300
Apply a mask to the embeds for SD if using T5 encoder
2024-10-04 10:55:20 -06:00
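T5 pads variable-length prompts, so padded positions carry non-zero embeddings unless masked; a hedged sketch of zeroing them out before the embeds reach the model (the exact masking this commit applies is an assumption):

```python
import torch

def mask_t5_embeds(embeds: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # embeds: (batch, seq_len, dim); attention_mask: (batch, seq_len)
    # Zero the embedding vectors at padded token positions.
    return embeds * attention_mask.unsqueeze(-1).to(embeds.dtype)
```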
Jaret Burkett
a800c9d19e
Add a method to have an inference only lora
2024-10-04 10:06:53 -06:00
Jaret Burkett
28e6f00790
Fixed bug where the clip image embed was not actually returned
2024-10-03 10:49:09 -06:00
Jaret Burkett
67e0aca750
Added ability to load clip pairs randomly from folder. Other small bug fixes
2024-10-03 10:03:49 -06:00
Jaret Burkett
f05224970f
Added Vision Language Adapter usage for pixtral vd adapter
2024-09-29 19:39:56 -06:00
Jaret Burkett
b4f64de4c2
Quick patch to scope xformer imports until a better solution
2024-09-28 15:36:42 -06:00
Jaret Burkett
e4c82803e1
Handle random resizing for pixtral input on direct vision adapter
2024-09-28 14:53:38 -06:00
Jaret Burkett
69aa92bce5
Added support for AdEMAMix8bit
2024-09-28 14:33:51 -06:00
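AdEMAMix8bit is the 8-bit AdEMAMix optimizer shipped in recent `bitsandbytes` releases; assuming that is the backend used here, usage is the usual drop-in pattern:

```python
import torch.nn as nn
import bitsandbytes as bnb

model = nn.Linear(128, 128).cuda()
# Hyperparameters below are illustrative, not the toolkit's defaults.
optimizer = bnb.optim.AdEMAMix8bit(model.parameters(), lr=1e-4)
```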
Jaret Burkett
a508caad1d
Change pixtral to crop based on number of pixels instead of largest dimension
2024-09-28 13:05:26 -06:00
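Sizing by total pixel count keeps the vision encoder's effective token count roughly constant across aspect ratios, unlike a longest-side limit; a sketch of the idea (the pixel budget is an assumed value, and the actual crop logic may differ):

```python
import math
from PIL import Image

def resize_to_pixel_budget(img: Image.Image, max_pixels: int = 1024 * 1024) -> Image.Image:
    # Scale so width * height <= max_pixels, preserving aspect ratio.
    w, h = img.size
    if w * h <= max_pixels:
        return img
    scale = math.sqrt(max_pixels / (w * h))
    return img.resize((max(1, int(w * scale)), max(1, int(h * scale))), Image.LANCZOS)
```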
Jaret Burkett
58537fc92b
Added initial direct vision pixtral support
2024-09-28 10:47:51 -06:00
Jaret Burkett
86b5938cf3
Fixed the webp bug finally.
2024-09-25 13:56:00 -06:00
Jaret Burkett
6b4034122f
Remove layers from direct vision resampler
2024-09-24 15:08:29 -06:00
Jaret Burkett
10817696fb
Fixed issue where direct vision was not passing additional params from resampler when it is added
2024-09-24 10:34:11 -06:00
Jaret Burkett
037ce11740
Always return vision encoder in state dict
2024-09-24 07:43:17 -06:00
Jaret Burkett
04424fe2d6
Added config setting to set the timestep type
2024-09-24 06:53:59 -06:00
Jaret Burkett
40a8ff5731
Load local hugging face packages for assistant adapter
2024-09-23 10:37:12 -06:00
Jaret Burkett
2776221497
Added option to cache empty prompt or trigger and unload text encoders while training
2024-09-21 20:54:09 -06:00
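With a fixed empty prompt or trigger word, the text embedding can be computed once and cached, after which the text encoders are dead weight in VRAM; a hedged sketch of the pattern (the encoder/tokenizer APIs follow the `transformers` convention, not necessarily the toolkit's):

```python
import gc
import torch

@torch.no_grad()
def cache_prompt_and_unload(text_encoder, tokenizer, prompt: str = "", device: str = "cuda"):
    # Encode the fixed prompt once and keep the result on CPU.
    tokens = tokenizer(prompt, return_tensors="pt").to(device)
    cached = text_encoder(**tokens).last_hidden_state.cpu()
    # Drop the local reference (the caller must release theirs too)
    # so the encoder's VRAM can be reclaimed.
    del text_encoder
    gc.collect()
    torch.cuda.empty_cache()
    return cached
```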
Jaret Burkett
f85ad452c6
Added initial support for pixtral vision as a vision encoder.
2024-09-21 15:21:14 -06:00
Plat
79b4e04b80
Feat: Wandb logging (#95)
* wandb logging
* fix: start logging before train loop
* chore: add wandb dir to gitignore
* fix: wrap wandb functions
* fix: forget to send last samples
* chore: use valid type
* chore: use None when not type-checking
* chore: resolved complicated logic
* fix: follow log_every
---------
Co-authored-by: Plat <github@p1at.dev>
Co-authored-by: Jaret Burkett <jaretburkett@gmail.com>
2024-09-19 20:01:01 -06:00
Jaret Burkett
951e223481
Added support to disable single transformers in vision direct adapter
2024-09-11 08:54:51 -06:00
Jaret Burkett
fc34a69bec
Ignore guidance embed when full tuning flux. Adjust block scaler to decay to 1.0. Add MLP resampler for reducing vision adapter tokens
2024-09-09 16:24:46 -06:00
Jaret Burkett
279ee65177
Remove block scaler
2024-09-06 08:28:17 -06:00
Jaret Burkett
3a1f464132
Added support for training vision direct weight adapters
2024-09-05 10:11:44 -06:00
Jaret Burkett
121a760c19
Added proper grad accumulation
2024-09-03 07:24:18 -06:00
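"Proper" gradient accumulation scales each micro-batch loss by the accumulation factor so gradients average rather than sum, and steps only every N micro-batches; the canonical PyTorch pattern on a toy model:

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
micro_batches = [torch.randn(4, 8) for _ in range(8)]  # toy data
accum_steps = 4  # illustrative value

optimizer.zero_grad()
for step, x in enumerate(micro_batches):
    loss = model(x).pow(2).mean() / accum_steps  # scale so grads average
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```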
Jaret Burkett
e5fadddd45
Added ability to do prompt attn masking for flux
2024-09-02 17:29:36 -06:00
Jaret Burkett
d44d4eb61a
Added a new experimental linear weighting technique
2024-09-02 09:22:13 -06:00
Jaret Burkett
7d9ab22405
Rework ip adapter and vision direct adapters to apply to the single transformer blocks even though they are not cross attn.
2024-09-01 10:40:42 -06:00
Jaret Burkett
40f5c59da0
Fixes for training ilora on flux
2024-08-31 16:55:26 -06:00
Jaret Burkett
3e71a99df0
Check for contains only against the clean name for lora, not the adjusted one
2024-08-31 07:44:13 -06:00
Jaret Burkett
60232def91
Made preliminary arch for flux ip adapter training
2024-08-28 08:55:39 -06:00
Jaret Burkett
3843e0d148
Added support for vision direct adapter for flux
2024-08-26 16:27:28 -06:00
liaoliaojun
e127c079da
fix: print out the path where the image encode failed (#107)
2024-08-22 21:34:35 -06:00
martintomov
34db804c76
Modal cloud training support, fixed typo in toolkit/scheduler.py, Schnell training support for Colab, issue #92, issue #114 (#115)
* issue #76, load_checkpoint_and_dispatch() 'force_hooks'
https://github.com/ostris/ai-toolkit/issues/76
* RunPod cloud config
https://github.com/ostris/ai-toolkit/issues/90
* change 2x A40 to 1x A40 and price per hour
referring to https://github.com/ostris/ai-toolkit/issues/90#issuecomment-2294894929
* include missed FLUX.1-schnell setup guide in last commit
* huggingface-cli login required auth
* #92 peft, #114 colab, schnell training in colab
* modal cloud - run_modal.py and .yaml configs
* run_modal.py mount path example
* modal_examples renamed to modal
* Training in Modal README.md setup guide
* rename run command in title for consistency
2024-08-22 21:25:44 -06:00
apolinário
4d35a29c97
Add push_to_hub to the trainer (#109)
* add push_to_hub
* fix indentation
* indent again
* model_config
* allow samples to not exist
* repo creation fix
* don't show empty [] if widget doesn't exist
* don't submit the config and optimizer
* Unsafe to have tokens saved in the yaml file
* make sure to catch only the latest samples
* change name to slug
* formatting
* formatting
---------
Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
2024-08-22 21:18:56 -06:00
Jaret Burkett
338c77d677
Fixed breaking change with diffusers. Allow flowmatch on normal stable diffusion models.
2024-08-22 14:36:22 -06:00
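Flow matching replaces the DDPM epsilon target with a velocity target along a linear noise path; a minimal sketch of one common formulation (not necessarily the exact one applied to the SD models here):

```python
import torch
import torch.nn.functional as F

def flow_match_loss(model, x0: torch.Tensor) -> torch.Tensor:
    # Linear path: x_t = (1 - t) * x0 + t * noise; the model
    # regresses the constant velocity (noise - x0) along that path.
    noise = torch.randn_like(x0)
    t = torch.rand(x0.shape[0], device=x0.device).view(-1, 1, 1, 1)
    x_t = (1 - t) * x0 + t * noise
    pred = model(x_t, t.flatten())  # model signature is illustrative
    return F.mse_loss(pred, noise - x0)
```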
Jaret Burkett
a939cf3730
WIP - adding support for flux DoRA and ip adapter training
2024-08-22 04:36:39 -06:00
Jaret Burkett
c45887192a
Unload interim weights when doing multi lora fuse
2024-08-18 09:35:10 -06:00
Jaret Burkett
13a965a26c
Fixed bad key naming on lora fuse I just pushed
2024-08-18 09:33:31 -06:00
Jaret Burkett
f944eeaa4d
Fuse flux schnell assistant adapter in pieces when doing lowvram to drastically speed it up from minutes to seconds.
2024-08-18 09:09:11 -06:00
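Fusing module-by-module avoids materializing the whole fused model on the GPU at once, which is what makes the low-VRAM path fast without exhausting memory; a hedged sketch of the idea using the standard LoRA merge `W += scale * B @ A` (all names here are illustrative, not the toolkit's):

```python
import torch

@torch.no_grad()
def fuse_lora_in_pieces(base_modules: dict, lora_weights: dict,
                        scale: float = 1.0, device: str = "cuda"):
    # Fuse one module at a time: move its weight to the GPU,
    # add the LoRA delta, then move it back to the CPU.
    for name, module in base_modules.items():
        A, B = lora_weights[name]  # A: (rank, in), B: (out, rank)
        w = module.weight.data.to(device)
        w += scale * (B.to(device) @ A.to(device))
        module.weight.data = w.cpu()
```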
Jaret Burkett
81899310f8
Added support for training on flux schnell. Added example config and instructions for training on flux schnell
2024-08-17 06:58:39 -06:00