Commit Graph

43 Commits

Author SHA1 Message Date
Jaret Burkett
6fc9ec1396 Added example config for training wan22 14b 24GB on images 2025-08-28 13:08:49 -06:00
Jaret Burkett
5c27f89af5 Add example config for qwen image edit 2025-08-23 18:20:36 -06:00
Jaret Burkett
554dfb33bc Added example config file for qwen image at 24GB 2025-08-23 12:37:46 -06:00
Jaret Burkett
60ef2f1df7 Added support for FLUX.1-Kontext-dev 2025-06-26 15:24:37 -06:00
Jaret Burkett
19ea8ecc38 Added support for finetuning OmniGen2. 2025-06-25 13:58:16 -06:00
Jaret Burkett
43cb5603ad Added chroma model to the ui. Added logic to easily pull latest, use local, or use a specific version of chroma. Allow custom name or path in the ui for custom models 2025-05-07 12:06:30 -06:00
Jaret Burkett
8ff85ba14f Add Flex2 training example 2025-04-22 11:59:47 -06:00
Jaret Burkett
79c87701e7 Add hidream to the ui 2025-04-16 13:45:21 -06:00
Jaret Burkett
fecc64e646 Update hidream defaults, pass additional information to flow guidance 2025-04-16 13:03:04 -06:00
Jaret Burkett
d5a64006b5 Added example config to train hidream 2025-04-16 10:18:22 -06:00
Jaret Burkett
7c21eac1b3 Added support for Lodestone Rock's Chroma model 2025-04-05 13:21:36 -06:00
Jaret Burkett
ab59ca5091 Updated comment on control path 2025-04-04 10:23:06 -06:00
Jaret Burkett
eddd3c1611 Added finetuning/training example for redux 2025-04-04 10:05:41 -06:00
Jaret Burkett
51ad19b568 Add config file examples for training Wan LoRAs on 24GB cards. 2025-03-08 13:56:21 -07:00
Jaret Burkett
ed1deb71c4 Added examples for training lumina2 2025-02-08 16:13:18 -07:00
Jaret Burkett
a6a690f796 Update full fine tune example to only train transformer blocks. 2025-01-24 09:28:34 -07:00
Jaret Burkett
6a8e3d8610 Added a config file for full finetuning flex. Added a lora extraction script for flex 2025-01-20 10:09:01 -07:00
Jaret Burkett
4c8a9e1b88 Added example config to train Flex 2025-01-18 18:03:20 -07:00
Jaret Burkett
3400882a80 Added preliminary support for SD3.5-large lora training 2024-10-22 12:21:36 -06:00
martintomov
34db804c76 Modal cloud training support, fixed typo in toolkit/scheduler.py, Schnell training support for Colab, issue #92 , issue #114 (#115)
* issue #76, load_checkpoint_and_dispatch() 'force_hooks'

https://github.com/ostris/ai-toolkit/issues/76

* RunPod cloud config

https://github.com/ostris/ai-toolkit/issues/90

* change 2x A40 to 1x A40 and price per hour

referring to https://github.com/ostris/ai-toolkit/issues/90#issuecomment-2294894929

* include missed FLUX.1-schnell setup guide in last commit

* huggingface-cli login required auth

* #92 peft, #114 colab, schnell training in colab

* modal cloud - run_modal.py and .yaml configs

* run_modal.py mount path example

* modal_examples renamed to modal

* Training in Modal README.md setup guide

* rename run command in title for consistency
2024-08-22 21:25:44 -06:00
apolinário
4d35a29c97 Add push_to_hub to the trainer (#109)
* add push_to_hub

* fix indentation

* indent again

* model_config

* allow samples to not exist

* repo creation fix

* dont show empty [] if widget doesnt exist

* dont submit the config and optimizer

* Unsafe to have tokens saved in the yaml file

* make sure to catch only the latest samples

* change name to slug

* formatting

* formatting

---------

Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
2024-08-22 21:18:56 -06:00
Jaret Burkett
81899310f8 Added support for training on flux schnell. Added example config and instructions for training on flux schnell 2024-08-17 06:58:39 -06:00
Jaret Burkett
418f5f7e8c Added new experimental time step weighing that should solve a lot of issues with distribution. Updated example. Removed a warning 2024-08-13 12:02:11 -06:00
Jaret Burkett
8d48ad4e85 fixed bug I added to demo config 2024-08-11 10:28:39 -06:00
Jaret Burkett
ec1ea7aa0e Added support for training on primary gpu with low_vram flag. Updated example script to remove creepy horse sample at that seed 2024-08-11 09:54:30 -06:00
Jaret Burkett
2308ef2868 Added flux training instructions 2024-08-10 14:10:02 -06:00
Jaret Burkett
c6675e2801 Added shuffling to prompts 2023-08-19 07:57:30 -06:00
Jaret Burkett
90eedb78bf Added multiplier jitter, min_snr, ability to choose sdxl encoders to use, shuffle generator, and other fun 2023-08-19 05:54:22 -06:00
Jaret Burkett
8c90fa86c6 Complete rework of how slider training works and optimized it to hell. Can run entire algorithm in 1 batch now with less VRAM consumption than a quarter of it used to take 2023-08-05 18:46:08 -06:00
Jaret Burkett
66c6f0f6f7 Big refactor of SD runner and added image generator 2023-08-03 14:51:25 -06:00
Jaret Burkett
2bf3e529ce Set gradient checkpointing on unet enabled by default. Helps out immensely with sdxl backprop spikes 2023-08-01 15:43:27 -06:00
Jaret Burkett
8b8d53888d Added Model rescale and prepared a release upgrade 2023-08-01 13:49:54 -06:00
Jaret Burkett
61dd818608 Added anchors to regulate the lora 2023-07-24 14:59:16 -06:00
Jaret Burkett
9a2819900c added target weight to targets 2023-07-23 14:08:37 -06:00
Jaret Burkett
452f2a6da2 Added info, config, etc for lora extractor and slider trainer 2023-07-23 13:13:45 -06:00
Jaret Burkett
9367089d48 Added example for slider training that will run as is 2023-07-23 11:24:12 -06:00
Jaret Burkett
982e0be7a9 Removed train config, updating it, and added my llvae as pytorch model 2023-07-20 14:12:28 -06:00
Jaret Burkett
78b59c5e99 Added support for 3cleir, not fully tested 2023-07-16 15:35:14 -06:00
Jaret Burkett
8d6edae9fd Added support for traditional LoRA extract using LoCon script 2023-07-12 19:51:40 -06:00
Jaret Burkett
57f14e5ef2 WIP implementing training 2023-07-12 08:23:46 -06:00
Jaret Burkett
47d094e528 Setup base for training jobs. Added sd-scripts as a submodule 2023-07-08 13:50:59 -06:00
Jaret Burkett
37354b006e Reworked so everything is in classes for easy expansion. Single entry point for all config files now. 2023-07-08 09:51:42 -06:00
Jaret Burkett
e4de8983c9 Initial commit 2023-07-05 16:44:58 -06:00