Jaret Burkett
|
3812957bc9
|
Added ability to train control loras. Other important bug fixes thrown in
|
2025-03-14 18:03:00 -06:00 |
|
Jaret Burkett
|
e6739f7eb2
|
Convert wan lora weights on save to a format ComfyUI can handle
|
2025-03-08 12:55:11 -07:00 |
|
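The conversion above presumably remaps the saved LoRA state-dict keys into the naming scheme ComfyUI expects. A minimal sketch of that kind of key remapping follows; the specific prefixes (`transformer.` → `diffusion_model.`, `lora_A`/`lora_B` → `lora_down`/`lora_up`) and the function name are illustrative assumptions, not the repo's actual code:

```python
def convert_lora_keys_for_comfy(state_dict):
    """Remap diffusers-style LoRA keys to a ComfyUI-style naming scheme.

    The exact prefix and suffix substitutions here are assumptions for
    illustration; the real conversion may differ per architecture.
    """
    converted = {}
    for key, value in state_dict.items():
        new_key = key.replace("transformer.", "diffusion_model.")
        new_key = new_key.replace("lora_A.weight", "lora_down.weight")
        new_key = new_key.replace("lora_B.weight", "lora_up.weight")
        converted[new_key] = value
    return converted
```

Doing the rename once at save time keeps the training-side naming untouched while the exported file loads cleanly elsewhere.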
Jaret Burkett
|
4fe33f51c1
|
Fix issue with picking layers for quantization, adjust layers for better quantization of cogview4
|
2025-03-05 13:44:40 -07:00 |
|
Jaret Burkett
|
6f6fb90812
|
Added cogview4. Loss still needs work.
|
2025-03-04 18:43:52 -07:00 |
|
Jaret Burkett
|
8bb47d1bfe
|
Merge branch 'main' into wan21
|
2025-03-04 00:31:57 -07:00 |
|
Jaret Burkett
|
c5e0c2bbe2
|
Fixes to allow for redux assisted training
|
2025-03-03 16:27:19 -07:00 |
|
Jaret Burkett
|
acc79956aa
|
WIP create new class to add new models more easily
|
2025-03-01 13:49:02 -07:00 |
|
Jaret Burkett
|
56d8d6bd81
|
Capture speed from the timer for the ui
|
2025-02-23 14:38:46 -07:00 |
|
Jaret Burkett
|
60f848a877
|
Send more data to the ui when loading the model
|
2025-02-23 12:49:54 -07:00 |
|
Jaret Burkett
|
adcf884c0f
|
Built out the ui trainer plugin with db communication
|
2025-02-21 05:53:35 -07:00 |
|
Jaret Burkett
|
4af6c5cf30
|
Work on supporting flex.2 potential arch
|
2025-02-17 14:10:25 -07:00 |
|
Jaret Burkett
|
7679105d52
|
Added llm text encoder adapter
|
2025-02-13 08:28:32 -07:00 |
|
Jaret Burkett
|
2622de1e01
|
DFE tweaks. Adding support for more llms as text encoders
|
2025-02-13 04:31:49 -07:00 |
|
Jaret Burkett
|
9a7266275d
|
Work on lumina2
|
2025-02-08 14:52:39 -07:00 |
|
Jaret Burkett
|
d138f07365
|
Initial lumina3 support
|
2025-02-08 10:59:53 -07:00 |
|
Jaret Burkett
|
216ab164ce
|
Experimental features and bug fixes
|
2025-02-04 13:36:34 -07:00 |
|
Jaret Burkett
|
e6180d1e1d
|
Bug fixes
|
2025-01-31 13:23:01 -07:00 |
|
Jaret Burkett
|
15a57bc89f
|
Add new version of DFE. Kitchen sink
|
2025-01-31 11:42:27 -07:00 |
|
Jaret Burkett
|
34a1c6947a
|
Added flux_shift as timestep type
|
2025-01-27 07:35:00 -07:00 |
|
Jaret Burkett
|
5e663746b8
|
Working multi gpu training. Still need a lot of tweaks and testing.
|
2025-01-25 16:46:20 -07:00 |
|
Jaret Burkett
|
fadb2f3a76
|
Allow quantizing the TE independently on flux. Added lognorm_blend timestep schedule
|
2025-01-18 18:02:31 -07:00 |
|
Jaret Burkett
|
4723f23c0d
|
Added ability to split up flux across gpus (experimental). Changed the way timestep scheduling works to prep for more specific schedules.
|
2024-12-31 07:06:55 -07:00 |
|
Jaret Burkett
|
8ef07a9c36
|
Added training for an experimental decorator embedding. Allow for turning off guidance embedding on flux (for unreleased model). Various bug fixes and modifications
|
2024-12-15 08:59:27 -07:00 |
|
Jaret Burkett
|
6509ba4484
|
Fix seed generation to make it deterministic so it is consistent from gpu to gpu
|
2024-11-15 12:11:13 -07:00 |
|
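The usual way to make seeds consistent across ranks, as this fix describes, is to derive them purely from a shared base seed and the step index instead of consuming rank-local RNG state. A sketch of that idea; the hashing scheme and function name are illustrative choices, not the repo's actual code:

```python
import hashlib

def step_seed(base_seed: int, step: int) -> int:
    """Derive a per-step seed purely from (base_seed, step).

    Because the result depends only on its arguments, every GPU/rank
    computes the same seed regardless of how much local RNG state it
    has consumed. The SHA-256 derivation is an illustrative choice.
    """
    digest = hashlib.sha256(f"{base_seed}:{step}".encode()).digest()
    return int.from_bytes(digest[:8], "big")
```

Each rank would then call something like `torch.manual_seed(step_seed(cfg.seed, step))` before noise generation so all ranks draw identical noise when needed.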
Jaret Burkett
|
4aa19b5c1d
|
Only quantize flux T5 if also quantizing model. Load TE from original name and path if fine tuning.
|
2024-10-29 14:25:31 -06:00 |
|
Jaret Burkett
|
22cd40d7b9
|
Improvements for full tuning flux. Added debugging launch config for vscode
|
2024-10-29 04:54:08 -06:00 |
|
Jaret Burkett
|
3400882a80
|
Added preliminary support for SD3.5-large lora training
|
2024-10-22 12:21:36 -06:00 |
|
Jaret Burkett
|
9452929300
|
Apply a mask to the embeds for SD if using T5 encoder
|
2024-10-04 10:55:20 -06:00 |
|
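Masking the embeds as described above typically means zeroing the embedding vectors at padded token positions, since T5 emits nonzero vectors for padding tokens that would otherwise leak into conditioning. A minimal numpy sketch (whether the repo zeroes, drops, or re-weights padded positions is an assumption):

```python
import numpy as np

def mask_prompt_embeds(embeds: np.ndarray,
                       attention_mask: np.ndarray) -> np.ndarray:
    """Zero out embedding vectors at padded token positions.

    embeds: (batch, seq_len, dim) text-encoder outputs.
    attention_mask: (batch, seq_len) of 0/1, with 0 marking padding.
    """
    return embeds * attention_mask[..., None].astype(embeds.dtype)
```

Broadcasting the mask over the hidden dimension keeps real-token embeddings untouched while padding contributes nothing downstream.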
Jaret Burkett
|
a800c9d19e
|
Add a method to have an inference only lora
|
2024-10-04 10:06:53 -06:00 |
|
Jaret Burkett
|
58537fc92b
|
Added initial direct vision pixtral support
|
2024-09-28 10:47:51 -06:00 |
|
Jaret Burkett
|
40a8ff5731
|
Load local hugging face packages for assistant adapter
|
2024-09-23 10:37:12 -06:00 |
|
Jaret Burkett
|
2776221497
|
Added option to cache empty prompt or trigger and unload text encoders while training
|
2024-09-21 20:54:09 -06:00 |
|
Plat
|
79b4e04b80
|
Feat: Wandb logging (#95)
* wandb logging
* fix: start logging before train loop
* chore: add wandb dir to gitignore
* fix: wrap wandb functions
* fix: forget to send last samples
* chore: use valid type
* chore: use None when not type-checking
* chore: resolved complicated logic
* fix: follow log_every
---------
Co-authored-by: Plat <github@p1at.dev>
Co-authored-by: Jaret Burkett <jaretburkett@gmail.com>
|
2024-09-19 20:01:01 -06:00 |
|
Jaret Burkett
|
fc34a69bec
|
Ignore guidance embed when full tuning flux. Adjust block scaler to decay to 1.0. Add MLP resampler for reducing vision adapter tokens
|
2024-09-09 16:24:46 -06:00 |
|
Jaret Burkett
|
e5fadddd45
|
Added ability to do prompt attn masking for flux
|
2024-09-02 17:29:36 -06:00 |
|
Jaret Burkett
|
60232def91
|
Made preliminary arch for flux ip adapter training
|
2024-08-28 08:55:39 -06:00 |
|
Jaret Burkett
|
338c77d677
|
Fixed breaking change with diffusers. Allow flowmatch on normal stable diffusion models.
|
2024-08-22 14:36:22 -06:00 |
|
Jaret Burkett
|
c45887192a
|
Unload interim weights when doing multi lora fuse
|
2024-08-18 09:35:10 -06:00 |
|
Jaret Burkett
|
13a965a26c
|
Fixed bad key naming on lora fuse I just pushed
|
2024-08-18 09:33:31 -06:00 |
|
Jaret Burkett
|
f944eeaa4d
|
Fuse flux schnell assistant adapter in pieces when doing lowvram to drastically speed it up from minutes to seconds.
|
2024-08-18 09:09:11 -06:00 |
|
Jaret Burkett
|
81899310f8
|
Added support for training on flux schnell. Added example config and instructions for training on flux schnell
|
2024-08-17 06:58:39 -06:00 |
|
Jaret Burkett
|
f9179540d2
|
Flush after sampling
|
2024-08-16 17:29:42 -06:00 |
|
Jaret Burkett
|
452e0e286d
|
For lora assisted training, merge in before quantizing then sample with schnell at -1 weight. Almost doubles training speed with lora adapter.
|
2024-08-16 17:28:44 -06:00 |
|
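"Merging in before quantizing then sampling at -1 weight" matches the standard LoRA fusion identity: a fused delta W' = W + scale * (up @ down) can be undone at inference by applying the same delta with scale = -1. A numpy sketch of that arithmetic (parameter names are assumptions; the repo operates on real model weights, not toy matrices):

```python
import numpy as np

def merge_lora(weight: np.ndarray, lora_up: np.ndarray,
               lora_down: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Fuse a LoRA delta into a base weight: W' = W + scale * (up @ down).

    Merging with scale=1.0 before quantization bakes the adapter in;
    sampling "at -1 weight" corresponds to scale=-1.0, which cancels
    the fused delta without re-loading the base weights.
    """
    return weight + scale * (lora_up @ lora_down)
```

Because the two applications cancel exactly (up to quantization error), the adapter only has to be materialized once, which is where the speedup comes from.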
Jaret Burkett
|
7fed4ea761
|
Fixed huge flux training bug. Added ability to use an assistant lora
|
2024-08-14 10:14:13 -06:00 |
|
Jaret Burkett
|
599fafe01f
|
Allow user to have the full flux checkpoint local
|
2024-08-12 09:57:16 -06:00 |
|
Jaret Burkett
|
6490a326e5
|
Fixed issue for vaes without a shift
|
2024-08-11 10:30:55 -06:00 |
|
Jaret Burkett
|
ec1ea7aa0e
|
Added support for training on primary gpu with low_vram flag. Updated example script to remove creepy horse sample at that seed
|
2024-08-11 09:54:30 -06:00 |
|
Jaret Burkett
|
b3e03295ad
|
Reworked flux pred. Again
|
2024-08-08 13:06:34 -06:00 |
|
Jaret Burkett
|
acafe9984f
|
Adjustments to loading of flux. Added a feedback to ema
|
2024-08-07 13:17:26 -06:00 |
|
Jaret Burkett
|
c2424087d6
|
8 bit training working on flux
|
2024-08-06 11:53:27 -06:00 |
|