Jaret Burkett
|
1c96b95617
|
Fix issue where sometimes the transformer does not get loaded properly.
|
2025-08-14 14:24:41 -06:00 |
|
Jaret Burkett
|
3413fa537f
|
Wan2.2 14B training is working; still needs tons of testing and some bug fixes
|
2025-08-14 13:03:27 -06:00 |
|
Jaret Burkett
|
259d68d440
|
Added a flush during sampling to prevent spikes on low VRAM Qwen
|
2025-08-12 12:57:18 -06:00 |
|
Jaret Burkett
|
77b10d884d
|
Add support for training with an accuracy recovery adapter with qwen image
|
2025-08-12 08:21:36 -06:00 |
|
Jaret Burkett
|
4ad18f3d00
|
Clip max token embeddings to the max RoPE length for Qwen Image to fix an error with super long captions (> 1024 tokens)
|
2025-08-10 08:44:41 -06:00 |
|
Jaret Burkett
|
f0105c33a7
|
Fixed issue that sometimes happens in qwen image where text seq length is wrong
|
2025-08-09 16:33:05 -06:00 |
|
Jaret Burkett
|
bb6db3d635
|
Added support for caching text embeddings. This is just initial support and will probably fail for some models. Still needs to be optimized.
|
2025-08-07 10:27:55 -06:00 |
|
Jaret Burkett
|
4c4a10d439
|
Remove vision model from qwen text encoder as it is not needed for image generation currently
|
2025-08-06 11:40:02 -06:00 |
|
Jaret Burkett
|
14ccf2f3ce
|
Refactor Qwen 5B model code to be Qwen 5B specific
|
2025-08-06 10:54:56 -06:00 |
|
Jaret Burkett
|
5d8922fca2
|
Add ability to designate a dataset as i2v or t2v for models that support it
|
2025-08-06 09:29:47 -06:00 |
|
Jaret Burkett
|
93202c7a2b
|
Training working for Qwen Image
|
2025-08-04 21:14:30 +00:00 |
|
Jaret Burkett
|
9da8b5408e
|
Initial but untested support for qwen_image
|
2025-08-04 13:29:37 -06:00 |
|
Jaret Burkett
|
a558d5b68f
|
Move transformer back to device on aggressive Wan 2.2 pipeline after generation.
|
2025-07-29 09:13:47 -06:00 |
|
Jaret Burkett
|
1d1199b15b
|
Fix bug that prevented training wan 2.2 with batch size greater than 1
|
2025-07-29 09:06:25 -06:00 |
|
Jaret Burkett
|
ca7c5c950b
|
Add support for Wan2.2 5B
|
2025-07-29 05:31:54 -06:00 |
|
Jaret Burkett
|
cefa2ca5fe
|
Added initial support for Hidream E1 training
|
2025-07-27 15:12:56 -06:00 |
|
Daniel Verdu
|
a77ba5a089
|
fix: Guidance incorrect shape
|
2025-07-18 12:49:18 +02:00 |
|
Jaret Burkett
|
611969ec1f
|
Allow control image for omnigen training and sampling
|
2025-07-09 13:54:55 -06:00 |
|
Jaret Burkett
|
bbb57de6ec
|
Speed up omnigen TE loading
|
2025-07-05 09:32:00 -06:00 |
|
Jaret Burkett
|
5906a76666
|
Fixed issue with flux kontext forcing generation image sizes
|
2025-06-29 05:38:20 -06:00 |
|
Jaret Burkett
|
57a81bc0db
|
Update base model version for kontext meta
|
2025-06-28 14:48:36 -06:00 |
|
Jaret Burkett
|
01a3c8a9b1
|
Fix device issue
|
2025-06-26 19:14:25 -06:00 |
|
Jaret Burkett
|
4f91cb7148
|
Fix issue with gradient checkpointing and flux kontext
|
2025-06-26 19:03:12 -06:00 |
|
Jaret Burkett
|
446b0b6989
|
Remove revision for kontext
|
2025-06-26 16:46:58 -06:00 |
|
Jaret Burkett
|
60ef2f1df7
|
Added support for FLUX.1-Kontext-dev
|
2025-06-26 15:24:37 -06:00 |
|
Jaret Burkett
|
8d9c47316a
|
Work on mean flow. Minor bug fixes. Omnigen improvements
|
2025-06-26 13:46:20 -06:00 |
|
Jaret Burkett
|
19ea8ecc38
|
Added support for finetuning OmniGen2.
|
2025-06-25 13:58:16 -06:00 |
|
Jaret Burkett
|
ffaf2f154a
|
Fix issue with the way chroma handled gradient checkpointing.
|
2025-05-28 08:41:47 -06:00 |
|
Jaret Burkett
|
79bb9be92b
|
Fix issue with saving chroma full finetune.
|
2025-05-28 07:42:30 -06:00 |
|
Jaret Burkett
|
79499fa795
|
Allow fine tuning pruned versions of chroma. Allow flash attention 2 for chroma if it is installed.
|
2025-05-21 07:02:50 -06:00 |
|
Jaret Burkett
|
6174ba474e
|
Fixed issue with chroma sampling
|
2025-05-10 18:30:23 +00:00 |
|
Jaret Burkett
|
43cb5603ad
|
Added chroma model to the UI. Added logic to easily pull the latest, use a local, or use a specific version of chroma. Allow a custom name or path in the UI for custom models.
|
2025-05-07 12:06:30 -06:00 |
|
Jaret Burkett
|
d9700bdb99
|
Added initial support for f-lite model
|
2025-05-01 11:15:18 -06:00 |
|
Jaret Burkett
|
add83df5cc
|
Fixed issue with training hidream when batch size is larger than 1
|
2025-04-21 17:26:29 +00:00 |
|
Jaret Burkett
|
77001ee77f
|
Update model tag on LoRAs
|
2025-04-19 10:41:27 -06:00 |
|
Jaret Burkett
|
0f99fce004
|
Adjust hidream lora names to work with comfy
|
2025-04-16 09:24:23 -06:00 |
|
Jaret Burkett
|
524bd2edfc
|
Make flash attn optional. Handle larger batch sizes.
|
2025-04-14 14:34:46 +00:00 |
|
Jaret Burkett
|
3a5ea2c742
|
Remove some moe stuff for finetuning. Drastically reduces vram usage
|
2025-04-14 00:57:34 +00:00 |
|
Jaret Burkett
|
f80cf99f40
|
Hidream is training, but has a memory leak
|
2025-04-13 23:28:18 +00:00 |
|
Jaret Burkett
|
594e166ca3
|
Initial support for hidream. Still a WIP
|
2025-04-13 13:50:11 -06:00 |
|
Jaret Burkett
|
7c21eac1b3
|
Added support for Lodestone Rock's Chroma model
|
2025-04-05 13:21:36 -06:00 |
|