Jaret Burkett | dc1cc3e78a | Fixed issue where multi control samples didn't work when not caching | 2025-10-05 14:38:53 -06:00
Jaret Burkett | 4e5707854f | Initial support for RamTorch. Still a WIP | 2025-10-05 13:03:26 -06:00
Jaret Burkett | 3086a58e5b | git status | 2025-10-01 14:12:17 -06:00
Jaret Burkett | 3b1f7b0948 | Allow the user to set the attention backend. Add a method to recover from the occasional OOM if it is a rare event; still exit if it OOMs 3 times in a row. | 2025-09-27 08:56:15 -06:00
Jaret Burkett | be990630b9 | Remove dropout from cached text embeddings even if the user specifies it, so blank prompts are not cached. | 2025-09-26 11:50:53 -06:00
Jaret Burkett | 1069dee0e4 | Added UI support for multi control samples and datasets. Added Qwen Image Edit 2509 to the UI | 2025-09-25 11:10:02 -06:00
Jaret Burkett | 454be0958a | Initial support for qwen image edit plus | 2025-09-24 11:39:10 -06:00
Jaret Burkett | f74475161e | Add stepped loss type | 2025-09-22 15:50:12 -06:00
Jaret Burkett | 28728a1e92 | Added experimental DFE 5 | 2025-09-21 10:48:52 -06:00
Jaret Burkett | 390e21bec6 | Integrate dataset level trigger words and allow them to be cached. Default to global trigger if it is set. | 2025-09-18 03:29:18 -06:00
Jaret Burkett | 3cdf50cbfc | Merge pull request #426 from squewel/prior_reg: Dataset-level prior regularization | 2025-09-18 03:03:18 -06:00
squewel | e27e229b36 | add prior_reg flag to FileItemDTO | 2025-09-18 02:09:39 +03:00
max | e4ae97e790 | add dataset-level distillation-style regularization | 2025-09-18 01:11:19 +03:00
Jaret Burkett | 218f673e3d | Added support for new concept slider training script to CLI and UI | 2025-09-16 10:22:34 -06:00
Jaret Burkett | 3666b112a8 | DEF for fake vae and adjust scaling | 2025-09-12 18:09:08 -06:00
Jaret Burkett | b95c17dc17 | Add initial support for chroma radiance | 2025-09-10 08:41:05 -06:00
Jaret Burkett | af6fdaaaf9 | Add ability to train a full rank LoRA. (experimental) | 2025-09-09 07:36:25 -06:00
Jaret Burkett | f699f4be5f | Add ability to set transparent color for control images | 2025-09-02 11:08:44 -06:00
Jaret Burkett | 85dcae6e2b | Set full size control images to default true | 2025-09-02 10:30:42 -06:00
Jaret Burkett | 7040d8d73b | Preparation for audio | 2025-09-02 07:26:50 -06:00
Jaret Burkett | 9ef425a1c5 | Fixed issue with training qwen with cached text embeds with a batch size greater than 1 | 2025-08-28 08:07:12 -06:00
Jaret Burkett | 1f541bc5d8 | Changes to handle a different DFE arch | 2025-08-27 11:05:16 -06:00
Jaret Burkett | ea01a1c7d0 | Fixed a bug where samples would fail if merging in a LoRA on sampling for unquantized models. Quantize non-ARA modules as uint8 when using an ARA | 2025-08-25 09:21:40 -06:00
Jaret Burkett | f48d21caee | Upgrade a LoRA rank if the new one is larger, so users can increase the rank on an existing training job and continue training at a higher rank. | 2025-08-24 13:40:25 -06:00
Jaret Burkett | 5c27f89af5 | Add example config for qwen image edit | 2025-08-23 18:20:36 -06:00
Jaret Burkett | bf2700f7be | Initial support for finetuning qwen image. Will only work with caching for now, need to add controls everywhere. | 2025-08-21 16:41:17 -06:00
Jaret Burkett | 8ea2cf00f6 | Added training to the UI. Still testing, but everything seems to be working. | 2025-08-16 05:51:37 -06:00
Jaret Burkett | 3413fa537f | Wan2.2 14B training is working; still needs tons of testing and some bug fixes | 2025-08-14 13:03:27 -06:00
Jaret Burkett | be71cc75ce | Switch to unified text encoder for wan models. Pred for 2.2 14b | 2025-08-14 10:07:18 -06:00
Jaret Burkett | e12bb21780 | Quantize blocks sequentially without an ARA | 2025-08-14 09:59:58 -06:00
Jaret Burkett | 3ff4430e84 | Fix issue with fake text encoder unload | 2025-08-14 09:33:44 -06:00
Jaret Burkett | 85bad57df3 | Fix bug that would use EMA when set to false | 2025-08-13 11:39:40 -06:00
Jaret Burkett | 77b10d884d | Add support for training with an accuracy recovery adapter with qwen image | 2025-08-12 08:21:36 -06:00
Jaret Burkett | bb6db3d635 | Added support for caching text embeddings. This is just initial support and will probably fail for some models. Still needs to be optimized | 2025-08-07 10:27:55 -06:00
Jaret Burkett | 5d8922fca2 | Add ability to designate a dataset as i2v or t2v for models that support it | 2025-08-06 09:29:47 -06:00
Jaret Burkett | 9da8b5408e | Initial but untested support for qwen_image | 2025-08-04 13:29:37 -06:00
Jaret Burkett | 9dfb614755 | Initial work for training wan first and last frame | 2025-08-04 11:37:26 -06:00
Jaret Burkett | f453e28ea3 | Fixed lumina pipeline deprecation error | 2025-07-29 08:26:51 -06:00
Jaret Burkett | ca7c5c950b | Add support for Wan2.2 5B | 2025-07-29 05:31:54 -06:00
Jaret Burkett | cefa2ca5fe | Added initial support for Hidream E1 training | 2025-07-27 15:12:56 -06:00
Jaret Burkett | 77dc38a574 | Some work on caching text embeddings | 2025-07-26 09:22:04 -06:00
Jaret Burkett | c5eb763342 | Improvements to VAE trainer. Allow CLIP loss. | 2025-07-24 06:50:56 -06:00
Jaret Burkett | f500b9f240 | Add ability to do more advanced sample prompt objects to prepare for a UI rework on control images and other things. | 2025-07-17 07:13:35 -06:00
Jaret Burkett | e5ed450dc7 | Allow finetuning tiny autoencoder in vae trainer | 2025-07-16 07:13:30 -06:00
Jaret Burkett | 1930c3edea | Fix naming with wan i2v new keys in lora | 2025-07-14 07:34:01 -06:00
Jaret Burkett | 755f0e207c | Fix issue with wan i2v scaling. Adjust aggressive loader to be compatible with updated diffusers. | 2025-07-12 16:56:27 -06:00
Jaret Burkett | 60ef2f1df7 | Added support for FLUX.1-Kontext-dev | 2025-06-26 15:24:37 -06:00
Jaret Burkett | 8d9c47316a | Work on mean flow. Minor bug fixes. Omnigen improvements | 2025-06-26 13:46:20 -06:00
Jaret Burkett | 24cd94929e | Fix bug that can happen with fast processing dataset | 2025-06-25 14:01:08 -06:00
Jaret Burkett | 19ea8ecc38 | Added support for finetuning OmniGen2. | 2025-06-25 13:58:16 -06:00