Commit Graph

551 Commits

Author SHA1 Message Date
Jaret Burkett
3cdf50cbfc Merge pull request #426 from squewel/prior_reg
Dataset-level prior regularization
2025-09-18 03:03:18 -06:00
squewel
e27e229b36 add prior_reg flag to FileItemDTO 2025-09-18 02:09:39 +03:00
max
e4ae97e790 add dataset-level distillation-style regularization 2025-09-18 01:11:19 +03:00
Jaret Burkett
218f673e3d Added support for new concept slider training script to CLI and UI 2025-09-16 10:22:34 -06:00
Jaret Burkett
3666b112a8 DEF for fake vae and adjust scaling 2025-09-12 18:09:08 -06:00
Jaret Burkett
b95c17dc17 Add initial support for chroma radiance 2025-09-10 08:41:05 -06:00
Jaret Burkett
af6fdaaaf9 Add ability to train a full rank LoRA. (experimental) 2025-09-09 07:36:25 -06:00
Jaret Burkett
f699f4be5f Add ability to set transparent color for control images 2025-09-02 11:08:44 -06:00
Jaret Burkett
85dcae6e2b Set full size control images to default true 2025-09-02 10:30:42 -06:00
Jaret Burkett
7040d8d73b Preparation for audio 2025-09-02 07:26:50 -06:00
Jaret Burkett
9ef425a1c5 Fixed issue with training qwen with cached text embeds with a batch size greater than 1 2025-08-28 08:07:12 -06:00
Jaret Burkett
1f541bc5d8 Changes to handle a different DFE arch 2025-08-27 11:05:16 -06:00
Jaret Burkett
ea01a1c7d0 Fixed a bug where samples would fail when merging in a LoRA on sampling for unquantized models. Quantize non-ARA modules as uint8 when using an ARA 2025-08-25 09:21:40 -06:00
Jaret Burkett
f48d21caee Upgrade a LoRA rank if the new one is larger so users can increase the rank on an existing training job and continue training at a higher rank. 2025-08-24 13:40:25 -06:00
Jaret Burkett
5c27f89af5 Add example config for qwen image edit 2025-08-23 18:20:36 -06:00
Jaret Burkett
bf2700f7be Initial support for finetuning qwen image. Will only work with caching for now, need to add controls everywhere. 2025-08-21 16:41:17 -06:00
Jaret Burkett
8ea2cf00f6 Added training to the UI. Still testing, but everything seems to be working. 2025-08-16 05:51:37 -06:00
Jaret Burkett
3413fa537f Wan2.2 14b training is working, still needs tons of testing and some bug fixes 2025-08-14 13:03:27 -06:00
Jaret Burkett
be71cc75ce Switch to unified text encoder for wan models. Pred for 2.2 14b 2025-08-14 10:07:18 -06:00
Jaret Burkett
e12bb21780 Quantize blocks sequentially without an ARA 2025-08-14 09:59:58 -06:00
Jaret Burkett
3ff4430e84 Fix issue with fake text encoder unload 2025-08-14 09:33:44 -06:00
Jaret Burkett
85bad57df3 Fix bug that would use EMA when set to false 2025-08-13 11:39:40 -06:00
Jaret Burkett
77b10d884d Add support for training with an accuracy recovery adapter with qwen image 2025-08-12 08:21:36 -06:00
Jaret Burkett
bb6db3d635 Added support for caching text embeddings. This is just initial support and will probably fail for some models. Still needs to be optimized 2025-08-07 10:27:55 -06:00
Jaret Burkett
5d8922fca2 Add ability to designate a dataset as i2v or t2v for models that support it 2025-08-06 09:29:47 -06:00
Jaret Burkett
9da8b5408e Initial but untested support for qwen_image 2025-08-04 13:29:37 -06:00
Jaret Burkett
9dfb614755 Initial work for training wan first and last frame 2025-08-04 11:37:26 -06:00
Jaret Burkett
f453e28ea3 Fixed lumina pipeline deprecation error 2025-07-29 08:26:51 -06:00
Jaret Burkett
ca7c5c950b Add support for Wan2.2 5B 2025-07-29 05:31:54 -06:00
Jaret Burkett
cefa2ca5fe Added initial support for Hidream E1 training 2025-07-27 15:12:56 -06:00
Jaret Burkett
77dc38a574 Some work on caching text embeddings 2025-07-26 09:22:04 -06:00
Jaret Burkett
c5eb763342 Improvements to VAE trainer. Allow CLIP loss. 2025-07-24 06:50:56 -06:00
Jaret Burkett
f500b9f240 Add ability to use more advanced sample prompt objects to prepare for a UI rework on control images and other things. 2025-07-17 07:13:35 -06:00
Jaret Burkett
e5ed450dc7 Allow finetuning tiny autoencoder in vae trainer 2025-07-16 07:13:30 -06:00
Jaret Burkett
1930c3edea Fix naming with wan i2v new keys in lora 2025-07-14 07:34:01 -06:00
Jaret Burkett
755f0e207c Fix issue with wan i2v scaling. Adjust aggressive loader to be compatible with updated diffusers. 2025-07-12 16:56:27 -06:00
Jaret Burkett
60ef2f1df7 Added support for FLUX.1-Kontext-dev 2025-06-26 15:24:37 -06:00
Jaret Burkett
8d9c47316a Work on mean flow. Minor bug fixes. Omnigen improvements 2025-06-26 13:46:20 -06:00
Jaret Burkett
24cd94929e Fix bug that can happen with fast processing dataset 2025-06-25 14:01:08 -06:00
Jaret Burkett
19ea8ecc38 Added support for finetuning OmniGen2. 2025-06-25 13:58:16 -06:00
Jaret Burkett
03bc431279 Fixed an issue training lumina 2 2025-06-24 10:29:47 -06:00
Jaret Burkett
f3eb1dff42 Add a config flag to trigger fast image size db builder. Add config flag to set unconditional prompt for guidance loss 2025-06-24 08:51:29 -06:00
Jaret Burkett
ba1274d99e Added a guidance burning loss. Modified DFE to work with new model. Bug fixes 2025-06-23 08:38:27 -06:00
Jaret Burkett
8602470952 Updated diffusion feature extractor 2025-06-19 15:36:10 -06:00
Jaret Burkett
989ebfaa11 Added a basic torch profiler that can be used in config during development to find some obvious issues. 2025-06-17 13:03:39 -06:00
Jaret Burkett
1cc663a664 Performance optimizations for pre processing the batch 2025-06-17 07:37:41 -06:00
Jaret Burkett
1c2b7298dd More work on mean flow loss. Moved it to an adapter. Still not functioning properly though. 2025-06-16 07:17:35 -06:00
Jaret Burkett
c0314ba325 Fixed some issues with training mean flow algo. Still testing WIP 2025-06-16 07:14:59 -06:00
Jaret Burkett
fc83eb7691 WIP on mean flow loss. Still a WIP. 2025-06-12 08:00:51 -06:00
Hameer Abbasi
5e86139e0a Fix NameError. 2025-06-11 15:07:20 +02:00