Jaret Burkett | 87edca1b2b | Added initial support to initiate lora training from an existing lora | 2025-12-22 12:49:15 -07:00
Jaret Burkett | 8864ba915e | Remove easy-dwpose from the default requirements | 2025-12-20 07:16:20 -07:00
Jaret Burkett | ba00eea7d9 | Add loss graph to the ui | 2025-12-18 10:08:59 -07:00
Jaret Burkett | e6c5aead3b | Fix issue that prevented ramtorch layer offloading with z_image | 2025-12-02 16:14:34 -07:00
Jaret Burkett | d42f5af2fc | Fixed issue with DOP when using Z-Image | 2025-11-28 09:36:21 -07:00
Jaret Burkett | 08a39754a4 | Fixed issue that prevented caching text embeddings on z-image | 2025-11-28 09:19:39 -07:00
Jaret Burkett | 4e62c38df5 | Add support for training Z-Image Turbo with a de-distill training adapter | 2025-11-28 08:08:53 -07:00
Jaret Burkett | 50e5d99545 | Fix issue where text encoder was not fully unloaded in some instances | 2025-11-19 09:01:00 -07:00
Jaret Burkett | 323b4aaf5a | Do not copy pin memory if it fails, just move | 2025-11-17 18:04:00 +00:00
Jaret Burkett | 2e7b2d9926 | Added Differential Guidance training target | 2025-11-10 09:38:25 -07:00
Jaret Burkett | 6f308fc46e | When doing guidance loss, make CFG zero an optional target instead of a forced one. | 2025-11-04 09:16:15 -07:00
Jaret Burkett | 8c12977891 | Fixed adafactor eps | 2025-10-26 05:47:25 -06:00
Jaret Burkett | ee206cfa18 | Added blank prompt preservation | 2025-10-22 14:55:13 -06:00
Jaret Burkett | ff14cd6343 | Fix check for making sure vae is on the right device. | 2025-10-21 14:49:20 -06:00
Jaret Burkett | 5123090f6c | Adjust dataloader tester to handle videos so they can be tested | 2025-10-21 14:47:23 -06:00
Jaret Burkett | 0d8a33dc16 | Offload ARA with the layer if doing layer offloading. Add support to offload the LoRA. Still needs optimizer support | 2025-10-21 06:03:27 -06:00
Jaret Burkett | 76ce757e0c | Added initial support for layer offloading with Wan 2.2 14B models. | 2025-10-20 14:54:30 -06:00
Jaret Burkett | 1f81bc4060 | Fix issue where text encoder could be the wrong quantization and fail when using memory manager | 2025-10-15 11:01:30 -06:00
Jaret Burkett | 7abf5e20be | Add conv3d to memory management excluded modules | 2025-10-15 10:12:06 -06:00
Jaret Burkett | 1bc6dee127 | Change auto_memory to be layer_offloading and allow you to set the amount to unload | 2025-10-10 13:12:32 -06:00
Jaret Burkett | 55b8b0e23e | Fix issue where ARA was not working when using memory manager | 2025-10-07 13:39:44 -06:00
Jaret Burkett | c9f982af83 | Add support for using quantized models with ramtorch | 2025-10-06 13:46:57 -06:00
Jaret Burkett | dc1cc3e78a | Fixed issue where multi control samples didn't work when not caching | 2025-10-05 14:38:53 -06:00
Jaret Burkett | 4e5707854f | Initial support for RamTorch. Still a WIP | 2025-10-05 13:03:26 -06:00
Jaret Burkett | 3086a58e5b | git status | 2025-10-01 14:12:17 -06:00
Jaret Burkett | 3b1f7b0948 | Allow user to set the attention backend. Add method to recover from the occasional OOM if it is a rare event. Still exit if it OOMs 3 times in a row. | 2025-09-27 08:56:15 -06:00
Jaret Burkett | be990630b9 | Remove dropout from cached text embeddings even if user specifies it, so blank prompts are not cached. | 2025-09-26 11:50:53 -06:00
Jaret Burkett | 1069dee0e4 | Added ui support for multi control samples and datasets. Added qwen image edit 5209 to the ui | 2025-09-25 11:10:02 -06:00
Jaret Burkett | 454be0958a | Initial support for qwen image edit plus | 2025-09-24 11:39:10 -06:00
Jaret Burkett | f74475161e | Add stepped loss type | 2025-09-22 15:50:12 -06:00
Jaret Burkett | 28728a1e92 | Added experimental dfe 5 | 2025-09-21 10:48:52 -06:00
Jaret Burkett | 390e21bec6 | Integrate dataset level trigger words and allow them to be cached. Default to global trigger if it is set. | 2025-09-18 03:29:18 -06:00
Jaret Burkett | 3cdf50cbfc | Merge pull request #426 from squewel/prior_reg: Dataset-level prior regularization | 2025-09-18 03:03:18 -06:00
squewel | e27e229b36 | add prior_reg flag to FileItemDTO | 2025-09-18 02:09:39 +03:00
max | e4ae97e790 | add dataset-level distillation-style regularization | 2025-09-18 01:11:19 +03:00
Jaret Burkett | 218f673e3d | Added support for new concept slider training script to CLI and UI | 2025-09-16 10:22:34 -06:00
Jaret Burkett | 3666b112a8 | DEF for fake vae and adjust scaling | 2025-09-12 18:09:08 -06:00
Jaret Burkett | b95c17dc17 | Add initial support for chroma radiance | 2025-09-10 08:41:05 -06:00
Jaret Burkett | af6fdaaaf9 | Add ability to train a full rank LoRA. (experimental) | 2025-09-09 07:36:25 -06:00
Jaret Burkett | f699f4be5f | Add ability to set transparent color for control images | 2025-09-02 11:08:44 -06:00
Jaret Burkett | 85dcae6e2b | Set full size control images to default true | 2025-09-02 10:30:42 -06:00
Jaret Burkett | 7040d8d73b | Preparation for audio | 2025-09-02 07:26:50 -06:00
Jaret Burkett | 9ef425a1c5 | Fixed issue with training qwen with cached text embeds with a batch size greater than 1 | 2025-08-28 08:07:12 -06:00
Jaret Burkett | 1f541bc5d8 | Changes to handle a different DFE arch | 2025-08-27 11:05:16 -06:00
Jaret Burkett | ea01a1c7d0 | Fixed a bug where samples would fail if merging in lora on sampling for unquantized models. Quantize non-ARA modules as uint8 when using an ARA | 2025-08-25 09:21:40 -06:00
Jaret Burkett | f48d21caee | Upgrade a LoRA rank if the new one is larger so users can increase the rank on an existing training job and continue training at a higher rank. | 2025-08-24 13:40:25 -06:00
Jaret Burkett | 5c27f89af5 | Add example config for qwen image edit | 2025-08-23 18:20:36 -06:00
Jaret Burkett | bf2700f7be | Initial support for finetuning qwen image. Will only work with caching for now, need to add controls everywhere. | 2025-08-21 16:41:17 -06:00
Jaret Burkett | 8ea2cf00f6 | Added training to the ui. Still testing, but everything seems to be working. | 2025-08-16 05:51:37 -06:00
Jaret Burkett | 3413fa537f | Wan22 14b training is working, still need tons of testing and some bug fixes | 2025-08-14 13:03:27 -06:00