Commit Graph

28 Commits

Author | SHA1 | Message | Date
Jaret Burkett | 55b8b0e23e | Fix issue where ARA was not working when using the memory manager | 2025-10-07 13:39:44 -06:00
Jaret Burkett | 4e5707854f | Initial support for RamTorch. Still a WIP | 2025-10-05 13:03:26 -06:00
Jaret Burkett | f74475161e | Add stepped loss type | 2025-09-22 15:50:12 -06:00
Jaret Burkett | ea01a1c7d0 | Fixed a bug where samples would fail when merging in a LoRA at sampling time for unquantized models. Quantize non-ARA modules as uint8 when using an ARA | 2025-08-25 09:21:40 -06:00
Jaret Burkett | e12bb21780 | Quantize blocks sequentially without an ARA | 2025-08-14 09:59:58 -06:00
Jaret Burkett | 77b10d884d | Add support for training with an accuracy recovery adapter for Qwen Image | 2025-08-12 08:21:36 -06:00
Jaret Burkett | 9da8b5408e | Initial but untested support for qwen_image | 2025-08-04 13:29:37 -06:00
Jaret Burkett | 5890e67a46 | Various bug fixes | 2025-04-29 09:30:33 -06:00
Jaret Burkett | 88b3fbae37 | Various experiments and minor bug fixes for edge cases | 2025-04-25 13:44:38 -06:00
Jaret Burkett | bfe29e2151 | Removed all submodules. Submodule-free now, yay. | 2025-04-18 10:39:15 -06:00
Jaret Burkett | 5f312cd46b | Remove ip adapter submodule | 2025-04-18 09:59:42 -06:00
Jaret Burkett | 4a43589666 | Use a shuffled embedding as unconditional for i2v adapter | 2025-04-11 10:44:43 -06:00
Jaret Burkett | 059155174a | Added differential mask dilation for flex2. Handle video for the i2v adapter | 2025-04-10 11:50:01 -06:00
Jaret Burkett | a8680c75eb | Added initial support for finetuning wan i2v (WIP) | 2025-04-07 20:34:38 -06:00
Jaret Burkett | 5365200da1 | Added ability to add models to finetune as plugins. Also added the new flux2 arch via that method. | 2025-03-27 16:07:00 -06:00
Jaret Burkett | ce4c5291a0 | Added experimental wavelet loss | 2025-03-26 18:11:23 -06:00
Jaret Burkett | 4595965e06 | Added an inpainting mask generator for training inpainting when an inpaint mask is not provided | 2025-03-25 12:16:10 -06:00
Jaret Burkett | f5aa4232fa | Added ability to quantize with torchao | 2025-03-20 16:28:54 -06:00
Jaret Burkett | 25341c4613 | Got wan 14b training to work on a 24GB card. | 2025-03-07 17:04:10 -07:00
Jaret Burkett | 4fe33f51c1 | Fix issue with picking layers for quantization; adjust layers for better quantization of cogview4 | 2025-03-05 13:44:40 -07:00
Jaret Burkett | 6f6fb90812 | Added cogview4. Loss still needs work. | 2025-03-04 18:43:52 -07:00
Jaret Burkett | acc79956aa | WIP: create a new class to add new models more easily | 2025-03-01 13:49:02 -07:00
Jaret Burkett | 58f9d01c2b | Added an adafactor implementation that handles stochastic rounding of the update and accumulation | 2024-10-30 05:25:57 -06:00
Jaret Burkett | 5d47244c57 | Added support for pixart sigma LoRAs | 2024-06-16 11:56:30 -06:00
Jaret Burkett | 3f3636b788 | Bug fixes and little improvements here and there. | 2024-06-08 06:24:20 -06:00
Jaret Burkett | 5a70b7f38d | Added pixart sigma support, but it won't work until I address breaking changes with the LoRA code in diffusers so it can be upgraded. | 2024-04-20 10:46:56 -06:00
Jaret Burkett | 427847ac4c | Small tweaks and fixes for specialized ip adapter training | 2024-03-26 11:35:26 -06:00
Jaret Burkett | b01e8d889a | Added stochastic rounding to adafactor. ILora adjustments | 2024-03-05 07:07:09 -07:00