Jaret Burkett | f6e16e582a | Added Differential Output Preservation Loss to trainer and UI | 2025-02-25 20:12:36 -07:00
Jaret Burkett | 56d8d6bd81 | Capture speed from the timer for the UI | 2025-02-23 14:38:46 -07:00
Jaret Burkett | 60f848a877 | Send more data when loading the model to the UI | 2025-02-23 12:49:54 -07:00
Jaret Burkett | b366e46f1c | Added more settings to the training config | 2025-02-23 12:34:52 -07:00
Jaret Burkett | adcf884c0f | Built out the UI trainer plugin with db communication | 2025-02-21 05:53:35 -07:00
Jaret Burkett | 33fdfd6091 | Added beginning of lokr | 2025-02-20 12:47:42 -07:00
Jaret Burkett | 9f6030620f | Dataset uploads working | 2025-02-20 12:47:01 -07:00
Jaret Burkett | 4af6c5cf30 | Work on supporting flex.2 potential arch | 2025-02-17 14:10:25 -07:00
Jaret Burkett | 1f7784510d | WIP Flex 2 pipeline | 2025-02-16 14:54:29 -07:00
Jaret Burkett | 87e557cf1e | Bug fixes and improvements to llm adapter | 2025-02-15 07:18:07 -07:00
Jaret Burkett | bd8d7dc081 | Fixed various issues with llm attention masking. Added block training on the llm adapter. | 2025-02-14 11:24:01 -07:00
Jaret Burkett | 2be6926398 | Added back system prompt for llm and removed those tokens from the embeddings | 2025-02-14 07:23:37 -07:00
Jaret Burkett | 87ac031859 | Removed system prompt; shouldn't be necessary for how it works. | 2025-02-13 08:42:48 -07:00
Jaret Burkett | 7679105d52 | Added llm text encoder adapter | 2025-02-13 08:28:32 -07:00
Jaret Burkett | 2622de1e01 | DFE tweaks. Adding support for more llms as text encoders | 2025-02-13 04:31:49 -07:00
Jaret Burkett | 0b8a32def7 | Merged in lumina2 branch | 2025-02-12 09:33:03 -07:00
Jaret Burkett | 787bb37e76 | Small fixes for DFE, polar guidance, and other things | 2025-02-12 09:27:44 -07:00
Jaret Burkett | 9a7266275d | Work on lumina2 | 2025-02-08 14:52:39 -07:00
Jaret Burkett | d138f07365 | Initial lumina3 support | 2025-02-08 10:59:53 -07:00
Jaret Burkett | c6d8eedb94 | Added ability to use consistent noise for each image in a dataset by hashing the path and using that as a seed. | 2025-02-08 07:13:48 -07:00
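The path-hashing approach in commit c6d8eedb94 can be sketched as follows; `per_image_generator` is a hypothetical name for illustration, not the repo's actual API:

```python
# Derive a per-image noise seed by hashing the image path, so the same
# image always receives the same noise across epochs and runs.
import hashlib

import torch

def per_image_generator(image_path: str) -> torch.Generator:
    # Hash the path to a stable 64-bit seed.
    digest = hashlib.sha256(image_path.encode("utf-8")).digest()
    seed = int.from_bytes(digest[:8], "little")
    gen = torch.Generator()
    gen.manual_seed(seed)
    return gen

noise_a = torch.randn(4, 8, 8, generator=per_image_generator("data/cat.png"))
noise_b = torch.randn(4, 8, 8, generator=per_image_generator("data/cat.png"))
assert torch.equal(noise_a, noise_b)  # same path -> identical noise
```

Hashing rather than e.g. Python's built-in `hash()` keeps the seed stable across processes, which is what makes the noise consistent between runs.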
Jaret Burkett | 216ab164ce | Experimental features and bug fixes | 2025-02-04 13:36:34 -07:00
Jaret Burkett | e6180d1e1d | Bug fixes | 2025-01-31 13:23:01 -07:00
Jaret Burkett | 15a57bc89f | Add new version of DFE. Kitchen sink | 2025-01-31 11:42:27 -07:00
Jaret Burkett | 34a1c6947a | Added flux_shift as timestep type | 2025-01-27 07:35:00 -07:00
Jaret Burkett | 2141c6e06c | Merge remote-tracking branch 'origin/main' into accelerate-multi-gpu | 2025-01-26 11:19:34 -07:00
Jaret Burkett | 1188cf1e8a | Adjust flux sample sampler to handle some new breaking changes in diffusers. | 2025-01-26 18:09:21 +00:00
Jaret Burkett | 5e663746b8 | Working multi-GPU training. Still needs a lot of tweaks and testing. | 2025-01-25 16:46:20 -07:00
Jaret Burkett | bbfba0c188 | Added v2 of DFP | 2025-01-22 16:32:13 -07:00
Jaret Burkett | e1549ad54d | Update DFE model arch | 2025-01-22 10:37:23 -07:00
Jaret Burkett | 04abe57c76 | Added weighting to DFE | 2025-01-22 08:50:57 -07:00
Jaret Burkett | 89dd041b97 | Added ability to pair samples with closer noise via optimal_noise_pairing_samples | 2025-01-21 18:30:10 -07:00
Jaret Burkett | 29122b1a54 | Added code to handle diffusion feature extraction loss | 2025-01-21 14:21:34 -07:00
Jaret Burkett | fadb2f3a76 | Allow quantizing the TE independently on flux. Added lognorm_blend timestep schedule | 2025-01-18 18:02:31 -07:00
Jaret Burkett | 4723f23c0d | Added ability to split up flux across GPUs (experimental). Changed the way timestep scheduling works to prep for more specific schedules. | 2024-12-31 07:06:55 -07:00
Jaret Burkett | 8ef07a9c36 | Added training for an experimental decorator embedding. Allow turning off guidance embedding on flux (for unreleased model). Various bug fixes and modifications | 2024-12-15 08:59:27 -07:00
Jaret Burkett | 92ce93140e | Adjustments to defaults for automagic | 2024-11-29 10:28:06 -07:00
Jaret Burkett | f213996aa5 | Fixed saving and displaying for automagic | 2024-11-29 08:00:22 -07:00
Jaret Burkett | cbe31eaf0a | Initial work on an auto-adjusting optimizer | 2024-11-29 04:48:58 -07:00
Jaret Burkett | 67c2e44edb | Added support for training flux redux adapters | 2024-11-21 20:01:52 -07:00
Jaret Burkett | 96d418bb95 | Added support for full finetuning flux with randomized param activation. Examples coming soon | 2024-11-21 13:05:32 -07:00
Jaret Burkett | 894374b2e9 | Various bug fixes and optimizations for quantized training. Added untested custom adam8bit optimizer. Did some work on LoRM (don't use) | 2024-11-20 09:16:55 -07:00
Jaret Burkett | 6509ba4484 | Fix seed generation to make it deterministic so it is consistent from GPU to GPU | 2024-11-15 12:11:13 -07:00
Jaret Burkett | 025ee3dd3d | Added ability for adafactor to fully fine-tune a quantized model. | 2024-10-30 16:38:07 -06:00
Jaret Burkett | 58f9d01c2b | Added adafactor implementation that handles stochastic rounding of update and accumulation | 2024-10-30 05:25:57 -06:00
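Stochastic rounding, which the adafactor commit 58f9d01c2b refers to, rounds an fp32 value to one of its two bf16 neighbors with probability proportional to the distance from each, so rounding error is zero in expectation. A minimal sketch using the standard bit-manipulation trick, not the repo's implementation:

```python
import torch

def stochastic_round_bf16(x: torch.Tensor) -> torch.Tensor:
    """Round fp32 -> bf16 stochastically: add uniform noise to the 16
    low mantissa bits that truncation discards, then truncate. The carry
    into the kept bits fires with probability equal to the discarded
    fraction, so the result is unbiased."""
    bits = x.view(torch.int32)
    noise = torch.randint(0, 1 << 16, x.shape, dtype=torch.int32, device=x.device)
    rounded = (bits + noise) & -65536  # keep only the top 16 bits
    return rounded.view(torch.float32).to(torch.bfloat16)

# 1 + 1/256 sits exactly halfway between the bf16 neighbors 1.0 and
# 1 + 1/128, so it should round each way about half the time.
x = torch.full((20000,), 1.0 + 1.0 / 256)
y = stochastic_round_bf16(x).float()
print(sorted(y.unique().tolist()))  # → [1.0, 1.0078125]
```

This matters for low-precision optimizer states: plain round-to-nearest silently drops updates much smaller than one bf16 ulp, while stochastic rounding preserves them on average.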
Jaret Burkett | e72b59a8e9 | Added experimental 8-bit version of prodigy with stochastic rounding and stochastic gradient accumulation. Still testing. | 2024-10-29 14:28:28 -06:00
Jaret Burkett | 4aa19b5c1d | Only quantize flux T5 if also quantizing the model. Load TE from original name and path if fine-tuning. | 2024-10-29 14:25:31 -06:00
Jaret Burkett | 4747716867 | Fixed issue with adapters not providing gradients with new grad activator | 2024-10-29 14:22:10 -06:00
Jaret Burkett | 22cd40d7b9 | Improvements for full tuning flux. Added debugging launch config for vscode | 2024-10-29 04:54:08 -06:00
Jaret Burkett | 3400882a80 | Added preliminary support for SD3.5-large lora training | 2024-10-22 12:21:36 -06:00
Jaret Burkett | 9f94c7b61e | Added experimental param multiplier to the ema module | 2024-10-22 09:25:52 -06:00