Jaret Burkett
|
93ea955d7c
|
Added refiner fine-tuning. Works, but needs some polish.
|
2023-11-05 17:15:03 -07:00 |
|
Jaret Burkett
|
8a9e8f708f
|
Added base for using guidance during training. Still not working right.
|
2023-11-05 04:03:32 -07:00 |
|
Jaret Burkett
|
d35733ac06
|
Added support for training ssd-1B. Added support for saving models into diffusers format. We can currently save in safetensors format for ssd-1b, but diffusers cannot load it yet.
|
2023-11-03 05:01:16 -06:00 |
|
Jaret Burkett
|
a899ec91c8
|
Added some split-prompting starter code, AdamW8bit, improved replacements, learnable SNR gamma. A lot of good stuff.
|
2023-11-01 06:52:21 -06:00 |
|
Jaret Burkett
|
436a09430e
|
Added flat SNR gamma vs. min. Fixed timestep timing.
|
2023-10-29 15:41:55 -06:00 |
|
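For context, the standard min-SNR-gamma weighting this commit compares a "flat" variant against can be sketched as follows. The "flat" variant itself is repo-specific; the function name and the epsilon-prediction form below are illustrative assumptions, not the repo's code.

```python
def min_snr_weight(snr: float, gamma: float = 5.0) -> float:
    """Standard min-SNR-gamma loss weight (epsilon-prediction form):
    clamp the signal-to-noise ratio at gamma, normalize by the true SNR,
    so very-low-noise timesteps stop dominating the loss."""
    return min(snr, gamma) / snr

# Low-SNR (noisy) timesteps keep full weight; high-SNR ones are damped.
print(min_snr_weight(1.0))    # below gamma -> weight 1.0
print(min_snr_weight(10.0))   # above gamma -> weight 0.5
```

The gamma default of 5.0 follows the common convention from the min-SNR literature; the repo's actual default may differ.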
Jaret Burkett
|
298001439a
|
Added gradient accumulation finally
|
2023-10-28 13:14:29 -06:00 |
|
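The identity gradient accumulation relies on — averaging per-micro-batch gradients (each scaled by its share of the batch) reproduces the full-batch gradient — can be sketched in plain Python with a toy squared loss. All names here are illustrative, not the repo's code.

```python
def grad(w, xs):
    """Gradient of the mean squared loss (w - x)^2 over samples xs."""
    return sum(2 * (w - x) for x in xs) / len(xs)

def accumulated_grad(w, xs, micro_batch):
    """Accumulate micro-batch gradients, scaling each by its share of
    the full batch (the usual loss / accumulation_steps trick)."""
    total, n = 0.0, len(xs)
    for i in range(0, n, micro_batch):
        chunk = xs[i:i + micro_batch]
        total += grad(w, chunk) * (len(chunk) / n)
    return total

data = [1.0, 2.0, 3.0, 4.0]
# Full-batch gradient and accumulated micro-batch gradient agree.
print(grad(0.5, data), accumulated_grad(0.5, data, micro_batch=1))
```

The same scaling is what lets a single optimizer step after several backward passes behave like one large-batch step.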
Jaret Burkett
|
6f3e0d5af2
|
Improved LoRM extraction and training
|
2023-10-28 08:21:59 -06:00 |
|
Jaret Burkett
|
0a79ac9604
|
Added LoRM. WIP
|
2023-10-26 18:23:51 -06:00 |
|
Jaret Burkett
|
002279cec3
|
Allow short and long caption combinations like from the new captioning system. Merge the network into the model before inference and re-extract when done; doubles inference speed on LoCon models. Allow splitting a batch into individual components and running them through alone. Basically gradient accumulation with a single batch size.
|
2023-10-24 16:02:07 -06:00 |
|
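The merge-then-re-extract idea in the commit above (fold the network delta into the base weights for inference, subtract it back out afterwards) can be sketched with plain lists: W' = W + scale · up · down. Function names and the rank-1 shapes are illustrative assumptions, not the repo's code.

```python
def matmul(A, B):
    """Plain-list matrix multiply, just to keep the sketch dependency-free."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def merge(W, up, down, scale):
    """Fold the low-rank delta into the base weight: W + scale * up @ down."""
    delta = matmul(up, down)
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

def extract(W_merged, up, down, scale):
    """Undo the merge after inference by subtracting the same delta."""
    delta = matmul(up, down)
    return [[w - scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W_merged, delta)]
```

Because the merged layer is a single dense matmul, inference skips the extra low-rank branch entirely, which is where the speedup comes from.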
Jaret Burkett
|
07bf7bd7de
|
Allow augmentations and targeting different loss types from the config file
|
2023-10-18 03:04:57 -06:00 |
|
Jaret Burkett
|
da6302ada8
|
added a method to apply multipliers to noise and latents prior to combining
|
2023-10-17 06:09:16 -06:00 |
|
Jaret Burkett
|
38e441a29c
|
allow flipping for point-of-interest autocropping. Allow num repeats. Fixed some bugs with new FreeU
|
2023-10-12 21:02:47 -06:00 |
|
Jaret Burkett
|
63ceffae24
|
Massive speed increases and ram optimizations
|
2023-10-10 06:07:55 -06:00 |
|
Jaret Burkett
|
1d3de678aa
|
fixed bug with trigger word embedding. Allow control images to load from the dataloader or legacy way
|
2023-10-09 06:21:49 -06:00 |
|
Jaret Burkett
|
cac8754399
|
Allow for training loras on only one text encoder for sdxl
|
2023-10-06 08:11:56 -06:00 |
|
Jaret Burkett
|
f73402473b
|
Bug fixes. Added some functionality to help with private extensions
|
2023-10-05 07:09:34 -06:00 |
|
Jaret Burkett
|
579650eaf8
|
Fixed big issue with bucketing dataloader and added random cropping to a point of interest
|
2023-10-02 18:31:08 -06:00 |
|
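Random cropping to a point of interest, as in the commit above, might look roughly like this: center the crop on the POI, jitter it, then clamp to the image bounds. The jitter scheme and names are assumptions for illustration only.

```python
import random

def poi_crop(img_w, img_h, crop_w, crop_h, poi_x, poi_y, jitter=0.25):
    """Pick a crop window that keeps a point of interest inside it.

    The crop is centered on (poi_x, poi_y), offset by up to jitter * crop
    size in each axis, then clamped so it stays within the image."""
    cx = poi_x - crop_w / 2 + random.uniform(-jitter, jitter) * crop_w
    cy = poi_y - crop_h / 2 + random.uniform(-jitter, jitter) * crop_h
    x = int(min(max(cx, 0), img_w - crop_w))
    y = int(min(max(cy, 0), img_h - crop_h))
    return x, y
```

With jitter below 0.5 the POI always lands inside the crop, so the augmentation varies framing without losing the subject.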
Jaret Burkett
|
8d9450ad7c
|
Compatibility fixes
|
2023-09-29 14:07:37 -06:00 |
|
Jaret Burkett
|
abf7cd221d
|
allow setting adapter weight in prompts
|
2023-09-24 06:51:54 -06:00 |
|
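Setting adapter weights from inside a prompt usually means parsing an inline tag out of the text. The `<adapter:name:weight>` syntax below is hypothetical — the repo's actual prompt syntax is not shown in this log — but it illustrates the general approach.

```python
import re

# Hypothetical inline syntax, e.g. "a cat <adapter:style:0.5> on a mat".
PATTERN = re.compile(r"<adapter:(?P<name>[^:>]+):(?P<weight>[\d.]+)>")

def parse_adapter_weights(prompt: str):
    """Extract {adapter_name: weight} pairs and return the cleaned prompt."""
    weights = {m.group("name"): float(m.group("weight"))
               for m in PATTERN.finditer(prompt)}
    clean = re.sub(r"\s+", " ", PATTERN.sub("", prompt)).strip()
    return clean, weights
```

The cleaned prompt is what would go to the text encoder; the weight dict would be applied to the adapter's multiplier before inference.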
Jaret Burkett
|
e5153d87c9
|
Fixed issues with dataloader bucketing. Allow using standard base image for t2i adapters.
|
2023-09-24 05:19:57 -06:00 |
|
Jaret Burkett
|
830e87cb87
|
Added IP adapter training. Not functioning correctly yet
|
2023-09-24 02:39:43 -06:00 |
|
Jaret Burkett
|
19255cdc7c
|
Bugfixes. Added small augmentations to dataloader. Will switch to albumentations soon though. Added ability to adjust step count on start to override what is in the file
|
2023-09-20 05:30:10 -06:00 |
|
Jaret Burkett
|
0f105690cc
|
Added some further extendability for plugins
|
2023-09-19 05:41:44 -06:00 |
|
Jaret Burkett
|
61badf85a7
|
t2i training working from what I can tell at least
|
2023-09-17 15:56:43 -06:00 |
|
Jaret Burkett
|
27f343fc08
|
Added base setup for training t2i adapters. Currently untested; saw something else shiny I wanted to finish first. Added content_or_style to the training config. It defaults to balanced, which is standard uniform timestep sampling. If style or content is passed, it will use cubic sampling for timesteps to favor timesteps that are beneficial for training them: for style, favor later timesteps; for content, favor earlier timesteps.
|
2023-09-16 08:30:38 -06:00 |
|
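The content_or_style behavior described in the commit above can be sketched as follows. The exact cubic mapping is an assumption; the commit only says cubic sampling skews toward later timesteps for style and earlier ones for content.

```python
import random

def sample_timestep(num_steps: int = 1000,
                    content_or_style: str = "balanced") -> int:
    """Sample one training timestep.

    'balanced' is uniform; 'style' uses cubic sampling skewed toward later
    (noisier) timesteps; 'content' skews toward earlier (cleaner) ones."""
    u = random.random()
    if content_or_style == "style":
        u = 1 - u ** 3   # density concentrated near 1 -> later timesteps
    elif content_or_style == "content":
        u = u ** 3       # density concentrated near 0 -> earlier timesteps
    return int(u * (num_steps - 1))
```

Cubing a uniform sample concentrates mass near 0, so `u ** 3` favors early timesteps and `1 - u ** 3` favors late ones, while both still cover the full range.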
Jaret Burkett
|
17e4fe40d7
|
Prevent lycoris network modules if not training that part of the network. Skew timesteps to favor later steps. It performs better
|
2023-09-14 15:13:24 -06:00 |
|
Jaret Burkett
|
569d7464d5
|
implemented device placement preset system in more places. Vastly improved speed on setting network multiplier and activating network. Fixed timing issues on progress bar
|
2023-09-14 08:31:54 -06:00 |
|
Jaret Burkett
|
4e945917df
|
added dropout to LoRA networks
|
2023-09-13 15:23:07 -06:00 |
|
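One plausible reading of "dropout to LoRA networks" is dropping the LoRA branch's contribution with inverted-dropout rescaling, sketched here on scalars. This is an illustrative interpretation, not necessarily how the repo implements it (elementwise or per-module dropout are also common).

```python
import random

def lora_output(base_out: float, lora_out: float, multiplier: float,
                dropout_p: float = 0.0, training: bool = True) -> float:
    """Combine a base layer output with its LoRA branch.

    During training the LoRA contribution is dropped with probability
    dropout_p; surviving contributions are rescaled by 1/(1 - p) so the
    expected output is unchanged (standard inverted dropout)."""
    if training and dropout_p > 0:
        if random.random() < dropout_p:
            return base_out                        # branch dropped this step
        lora_out = lora_out / (1.0 - dropout_p)    # rescale survivor
    return base_out + multiplier * lora_out
```

At inference (`training=False`) dropout is a no-op, matching how `nn.Dropout` behaves in eval mode.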
Jaret Burkett
|
ae70200d3c
|
Bug fixes, speed improvements, compatibility adjustments with diffusers updates
|
2023-09-13 07:03:53 -06:00 |
|
Jaret Burkett
|
d74dd636ee
|
Memory optimizations. Default to using cudaMalloc when on torch 2.0 for mem allocation
|
2023-09-12 04:30:23 -06:00 |
|
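PyTorch 2.x selects its CUDA allocator backend through the `PYTORCH_CUDA_ALLOC_CONF` environment variable, which must be set before torch initializes CUDA. Where and how the repo actually sets this is an assumption; the variable and `backend:cudaMallocAsync` value are real PyTorch options.

```python
import os

# Must run before `import torch` triggers CUDA initialization.
# Switches the caching allocator to CUDA's async allocator backend.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "backend:cudaMallocAsync")
```

`setdefault` is used so a value the user already exported is not overridden.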
Jaret Burkett
|
34bfeba229
|
Massive speed increase. Added latent caching both to disk and to memory
|
2023-09-10 08:54:49 -06:00 |
|
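Latent caching to both disk and memory, as this commit describes, could be structured as a two-tier cache: look in RAM first, fall back to disk, and populate both on write. The path layout, hashing, and pickle serialization below are illustrative assumptions, not the repo's actual cache format.

```python
import hashlib
import os
import pickle

class LatentCache:
    """Two-tier latent cache: in-memory dict backed by on-disk files."""

    def __init__(self, cache_dir: str):
        self.cache_dir = cache_dir
        os.makedirs(cache_dir, exist_ok=True)
        self.memory = {}

    def _path(self, image_path: str) -> str:
        key = hashlib.sha256(image_path.encode()).hexdigest()
        return os.path.join(self.cache_dir, key + ".pkl")

    def get(self, image_path: str):
        key = self._path(image_path)
        if key in self.memory:                 # fast path: RAM
            return self.memory[key]
        if os.path.exists(key):                # slower path: disk
            with open(key, "rb") as f:
                latent = pickle.load(f)
            self.memory[key] = latent          # promote to RAM
            return latent
        return None                            # miss: caller encodes via VAE

    def put(self, image_path: str, latent) -> None:
        key = self._path(image_path)
        self.memory[key] = latent
        with open(key, "wb") as f:
            pickle.dump(latent, f)
```

Caching latents skips the VAE encode on every epoch after the first, which is where most of the speedup comes from.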
Jaret Burkett
|
626ed2939a
|
bug fixes
|
2023-09-09 15:04:44 -06:00 |
|
Jaret Burkett
|
2128ac1e08
|
fixed issue with embed name, save whole config to dir instead of just process so it can be easily shared. Only make one config, no timestamps
|
2023-09-09 12:24:08 -06:00 |
|
Jaret Burkett
|
be804c9cf5
|
Save embeddings as their trigger to match auto and comfy style loading. Also, FINALLY found why gradients were wonky and fixed it. The root problem was dropping out of the network state before the backward pass.
|
2023-09-09 12:02:07 -06:00 |
|
Jaret Burkett
|
408c50ead1
|
actually got gradient checkpointing working, again, again, maybe
|
2023-09-09 11:27:42 -06:00 |
|
Jaret Burkett
|
b01ab5d375
|
FINALLY fixed gradient checkpointing issue. Big batches baby.
|
2023-09-08 15:21:46 -06:00 |
|
Jaret Burkett
|
ce4f9fe02a
|
Bug fixes and improvements to token injection
|
2023-09-08 06:10:59 -06:00 |
|
Jaret Burkett
|
92a086d5a5
|
Fixed issue with token replacements
|
2023-09-07 13:42:39 -06:00 |
|
Jaret Burkett
|
3feb663a51
|
Switched to new bucket system that matches sdxl trained buckets. Fixed requirements. Updated embeddings to work with sdxl. Added method to train a lora with an embedding at the trigger. Still testing, but works amazingly well from what I can see
|
2023-09-07 13:06:18 -06:00 |
|
Jaret Burkett
|
64a5441832
|
Fully tested and now supporting locon on sdxl. If you have the ram
|
2023-09-04 14:05:10 -06:00 |
|
Jaret Burkett
|
a4c3507a62
|
Added LoCON from LyCORIS
|
2023-09-04 08:48:07 -06:00 |
|
Jaret Burkett
|
fa8fc32c0a
|
Corrected key saving and loading to better match kohya
|
2023-09-04 00:22:34 -06:00 |
|
Jaret Burkett
|
22ed539321
|
Allow special args for schedulers
|
2023-09-03 20:38:44 -06:00 |
|
Jaret Burkett
|
2a40937b4f
|
reworked samplers. Trying to find what is wrong with diffusers sampling in sdxl
|
2023-09-03 07:56:09 -06:00 |
|
Jaret Burkett
|
836fee47a6
|
Fixed some mismatched weights by adjusting tolerance. The mismatch ironically made the models better lol
|
2023-08-29 15:20:03 -06:00 |
|
Jaret Burkett
|
14ff51ceb4
|
fixed issues with converting and saving models. Cleaned keys. Improved testing for cycle load saving.
|
2023-08-29 12:31:19 -06:00 |
|
Jaret Burkett
|
714854ee86
|
Huge rework to move the batch to a DTO to make it far more modular for the future UI
|
2023-08-29 10:22:19 -06:00 |
|
Jaret Burkett
|
bd758ff203
|
Cleanup and small bug fixes
|
2023-08-29 05:45:49 -06:00 |
|
Jaret Burkett
|
a008d9e63b
|
Fixed issue with loading models after resume function was added. Added additional flush if not training text encoder to clear out vram before grad accum
|
2023-08-28 17:56:30 -06:00 |
|
Jaret Burkett
|
e866c75638
|
Built base interfaces for a DTO to handle batch information transport for the dataloader
|
2023-08-28 12:43:31 -06:00 |
|