Jaret Burkett | 792a5e37e2 | Numerous fixes for time sampling. Still not perfect | 2023-11-28 07:34:43 -07:00
Jaret Burkett | fbec68681d | Added timestep modifications to the LCM scheduler for more evenly spaced timesteps | 2023-11-17 23:26:52 -07:00
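A minimal sketch of what "more evenly spaced timesteps" could mean for an LCM-style scheduler: select inference timesteps by linear interpolation across the full training range rather than a fixed stride. The function name and signature below are illustrative, not the scheduler's actual API.

```python
# Hypothetical sketch of evenly spacing inference timesteps; names are
# assumptions, not taken from the actual scheduler code.
def evenly_spaced_timesteps(num_train_timesteps: int, num_inference_steps: int) -> list[int]:
    """Pick timesteps spread evenly across the training range,
    returned high-to-low as diffusion samplers expect."""
    if num_inference_steps == 1:
        return [num_train_timesteps - 1]
    step = (num_train_timesteps - 1) / (num_inference_steps - 1)
    return [round((num_inference_steps - 1 - i) * step) for i in range(num_inference_steps)]
```

For example, with 1000 training timesteps and 4 inference steps this yields 999, 666, 333, 0, so the gap between consecutive steps is constant instead of leaving a large jump at one end.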
Jaret Burkett | ad50921c41 | Added sampling tests and cleanup fixes | 2023-11-16 08:33:23 -07:00
Jaret Burkett | e47006ed70 | Added some features for an LCM condenser plugin | 2023-11-15 08:56:45 -07:00
Jaret Burkett | 4f9cdd916a | Added prompt dropout that happens independently on each TE | 2023-11-14 05:26:51 -07:00
Jaret Burkett | 1ee62562a4 | Differential guidance is working (from what I can tell) | 2023-11-07 19:24:12 -07:00
Jaret Burkett | dc8448d958 | Added a way to pass the refiner ratio to the sample config | 2023-11-06 09:22:58 -07:00
Jaret Burkett | 93ea955d7c | Added refiner fine-tuning. Works, but needs some polish | 2023-11-05 17:15:03 -07:00
Jaret Burkett | d35733ac06 | Added support for training SSD-1B and for saving models into diffusers format. We can currently save SSD-1B in safetensors format, but diffusers cannot load it yet | 2023-11-03 05:01:16 -06:00
Jaret Burkett | ceaf1d9454 | Various bug fixes, WIP changes, and tweaks | 2023-11-02 18:19:20 -06:00
Jaret Burkett | 7d707b2fe6 | Added masking to slider training. Something is still weird, though | 2023-11-01 14:51:29 -06:00
Jaret Burkett | 6f3e0d5af2 | Improved lorm extraction and training | 2023-10-28 08:21:59 -06:00
Jaret Burkett | 0a79ac9604 | Added lorm. WIP | 2023-10-26 18:23:51 -06:00
Jaret Burkett | 9636194c09 | Added Fuyu captioning | 2023-10-25 14:14:53 -06:00
Jaret Burkett | 002279cec3 | Allow short and long caption combinations like those from the new captioning system. Merge the network into the model before inference and re-extract it when done, doubling inference speed on LoCon models. Allow splitting a batch into individual components and running them through alone; basically gradient accumulation with a batch size of one | 2023-10-24 16:02:07 -06:00
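The batch-splitting idea in the commit above, running each item of a batch through alone, amounts to gradient accumulation with a micro-batch size of one. A hedged sketch, with `model`, `loss_fn`, and `optimizer` as generic stand-ins rather than names from the codebase:

```python
import torch

def accumulate_over_batch(model, loss_fn, images, targets, optimizer):
    """Run each batch item through alone; gradients accumulate so the
    update matches a full-batch step at a fraction of the peak memory."""
    optimizer.zero_grad()
    n = images.shape[0]
    total = 0.0
    for i in range(n):
        out = model(images[i:i + 1])               # single-item micro-batch
        loss = loss_fn(out, targets[i:i + 1]) / n  # scale so grads match the full batch
        loss.backward()                            # .grad accumulates across items
        total += loss.item()
    optimizer.step()                               # one update for the whole batch
    return total
```

Dividing each per-item loss by `n` is what makes the accumulated gradient equal to the mean-reduced full-batch gradient.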
Jaret Burkett | 9905a1e205 | Fixes and longer prompts | 2023-10-22 08:57:37 -06:00
Jaret Burkett | b1cfafa0c6 | Always prompt both text encoders even when training only one | 2023-10-07 11:23:09 -06:00
Jaret Burkett | cac8754399 | Allow training LoRAs on only one text encoder for SDXL | 2023-10-06 08:11:56 -06:00
Jaret Burkett | f73402473b | Bug fixes. Added some functionality to help with private extensions | 2023-10-05 07:09:34 -06:00
Jaret Burkett | 76c764af49 | Fixed issues with the adapter and disabled telemetry for now | 2023-09-24 12:51:29 -06:00
Jaret Burkett | abf7cd221d | Allow setting the adapter weight in prompts | 2023-09-24 06:51:54 -06:00
Jaret Burkett | 830e87cb87 | Added IP adapter training. Not functioning correctly yet | 2023-09-24 02:39:43 -06:00
Jaret Burkett | 61badf85a7 | T2I training is working, from what I can tell at least | 2023-09-17 15:56:43 -06:00
Jaret Burkett | c698837241 | Fixes to the ESRGAN trainer. Moved the logic for SD prompt embeddings out of the diffusers pipeline so it can be manipulated | 2023-09-16 17:41:07 -06:00
Jaret Burkett | 27f343fc08 | Added the base setup for training T2I adapters. Currently untested; saw something else shiny I wanted to finish first. Added content_or_style to the training config. It defaults to balanced, which is standard uniform timestep sampling. If style or content is passed, it uses cubic sampling to favor timesteps that are beneficial for training them: later timesteps for style, earlier timesteps for content | 2023-09-16 08:30:38 -06:00
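The cubic timestep sampling described in the `content_or_style` commit could be sketched as follows. This assumes a cubic transform of a uniform draw; the function and argument names are illustrative, not the trainer's actual API.

```python
import torch

# Hedged sketch: skew uniform timestep draws cubically toward later
# timesteps (style) or earlier timesteps (content).
def sample_timesteps(batch_size: int, num_train_timesteps: int = 1000,
                     content_or_style: str = "balanced") -> torch.Tensor:
    u = torch.rand(batch_size)          # uniform in [0, 1)
    if content_or_style == "style":
        u = 1 - (1 - u) ** 3            # cubic skew toward 1 -> later timesteps
    elif content_or_style == "content":
        u = u ** 3                      # cubic skew toward 0 -> earlier timesteps
    # "balanced" keeps standard uniform sampling
    return (u * num_train_timesteps).long().clamp(0, num_train_timesteps - 1)
```

Under this transform the mean sampled timestep shifts from roughly the middle of the range to roughly the upper quarter (style) or lower quarter (content), while still covering the full range.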
Jaret Burkett | 569d7464d5 | Implemented the device placement preset system in more places. Vastly improved the speed of setting the network multiplier and activating the network. Fixed timing issues on the progress bar | 2023-09-14 08:31:54 -06:00
Jaret Burkett | ae70200d3c | Bug fixes, speed improvements, and compatibility adjustments for diffusers updates | 2023-09-13 07:03:53 -06:00
Jaret Burkett | 34bfeba229 | Massive speed increase. Added latent caching, both to disk and to memory | 2023-09-10 08:54:49 -06:00
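Two-tier latent caching as mentioned above (memory first, then disk, then encode) might look roughly like this. The class name, file layout, and `encode_fn` hook are assumptions for illustration, not the toolkit's actual implementation.

```python
import hashlib
import os

import torch

class LatentCache:
    """Two-tier cache: check memory, then disk, then encode and persist."""

    def __init__(self, cache_dir: str):
        self.cache_dir = cache_dir
        self.memory: dict = {}
        os.makedirs(cache_dir, exist_ok=True)

    def _disk_path(self, image_path: str) -> str:
        # key the cache file on the source image path
        key = hashlib.sha256(image_path.encode()).hexdigest()
        return os.path.join(self.cache_dir, f"{key}.pt")

    def get(self, image_path: str, encode_fn) -> torch.Tensor:
        if image_path in self.memory:            # 1) memory hit
            return self.memory[image_path]
        disk_path = self._disk_path(image_path)
        if os.path.exists(disk_path):            # 2) disk hit
            latent = torch.load(disk_path)
        else:                                    # 3) miss: encode once, persist
            latent = encode_fn(image_path)
            torch.save(latent, disk_path)
        self.memory[image_path] = latent         # warm the memory tier
        return latent
```

The speed win comes from skipping the VAE encode entirely after the first epoch: the disk tier survives restarts, and the memory tier avoids even the `torch.load` on repeat hits.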
Jaret Burkett | cb91b0d6da | Changed model download from HF to fp16 | 2023-09-08 07:57:19 -06:00
Jaret Burkett | ce4f9fe02a | Bug fixes and improvements to token injection | 2023-09-08 06:10:59 -06:00
Jaret Burkett | 92a086d5a5 | Fixed an issue with token replacements | 2023-09-07 13:42:39 -06:00
Jaret Burkett | 3feb663a51 | Switched to a new bucket system that matches the buckets SDXL was trained on. Fixed requirements. Updated embeddings to work with SDXL. Added a method to train a LoRA with an embedding at the trigger. Still testing, but it works amazingly well from what I can see | 2023-09-07 13:06:18 -06:00
Jaret Burkett | 436bf0c6a3 | Added an experimental concept replacer, replicate converter, bucket maker, and other goodies | 2023-09-06 18:50:32 -06:00
Jaret Burkett | 64a5441832 | Fully tested and now supporting LoCon on SDXL, if you have the RAM | 2023-09-04 14:05:10 -06:00
Jaret Burkett | 2a40937b4f | Reworked samplers. Trying to find what is wrong with diffusers sampling in SDXL | 2023-09-03 07:56:09 -06:00
Jaret Burkett | 33267e117c | Reworked the bucket loader to scale buckets to pixel amounts, not just minimum size. Makes the network more consistent | 2023-08-30 14:52:12 -06:00
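Scaling buckets to a pixel budget rather than a minimum edge, as the bucket-loader commit describes, could look like this hypothetical helper. The target area and divisor are assumptions, not the loader's actual constants; the point is that every aspect ratio lands near the same total pixel count.

```python
import math

# Hypothetical sketch of pixel-area bucketing: each bucket targets a constant
# pixel count rather than a fixed minimum edge, so every aspect ratio trains
# at roughly the same resolution budget.
def bucket_resolution(width: int, height: int,
                      target_pixels: int = 1024 * 1024,
                      divisor: int = 64) -> tuple[int, int]:
    # uniform scale that brings width * height to the target area
    scale = math.sqrt(target_pixels / (width * height))
    # snap each edge down to the model-friendly multiple
    bw = max(divisor, int(width * scale) // divisor * divisor)
    bh = max(divisor, int(height * scale) // divisor * divisor)
    return bw, bh
```

With a minimum-edge rule, a wide image ends up with far more pixels than a square one; equalizing the area keeps the compute and the effective detail level per sample more consistent.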
Jaret Burkett | 836fee47a6 | Fixed some mismatched weights by adjusting tolerance. The mismatch ironically made the models better | 2023-08-29 15:20:03 -06:00
Jaret Burkett | 14ff51ceb4 | Fixed issues with converting and saving models. Cleaned keys. Improved testing for cycle load/save | 2023-08-29 12:31:19 -06:00
Jaret Burkett | b79ced3e10 | Merge branch 'main' into development | 2023-08-28 16:21:51 -06:00
Jaret Burkett | bee0b6a235 | Added converters for all Stable Diffusion models to convert back to LDM format from diffusers | 2023-08-28 16:12:32 -06:00
Jaret Burkett | e866c75638 | Built base interfaces for a DTO to handle batch information transport for the dataloader | 2023-08-28 12:43:31 -06:00
Jaret Burkett | c446f768ea | Huge memory optimizations and many bug fixes | 2023-08-27 17:48:02 -06:00
Jaret Burkett | 9b164a8688 | Fixed an issue with the bucket dataloader cropping in too much. Added normalization capabilities to LoRA modules. Still testing the effects, but it should prevent them from burning and also make them more compatible when stacking many LoRAs | 2023-08-27 09:40:01 -06:00
Jaret Burkett | 6bd3851058 | Fixed an issue with prompt token replace adding more than one replacement | 2023-08-26 18:52:23 -06:00
Jaret Burkett | 3367ab6b2c | Moved SD batch processing to a shared method and added it for use in slider training. Still testing whether it affects quality during sampling | 2023-08-26 08:55:00 -06:00
Jaret Burkett | b408f9f3eb | Fixed a timestep issue I had broken for sliders | 2023-08-23 16:15:30 -06:00
Jaret Burkett | 7157c316af | Added support for training LoRA, DreamBooth, and fine-tuning. Still needs testing and docs | 2023-08-23 15:37:00 -06:00
Jaret Burkett | d298240cec | Tied in and tested the TI script | 2023-08-23 13:26:28 -06:00
Jaret Burkett | 2e6c55c720 | WIP: creating a textual inversion training script | 2023-08-22 21:02:38 -06:00
Jaret Burkett | 36ba08d3fa | Added a converter back to LDM from diffusers for SDXL. Can finally get to training it properly | 2023-08-21 16:22:01 -06:00