Jaret Burkett | f80cf99f40 | HiDream is training, but has a memory leak | 2025-04-13 23:28:18 +00:00
Jaret Burkett | 2b901cca39 | Small tweaks, bug fixes, and future-proofing | 2025-04-05 12:39:45 -06:00
Jaret Burkett | f6e16e582a | Added Differential Output Preservation Loss to trainer and UI | 2025-02-25 20:12:36 -07:00
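
Differential Output Preservation penalizes the network, with its LoRA active, for drifting from the frozen base model's prediction on regularization captions. A minimal sketch of the idea, assuming a diffusers-style UNet and a hypothetical lora.disable()/lora.enable() toggle:

```python
import torch
import torch.nn.functional as F

def dop_loss(unet, lora, noisy_latents, timesteps, text_embeds):
    # Base model prediction with the LoRA switched off (no grads needed).
    with torch.no_grad():
        lora.disable()  # hypothetical toggle; name is an assumption
        base_pred = unet(noisy_latents, timesteps,
                         encoder_hidden_states=text_embeds).sample
        lora.enable()
    # Prediction with the LoRA active; penalize any drift from the base.
    lora_pred = unet(noisy_latents, timesteps,
                     encoder_hidden_states=text_embeds).sample
    return F.mse_loss(lora_pred, base_pred)
```
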
Jaret Burkett | 93b52932c1 | Added training for PixArt-α | 2024-02-13 16:00:04 -07:00
Jaret Burkett | 0f8daa5612 | Bug fixes; work on making IP adapters more customizable | 2023-12-24 08:32:39 -07:00
Jaret Burkett | 560251a24f | Fixed an issue with down-block residuals when doing slider CFG on SDXL with T2I-adapter-assisted training | 2023-10-01 07:32:48 -06:00
Jaret Burkett | 8509da60cb | Added T2I-adapter-guided slider training for more consistent images | 2023-09-28 14:08:56 -06:00
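
The idea behind adapter-guided slider training is that a T2I adapter turns a control image into down-block feature residuals, and reusing the same residuals for every prompt polarity in a slider step pins the composition while the concept varies. A rough sketch with diffusers (the model ID is illustrative):

```python
import torch
from diffusers import T2IAdapter

# A depth adapter maps a control image to a list of down-block residuals.
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2iadapter_depth_sd15v2", torch_dtype=torch.float16
).to("cuda")

def get_adapter_state(control_image: torch.Tensor) -> list[torch.Tensor]:
    # One forward pass; the same state is then reused for the positive,
    # negative, and neutral passes of a slider step.
    with torch.no_grad():
        return adapter(control_image.to("cuda", torch.float16))
```

Each UNet pass of the slider step would then receive clones of this state (in recent diffusers, via the down_intrablock_additional_residuals argument), so the passes differ only in their prompt.
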
Jaret Burkett | c698837241 | Fixes to the ESRGAN trainer. Moved the SD prompt-embedding logic out of the diffusers pipeline so it can be manipulated | 2023-09-16 17:41:07 -06:00
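
Moving prompt encoding out of the pipeline amounts to running the tokenizer and text encoder directly, then handing the result back through the prompt_embeds argument that diffusers pipelines accept. A minimal sketch (the model ID and the scaling tweak are purely illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Encode the prompt ourselves instead of letting the pipeline do it.
tokens = pipe.tokenizer(
    "a photo of a cat",
    padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    prompt_embeds = pipe.text_encoder(tokens.input_ids.to("cuda"))[0]

# Now the embeddings can be manipulated before denoising
# (a simple scale here, purely for illustration).
prompt_embeds = prompt_embeds * 1.05

image = pipe(prompt_embeds=prompt_embeds).images[0]
```
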
Jaret Burkett | 34bfeba229 | Massive speed increase. Added latent caching, both to disk and to memory | 2023-09-10 08:54:49 -06:00
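
Latent caching encodes each image through the VAE once, then serves later epochs from RAM or disk instead of re-encoding. A sketch of a two-tier cache, assuming a diffusers AutoencoderKL (the file layout and hashing scheme are assumptions):

```python
import hashlib
import os

import torch

_memory_cache = {}  # in-RAM tier, keyed by image path

def get_cached_latents(vae, image_tensor, image_path, cache_dir="latent_cache"):
    # Tier 1: RAM.
    if image_path in _memory_cache:
        return _memory_cache[image_path]

    # Tier 2: disk, keyed by a hash of the path.
    key = hashlib.sha256(image_path.encode()).hexdigest()
    disk_path = os.path.join(cache_dir, f"{key}.pt")
    if os.path.exists(disk_path):
        latents = torch.load(disk_path)
    else:
        # Tier 3: encode once through the VAE, then persist.
        with torch.no_grad():
            latents = vae.encode(image_tensor.unsqueeze(0)).latent_dist.sample()
            latents = latents * vae.config.scaling_factor
        os.makedirs(cache_dir, exist_ok=True)
        torch.save(latents.cpu(), disk_path)

    _memory_cache[image_path] = latents
    return latents
```
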
Jaret Burkett | a008d9e63b | Fixed an issue with loading models after the resume function was added. Added an extra flush to clear VRAM before gradient accumulation when the text encoder is not being trained | 2023-08-28 17:56:30 -06:00
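
When the text encoder is frozen its outputs carry no gradients, so they can be detached and CUDA's cached blocks released before gradient accumulation begins. A sketch of that flush:

```python
import gc

import torch

def flush():
    # Run Python GC first so dead tensors are actually collected,
    # then hand cached CUDA blocks back to the driver.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()

def encode_prompts_frozen(text_encoder, input_ids):
    # Text encoder is not being trained: no grads, detach, then flush.
    with torch.no_grad():
        embeds = text_encoder(input_ids)[0].detach()
    flush()
    return embeds
```
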
Jaret Burkett | e866c75638 | Built base interfaces for a DTO that transports batch information for the dataloader | 2023-08-28 12:43:31 -06:00
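
A batch DTO gives the dataloader and the trainer one shared type to agree on instead of loose tuples. A sketch of what such interfaces might look like (class and field names are assumptions):

```python
from dataclasses import dataclass, field
from typing import Optional

import torch

@dataclass
class FileItemDTO:
    # One training file plus everything derived from it.
    path: str
    caption: str
    latent: Optional[torch.Tensor] = None  # set once latents are cached

@dataclass
class DataLoaderBatchDTO:
    # Everything a training step needs for one batch travels in here,
    # so the dataloader and trainer only agree on this one type.
    file_items: list[FileItemDTO] = field(default_factory=list)
    tensor: Optional[torch.Tensor] = None   # stacked image tensors
    latents: Optional[torch.Tensor] = None  # stacked cached latents

    @property
    def captions(self) -> list[str]:
        return [item.caption for item in self.file_items]
```
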
Jaret Burkett | aeaca13d69 | Fixed an issue with shuffling permutations | 2023-08-23 22:02:00 -06:00
Jaret Burkett | bef5551ea5 | Ultimate slider training built; still needs tuning | 2023-08-19 18:54:34 -06:00
Jaret Burkett | c6675e2801 | Added shuffling to prompts | 2023-08-19 07:57:30 -06:00
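
Prompt shuffling here most likely means permuting comma-separated caption tags each time a caption is drawn, so the model does not overfit to tag order. A small sketch (pinning the leading trigger tag is an assumption):

```python
import random

def shuffle_caption(caption: str, keep_first: int = 1) -> str:
    # Split on commas, pin the leading trigger tag(s), shuffle the rest.
    tags = [t.strip() for t in caption.split(",") if t.strip()]
    head, tail = tags[:keep_first], tags[keep_first:]
    random.shuffle(tail)
    return ", ".join(head + tail)

# shuffle_caption("sks person, smiling, outdoors, golden hour")
# -> e.g. "sks person, golden hour, smiling, outdoors"
```
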
Jaret Burkett | df48f0a843 | Moved some of the job config into the base process so it is easier to build extensions | 2023-08-10 12:14:05 -06:00
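
Hoisting job config into the base process means an extension only implements its own run() and inherits the config handling. A sketch of that shape (names are assumptions):

```python
class BaseProcess:
    # Shared job-config handling lives here, not in each extension.
    def __init__(self, process_id: int, job, config: dict):
        self.process_id = process_id
        self.job = job
        self.config = config
        self.name = config.get("name", "unnamed")

    def run(self):
        raise NotImplementedError

class MyExtensionProcess(BaseProcess):
    # An extension only needs to implement run(); config parsing is inherited.
    def run(self):
        print(f"running {self.name} with config keys: {list(self.config)}")
```
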
Jaret Burkett | 8c90fa86c6 | Complete rework of how slider training works, heavily optimized. The entire algorithm now runs in one batch, using less than a quarter of the VRAM it used to take | 2023-08-05 18:46:08 -06:00
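
A slider step normally needs separate UNet forwards for the positive, negative, and neutral prompts; stacking them along the batch dimension collapses that into a single forward. A sketch of just the batching (not the full slider loss):

```python
import torch

def slider_preds_one_batch(unet, noisy_latents, timesteps,
                           pos_embeds, neg_embeds, neutral_embeds):
    # Repeat latents/timesteps three times and stack the prompt embeddings,
    # so one UNet forward replaces three.
    latents = noisy_latents.repeat(3, 1, 1, 1)
    t = timesteps.repeat(3)
    embeds = torch.cat([pos_embeds, neg_embeds, neutral_embeds], dim=0)
    pred = unet(latents, t, encoder_hidden_states=embeds).sample
    # Split the stacked prediction back into its three parts.
    return pred.chunk(3, dim=0)
```
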