Commit Graph

59 Commits

Author SHA1 Message Date
hako-mikan
daee4c0d8f Add force refresh to LoRA Loader refresh function (#2584) 2025-01-28 16:04:44 -05:00
layerdiffusion
f40930c55b fix 2024-09-08 17:24:53 -07:00
layerdiffusion
44eb4ea837 Support T5 & CLIP Text Encoder LoRA from OneTrainer
requested by #1727
and some cleanups/licenses
PS: LoRA requests must include a download URL to at least one LoRA
2024-09-08 01:39:29 -07:00
layerdiffusion
a8a81d3d77 fix offline quant lora precision 2024-08-31 13:12:23 -07:00
layerdiffusion
ec7917bd16 fix 2024-08-30 15:37:15 -07:00
layerdiffusion
d1d0ec46aa Maintain patching-related code
1. fix several problems related to layerdiffuse not being unloaded
2. fix several problems related to Fooocus inpaint
3. slightly speed up on-the-fly LoRAs by precomputing them to the computation dtype
2024-08-30 15:18:21 -07:00
layerdiffusion
4c9380c46a Speed up quant model loading and inference ...
... based on three observations:
1. torch.Tensor.view on one big tensor is slightly faster than calling torch.Tensor.to on multiple small tensors.
2. torch.Tensor.to with a dtype change is significantly slower than torch.Tensor.view.
3. “baking” the model on the GPU is significantly faster than computing on the CPU at model load time.

mainly influences inference of Q8_0, Q4_0/1/K and the loading of all quants
2024-08-30 00:49:05 -07:00
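The first two observations above are general PyTorch behavior and can be checked with a tiny micro-benchmark. The sketch below is not code from this repository; the device, tensor sizes, and iteration counts are arbitrary.

```python
# Hedged micro-benchmark sketch for observations 1 and 2 above; not repository code.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
big = torch.randn(4096 * 4096, device=device).half()                     # one big fp16 buffer
small = [torch.randn(4096, device=device).half() for _ in range(1024)]   # many small fp16 tensors

def timed(fn, n=50):
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - t0) / n

t_view  = timed(lambda: big.view(torch.uint8))                 # 1. reinterpret one big tensor, no copy
t_small = timed(lambda: [t.to(torch.float32) for t in small])  # 1. many small dtype-converting .to() calls
t_cast  = timed(lambda: big.to(torch.float32))                 # 2. dtype-changing .to() allocates and converts

print(f"view: {t_view*1e3:.3f} ms | many small .to(): {t_small*1e3:.3f} ms | big .to(): {t_cast*1e3:.3f} ms")
```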
layerdiffusion
3d62fa9598 reduce prints 2024-08-29 20:17:32 -07:00
layerdiffusion
95e16f7204 maintain loading-related code
1. revise model-moving order
2. less verbose printing
3. some misc minor speedups
4. some bnb-related maintenance
2024-08-29 19:05:48 -07:00
layerdiffusion
388b70134b fix offline loras 2024-08-25 20:28:40 -07:00
layerdiffusion
13d6f8ed90 revise GGUF by precomputing some parameters
rather than computing them in each diffusion iteration
2024-08-25 14:30:09 -07:00
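A hedged sketch of the precomputation pattern this commit describes (not Forge's actual GGUF code): values that stay constant across denoising steps are computed once when the model loads instead of inside forward(), which runs on every diffusion iteration. The class and buffer names are illustrative, and a simple int8-plus-scale layout stands in for the real GGUF formats.

```python
# Hedged sketch: hoist per-weight constants out of the per-step forward() path.
import torch
import torch.nn.functional as F

class QuantLinearSketch(torch.nn.Module):
    """Toy int8-plus-scale linear layer; names and storage layout are illustrative only."""
    def __init__(self, q_weight: torch.Tensor, scale: torch.Tensor, compute_dtype=torch.float32):
        super().__init__()
        self.register_buffer("q_weight", q_weight)        # quantized payload, e.g. int8
        # Precomputed once at load time rather than on every denoising step.
        self.register_buffer("scale_c", scale.to(compute_dtype))
        self.compute_dtype = compute_dtype

    def forward(self, x):
        w = self.q_weight.to(self.compute_dtype) * self.scale_c   # cheap on-the-fly dequant
        return F.linear(x.to(self.compute_dtype), w)

layer = QuantLinearSketch(torch.randint(-127, 128, (64, 32), dtype=torch.int8),
                          torch.rand(64, 1))
print(layer(torch.randn(2, 32)).shape)   # torch.Size([2, 64])
```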
layerdiffusion
f23ee63cb3 always set the empty-cache signal whenever any patch happens 2024-08-23 08:56:57 -07:00
layerdiffusion
2ab19f7f1c revise lora patching 2024-08-22 11:59:43 -07:00
layerdiffusion
4e3c78178a [revised] change some dtype behaviors based on community feedback
only influences old devices like the 1080/70/60/50.
please remove cmd flags if you are on a 1080/70/60/50 and previously used many cmd flags to tune performance
2024-08-21 10:23:38 -07:00
layerdiffusion
1419ef29aa Revert "change some dtype behaviors based on community feedbacks"
This reverts commit 31bed671ac.
2024-08-21 10:10:49 -07:00
layerdiffusion
31bed671ac change some dtype behaviors based on community feedback
only influences old devices like the 1080/70/60/50.
please remove cmd flags if you are on a 1080/70/60/50 and previously used many cmd flags to tune performance
2024-08-21 08:46:52 -07:00
layerdiffusion
2f1d04759f avoid some mysterious problems when using lots of Python local delegations 2024-08-19 09:47:04 -07:00
layerdiffusion
d38e560e42 Implement some rethinking of the LoRA system
1. Add an option that lets users run the UNet in fp8/gguf but the LoRA in fp16.
2. FP16 LoRAs no longer need patching. Others are only re-patched when the LoRA weight changes.
3. FP8 UNet + fp16 LoRA is available (somewhat only available) in Forge now. This also solves some “LoRA too subtle” problems.
4. Significantly speed up all gguf models (in Async mode) by using an independent thread (CUDA stream) to compute and dequantize at the same time, even when low-bit weights are already on the GPU.
5. Treat “online LoRA” as a module similar to ControlLoRA so that it is moved to the GPU together with the model when sampling, achieving significant speedup and perfect low-VRAM management simultaneously.
2024-08-19 04:31:59 -07:00
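A hedged sketch of the idea behind point 5, assuming an "online" LoRA behaves like a standard low-rank adapter: instead of merging the delta into the (possibly fp8/quantized) base weight, it is kept as its own small float-precision module attached to the layer, so it moves to the GPU together with the model and its contribution up(down(x)) * scale is added at runtime. All names below are illustrative, not Forge's API.

```python
# Hedged sketch of an "online" LoRA kept as a separate module instead of being
# patched into the base weight; class and attribute names are illustrative only.
import torch
import torch.nn as nn

class OnlineLoRASketch(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, scale: float = 1.0):
        super().__init__()
        self.base = base                        # in practice this may be fp8 / gguf / bnb
        self.scale = scale
        # LoRA factors stay in a normal float dtype regardless of the base format.
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)          # start as a no-op delta

    def forward(self, x):
        # Base path plus low-rank delta; the base weight is never rewritten,
        # so changing LoRA strength does not force a re-patch of the model.
        return self.base(x) + self.up(self.down(x)) * self.scale

layer = OnlineLoRASketch(nn.Linear(32, 64))
print(layer(torch.randn(2, 32)).shape)   # torch.Size([2, 64])
```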
layerdiffusion
53cd00d125 revise 2024-08-17 23:03:50 -07:00
layerdiffusion
db5a876d4c completely solve all LoRA OOMs 2024-08-17 22:43:20 -07:00
layerdiffusion
ab4b0d5b58 fix some mem leak 2024-08-17 00:19:43 -07:00
layerdiffusion
3da7de418a fix layerdiffuse 2024-08-16 21:37:25 -07:00
layerdiffusion
9973d5dc09 better prints 2024-08-16 21:13:09 -07:00
layerdiffusion
f3e211d431 fix bnb lora 2024-08-16 21:09:14 -07:00
layerdiffusion
12369669cf only load lora one time 2024-08-16 02:02:22 -07:00
layerdiffusion
f510f51303 speedup lora patching 2024-08-15 06:51:52 -07:00
layerdiffusion
141cf81c23 sometimes it is not a diffusion model 2024-08-15 06:36:59 -07:00
layerdiffusion
021428da26 fix nf4 lora giving pure noise on some devices 2024-08-15 06:35:15 -07:00
layerdiffusion
3d751eb69f move file 2024-08-15 05:46:35 -07:00
layerdiffusion
1bd6cf0e0c Support LoRAs for Q8/Q5/Q4 GGUF Models
what a crazy night of math
2024-08-15 05:34:46 -07:00
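One plausible reading of the math referenced above, sketched under the assumption that merging a LoRA into a quantized weight means dequantize, add the low-rank delta, then requantize with a fresh scale. A simple per-tensor symmetric int8 scheme stands in for the actual Q8/Q5/Q4 block formats, so this is an illustration only.

```python
# Hedged sketch: merge a LoRA delta into a quantized weight by round-tripping
# through float; per-tensor int8 stands in for the real GGUF block formats.
import torch

def quantize_int8(w: torch.Tensor):
    scale = w.abs().max() / 127.0
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize_int8(q: torch.Tensor, scale: torch.Tensor):
    return q.to(torch.float32) * scale

def merge_lora(q, scale, lora_up, lora_down, strength=1.0):
    w = dequantize_int8(q, scale)                    # back to float
    w = w + strength * (lora_up @ lora_down)         # add the low-rank delta
    return quantize_int8(w)                          # requantize with a fresh scale

w0 = torch.randn(64, 32)
q, s = quantize_int8(w0)
up, down = 0.01 * torch.randn(64, 4), torch.randn(4, 32)
q2, s2 = merge_lora(q, s, up, down)
print((dequantize_int8(q2, s2) - (w0 + up @ down)).abs().max())  # only small quantization error remains
```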
layerdiffusion
d336597fa5 add note to lora
but LoRAs for NF4 are done already!
2024-08-15 00:42:48 -07:00
layerdiffusion
d8b83a9501 gguf preview 2024-08-15 00:03:32 -07:00
layerdiffusion
59790f2cb4 simplify codes 2024-08-14 20:48:39 -07:00
layerdiffusion
4b66cf1126 fix possible OOM again 2024-08-14 20:45:58 -07:00
layerdiffusion
a29875206f Revert "simplify codes"
This reverts commit e7567efd4b.
2024-08-14 20:39:05 -07:00
layerdiffusion
e7567efd4b simplify codes 2024-08-14 20:34:02 -07:00
layerdiffusion
bbd0d76b28 fix possible oom 2024-08-14 20:27:05 -07:00
layerdiffusion
cb889470ba experimental LoRA support for the NF4 model
the method may change later depending on result quality
2024-08-14 19:52:19 -07:00
layerdiffusion
70a5acd8ad doc 2024-08-14 19:12:02 -07:00
layerdiffusion
aff742b597 speed up lora using cuda profile 2024-08-14 19:09:35 -07:00
layerdiffusion
c73dd119be typo 2024-08-13 16:03:17 -07:00
layerdiffusion
88d0300883 add note 2024-08-13 16:02:40 -07:00
layerdiffusion
a0849953bd revise 2024-08-13 15:13:39 -07:00
layerdiffusion
00f1cd36bd multiple lora implementation sources 2024-08-13 07:13:32 -07:00
lllyasviel
61f83dd610 support all flux models 2024-08-13 05:42:17 -07:00
lllyasviel
cfa5242a75 forge 2.0.0
see also discussions
2024-08-10 19:24:19 -07:00
layerdiffusion
a91a81d8e6 revise structure 2024-08-07 20:44:34 -07:00
layerdiffusion
e1df7a1bae revise kernel 2024-08-07 17:24:22 -07:00
layerdiffusion
b61bf553ea revise inference dtype 2024-08-07 17:08:47 -07:00
lllyasviel
14a759b5ca revise kernel 2024-08-07 13:28:12 -07:00