Commit Graph

  • 8aabd7c8c0 SaveLora node can now save "full diff" lora format. comfyanonymous 2024-09-07 03:21:02 -04:00
  • a09b29ca11 Add an option to the SaveLora node to store the bias diff. comfyanonymous 2024-09-07 02:56:24 -04:00
  • 9bfee68773 LoraSave node now supports generating text encoder loras. comfyanonymous 2024-09-07 02:30:12 -04:00
  • ea77750759 Support a generic Comfy format for text encoder loras. comfyanonymous 2024-09-07 02:13:13 -04:00
  • c27ebeb1c2 Fix onnx export not working on flux. comfyanonymous 2024-09-06 03:21:52 -04:00
  • 0c7c98a965 Nodes using UNIQUE_ID as input are NOT_IDEMPOTENT (#4793) v0.2.2 guill 2024-09-05 16:33:02 -07:00
  • dc2eb75b85 Update stable release workflow to latest pytorch with cuda 12.4. comfyanonymous 2024-09-05 19:21:52 -04:00
  • fa34efe3bd Update frontend to v1.2.47 (#4798) Chenlei Hu 2024-09-05 15:56:01 -07:00
  • 5cbaa9e07c Mistoline flux controlnet support. comfyanonymous 2024-09-05 00:04:52 -04:00
  • c7427375ee Prioritize freeing partially offloaded models first. comfyanonymous 2024-09-04 19:47:32 -04:00
  • 22d1241a50 Add an experimental LoraSave node to extract model loras. comfyanonymous 2024-09-04 16:38:38 -04:00
  • f04229b84d Add emb_patch support to UNetModel forward (#4779) Jedrzej Kosinski 2024-09-04 13:35:15 -05:00
  • f067ad15d1 Make live preview size a configurable launch argument (#4649) Silver 2024-09-04 01:16:38 +02:00
  • 483004dd1d Support newer glora format. v0.2.1 comfyanonymous 2024-09-03 17:02:19 -04:00
  • 00a5d08103 Lower fp8 lora memory usage. comfyanonymous 2024-09-03 01:25:05 -04:00
  • d043997d30 Flux onetrainer lora. comfyanonymous 2024-09-02 08:22:15 -04:00
  • f1c2301697 fix typo in stale-issues (#4735) v0.2.0 Alex "mcmonkey" Goodwin 2024-09-01 14:44:49 -07:00
  • 8d31a6632f Speed up inference on nvidia 10 series on Linux. comfyanonymous 2024-09-01 17:29:31 -04:00
  • b643eae08b Make minimum_inference_memory() depend on --reserve-vram comfyanonymous 2024-09-01 01:01:54 -04:00
  • baa6b4dc36 Update manual install instructions. comfyanonymous 2024-08-31 03:59:42 -04:00
  • d4aeefc297 add github action to automatically handle stale user support issues (#4683) Alex "mcmonkey" Goodwin 2024-08-30 22:57:18 -07:00
  • 587e7ca654 Remove github buttons. comfyanonymous 2024-08-31 01:45:41 -04:00
  • c90459eba0 Update ComfyUI_frontend to 1.2.40 (#4691) Chenlei Hu 2024-08-30 19:32:10 -04:00
  • 04278afb10 feat: return import_failed from init_extra_nodes function (#4694) Vedat Baday 2024-08-31 02:26:47 +03:00
  • 935ae153e1 Cleanup. comfyanonymous 2024-08-30 12:48:42 -04:00
  • e91662e784 Get logs endpoint & system_stats additions (#4690) Chenlei Hu 2024-08-30 12:46:37 -04:00
  • 63fafaef45 Fix potential issue with hydit controlnets. comfyanonymous 2024-08-30 04:58:41 -04:00
  • ec28cd9136 swap legacy sdv15 link (#4682) Alex "mcmonkey" Goodwin 2024-08-29 16:48:48 -07:00
  • 6eb5d64522 Fix glora lowvram issue. comfyanonymous 2024-08-29 19:07:23 -04:00
  • 10a79e9898 Implement model part of flux union controlnet. comfyanonymous 2024-08-29 18:41:22 -04:00
  • ea3f39bd69 InstantX depth flux controlnet. comfyanonymous 2024-08-29 02:14:19 -04:00
  • b33cd61070 InstantX canny controlnet. comfyanonymous 2024-08-28 18:56:33 -04:00
  • 34eda0f853 fix: remove redundant useless loop (#4656) Dr.Lt.Data 2024-08-29 06:46:30 +09:00
  • d31e226650 Unify RMSNorm code. comfyanonymous 2024-08-28 16:18:39 -04:00
  • b79fd7d92c ComfyUI supports more than just stable diffusion. comfyanonymous 2024-08-28 16:12:24 -04:00
  • 38c22e631a Fix case where model was not properly unloaded in merging workflows. comfyanonymous 2024-08-27 18:46:55 -04:00
  • 6bbdcd28ae Support weight padding on diff weight patch (#4576) Chenlei Hu 2024-08-27 13:55:37 -04:00
  • ab130001a8 Do RMSNorm in native type. comfyanonymous 2024-08-27 02:41:56 -04:00
  • ca4b8f30e0 Cleanup empty dir if frontend zip download failed (#4574) Chenlei Hu 2024-08-27 02:07:25 -04:00
  • 70b84058c1 Add relative file path to the progress report. (#4621) Robin Huang 2024-08-26 23:06:12 -07:00
  • 2ca8f6e23d Make the stochastic fp8 rounding reproducible. comfyanonymous 2024-08-26 15:12:06 -04:00
  • 7985ff88b9 Use less memory in float8 lora patching by doing calculations in fp16. comfyanonymous 2024-08-26 12:33:57 -04:00
  • c6812947e9 Fix potential memory leak. v0.1.3 comfyanonymous 2024-08-26 02:07:32 -04:00
  • 9230f65823 Fix some controlnets OOMing when loading. comfyanonymous 2024-08-25 05:43:55 -04:00
  • 6ab1e6fd4a [Bug #4529] Fix graph partial validation failure (#4588) guill 2024-08-24 12:34:58 -07:00
  • 07dcbc3a3e Clarify how to use high quality previews. comfyanonymous 2024-08-24 02:31:03 -04:00
  • 8ae23d8e80 Fix onnx export. comfyanonymous 2024-08-23 17:52:47 -04:00
  • 7df42b9a23 Fix dora. v0.1.2 comfyanonymous 2024-08-23 04:58:59 -04:00
  • 5d8bbb7281 Cleanup. comfyanonymous 2024-08-23 04:06:27 -04:00
  • 2c1d2375d6 Fix. comfyanonymous 2024-08-23 04:04:55 -04:00
  • 64ccb3c7e3 Rework IPEX check for future inclusion of XPU into Pytorch upstream and do a bit more optimization of ipex.optimize(). (#4562) Simon Lui 2024-08-23 00:59:57 -07:00
  • 9465b23432 Added SD15_Inpaint_Diffusers model support for unet_config_from_diffusers_unet function (#4565) Scorpinaus 2024-08-23 15:57:08 +08:00
  • bb4416dd5b Fix task.status.status_str caused by #2666 (#4551) v0.1.1 Chenlei Hu 2024-08-22 17:38:30 -04:00
  • c0b0da264b Missing imports. comfyanonymous 2024-08-22 17:20:39 -04:00
  • c26ca27207 Move calculate function to comfy.lora comfyanonymous 2024-08-22 17:12:00 -04:00
  • 7c6bb84016 Code cleanups. comfyanonymous 2024-08-22 17:05:12 -04:00
  • c54d3ed5e6 Fix issue with models staying loaded in memory. comfyanonymous 2024-08-22 15:57:40 -04:00
  • c7ee4b37a1 Try to fix some lora issues. comfyanonymous 2024-08-22 15:15:47 -04:00
  • 7b70b266d8 Generalize MacOS version check for force-upcast-attention (#4548) David 2024-08-22 12:24:21 -05:00
  • 8f60d093ba Fix issue. comfyanonymous 2024-08-22 10:38:24 -04:00
  • dafbe321d2 Fix a bug where cached outputs affected IS_CHANGED (#4535) guill 2024-08-21 20:38:46 -07:00
  • 5f84ea63e8 Add a shortcut to the nightly package to run with --fast. comfyanonymous 2024-08-21 23:36:58 -04:00
  • 843a7ff70c fp16 is actually faster than fp32 on a GTX 1080. comfyanonymous 2024-08-21 23:23:50 -04:00
  • a60620dcea Fix slow performance on 10 series Nvidia GPUs. comfyanonymous 2024-08-21 16:38:26 -04:00
  • 015f73dc49 Try a different type of flux fp16 fix. comfyanonymous 2024-08-21 16:17:15 -04:00
  • 904bf58e7d Make --fast work on pytorch nightly. v0.1.0 comfyanonymous 2024-08-21 14:01:41 -04:00
  • 5f50263088 Replace use of .view with .reshape (#4522) Svein Ove Aas 2024-08-21 16:21:48 +01:00
  • 5e806f555d add a get models list api route (#4519) Alex "mcmonkey" Goodwin 2024-08-20 23:04:42 -07:00
  • f07e5bb522 Add GET /internal/files. (#4295) Robin Huang 2024-08-20 22:25:06 -07:00
  • 03ec517afb Remove useless line, adjust windows default reserved vram. comfyanonymous 2024-08-21 00:47:19 -04:00
  • f257fc999f Add optional deprecated/experimental flag to node class (#4506) Chenlei Hu 2024-08-21 00:01:34 -04:00
  • bb50e69839 Update frontend to 1.2.30 (#4513) Chenlei Hu 2024-08-21 00:00:49 -04:00
  • 510f3438c1 Speed up fp8 matrix mult by using better code. comfyanonymous 2024-08-20 22:53:26 -04:00
  • ea63b1c092 Simpletrainer lycoris format. comfyanonymous 2024-08-20 12:05:13 -04:00
  • 9953f22fce Add --fast argument to enable experimental optimizations. comfyanonymous 2024-08-20 11:49:33 -04:00
  • d1a6bd6845 Support loading long clipl model with the CLIP loader node. comfyanonymous 2024-08-20 10:42:40 -04:00
  • 83dbac28eb Properly set if clip text pooled projection instead of using hack. comfyanonymous 2024-08-20 10:00:16 -04:00
  • 538cb068bc Make cast_to a nop if weight is already good. comfyanonymous 2024-08-20 00:50:39 -04:00
  • 1b3eee672c Fix potential issue with multi devices. comfyanonymous 2024-08-20 00:31:04 -04:00
  • 5a69f84c3c Update README.md (Add shield badges) (#4490) Chenlei Hu 2024-08-19 18:25:20 -04:00
  • 9eee470244 New load_text_encoder_state_dicts function. comfyanonymous 2024-08-19 17:36:35 -04:00
  • 045377ea89 Add a --reserve-vram argument if you don't want comfy to use all of it. comfyanonymous 2024-08-19 17:16:18 -04:00
  • 4d341b78e8 Bug fixes. comfyanonymous 2024-08-19 16:28:55 -04:00
  • 6138f92084 Use better dtype for the lowvram lora system. comfyanonymous 2024-08-19 15:35:25 -04:00
  • be0726c1ed Remove duplication. comfyanonymous 2024-08-19 15:24:07 -04:00
  • 766ae119a8 CheckpointSave node name. comfyanonymous 2024-08-19 14:00:56 -04:00
  • fc90ceb6ba Update issue template config.yml to direct frontend issues to frontend repos (#4486) Yoland Yan 2024-08-19 10:41:30 -07:00
  • 4506ddc86a Better subnormal fp8 stochastic rounding. Thanks Ashen. comfyanonymous 2024-08-19 13:38:03 -04:00
  • 20ace7c853 Code cleanup. comfyanonymous 2024-08-19 12:48:59 -04:00
  • b29b3b86c5 Update README to include frontend section (#4468) Chenlei Hu 2024-08-19 07:12:32 -04:00
  • 22ec02afc0 Handle subnormal numbers in float8 rounding. comfyanonymous 2024-08-19 05:19:59 -04:00
  • 39f114c44b Less broken non blocking? comfyanonymous 2024-08-18 16:53:17 -04:00
  • 6730f3e1a3 Disable non blocking. comfyanonymous 2024-08-18 14:38:09 -04:00
  • 73332160c8 Enable non blocking transfers in lowvram mode. comfyanonymous 2024-08-18 10:29:33 -04:00
  • 2622c55aff Automatically use RF variant of dpmpp_2s_ancestral if RF model. comfyanonymous 2024-08-18 00:47:25 -04:00
  • 1beb348ee2 dpmpp_2s_ancestral_RF for rectified flow (Flux, SD3 and Auraflow). Ashen 2024-08-17 17:32:27 -07:00
  • 9aa39e743c Add new shortcuts to readme (#4442) bymyself 2024-08-17 20:52:56 -07:00
  • d31df04c8a Indentation. comfyanonymous 2024-08-17 23:00:44 -04:00
  • e68763f40c Add Flux model support for InstantX style controlnet residuals (#4444) Xrvk 2024-08-18 05:58:23 +03:00
  • 310ad09258 Add a ModelSave node. comfyanonymous 2024-08-17 21:31:15 -04:00
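A list in the format above can be regenerated from a checkout of the repository with `git log` and a custom pretty format. This is a sketch under the assumption that the graph was produced by an equivalent command (or a UI wrapper around one); the tag annotations such as `v0.2.2` come from `git describe`/ref decoration and are not reproduced here. The throwaway repo makes the example self-contained:

```shell
# Sketch (assumption: the commit graph above corresponds to output of a
# command like the `git log` below, run inside the ComfyUI repository).
# A temporary repo with one commit stands in for a real checkout.
tmp=$(mktemp -d)
git -C "$tmp" init -q
git -C "$tmp" -c user.name="comfyanonymous" -c user.email="demo@example.com" \
  commit -q --allow-empty -m "Add a ModelSave node."

# %h = abbreviated hash (widened to 10 chars to match the list),
# %s = subject line, %an = author name, %ad = author date.
git -C "$tmp" log --abbrev=10 --date=iso \
  --pretty=format:'  • %h %s %an %ad'

rm -rf "$tmp"
```

On a real checkout, replace the temporary repo with the ComfyUI working directory; adding `--graph` draws the branch topology alongside the bullet column.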