pythongosssss
00219ec6d7
print -> logger
2026-02-16 02:41:06 -08:00
pythongosssss
460a68bd02
rebuild blueprints
2026-02-16 02:41:06 -08:00
pythongosssss
838e5da95f
more fixes
2026-02-16 02:41:06 -08:00
pythongosssss
127d265693
add multipass for faster blur
2026-02-16 02:41:06 -08:00
pythongosssss
26fecd614e
shader nit iteration
2026-02-16 02:41:06 -08:00
pythongosssss
76fd31ff2f
add glsl shader update system
2026-02-16 02:41:06 -08:00
pythongosssss
bb0bbf9ddc
hsb
2026-02-16 02:41:06 -08:00
pythongosssss
1423dbf60e
brightness/contrast
2026-02-16 02:41:06 -08:00
pythongosssss
d209a50006
Add glow
2026-02-16 02:41:06 -08:00
pythongosssss
935f548855
Add channels
2026-02-16 02:41:06 -08:00
pythongosssss
d7da49d76c
Add image operation blueprints
2026-02-16 02:41:06 -08:00
pythongosssss
bca59ff52c
add diagnostics, update mac initialization
2026-02-16 02:41:06 -08:00
pythongosssss
0a075f194d
fix ci
...
perf: only read required outputs
2026-02-16 02:41:06 -08:00
pythongosssss
690792477a
add additional support for egl & osmesa backends
2026-02-16 02:41:06 -08:00
pythongosssss
6598139bc5
tidy
2026-02-16 02:41:06 -08:00
pythongosssss
3265f40f14
remove cpu support
2026-02-16 02:41:06 -08:00
pythongosssss
34e938d537
convert to using PyOpenGL and glfw
2026-02-16 02:41:06 -08:00
pythongosssss
7dada35de7
fix line endings
2026-02-16 02:41:06 -08:00
pythongosssss
42e591e3a6
fix casing
2026-02-16 02:41:06 -08:00
pythongosssss
54d41ac432
Try fix build
2026-02-16 02:41:06 -08:00
pythongosssss
8d3f4272dc
Support multiple outputs
2026-02-16 02:41:06 -08:00
pythongosssss
d318df883e
tidy
2026-02-16 02:41:06 -08:00
pythongosssss
cf11e01ccf
adds support for executing simple glsl shaders
...
using moderngl package
2026-02-16 02:41:06 -08:00
comfyanonymous
88e6370527
Remove workaround for old pytorch. (#12480)
2026-02-15 20:43:53 -05:00
rattus
c0370044cd
MPDynamic: force load flux img_in weight (Fixes flux1 canny+depth lora crash) (#12446)
...
* lora: add weight shape calculations.
This lets the loader know if a lora will change the shape of a weight
so it can take appropriate action.
* MPDynamic: force load flux img_in weight
This weight is a bit special, in that the lora changes its geometry.
This is rather unique: it is not handled by the existing estimate and doesn't
work for either offloading or dynamic_vram.
Fix for dynamic_vram as a special case. Ideally we can fully precalculate
these lora geometry changes at load time, but just get these models
working first.
2026-02-15 20:30:09 -05:00
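The "weight shape calculations" bullet above comes down to a small rule: a LoRA patch adds `up @ down` to the base weight, so the patched 2-D weight has shape `(up_rows, down_cols)`, and the loader can compare that against the base weight's shape to know whether the lora changes its geometry (as the canny/depth loras do for flux's `img_in`). A minimal sketch of that rule; the function names here are illustrative, not ComfyUI's actual helpers:

```python
import numpy as np

def lora_patched_shape(up, down):
    # A rank-r LoRA stores up (out x r) and down (r x in); applying it
    # adds up @ down, so the resulting 2-D weight is (out, in).
    return (up.shape[0], down.shape[1])

def lora_changes_shape(base_weight, up, down):
    # True when applying the lora would alter the weight's geometry,
    # which is the case the loader must handle specially.
    return lora_patched_shape(up, down) != base_weight.shape
```

With this check, a loader can decide up front to force-load (or re-allocate) a weight whose geometry the lora will change, instead of discovering the mismatch at patch time.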
rattus
ecd2a19661
Fix lora extraction in offload conditions (+ dynamic_vram mode) (#12479)
...
* lora_extract: Add a trange
If you bite off more than your GPU can chew, this kind of just hangs.
Give a rough indication of progress by counting the weights in a trange.
* lora_extract: Support on-the-fly patching
Use the on-the-fly approach from the regular model saving logic for
lora extraction too. Switch off force_cast_weights accordingly.
This gets extraction working in dynamic_vram while also supporting
extraction when the model is GPU-offloaded.
2026-02-15 20:28:51 -05:00
Alexander Piskun
2c1d06a4e3
feat(api-nodes): add Bria RMBG nodes (#12465)
...
Co-authored-by: Jedrzej Kosinski <kosinkadink1@gmail.com>
2026-02-15 17:22:30 -08:00
Alexander Piskun
e2c71ceb00
feat(api-nodes-Tencent): add ModelTo3DUV, 3DTextureEdit, 3DParts nodes (#12428)
...
* feat(api-nodes-Tencent): add ModelTo3DUV, 3DTextureEdit, 3DParts nodes
* add image output to TencentModelTo3DUV node
* commented out two nodes
* added rate_limit check to other hunyuan3d nodes
2026-02-15 05:33:18 -08:00
Jedrzej Kosinski
596ed68691
Node Replacement API (#12014)
2026-02-15 02:12:30 -08:00
Alexander Piskun
ce4a1ab48d
chore(api-nodes): remove "gpt-4o" model (#12467)
2026-02-15 01:31:59 -08:00
comfyanonymous
e1ede29d82
Remove unsafe pickle loading code that was used on pytorch older than 2.4 (#12473)
...
ComfyUI hasn't started on pytorch 2.4 since last month.
2026-02-14 22:53:52 -05:00
Christian Byrne
df1e5e8514
Update frontend package to 1.38.14 (#12469)
2026-02-14 11:01:10 -08:00
krigeta
dc9822b7df
Add working Qwen 2512 ControlNet (Fun ControlNet) support (#12359)
2026-02-13 22:23:52 -05:00
comfyanonymous
712efb466b
Add left padding to LTXAV text encoder. (#12456)
2026-02-13 21:56:54 -05:00
comfyanonymous
726af73867
Fix some custom nodes. (#12455)
2026-02-13 20:21:10 -05:00
comfyanonymous
831351a29e
Support generating attention masks for left-padded text encoders. (#12454)
2026-02-13 20:15:23 -05:00
comfyanonymous
e1add563f9
Use torch RMSNorm for flux models and refactor hunyuan video code. (#12432)
2026-02-13 15:35:13 -05:00
rattus
8902907d7a
dynamic_vram: Training fixes (#12442)
2026-02-13 15:29:37 -05:00
comfyanonymous
e03fe8b591
Update command to install AMD stable linux pytorch. (#12437)
2026-02-12 23:29:12 -05:00
rattus
ae79e33345
llama: use a more efficient rope implementation (#12434)
...
Get rid of the cat and the unary negation, and in-place add-cmul the two
halves of the rope. Precompute -sin once at the start of the model
rather than in every transformer block.
This is slightly faster on both GPU- and CPU-bound setups.
2026-02-12 19:56:42 -05:00
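The optimization described in the commit body above replaces the usual rotate_half trick (a concat plus a fresh negation per call) with a direct combination of the two halves and a precomputed -sin. A NumPy sketch of the equivalence; function names are mine, and the `+=` stands in for torch's in-place `addcmul_`:

```python
import numpy as np

def rope_naive(x, cos, sin):
    # Reference rotary embedding: rotate_half(x) = concat(-x2, x1),
    # which allocates a new buffer and negates x2 on every call.
    x1, x2 = np.split(x, 2, axis=-1)
    rotated = np.concatenate([-x2, x1], axis=-1)
    return x * cos + rotated * sin

def rope_fast(x, cos, sin, neg_sin):
    # Same math without the concat:
    #   out1 = x1*cos + x2*(-sin)
    #   out2 = x2*cos + x1*sin
    # neg_sin is computed once up front (the commit does this at the
    # start of the model instead of negating sin in every block).
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    out = np.empty_like(x)
    out[..., :d] = x1 * cos
    out[..., :d] += x2 * neg_sin   # in-place, like torch addcmul_
    out[..., d:] = x2 * cos
    out[..., d:] += x1 * sin
    return out
```

Both paths compute the same rotation; the fast one just avoids the temporary concat buffer and the per-call negation, which is where the speedup on CPU-bound setups comes from.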
rattus
117e214354
ModelPatcherDynamic: force load non-leaf weights (#12433)
...
The current behaviour of the default ModelPatcher is to .to() a model
only if it's fully loaded, which is how random non-leaf weights get
loaded in non-LowVRAM conditions.
This, however, means they never get loaded in dynamic_vram. In the
dynamic_vram case, force load them to the GPU.
2026-02-12 19:51:50 -05:00
Alexander Piskun
4a93a62371
fix(api-nodes): add separate retry budget for 429 rate limit responses (#12421)
2026-02-12 01:38:51 -08:00
comfyanonymous
66c18522fb
Add a tip for a common error. (#12414)
2026-02-11 22:12:16 -05:00
askmyteapot
e5ae670a40
Update ace15.py to allow min_p sampling (#12373)
2026-02-11 20:28:48 -05:00
rattus
3fe61cedda
model_patcher: guard against None model_dtype (#12410)
...
Handle the case where _model_dtype exists but is None with the
intended fallback.
2026-02-11 14:54:02 -05:00
rattus
2a4328d639
ace15: Use dynamic_vram-friendly trange (#12409)
...
Factor out the ksampler trange and use it in ACE LLM to prevent the
silent stall at 0 and rate distortion due to first-step model load.
2026-02-11 14:53:42 -05:00
rattus
d297a749a2
dynamic_vram: Fix Windows Aimdo crash + Fix LLM performance (#12408)
...
* model_management: lazy-cache aimdo_tensor
These tensors constructed from aimdo allocations are CPU-expensive to
make on the pytorch side. Add a cached version that is valid on a
signature match, to fast-path past whatever torch is doing.
* dynamic_vram: Minimize fast path CPU work
Move as much as possible inside the not-resident if block and cache
the formed weight and bias rather than the flat intermediates. At
extreme layer-weight rates this adds up.
2026-02-11 14:50:16 -05:00
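The lazy-cache idea in the first bullet can be sketched generically: key the cache on a cheap, hashable signature of the underlying allocation and rebuild the expensive wrapper only when the signature changes. Everything here is illustrative (the names and the signature tuple are assumptions, not ComfyUI's API); the real aimdo tensors are torch-specific.

```python
def lazy_tensor_cache(build):
    """Return a getter that calls `build` only on a signature miss.

    `signature` is assumed to be a cheap summary of the allocation
    (e.g. pointer, shape, dtype); while it matches, the previously
    built object is reused, skipping the expensive construction path.
    """
    cache = {}

    def get(signature, *args):
        if signature not in cache:
            cache.clear()              # keep at most the current allocation
            cache[signature] = build(*args)
        return cache[signature]

    return get
```

The point of the signature check is correctness under reallocation: if the backing memory moves or changes shape, the signature changes and the cached object is rebuilt rather than reused stale.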
Alexander Piskun
2b7cc7e3b6
[API Nodes] enable Magnific Upscalers (#12179)
...
* feat(api-nodes): enable Magnific Upscalers
* update price badges
---------
Co-authored-by: Jedrzej Kosinski <kosinkadink1@gmail.com>
2026-02-11 11:30:19 -08:00
Benjamin Lu
4993411fd9
Dispatch desktop auto-bump when a ComfyUI release is published (#12398)
...
* Dispatch desktop auto-bump on ComfyUI release publish
* Fix release webhook secret checks in step conditions
* Require desktop dispatch token in release webhook
* Apply suggestion from @Copilot
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Luke Mino-Altherr <lminoaltherr@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Jedrzej Kosinski <kosinkadink1@gmail.com>
2026-02-11 11:15:13 -08:00
Alexander Piskun
2c7cef4a23
fix(api-nodes): retry on connection errors during polling instead of aborting (#12393)
2026-02-11 10:51:49 -08:00