- Fix missing import for compute_filename_for_reference in ingest.py
- Apply code review fixes across routes, queries, scanner, seeder,
hashing, ingest, path_utils, main, and server
- Update and add tests for sync references and seeder
Amp-Thread-ID: https://ampcode.com/threads/T-019cb61a-ed54-738c-a05f-9b5242e513f3
Co-authored-by: Amp <amp@ampcode.com>
Replace --disable-assets-autoscan with --enable-assets so the assets
system (API routes, database sync, background scanning) is off by
default and must be explicitly opted into. Expose the flag as an
"assets" entry in SERVER_FEATURE_FLAGS so the frontend can read it
from GET /features.
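A minimal sketch of the opt-in wiring, assuming plain argparse handling for
the flag (the exact place this lives in the CLI/feature-flag modules is not
shown here):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--enable-assets",
        action="store_true",
        default=False,
        help="Enable the assets system: API routes, database sync, background scanning.",
    )
    args = parser.parse_args()

    # Entry the frontend reads back from GET /features:
    SERVER_FEATURE_FLAGS = {"assets": args.enable_assets}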
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Filter hidden files/directories (dot-prefixed) in collect_models_files()
using is_visible(), matching the existing behavior for input/output roots
(see the sketch after this list)
- Exclude the 'custom_nodes' folder name from get_comfy_models_folders();
custom nodes that register their own paths under other folder names
will still be scanned as expected
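A minimal sketch of the dot-prefix filtering described in the first bullet;
is_visible() and the traversal shape here are illustrative, not the actual
scanner code:

    import os

    def is_visible(name: str) -> bool:
        # hidden = dot-prefixed, same convention already used for input/output roots
        return not name.startswith(".")

    def collect_visible_files(base_path: str):
        for root, dirs, files in os.walk(base_path):
            dirs[:] = [d for d in dirs if is_visible(d)]  # prune hidden dirs in place
            for f in files:
                if is_visible(f):
                    yield os.path.join(root, f)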
Amp-Thread-ID: https://ampcode.com/threads/T-019c924b-591a-725e-b8b7-0d49ba1a5591
Co-authored-by: Amp <amp@ampcode.com>
commonpath raises ValueError on Windows when comparing paths on different
drives (e.g. C:\models vs D:\extra_models). Replace all usages in the
asset scanner with Path.is_relative_to() which handles cross-drive paths,
case-insensitivity, and prefix traps natively without try/except.
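A small illustration of the difference, using PureWindowsPath so it runs on
any platform (the paths are made up):

    from pathlib import PureWindowsPath

    root = PureWindowsPath(r"C:\models")

    # os.path.commonpath([r"C:\models", r"D:\extra_models\x.ckpt"]) raises
    # ValueError because the paths sit on different drives; is_relative_to()
    # simply answers the question instead.
    print(PureWindowsPath(r"D:\extra_models\x.ckpt").is_relative_to(root))  # False
    print(PureWindowsPath(r"c:\MODELS\sd\x.ckpt").is_relative_to(root))     # True (case-insensitive)
    print(PureWindowsPath(r"C:\models2\x.ckpt").is_relative_to(root))       # False (no prefix trap)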
Amp-Thread-ID: https://ampcode.com/threads/T-019c9224-d83c-7797-8c02-e1e1ae2ee452
Co-authored-by: Amp <amp@ampcode.com>
get_comfy_models_folders() previously filtered by startswith(models_root),
excluding extra model paths outside the main models directory. It now includes
every category with non-empty paths from folder_names_and_paths.
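A hedged sketch of the new selection, assuming folder_names_and_paths maps
each category name to a (paths, extensions) pair as in folder_paths (not the
actual implementation):

    def model_folder_categories(folder_names_and_paths):
        folders = []
        for name, (paths, _extensions) in folder_names_and_paths.items():
            # keep every category with at least one configured path;
            # no startswith(models_root) filtering anymore
            if paths:
                folders.append((name, list(paths)))
        return folders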
Amp-Thread-ID: https://ampcode.com/threads/T-019c9224-d83c-7797-8c02-e1e1ae2ee452
Co-authored-by: Amp <amp@ampcode.com>
list_files_recursively now uses followlinks=True so symlinked
directories under input/ and output/ roots are traversed, matching
the existing behavior of folder_paths.recursive_search for models.
It tracks the (st_dev, st_ino) pair of each visited directory to detect and
break circular symlink loops safely.
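A minimal sketch of the loop-safe traversal under these assumptions (the
function name and return shape are illustrative):

    import os

    def list_files_recursively(root_dir: str) -> list[str]:
        visited = set()  # (st_dev, st_ino) of directories already entered
        results = []
        for dirpath, dirnames, filenames in os.walk(root_dir, followlinks=True):
            st = os.stat(dirpath)
            key = (st.st_dev, st.st_ino)
            if key in visited:
                dirnames[:] = []  # circular symlink: stop descending here
                continue
            visited.add(key)
            results.extend(os.path.join(dirpath, f) for f in filenames)
        return results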
Amp-Thread-ID: https://ampcode.com/threads/T-019c9220-21b8-7678-b428-9215ff1bb011
Co-authored-by: Amp <amp@ampcode.com>
* model_management: Remove non-comfy dynamic _v caster
* Force pre-load non-comfy weights to GPU in ModelPatcherDynamic
Non-comfy weights may expect to be pre-cast to the target
device without in-model casting. Previously they were allocated in
the vbar with _v which required the _v fault path in cast_to.
Instead, back up the original CPU weight and move it directly to GPU
at load time.
* draft zeta (z-image pixel space)
* revert gitignore
* model loads and is able to run, but the vector direction is still wrong
* flip the vector direction back to the original again
* Move wrongly positioned Z image pixel space class
* inherit Radiance LatentFormat class
* Fix parameters in classes for Zeta x0 dino
* remove arbitrary nn.init instances
* Remove unused import of lru_cache
---------
Co-authored-by: silveroxides <ishimarukaito@gmail.com>
Comfy Aimdo 0.2.4 fixes a VRAM buffer alignment issue that happens in
some workflows where an allocation is able to bypass the PyTorch allocator
and go straight to the CUDA hook.
The pool of dynamic models was previously treated as one giant entity for the
sake of smart memory, but that isn't really useful or what a user would
reasonably expect. Make Dynamic VRAM properly purge its models just like the
old --disable-smart-memory behavior, but condition the dynamic-for-dynamic
bypass on smart memory.
Re-enable dynamic smart memory.
Multi-step samplers (e.g. dpmpp_2s_ancestral) call the model at intermediate sigma values not present in the schedule. This caused set_step to crash with "No sample_sigmas matched current timestep" when context windows were enabled.
The fix is to keep self._step from the last exact match when a substep sigma is encountered, since substeps are still logically part of their parent step and should use the same context windows.
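A hedged sketch of that behavior; the class and method shapes below are an
approximation of the context-windows handler, not the exact code:

    import torch

    class StepTracker:
        def __init__(self, sample_sigmas: torch.Tensor):
            self.sample_sigmas = sample_sigmas
            self._step = 0

        def set_step(self, timestep: torch.Tensor):
            matches = (self.sample_sigmas == timestep).nonzero()
            if matches.numel() > 0:
                self._step = int(matches[0].item())
            # else: a substep sigma from a multi-step sampler (e.g.
            # dpmpp_2s_ancestral); keep self._step from the last exact match so
            # the substep reuses its parent step's context windows instead of
            # raising.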
Co-authored-by: ozbayb <17261091+ozbayb@users.noreply.github.com>
* sd: add support for clip model reconstruction
* nodes: SetClipHooks: Demote the dynamic model patcher
* mp: Make dynamic_disable more robust
The backup must not be cloned. In addition, add a delegate object
to ModelPatcherDynamic so that non-cloning code can perform
ModelPatcherDynamic demotion
* sampler_helpers: Demote to non-dynamic model patcher when hooking
* code rabbit review comments
Allow non-QuantizedTensor layers to set want_requant so the post-LoRA
calculation is stochastically cast back down to the original input dtype.
This is then used by the legacy fp8 Linear implementation to set the
compute_dtype to the preferred LoRA dtype and then want_requant the result
back down to fp8.
This fixes the issue where --fast fp8_matrix_mult combined with
--fast dynamic_vram breaks when applying a LoRA to an fp8 non-QT model.
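A hedged sketch of the want_requant idea; the helper and argument names here
are hypothetical, not ComfyUI's API:

    import torch

    def stochastic_round_to(t: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
        # hypothetical stand-in for the stochastic cast applied after the LoRA math
        return t.to(dtype)

    def apply_lora_then_requant(weight_fp8: torch.Tensor, lora_delta: torch.Tensor,
                                compute_dtype: torch.dtype, want_requant: bool) -> torch.Tensor:
        # do the LoRA math in the preferred (higher precision) lora dtype
        patched = weight_fp8.to(compute_dtype) + lora_delta.to(compute_dtype)
        if want_requant:
            # cast the result back down to the original fp8 dtype so the layer
            # keeps its fp8 storage and matmul path
            patched = stochastic_round_to(patched, weight_fp8.dtype)
        return patched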
Users should either use the cu126 one or the regular one (cu130 at the moment).
The cu128 portable is still included in the latest GitHub release, but I will stop including it as soon as it becomes slightly annoying to deal with. This might happen as soon as next week.
Some custom node packs are naughty and violate the
dont-load-torch-on-load rule. This causes aimdo to lose preference for
its allocator hook on Linux.
Go super early with the aimdo first-stage init, before custom nodes
are touched at all.