* api nodes: price badges moved into the node code
* added price badges for 4 more node-packs
* added price badges for 10 more node-packs
* added new price badges for Omni STD mode
* add support for autogrow groups
* use full names for "widgets", "inputs" and "groups"
* add strict typing for JSONata rules
* add price badge for WanReferenceVideoApi node
* add support for DynamicCombo
* sync price badge changes (https://github.com/Comfy-Org/ComfyUI_frontend/pull/7900)
* sync badges for Vidu2 nodes
* fixed incorrect price for RecraftCrispUpscaleNode
* fixed incorrect price badges for LTXV nodes
* fixed price badge for MinimaxHailuoVideoNode
* fixed price badges for PixVerse nodes
Use vae.spacial_compression_encode() instead of directly accessing
downscale_ratio to handle both standard VAEs (int) and WAN VAEs (tuple).
Addresses reviewer feedback on PR #11259.
Co-authored-by: ChrisFab16 <christopher@fabritius.dk>
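A minimal sketch of the intent, assuming spacial_compression_encode() returns a single spatial factor for both VAE types; the helper and argument names here are illustrative, not from the actual change:

```python
def latent_spatial_size(vae, image_height: int, image_width: int) -> tuple[int, int]:
    # Ask the VAE for its spatial compression factor rather than reading
    # vae.downscale_ratio, which is an int for standard VAEs but a tuple for WAN VAEs.
    compression = vae.spacial_compression_encode()
    return image_height // compression, image_width // compression
```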
* Brought over minimal elements from PR 10045 to reproduce seed_assets and register_assets_system without adding anything to the DB or server routes yet, keeping everything synchronous for now (async can be introduced once everything is cleaned up and brought over)
* Added a DB script to insert the assets data and cleaned up some code; assets (models) now get added/rescanned
* Added support for 5 HTTP endpoints for assets
* Replaced Optional with | None in schemas_in.py and schemas_out.py (see the sketch after this list)
* Remove two routes that will not be relevant yet in this PR: HEAD /api/assets/hash/<hash> and PUT /api/assets/<id>/preview
* Remove some functions the two deleted endpoints were using
* Don't show assets scan message upon calling /object_info endpoint
* Removed unused import to satisfy ruff
* Simplified the hashing function's type hint and _hash_file_obj
* Satisfied ruff
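A minimal before/after illustration of that annotation change; the class and field below are hypothetical, not taken from the real schema files:

```python
from typing import Optional

# Before: typing.Optional in the schema annotations
class AssetBefore:
    preview_url: Optional[str] = None

# After: PEP 604 union syntax, as now used in schemas_in.py / schemas_out.py
class AssetAfter:
    preview_url: str | None = None
```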
This logic was checking comfy_cast_weights and going straight to the
forward_comfy_cast_weights implementation without attempting to
downscale the input to fp8 when comfy_cast_weights is set.
The main reason comfy_cast_weights would be set is async offload,
which is not a good reason to nix FP8MM.
So instead, AND together the underlying exclusions for FP8MM, which
are:
* having a weight_function (usually LowVramPatch)
* force_cast_weights (compute dtype override)
* the weight is not Quantized
* the input is already quantized
* the model or layer has MM explicitly disabled.
If you get past all of those exclusions, quantize the input tensor.
Then hand the new input, quantized or not, off to
forward_comfy_cast_weights to handle it. If the weight is offloaded
but the input is quantized, you will get an offloaded MM8.
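A minimal sketch of that decision, assuming attribute and helper names for illustration only (weight_function, force_cast_weights, and forward_comfy_cast_weights are mentioned above; the fp8 check and the enable flag are assumptions):

```python
import torch

def _is_fp8(t: torch.Tensor) -> bool:
    # Treat fp8 storage as "quantized" for the purposes of this sketch.
    return t.dtype in (torch.float8_e4m3fn, torch.float8_e5m2)

def fp8_mm_forward(layer, x: torch.Tensor, fp8_mm_enabled: bool = True) -> torch.Tensor:
    # Combine the exclusions listed above; if any of them holds, skip FP8MM.
    skip_fp8_mm = (
        bool(getattr(layer, "weight_function", None))    # has a weight_function (e.g. LowVramPatch)
        or getattr(layer, "force_cast_weights", False)    # compute dtype override
        or not _is_fp8(layer.weight)                      # weight is not quantized
        or _is_fp8(x)                                     # input is already quantized
        or not fp8_mm_enabled                             # MM explicitly disabled for model/layer
    )
    if not skip_fp8_mm:
        # Quantize the input so the matmul can still run in fp8 even when
        # comfy_cast_weights is set (e.g. for async offload).
        x = x.to(torch.float8_e4m3fn)
    # Hand the new input, quantized or not, off to the cast-weights forward path.
    return layer.forward_comfy_cast_weights(x)
```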