Mirror of https://github.com/comfyanonymous/ComfyUI.git, synced 2026-03-13 00:59:59 +00:00
feat(isolation): DynamicVRAM compatibility for process isolation
DynamicVRAM's on-demand model loading/offloading conflicted with process isolation in three ways: RPC tensor transport stalls from mid-call GPU offload, race conditions between the model lifecycle and active RPC operations, and false-positive memory-leak detection from changed finalizer patterns.

- Marshal CUDA tensors to CPU before RPC transport for dynamic models
- Add operation state tracking + quiescence waits at workflow boundaries
- Distinguish proxy reference release from actual leaks in cleanup_models_gc
- Fix init order: DynamicVRAM must initialize before isolation proxies
- Add RPC timeouts to prevent indefinite hangs on model unavailability
- Prevent proxy-of-proxy chains from DynamicVRAM model reload cycles
- Add torch.device/torch.dtype serializers for new DynamicVRAM RPC paths
- Guard isolation overhead so non-isolated workflows are unaffected
- Migrate env var to PYISOLATE_CHILD
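The "operation state tracking + quiescence waits" item above can be sketched with stdlib threading primitives. This is a minimal illustrative sketch, not ComfyUI's actual implementation; the class and method names (`OperationTracker`, `begin`, `end`, `wait_quiescent`) are hypothetical:

```python
import threading

class OperationTracker:
    """Counts in-flight RPC operations so a model offload can wait
    until the model is quiescent (no RPC call is mid-flight)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._idle = threading.Condition(self._lock)
        self._active = 0

    def begin(self):
        # Called when an RPC operation starts touching a dynamic model.
        with self._lock:
            self._active += 1

    def end(self):
        # Called when the operation finishes; wake waiters once idle.
        with self._lock:
            self._active -= 1
            if self._active == 0:
                self._idle.notify_all()

    def wait_quiescent(self, timeout=None):
        # Called at a workflow boundary before offloading the model.
        # Returns False if the timeout expired with operations still active,
        # True once no operations are in flight.
        with self._lock:
            return self._idle.wait_for(lambda: self._active == 0,
                                       timeout=timeout)
```

Pairing `begin`/`end` in a context manager (or a `try/finally`) would keep the counter correct even when an RPC call raises; the timeout on `wait_quiescent` mirrors the commit's goal of never hanging indefinitely on model unavailability.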
This commit changes one file:
- .pyisolate_venvs/ComfyUI-IsolationToolkit/cache/node_info.json (vendored, new file, 93028; diff suppressed because it is too large)