* Implement seek and read for pins
Sourcing pins from an mmap is bad because it is a CPU->CPU copy that
fully buffers the same data twice. Instead, use seek and read, which
avoids the mmap buffering while usually being a faster read in the
first place (no mmap faulting, etc.).
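A minimal sketch of the idea, with a hypothetical helper whose layout
arguments (offset/nbytes) would come from e.g. the safetensors header:

```python
import torch

def load_pin_via_read(path, offset, nbytes, dtype, shape):
    # Hypothetical helper: read the tensor's byte range straight from
    # the file into a pinned staging buffer, bypassing the mmap.
    # pin_memory=True needs CUDA; the pin makes the later H2D copy fast.
    buf = torch.empty(nbytes, dtype=torch.uint8, pin_memory=True)
    with open(path, "rb", buffering=0) as f:
        f.seek(offset)           # jump to the tensor's payload
        f.readinto(buf.numpy())  # one CPU read, no mmap page faults
    return buf.view(dtype).reshape(shape)
```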
* pinned_memory: Use Aimdo pinner
The aimdo pinner bypasses PyTorch's CPU allocator, which can leak
Windows commit charge.
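The aimdo pinner's actual API isn't shown here; as a sketch of the
underlying technique, an existing CPU tensor can be page-locked with
CUDA directly via cudaHostRegister, so PyTorch's CPU caching allocator
never owns the pinned pages:

```python
import torch

def pin_in_place(t: torch.Tensor) -> torch.Tensor:
    # Sketch: page-lock the pages backing an existing CPU tensor with
    # CUDA directly, rather than allocating through PyTorch's CPU
    # caching allocator (where the Windows commit-charge leak lives).
    torch.cuda.check_error(
        torch.cuda.cudart().cudaHostRegister(
            t.data_ptr(), t.numel() * t.element_size(), 0
        )
    )
    assert t.is_pinned()
    return t
```

cudaHostUnregister would undo this when the pin is released.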
* ops: bypass init() of weight for embedding layer
This similarly consumes a large commit charge, especially for TEs. It
can cause a permanently leaked commit charge, which can destabilize
systems close to the commit ceiling and generally confuses the RAM
stats.
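comfy.ops already disables weight init for layer classes by making
reset_parameters a no-op; a sketch of the same trick applied to
Embedding (illustrative, not necessarily the exact class from the PR):

```python
import torch.nn as nn

class Embedding(nn.Embedding):
    # Sketch: skip nn.Embedding's default normal_() init. The init
    # writes every page of the weight, committing all of it, only for
    # the checkpoint load to overwrite it moments later.
    def reset_parameters(self):
        return None
```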
* model_patcher: implement pinned memory counter
Implement a pinned memory counter for better accounting of the volume
of memory held by pins.
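A sketch of what such a counter could look like (names hypothetical):

```python
import threading
import torch

class PinnedCounter:
    # Sketch: running total of bytes held by pinned buffers, so
    # model_patcher can report and budget pinned memory accurately.
    def __init__(self):
        self._lock = threading.Lock()
        self.total_bytes = 0

    def add(self, t: torch.Tensor):
        with self._lock:
            self.total_bytes += t.numel() * t.element_size()

    def remove(self, t: torch.Tensor):
        with self._lock:
            self.total_bytes -= t.numel() * t.element_size()
```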
* implement touch accounting
Implement accounting of touches on mmapped tensors, i.e. how many
bytes have been faulted into residency.
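A sketch of the accounting, assuming a per-file tally that the later
free-quota math can consult (names hypothetical):

```python
from collections import defaultdict

# Hypothetical: bytes of each mmapped file that tensors have touched
# (and therefore faulted into residency).
touched_bytes = defaultdict(int)

def note_touch(path, tensor):
    touched_bytes[path] += tensor.numel() * tensor.element_size()
```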
* mm+mp: add residency mmap getter
* utils: use the aimdo mmap to load sft files
* model_management: Implement tighter RAM pressure semantics
Implement pressure release on entire mmaps, since Windows performs
better when mmaps are fully unloaded and model loads land in fully
unallocated RAM.
Make freeing for pins a completely separate concept. Now that pins are
loaded directly from the original file and don't touch the mmap,
tighten the pin-freeing budget to just the current loaded model minus
what you have left over. This still over-frees pins, but it's a lot
better than before.
After the pins are freed with that algorithm, bounce entire mmaps to
free RAM based on what the model needs, deducting any known
resident-in-mmap tensor bytes from the free quota to keep it as tight
as possible.
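A sketch of the two-pass freeing described above; every helper here is
hypothetical, it only shows the budget flow:

```python
def free_for_model(model_bytes: int, free_ram: int) -> None:
    # Budget is only what the incoming model still lacks.
    need = max(0, model_bytes - free_ram)

    # Pass 1: free pins. Pins reload via seek/read from the original
    # file, so this no longer interacts with the mmaps.
    for pin in list_pins():            # hypothetical iterator
        if need <= 0:
            break
        need -= pin.nbytes
        pin.release()

    # Pass 2: bounce whole mmaps. Unmapping frees an mmap's resident
    # bytes, so credit those against the remaining quota to keep the
    # freeing as tight as possible.
    for m in list_mmaps():             # hypothetical iterator
        if need <= 0:
            break
        need -= resident_bytes(m)      # hypothetical residency getter
        m.unmap()
```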
* comfy-aimdo 0.2.11
* mm: Implement file_slice path for QT
* ruff
* ops: put meta-tensors in place to allow custom nodes to check geometry
Integrate comfy-aimdo 0.2, which takes a different approach to
installing the memory allocator hook. Instead of using the complicated
and buggy PyTorch MemPool + CUDAPluggableAllocator, CUDA is hooked
directly, making the process much more transparent to both comfy and
PyTorch. As far as PyTorch knows, aimdo no longer exists; it just
operates behind the scenes.
Remove all the MemPool setup for dynamic_vram and bump the comfy-aimdo
version. Remove the allocator object from model_management and demote
its use as an enablement check to a boolean flag.
Comfy-aimdo 0.2 also supports the PyTorch CUDA async allocator, so
remove the dynamic_vram-based forced disablement of cuda_malloc and go
back to selecting the allocator from command-line input, as before.
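For reference, PyTorch selects its CUDA allocator backend from the
PYTORCH_CUDA_ALLOC_CONF environment variable, which is how
command-line-driven selection is typically wired up (a sketch, not
ComfyUI's actual cuda_malloc.py code):

```python
import os

def select_allocator(use_cuda_malloc_async: bool) -> None:
    # Sketch: must run before torch initializes CUDA.
    backend = "cudaMallocAsync" if use_cuda_malloc_async else "native"
    conf = os.environ.get("PYTORCH_CUDA_ALLOC_CONF", "")
    parts = [p for p in conf.split(",") if p and not p.startswith("backend")]
    parts.append(f"backend:{backend}")
    os.environ["PYTORCH_CUDA_ALLOC_CONF"] = ",".join(parts)
```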