feat: add VRAM requirement estimation for workflow templates

Add a frontend heuristic that estimates peak VRAM consumption by
detecting model-loading nodes in the workflow graph and summing
approximate memory costs per model category (checkpoints, LoRAs,
ControlNets, VAEs, etc.). The estimate counts only the largest base
model (checkpoint or diffusion_model), plus all co-resident models and
a flat runtime overhead; additional base models are ignored since
ComfyUI offloads them.
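
A minimal sketch of that heuristic is below; the per-category costs,
node-type matching, and names (MODEL_COSTS_GB, categoryForNode,
estimateWorkflowVramGB) are illustrative assumptions, not the actual
identifiers in this commit.

    // Illustrative sketch only; constants and node-type checks are assumed.
    type ModelCategory =
      | 'checkpoint'
      | 'diffusion_model'
      | 'lora'
      | 'controlnet'
      | 'vae'

    // Approximate resident cost per model category, in GB (assumed values).
    const MODEL_COSTS_GB: Record<ModelCategory, number> = {
      checkpoint: 6.5,
      diffusion_model: 6.5,
      lora: 0.3,
      controlnet: 1.5,
      vae: 0.3
    }

    // Flat overhead for activations and runtime buffers (assumed value).
    const RUNTIME_OVERHEAD_GB = 2

    interface WorkflowNode {
      type: string
    }

    // Map a node's class name to a model category (hypothetical matching).
    function categoryForNode(node: WorkflowNode): ModelCategory | null {
      if (node.type.includes('CheckpointLoader')) return 'checkpoint'
      if (node.type.includes('UNETLoader')) return 'diffusion_model'
      if (node.type.includes('LoraLoader')) return 'lora'
      if (node.type.includes('ControlNetLoader')) return 'controlnet'
      if (node.type.includes('VAELoader')) return 'vae'
      return null
    }

    function estimateWorkflowVramGB(nodes: WorkflowNode[]): number {
      let largestBaseModelGB = 0
      let coResidentGB = 0
      for (const node of nodes) {
        const category = categoryForNode(node)
        if (category === null) continue
        const costGB = MODEL_COSTS_GB[category]
        if (category === 'checkpoint' || category === 'diffusion_model') {
          // Only the largest base model counts; ComfyUI offloads the rest.
          largestBaseModelGB = Math.max(largestBaseModelGB, costGB)
        } else {
          // LoRAs, ControlNets, VAEs, etc. stay resident alongside it.
          coResidentGB += costGB
        }
      }
      return largestBaseModelGB + coResidentGB + RUNTIME_OVERHEAD_GB
    }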

Surfaces the estimate in three places:

1. Template publishing wizard (metadata step) — auto-detects VRAM on
   mount using the same graph traversal pattern as custom node
   detection, with a manual GB override input for fine-tuning (see the
   sketch after this list).

2. Template marketplace cards — displays a VRAM badge in the top-left
   corner of template thumbnails using the existing SquareChip and
   CardTop slot infrastructure.

3. Workflow editor — floating indicator in the bottom-right of the
   graph canvas showing estimated VRAM for the current workflow.
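
A hypothetical wiring for item 1; the composable usage, helper names,
and field names below are assumptions, not the real API:

    import { computed, onMounted, ref } from 'vue'

    // Assumed helpers: the same graph traversal used for custom node
    // detection, plus the estimator sketched earlier. Declared only so
    // this sketch type-checks.
    declare function collectWorkflowNodes(): { type: string }[]
    declare function estimateWorkflowVramGB(nodes: { type: string }[]): number

    const estimatedVramGB = ref<number | null>(null) // auto-detected on mount
    const manualVramGB = ref<number | null>(null) // user-entered GB override

    onMounted(() => {
      estimatedVramGB.value = estimateWorkflowVramGB(collectWorkflowNodes())
    })

    // The metadata step publishes the override when present, else the estimate.
    const publishedVramGB = computed(
      () => manualVramGB.value ?? estimatedVramGB.value
    )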

Bumps version to 1.46.0.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: John Haugeland
Date:   2026-02-24 17:09:32 -08:00
Commit: bf63a5cc71 (parent 8361122586)

10 changed files with 489 additions and 3 deletions

@@ -267,6 +267,16 @@
         />
       </div>
     </template>
+    <template v-if="template.vram" #top-left>
+      <SquareChip
+        :label="formatSize(template.vram)"
+        :title="t('templateWorkflows.vramEstimateTooltip')"
+      >
+        <template #icon>
+          <i class="icon-[lucide--cpu] h-3 w-3" />
+        </template>
+      </SquareChip>
+    </template>
     <template #bottom-right>
       <template v-if="template.tags && template.tags.length > 0">
         <SquareChip
@@ -387,6 +397,7 @@
 <script setup lang="ts">
 import { useAsyncState } from '@vueuse/core'
+import { formatSize } from '@/utils/formatUtil'
 import ProgressSpinner from 'primevue/progressspinner'
 import { computed, onBeforeUnmount, onMounted, provide, ref, watch } from 'vue'
 import { useI18n } from 'vue-i18n'