Compare commits


110 Commits

Author SHA1 Message Date
GitHub Action
bc53c5a065 [automated] Apply ESLint and Oxfmt fixes 2026-04-06 12:43:21 +00:00
dante01yoon
e4180514dc fix: resolve hub CI failures
- og.ts: use SatoriNode type instead of unknown for satori() param
- Pin all GitHub Action refs to commit SHAs (validate-pins)
- Add vercel to knip ignoreBinaries (hub-preview-cron uses npx vercel)
2026-04-06 21:39:44 +09:00
dante01yoon
d92acd81b6 feat: add hub CI/CD workflows
Migrate all 5 site-related GitHub Actions from the workflow_templates repo.

- hub-ci.yaml: lint, astro check, build verification, SEO audit
- hub-deploy.yaml: production deploy to Vercel with template sync + AI
- hub-preview.yaml: PR preview deploy to Vercel
- hub-cron-rebuild.yaml: 15min production rebuild for UGC content
- hub-preview-cron.yaml: 15min preview rebuild with PR discovery matrix

Template data is fetched via sparse checkout of Comfy-Org/workflow_templates.
Reuses .github/actions/setup-frontend (no separate setup action needed).
2026-04-06 21:20:01 +09:00
dante01yoon
bbd0a6b201 feat: migrate workflow template site as apps/hub
Migrate workflow_templates/site into the frontend monorepo as apps/hub
so the hub can use @comfyorg/design-system and shared packages.

Changes to existing files:
- pnpm-workspace.yaml: add @astrojs/sitemap, @astrojs/vercel, lucide-vue-next
- eslint.config.ts: add hub ignores and i18n/import rule overrides
- .oxlintrc.json: add hub scripts to ignore patterns
- knip.config.ts: add hub workspace config

apps/hub adaptations from source:
- Replace local cn() with @comfyorg/tailwind-utils (19 files)
- Integrate @comfyorg/design-system/css/base.css in global.css
- Make TEMPLATES_DIR configurable via HUB_TEMPLATES_DIR env var
- Add HUB_SKIP_SYNC flag for builds without template data
- Remove Vite 8-incompatible rollupOptions.output.manualChunks
- Fix stylelint violations (modern color notation, number precision)
- Gitignore generated content (thumbnails, synced templates, AI cache)
2026-04-06 20:53:13 +09:00
Comfy Org PR Bot
6c1bf7a3cf 1.43.12 (#10782)
Patch version increment to 1.43.12

**Base branch:** `main`

---------

Co-authored-by: christian-byrne <72887196+christian-byrne@users.noreply.github.com>
Co-authored-by: github-actions <github-actions@github.com>
2026-04-04 23:48:20 -07:00
Benjamin Lu
b61e15293c test: address review comments on new browser tests (#10852) 2026-04-04 19:26:55 -07:00
Dante
899660b135 test: add queue overlay and workflow search E2E tests (#10802)
## Summary
- Add queue overlay E2E tests: toggle, filter tabs, completed filter,
close (5 tests)
- Add workflow sidebar search E2E tests: search input, filter by name,
clear, no matches (4 tests)
- Fix AssetsHelper mock timestamps from seconds to milliseconds
(matching backend's `int(time.time() * 1000)`)
- Type AssetsHelper response pagination with `JobsListResponse` from
`@comfyorg/ingest-types`
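The timestamp fix above boils down to a unit mismatch; a minimal illustration (the helper name is hypothetical, but the scaling matches the backend's `int(time.time() * 1000)`):

```typescript
// The backend emits epoch *milliseconds*; mock data built from a
// seconds-based clock is off by a factor of 1000.
const toBackendTimestamp = (epochSeconds: number): number =>
  Math.floor(epochSeconds * 1000)

console.log(toBackendTimestamp(1_743_900_000)) // 1743900000000
```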

## Test plan
- [ ] CI passes all Playwright shards
- [ ] `pnpm typecheck:browser` passes
- [ ] `pnpm lint` passes

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10802-test-add-queue-overlay-and-workflow-search-E2E-tests-3356d73d365081018df8c7061c0854ee)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Benjamin Lu <benjaminlu1107@gmail.com>
2026-04-04 13:15:17 -07:00
Benjamin Lu
aeafff1ead fix: virtualize cloud job queue history list (#10592)
## Summary

Virtualize the shared job queue history list so opening the jobs panel
does not eagerly mount the full history on cloud.

## Changes

- **What**: Virtualize the shared queue history list used by the overlay
and sidebar, flatten date headers plus job rows into a single virtual
stream, and preserve hover/menu behavior with updated queue list tests.
- **Why `@tanstack/vue-virtual` instead of Reka virtualizers**: the
installed `reka-ui@2.5.0` does not expose a generic list virtualizer. It
only exposes `ListboxVirtualizer`, `ComboboxVirtualizer`, and
`TreeVirtualizer`, and those components inject `ListboxRoot`/`TreeRoot`
context and carry listbox or tree selection/keyboard semantics. The job
history UI is a flat grouped action list, not a selectable listbox or
navigable tree, so this uses the same TanStack virtualizer layer
directly without forcing the wrong semantics onto the component.
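The "flatten date headers plus job rows into a single virtual stream" step can be sketched as follows (types and names are illustrative, not the actual component API):

```typescript
// Each virtual row is either a date header or a job row, so one flat
// array index space covers both and the virtualizer never needs to
// know about grouping.
type Job = { id: string; date: string }
type VirtualRow =
  | { kind: 'header'; date: string }
  | { kind: 'job'; job: Job }

function flattenHistory(groups: Map<string, Job[]>): VirtualRow[] {
  const rows: VirtualRow[] = []
  for (const [date, jobs] of groups) {
    rows.push({ kind: 'header', date })
    for (const job of jobs) rows.push({ kind: 'job', job })
  }
  return rows
}
```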

## Review Focus

Please verify the virtual row sizing and inter-group spacing behavior
across date headers and the last row in each group.

> [!TIP]
> Diff reads much more cleanly in VS Code's unified view with "show
leading/trailing whitespace differences" enabled

Linear: COM-304

https://tanstack.com/virtual/latest/docs/api/virtualizer

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10592-fix-virtualize-cloud-job-queue-history-list-3306d73d3650819d956bf4b2d8b59a40)
by [Unito](https://www.unito.io)
2026-04-04 12:33:47 -07:00
Johnpaul Chiwetelu
f4fb7a458e test: add browser tests for selection toolbox button actions (#10764) 2026-04-04 14:03:50 +01:00
Yourz
71a3bd92b4 fix: add delete/bookmark actions for blueprints in V2 node library sidebar (#10827)
## Summary

Add missing delete and bookmark actions for user blueprints in the V2
node library sidebar, fixing parity with the V1 sidebar.

## Changes

- **What**: 
- Add delete button (inline + context menu) for user blueprints in
`TreeExplorerV2Node` and `TreeExplorerV2`
- Extract `isUserBlueprint()` helper in `subgraphStore` for DRY usage
across V1/V2 sidebars


![Kapture 2026-04-03 at 00 12 09](https://github.com/user-attachments/assets/3f1f3f41-ed2b-4250-953f-511d39e54e45)

## Review Focus

- `isUserBlueprint` consolidates logic previously duplicated between
`NodeTreeLeaf` and the new V2 components
- Context menu guard `contextMenuNode?.data` prevents showing empty
menus
- Folder `@contextmenu` handler clears stale `contextMenuNode` to
prevent wrong actions

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10827-fix-add-delete-bookmark-actions-for-blueprints-in-V2-node-library-sidebar-3366d73d36508111afd2c2c7d8ff0220)
by [Unito](https://www.unito.io)
2026-04-04 20:14:32 +08:00
Dante
17d2870ef4 test(modelLibrary): add E2E tests for model library sidebar tab (#10789)
## Summary
- Add `ModelLibraryHelper` mock helper for `/experiment/models` and
`/view_metadata` endpoints
- Add `ModelLibrarySidebarTab` page object fixture with search, folder,
and leaf locators
- Add 11 E2E test scenarios covering tab open/close, folder display,
folder expansion, search with debounce, refresh, load all folders, and
empty state

## Test plan
- [ ] CI passes all Playwright shards
- [ ] `pnpm typecheck:browser` passes
- [ ] `pnpm lint` passes

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10789-test-modelLibrary-add-E2E-tests-for-model-library-sidebar-tab-3356d73d365081b49a7ed752512164da)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-03 08:04:46 +09:00
Dante
7a68943839 test(assets): strengthen pagination E2E assertions (#10773)
## Summary

The existing pagination smoke test only asserts `count >= 1`, which
passes even if the sidebar eagerly loads all items or ignores page
boundaries entirely.

### What changed

**Before:**
- Created 30 mock jobs (less than BATCH_SIZE of 200) — all loaded in one
request, `has_more: false`
- Asserted `count >= 1` — redundant with the grid-render smoke test

**After — two targeted assertions:**

1. **Initial batch < total**: Mock 250 jobs (> BATCH_SIZE 200). First
`/api/jobs?limit=200&offset=0` returns 200 items with `has_more: true`.
Assert `initialCount < 250`.

2. **Scroll triggers second fetch**: Scroll `VirtualGrid` container to
bottom → `approach-end` event → `handleApproachEnd()` →
`assetsStore.loadMoreHistory()` → `/api/jobs?limit=200&offset=200`
fetches remaining 50. Assert `finalCount > initialCount` via
`expect.poll()`.
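The pagination boundary these assertions rely on can be modeled in a few lines (a sketch, not the actual backend; only `BATCH_SIZE` and the 250-job count come from the test):

```typescript
// With BATCH_SIZE 200 and 250 mock jobs, the first page returns 200
// items with hasMore true, and the offset=200 page returns the
// remaining 50 with hasMore false.
const BATCH_SIZE = 200

function pageOf(total: number, offset: number) {
  const items = Math.max(0, Math.min(BATCH_SIZE, total - offset))
  return { items, hasMore: offset + items < total }
}

console.log(pageOf(250, 0))   // { items: 200, hasMore: true }
console.log(pageOf(250, 200)) // { items: 50, hasMore: false }
```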

### Types
Mock data uses `RawJobListItem` from
`src/platform/remote/comfyui/jobs/jobTypes.ts` (Zod-inferred). This is
the correct source-of-truth per `docs/guidance/playwright.md` —
`/api/jobs` is a Python backend endpoint not covered by
`@comfyorg/ingest-types`.

## Test plan
- [ ] CI E2E tests pass
- [ ] `initial batch is smaller than total job count` validates
pagination boundary
- [ ] `scrolling to the end loads additional items` triggers actual
second API call

Fixes #10649

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-03 08:00:50 +09:00
Dante
8912f4159a test: add E2E tests for workflow tab operations (#10796)
## Summary

Add Playwright E2E tests for workflow tab interactions in the topbar.

## Changes

- **What**: New test file
`browser_tests/tests/topbar/workflowTabs.spec.ts` with 5 tests covering
default tab visibility, tab creation, switching, closing, and context
menu. Added `newWorkflowButton`, `getTab()`, and `getActiveTab()`
locators to `Topbar.ts` fixture.

## Review Focus

Tests are focused on tab UI interactions only (sidebar workflow
operations are already covered in `workflows.spec.ts`). Context menu
assertion uses Reka UI's `data-reka-context-menu-content` attribute.

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10796-test-add-E2E-tests-for-workflow-tab-operations-3356d73d36508170a657ef816e23b71c)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-03 07:07:31 +09:00
Dante
794b986954 test: add E2E tests for Node Library V2 sidebar (#10798)
## Summary

- Adds Playwright E2E tests for the Node Library V2 sidebar tab
(`Comfy.NodeLibrary.NewDesign: true`)
- Adds `NodeLibrarySidebarTabV2` fixture class with V2-specific locators
(search input, tab buttons, node cards)
- Exposes `menu.nodeLibraryTabV2` on `ComfyPage` for test access
- Tests cover: tab visibility, default tab selection, tab switching,
folder expansion, search filtering, and sort button presence

## Test plan

- [ ] Run `pnpm test:browser:local -- --grep "Node library sidebar V2"`
against a running ComfyUI server with the V2 node library
- [ ] Verify tests pass in CI

Fixes #9079

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10798-test-add-E2E-tests-for-Node-Library-V2-sidebar-3356d73d36508185a11feaf95e32225b)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-03 07:04:56 +09:00
Terry Jia
a7b3515692 chore: add @jtydhr88 as code owner for GLSL renderer (#10742)
## Summary
Add myself as GLSL code owner.

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10742-chore-add-jtydhr88-as-code-owner-for-GLSL-renderer-3336d73d3650816f84deebf3161aee7a)
by [Unito](https://www.unito.io)
2026-04-02 13:09:58 -07:00
Johnpaul Chiwetelu
26f3f11a3e test: replace raw CSS selectors with TestIds in context menu spec (#10760)
## Summary
- Replace raw CSS selectors (`.lg-node-header`, `.p-contextmenu`,
`.node-title-editor input`, `.image-preview img`) with centralized
`TestIds` constants and existing fixtures in the context menu E2E spec
- Add `data-testid="title-editor-input"` to TitleEditor overlay for
stable selector targeting
- Use `NodeLibrarySidebarTab` fixture for node library sidebar
interaction

## Changes
- `browser_tests/fixtures/selectors.ts`: add `pinIndicator`,
`innerWrapper`, `titleEditorInput`, `mainImage` to `TestIds.node`
- `browser_tests/fixtures/utils/vueNodeFixtures.ts`: add `pinIndicator`
getter
- `src/components/graph/TitleEditor.vue`: add `data-testid` via
`input-attrs`
- `browser_tests/.../contextMenu.spec.ts`: replace all raw selectors
with TestIds/fixtures

## Test plan
- [x] All 23 context menu E2E tests pass locally
- [x] Typecheck passes
- [x] Lint passes

Fixes #10750
Fixes #10749

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10760-test-replace-raw-CSS-selectors-with-TestIds-in-context-menu-spec-3336d73d3650818790c0e32e0b6f1e98)
by [Unito](https://www.unito.io)
2026-04-02 21:04:50 +01:00
jaeone94
d9466947b2 feat: detect and resolve missing media inputs in error tab (#10309)
## Summary

Add detection and resolution UI for missing image/video/audio inputs
(LoadImage, LoadVideo, LoadAudio nodes) in the Errors tab, mirroring the
existing missing model pipeline.

## Changes

- **What**: New `src/platform/missingMedia/` module — scan pipeline
detects missing media files on workflow load (sync for OSS, async for
cloud), surfaces them in the error tab with upload dropzone, thumbnail
library select, and 2-step confirm flow
- **Detection**: `scanAllMediaCandidates()` checks combo widget values
against options; cloud path defers to `verifyCloudMediaCandidates()` via
`assetsStore.updateInputs()`
- **UI**: `MissingMediaCard` groups by media type; `MissingMediaRow`
shows node name (single) or filename+count (multiple), upload dropzone
with drag & drop, `MissingMediaLibrarySelect` with image/video
thumbnails
- **Resolution**: Upload via `/upload/image` API or select from library
→ status card → checkmark confirm → widget value applied, item removed
from error list
- **Integration**: `executionErrorStore` aggregates into
`hasAnyError`/`totalErrorCount`; `useNodeErrorFlagSync` flags nodes on
canvas; `useErrorGroups` renders in error tab
- **Shared**: Extract `ACCEPTED_IMAGE_TYPES`/`ACCEPTED_VIDEO_TYPES` to
`src/utils/mediaUploadUtil.ts`; extract `resolveComboValues` to
`src/utils/litegraphUtil.ts` (shared across missingMedia + missingModel
scan)
- **Reverse clearing**: Widget value changes on nodes auto-remove
corresponding missing media errors (via `clearWidgetRelatedErrors`)
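The core detection idea — a combo widget's value is "missing" when it is not among the widget's options — reduces to a simple filter. This is an illustrative sketch; the real `scanAllMediaCandidates()` handles node types, cloud deferral, and grouping:

```typescript
// A LoadImage/LoadVideo/LoadAudio combo widget whose current value is
// absent from its option list references a file the server no longer
// has — a missing-media candidate.
type ComboWidget = { nodeId: number; value: string; options: string[] }

function findMissingCandidates(widgets: ComboWidget[]): ComboWidget[] {
  return widgets.filter((w) => !w.options.includes(w.value))
}
```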

## Testing

### Unit tests (22 tests)
- `missingMediaScan.test.ts` (12): groupCandidatesByName,
groupCandidatesByMediaType (ordering, multi-name),
verifyCloudMediaCandidates (missing/present, abort before/after
updateInputs, already resolved true/false, no-pending skip, updateInputs
spy)
- `missingMediaStore.test.ts` (10): setMissingMedia, clearMissingMedia
(full lifecycle with interaction state), missingMediaNodeIds,
hasMissingMediaOnNode, removeMissingMediaByWidget
(match/no-match/last-entry), createVerificationAbortController

### E2E tests (10 scenarios in `missingMedia.spec.ts`)
- Detection: error overlay shown, Missing Inputs group in errors tab,
correct row count, dropzone + library select visibility, no false
positive for valid media
- Upload flow: file picker → uploading status card → confirm → row
removed
- Library select: dropdown → selected status card → confirm → row
removed
- Cancel: pending selection → returns to upload/library UI
- All resolved: Missing Inputs group disappears
- Locate node: canvas pans to missing media node

## Review Focus

- Cloud verification path: `verifyCloudMediaCandidates` compares widget
value against `asset_hash` — implicit contract
- 2-step confirm mirrors missing model pattern (`pendingSelection` →
confirm/cancel)
- Event propagation guard on dropzone (`@drop.prevent.stop`) to prevent
canvas LoadImage node creation
- `clearAllErrors()` intentionally does NOT clear missing media (same as
missing models — preserves pending repairs)
- `runMissingMediaPipeline` is now `async` and `await`-ed, matching
model pipeline

## Test plan

- [x] OSS: load workflow with LoadImage referencing non-existent file →
error tab shows it
- [x] Upload file via dropzone → status card shows "Uploaded" → confirm
→ widget updated, error removed
- [x] Select from library with thumbnail preview → confirm → widget
updated, error removed
- [x] Cancel pending selection → returns to upload/library UI
- [x] Load workflow with valid images → no false positives
- [x] Click locate-node → canvas navigates to the node
- [x] Multiple nodes referencing different missing files → correct row
count
- [x] Widget value change on node → missing media error auto-removed

## Screenshots


https://github.com/user-attachments/assets/631c0cb0-9706-4db2-8615-f24a4c3fe27d
2026-04-01 17:59:02 +09:00
jaeone94
bb96e3c95c fix: resolve subgraph promoted widget panel regressions (#10648)
## Summary

Fix four bugs in the subgraph promoted widget panel where linked
promotions were not distinguished from independent ones, causing
incorrect UI state in both the SubgraphEditor (Settings) panel and the
Parameters tab WidgetActions menu.

## Changes

- **What**: Add `isLinkedPromotion` helper to correctly identify widgets
driven by subgraph input connections. Fix `disambiguatingSourceNodeId`
lookup mismatch that broke `isWidgetShownOnParents` and
`handleHideInput` for non-nested promoted widgets. Replace fragile CSS
icon selectors with `data-testid` attributes.

## Bugs fixed

Companion fix PR for #10502 (red-green test PR). All 4 E2E tests from
#10502 now pass:

| Bug | Root cause | Fix |
|-----|-----------|-----|
| Linked promoted widgets have hide toggle enabled | `SubgraphEditor`
only checked `node.id === -1` (physical) — linked promotions from
subgraph input connections were not detected | Added `isLinkedPromotion`
helper that checks `input._widget` bindings; `SubgraphNodeWidget`
`:is-physical` prop now covers both physical and linked cases |
| Linked promoted widgets show eye icon instead of link icon | Same root
cause as above — `isPhysical` prop was only true for `node.id === -1` |
Extended the `:is-physical` condition to include `isLinkedPromotion`
check |
| Widget labels show raw names instead of renamed values |
`SubgraphEditor` passed `widget.name` instead of `widget.label \|\|
widget.name` | Changed `:widget-name` binding to prefer `widget.label` |
| WidgetActions menu shows Hide/Show for linked promotions |
`v-if="hasParents"` didn't exclude linked promotions | Added
`canToggleVisibility` computed that combines `hasParents` with
`!isLinked` check via `isLinkedPromotion` |

### Additional bugs discovered and fixed

| Bug | Root cause | Fix |
|-----|-----------|-----|
| "Show input" always displayed instead of "Hide input" for promoted
widgets | `SectionWidgets.isWidgetShownOnParents` used
`getSourceNodeId(widget)` which falls back to `widget.sourceNodeId` when
`disambiguatingSourceNodeId` is undefined — this mismatches the
promotion store key (`undefined`) | Changed to
`widget.disambiguatingSourceNodeId` directly |
| "Hide input" click does nothing | `WidgetActions.handleHideInput` used
`getSourceNodeId(widget)` for the same reason — `demote()` couldn't find
the entry to remove | Same fix — use `widget.disambiguatingSourceNodeId`
directly |
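The `isLinkedPromotion` check described above can be reduced to a small model (illustrative only — the real helper reads WeakRef-backed `input._widget` bindings; this mock uses plain references):

```typescript
// A promoted widget is "linked" when some subgraph input's bound
// widget is that exact widget — i.e. the promotion is driven by an
// input connection rather than being independent.
type Widget = { name: string }
type SubgraphInput = { _widget?: Widget }

function isLinkedPromotion(
  widget: Widget,
  inputs: SubgraphInput[]
): boolean {
  return inputs.some((input) => input._widget === widget)
}
```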

## Tests added

### E2E (Playwright) —
`browser_tests/tests/subgraphPromotedWidgetPanel.spec.ts`

| Test | What it verifies |
|------|-----------------|
| linked promoted widgets have hide toggle disabled | All toggle buttons
in SubgraphEditor shown section are disabled for linked widgets (covers
1-level and 2-level nested promotions via `subgraph-nested-promotion`
fixture) |
| linked promoted widgets show link icon instead of eye icon | Link
icons appear for linked widgets, no eye icons present |
| widget labels display renamed values instead of raw names |
`widget.label` is displayed when set, not `widget.name` |
| linked promoted widget menu should not show Hide/Show input |
Three-dot menu on Parameters tab omits Hide/Show options for linked
promotions, Rename is still available |

### Unit (Vitest) — `src/core/graph/subgraph/promotionUtils.test.ts`

7 tests covering `isLinkedPromotion`: basic matching, negative cases,
nested subgraph with `disambiguatingSourceNodeId`, multiple inputs, and
mixed linked/independent state.

### Unit (Vitest) —
`src/components/rightSidePanel/parameters/WidgetActions.test.ts`

- Added `isSubgraphNode: () => false` to mock nodes to prevent crash
from new `isLinked` computed

## Review Focus

- `isLinkedPromotion` reads `input._widget` (WeakRef-backed,
non-reactive) directly in the template. This is intentional — `_widget`
bindings are set during subgraph initialization before the user opens
the panel, so stale reads don't occur in practice. A computed-based
approach was attempted but reverted because `_widget` changes cannot
trigger Vue reactivity.
- `getSourceNodeId` removal in `SectionWidgets` and `WidgetActions` is
intentional — the old fallback (`?? widget.sourceNodeId`) caused key
mismatches with the promotion store for non-nested widgets.

## Screenshots
Before
<img width="723" height="935" alt="image"
src="https://github.com/user-attachments/assets/09862578-a0d1-45b4-929c-f22f7494ebe2"
/>

After
<img width="999" height="952" alt="image"
src="https://github.com/user-attachments/assets/ed8fe604-6b44-46b9-a315-6da31d6b405a"
/>
2026-04-01 17:10:30 +09:00
jaeone94
df42b7a2a8 fix: collapsed node connection link positions (#10641)
## Summary

Fix connection links rendering at wrong positions when nodes are
collapsed in Vue nodes mode.

## Changes

- **What**: Fall back to `clientPosToCanvasPos` for collapsed node slot
positioning since DOM-relative scale derivation is invalid when layout
store preserves expanded size. Clear stale `cachedOffset` on collapse
and defer sync when canvas is not yet initialized.
- 3 unit tests for collapsed node slot sync fallback
(clientPosToCanvasPos, cachedOffset clearing, canvas-not-initialized
deferral)
- 3 E2E tests for collapsed node link positions (within bounds, after
position change, after expand recovery)
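For intuition, the client-to-canvas fallback is the standard undo-pan-undo-zoom transform. This is a hedged sketch of the idea — the actual `clientPosToCanvasPos` lives in the app and its exact offset/scale model may differ:

```typescript
// Map a DOM client position into canvas space by undoing the canvas
// scale and offset. Deriving scale from a collapsed node's DOM rect is
// invalid (the layout store still holds the expanded size), which is
// why the fix falls back to a transform like this one.
function clientPosToCanvasPos(
  client: [number, number],
  offset: [number, number],
  scale: number
): [number, number] {
  return [client[0] / scale - offset[0], client[1] / scale - offset[1]]
}

clientPosToCanvasPos([200, 100], [10, 10], 2) // → [90, 40]
```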

## Review Focus

- `clientPosToCanvasPos` fallback is safe for collapsed nodes because
collapse is user-initiated (no loading-time transform desync risk that
#9121 originally fixed)
- `cachedOffset` clearing prevents stale expanded-state offsets during
collapsed node drag
- Regression from #9121 (DOM-relative scale) combined with #9680
(collapsed node ResizeObserver skip)

## Screenshots 
Before
<img width="1030" height="434" alt="image"
src="https://github.com/user-attachments/assets/2f8b8a1f-ed22-4588-ab62-72b89880e53f"
/>

After
<img width="1029" height="476" alt="image"
src="https://github.com/user-attachments/assets/52dbbf7c-61ed-465b-ae19-a9781513e7e8"
/>


┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10641-fix-collapsed-node-connection-link-positions-3316d73d365081f4aee3fecb92c83b91)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Alexander Brown <drjkl@comfy.org>
Co-authored-by: Alexander Brown <DrJKL0424@gmail.com>
2026-04-01 13:49:12 +09:00
Kelly Yang
4f3a5ae184 fix(load3d): fix squashed controls in 3D inspector side panel (#10768)
## Summary

Fixes squashed `input controls` (color picker, sliders, dropdowns) in
the 3D asset inspector side panel.

## Screenshots 

before
<img width="3012" height="1580" alt="image"
src="https://github.com/user-attachments/assets/edc6fadc-cdc5-4a4e-92e7-57faabfeb1a4"
/>

after
<img width="4172" height="2062" alt="image"
src="https://github.com/user-attachments/assets/766324ce-e8f7-43fc-899e-ae275f880e59"
/>

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10768-fix-load3d-fix-squashed-controls-in-3D-inspector-side-panel-3346d73d365081e8b438de8115180685)
by [Unito](https://www.unito.io)
2026-04-01 00:34:37 -04:00
Dante
c77c8a9476 test: migrate fromAny to fromPartial for type-checked test mocks (#10788)
## Summary
- Convert `fromAny` → `fromPartial` in 7 test files where object
literals or interfaces are passed
- `fromPartial` type-checks the provided fields, unlike `fromAny` which
bypasses all checking (same as `as unknown as`)
- Class-based types (`LGraphNode`, `LGraph`) remain `fromAny` due to
shoehorn's `PartialDeep` incompatibility with class constructors
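The difference can be illustrated with a stripped-down model — shoehorn's real `fromPartial`/`fromAny` and its `PartialDeep` type are richer than this sketch:

```typescript
// fromPartial deep-partially type-checks the fields you provide;
// fromAny — like `as unknown as` — accepts anything. The unchecked
// cast happens once, inside the helper.
type DeepPartial<T> = T extends object
  ? { [K in keyof T]?: DeepPartial<T[K]> }
  : T

const fromPartial = <T>(mock: DeepPartial<T>): T => mock as unknown as T
const fromAny = <T>(mock: unknown): T => mock as T

interface GraphNode { id: number; pos: { x: number; y: number } }

const good = fromPartial<GraphNode>({ pos: { x: 1 } }) // fields checked
// fromPartial<GraphNode>({ pos: { x: 'oops' } })      // would not compile
const loose = fromAny<GraphNode>('anything')           // bypasses checking

console.log(good.pos.x)    // 1
console.log(typeof loose)  // string
```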

## Changes
- **Pure conversions** (all `fromAny` → `fromPartial`):
`domWidgetZIndex`, `matchPromotedInput`, `promotionUtils`,
`subgraphNavigationStore`
- **Mixed** (some converted, some kept): `promotedWidgetView`,
`widgetUtil`
- **Cleanup**: `nodeOutputStore` type param normalization

Follows up on #10761.

## Test plan
- [x] `pnpm typecheck` passes
- [x] `pnpm vitest run` on all 7 changed files — 169 tests pass
- [x] `pnpm lint` passes

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10788-test-migrate-fromAny-to-fromPartial-for-type-checked-test-mocks-3356d73d365081f7bf61d48a47af530c)
by [Unito](https://www.unito.io)
2026-03-31 21:11:50 -07:00
Dante
380fae9a0d chore(test): remove dead QueueHelper from browser tests (#10771)
## Summary
- Remove unused `QueueHelper` class and its `comfyPage.queue` property
- `QueueHelper` mocks the legacy `/api/queue` tuple format which the app
no longer uses (now `/api/jobs` via `fetchQueue()`)
- `comfyPage.queue.*` is never called in any test

Fixes #10670

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10771-chore-test-remove-dead-QueueHelper-from-browser-tests-3346d73d36508117bb19db9492bcbed3)
by [Unito](https://www.unito.io)
2026-03-31 19:55:52 +09:00
pythongosssss
515f234143 fix: Ensure all save/save as buttons are the same width (#10681)
## Summary

Makes the save/save-as buttons in the builder footer toolbar a fixed
size so the elements don't jump when switching states

## Changes

- **What**: 
- Apply widths from design to the buttons
- Add tests that measure the sizes of the buttons

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10681-fix-Ensure-all-save-save-as-buttons-are-the-same-width-3316d73d36508187bb74c5a977ea876f)
by [Unito](https://www.unito.io)
2026-03-31 02:47:27 -07:00
Dante
61049425a3 fix(DisplayCarousel): use back button in grid view and remove hover icons (#10655)
## Summary
- Grid view top-left icon changed from square to back arrow
(`arrow-left`) per Figma spec
- Back button is always visible in grid view (no longer
hover-dependent), uses sticky positioning
- Removed hover opacity effect on grid thumbnails

## Related
- Figma:
https://www.figma.com/design/vALUV83vIdBzEsTJAhQgXq/Comfy-Design-System?node-id=6008-83034&m=dev
- Figma:
https://www.figma.com/design/vALUV83vIdBzEsTJAhQgXq/Comfy-Design-System?node-id=6008-83069&m=dev

## Test plan
- [x] All 31 existing DisplayCarousel tests pass
- [ ] Visual check: grid view shows back arrow icon (top-left, always
visible)
- [ ] Visual check: hovering grid thumbnails shows no overlay icons
- [ ] Verify back button stays visible when scrolling through many grid
items

## Screenshot
### Before
<img width="492" height="364" alt="Screenshot 2026-03-28 4:31:54 PM"
src="https://github.com/user-attachments/assets/f9f36521-e993-45de-b692-59fba22a026d"
/>
<img width="457" height="400" alt="Screenshot 2026-03-28 4:32:03 PM"
src="https://github.com/user-attachments/assets/004f6380-8ad7-4167-b1f4-ebc4bdb559cc"
/>

### After
<img width="596" height="388" alt="Screenshot 2026-03-28 4:31:43 PM"
src="https://github.com/user-attachments/assets/e5585887-ad36-42e3-a6c0-e6eacb90dad7"
/>

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10655-fix-DisplayCarousel-use-back-button-in-grid-view-and-remove-hover-icons-3316d73d365081c5826afd63c50994ba)
by [Unito](https://www.unito.io)
2026-03-31 12:17:24 +09:00
Alexander Brown
661e3d7949 test: migrate as unknown as to @total-typescript/shoehorn (#10761)
*PR Created by the Glary-Bot Agent*

---

## Summary

- Replace all `as unknown as Type` assertions in 59 unit test files with
type-safe `@total-typescript/shoehorn` functions
- Use `fromPartial<Type>()` for partial mock objects where deep-partial
type-checks (21 files)
- Use `fromAny<Type>()` for fundamentally incompatible types: null,
undefined, primitives, variables, class expressions, and mocks with
test-specific extra properties that `PartialDeepObject` rejects
(remaining files)
- All explicit type parameters preserved so TypeScript return types are
correct
- Browser test `.spec.ts` files excluded (shoehorn unavailable in
`page.evaluate` browser context)

## Verification

- `pnpm typecheck` 
- `pnpm lint` 
- `pnpm format` 
- Pre-commit hooks passed (format + oxlint + eslint + typecheck)
- Migrated test files verified passing (ran representative subset)
- No test behavior changes — only type assertion syntax changed
- No UI changes — screenshots not applicable

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10761-test-migrate-as-unknown-as-to-total-typescript-shoehorn-3336d73d365081f6b8adc44db5dcc380)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Glary-Bot <glary-bot@users.noreply.github.com>
Co-authored-by: Amp <amp@ampcode.com>
2026-03-30 19:20:18 -07:00
Alexander Brown
1624750a02 fix(test): fix bulk context menu test using correct Playwright patterns (#10762)
*PR Created by the Glary-Bot Agent*

---

## Summary

Fixes the `Bulk context menu shows when multiple assets selected` test
that is failing on main.

**Root cause — two issues:**

1. `click({ modifiers: ['ControlOrMeta'] })` does not fire `keydown`
events that VueUse's `useKeyModifier('Control')` tracks (used in
`useAssetSelection.ts`). Multi-select silently fails because the
composable never sees the Control key pressed. Fix: use
`keyboard.down('Control')` / `keyboard.up('Control')` around the click.

2. `click({ button: 'right' })` can be intercepted by canvas overlays
(documented gotcha in `browser_tests/AGENTS.md`). Fix: use
`dispatchEvent('contextmenu', { bubbles: true, cancelable: true })`
which bypasses overlay interception.
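Root cause 1 can be modeled in pure TypeScript (illustrative only — this is not VueUse's internals): a modifier tracker driven by real `keydown`/`keyup` events never sees a click that merely carries a modifier flag.

```typescript
// useKeyModifier-style state: only actual key events update it.
class ModifierTracker {
  private pressed = false
  onKeyDown(key: string) { if (key === 'Control') this.pressed = true }
  onKeyUp(key: string) { if (key === 'Control') this.pressed = false }
  isControlHeld() { return this.pressed }
}

const tracker = new ModifierTracker()

// click({ modifiers: ['ControlOrMeta'] }) sets ctrlKey on the click
// event only — no keydown fires, so the tracker stays false and
// multi-select silently fails:
console.log(tracker.isControlHeld()) // false — the silent failure

// keyboard.down('Control') fires a real keydown first:
tracker.onKeyDown('Control')
console.log(tracker.isControlHeld()) // true — multi-select works
```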

Also removed the `toPass()` retry wrapper since the root causes are now
addressed directly.

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10762-fix-test-fix-bulk-context-menu-test-using-correct-Playwright-patterns-3346d73d3650811c843ee4a39d3ab305)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Glary-Bot <glary-bot@users.noreply.github.com>
2026-03-30 18:38:25 -07:00
Comfy Org PR Bot
4cbf4994e9 1.43.11 (#10763)
Patch version increment to 1.43.11

**Base branch:** `main`

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10763-1-43-11-3346d73d3650814f922fd9405cde85b1)
by [Unito](https://www.unito.io)

---------

Co-authored-by: christian-byrne <72887196+christian-byrne@users.noreply.github.com>
Co-authored-by: github-actions <github-actions@github.com>
2026-03-30 17:51:39 -07:00
Benjamin Lu
86a3938d11 test: add runtime-safe browser_tests alias (#10735)
## What changed

Added a runtime-safe `#e2e/*` alias for `browser_tests`, updated the
browser test docs, and migrated a representative fixture/spec import
path to the new convention.

## Why

`@/*` only covers `src/`, so browser test imports were falling back to
deep relative paths. `#e2e/*` resolves in both Node/Playwright runtime
and TypeScript.

## Validation

- `pnpm format`
- `pnpm typecheck:browser`
- `pnpm exec playwright test browser_tests/tests/actionbar.spec.ts
--list`

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10735-test-add-runtime-safe-browser_tests-alias-3336d73d36508122b253cb36a4ead1c1)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Alexander Brown <drjkl@comfy.org>
2026-03-30 19:24:09 +00:00
jaeone94
e11a1776ed fix: prevent saving active workflow content to inactive tab on close (#10745)
## Summary

- Closing an inactive workflow tab and clicking "Save" overwrites that
workflow with the **active** tab's content, causing permanent data loss
- `saveWorkflow()` and `saveWorkflowAs()` call `checkState()` which
serializes `app.rootGraph` (the active canvas) into the inactive
workflow's `changeTracker.activeState`
- Guard `checkState()` to only run when the workflow being saved is the
active one — in both `saveWorkflow` and `saveWorkflowAs`

## Linked Issues

- Fixes https://github.com/Comfy-Org/ComfyUI/issues/13230

## Root Cause

PR #9137 (commit `9fb93a5b0`, v1.41.7) added
`workflow.changeTracker?.checkState()` inside `saveWorkflow()` and
`saveWorkflowAs()`. `checkState()` always serializes `app.rootGraph` —
the graph on the canvas. When called on an inactive tab's change
tracker, it captures the active tab's data instead.

## Test plan

- [x] E2E: "Closing an inactive tab with save preserves its own content"
— persisted workflow B with added node, close while A is active, re-open
and verify
- [x] E2E: "Closing an inactive unsaved tab with save preserves its own
content" — temporary workflow B with added node, close while A is
active, save-as with filename, re-open and verify
- [x] Manual: open A and B, edit B, switch to A, close B tab, click
Save, re-open B — content should be B's not A's
2026-03-30 12:12:38 -07:00
Benjamin Lu
161522b138 chore: remove stale tests-ui config (#10736)
## What changed

Removed stale `tests-ui` configuration and documentation references from
the repo.

## Why

`tests-ui/` no longer exists, but the repo still carried:
- a dead `@tests-ui/*` tsconfig path
- stale `tests-ui/**/*` include
- a Vite watch ignore for a missing directory
- documentation examples that still referenced the old path

## Validation

- `pnpm format:check`
- `pnpm typecheck`

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10736-chore-remove-stale-tests-ui-config-3336d73d3650814a98bedfc113b6eb9b)
by [Unito](https://www.unito.io)
2026-03-30 11:59:00 -07:00
Johnpaul Chiwetelu
61144ea1d5 test: add 23 E2E tests for Vue node context menu actions (#10603)
## Summary
- Add 23 Playwright E2E tests for all right-click context menu actions
on Vue nodes
- **Single node (7 tests)**: rename, copy/paste, duplicate, pin/unpin,
bypass/remove bypass, minimize/expand, convert to subgraph
- **Image node (4 tests)**: copy image to clipboard, paste image from
clipboard, open image in new tab, download via save image
- **Subgraph (3 tests)**: convert + unpack roundtrip, edit subgraph
widgets opens properties panel, add to library and find in node library
search
- **Multi-node (9 tests)**: batch rename, copy/paste, duplicate,
pin/unpin, bypass/remove bypass, minimize/expand, frame nodes, convert
to group node, convert to subgraph
- Uses `ControlOrMeta` modifier for multi-node selection

## Test plan
- [x] All 23 tests pass locally (`pnpm test:browser:local`)
- [x] TypeScript type check passes (`pnpm typecheck:browser`)
- [x] ESLint passes
- [x] CodeRabbit review: no findings

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10603-test-add-23-E2E-tests-for-Vue-node-context-menu-actions-3306d73d3650818a932fc62205ac6fa8)
by [Unito](https://www.unito.io)
2026-03-30 19:31:51 +01:00
Dante
3ac08fd1da test(assets-sidebar): add comprehensive E2E tests for Assets browser panel (#10616)
## Summary
- Extend `AssetsSidebarTab` page object with selectors for search, view
mode, asset cards, selection footer, context menu, and folder view
navigation
- Add mock data factories (`createMockJob`, `createMockJobs`,
`createMockImportedFiles`) to `AssetsHelper` for generating realistic
test fixtures
- Write 30 E2E test cases across 10 categories covering the Assets
browser sidebar panel

## Test coverage added

| Category | Tests | Details |
|----------|-------|---------|
| Empty states | 3 | Generated/Imported empty copy, zero cards |
| Tab navigation | 3 | Default tab, switching, search reset on tab change |
| Grid view display | 2 | Generated card rendering, Imported tab assets |
| View mode toggle | 2 | Grid↔List switching via settings menu |
| Search | 4 | Input visibility, filtering, clearing, no-match state |
| Selection | 5 | Click select, Ctrl+click multi, footer, deselect all, tab-switch clear |
| Context menu | 7 | Right-click menu, Download/Inspect/Delete/CopyJobID/Workflow actions, bulk menu |
| Bulk actions | 3 | Download/Delete buttons, selection count display |
| Pagination | 1 | Large job set initial load |
| Settings menu | 1 | View mode options visibility |

## Context
Part of [FixIt
Burndown](https://www.notion.so/comfy-org/FixIt-Burndown-32e6d73d365080609a81cdc9bc884460)
— "Untested Side Panels: Assets browser" assigned to @dante01yoon.

## Test plan
- [ ] Run `npx playwright test
browser_tests/tests/sidebar/assets.spec.ts` against local ComfyUI
backend
- [ ] Verify all 30 tests pass
- [ ] CI green

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10616-test-assets-sidebar-add-comprehensive-E2E-tests-for-Assets-browser-panel-3306d73d365081eeb237e559f56689bf)
by [Unito](https://www.unito.io)
2026-03-30 21:15:14 +09:00
Terry Jia
f1d5337181 Feat/glsl live preview (#10349)
## Summary
replacement for https://github.com/Comfy-Org/ComfyUI_frontend/pull/9201

The first commit squashes
https://github.com/Comfy-Org/ComfyUI_frontend/pull/9201 and fixes the
conflicts.

The second commit makes the changes needed to:
- Enable GLSL live preview on SubgraphNodes by detecting the inner
GLSLShader and rendering its preview directly on the parent SubgraphNode
- Previously, SubgraphNodes containing a GLSLShader showed no live
preview at all

To achieve this:
- Read shader source, uniform values, and renderer config from the inner
GLSLShader's widgets
- Trace IMAGE inputs through the subgraph boundary so the inner shader
can use images connected to the SubgraphNode's outer inputs
- Set preview output using the inner node's locator ID so the promoted
preview system picks it up on the SubgraphNode
- Extract `setNodePreviewsByLocatorId` from nodeOutputStore to support
setting previews by locator ID directly
- Fix graphId to use `rootGraph.id` for widget store lookups (was using
`graph.id`, which broke lookups for nodes inside subgraphs)
- Read uniform values from connected upstream nodes, not just local
widgets
- Fix blob URL lifecycle: use the store's
`createSharedObjectUrl`/`releaseSharedObjectUrl` reference-counting system
instead of manual revoke, preventing leaks on composable re-creation
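The reference-counting pattern behind `createSharedObjectUrl`/`releaseSharedObjectUrl` can be sketched like this. The function names follow the PR text, but the internals (a module-level `Map` keyed by an arbitrary string) are assumptions, not the store's actual implementation:

```typescript
// Registry of shared object URLs, reference-counted per key so the URL
// is revoked exactly once, when the last holder releases it.
const refCounts = new Map<string, { url: string; count: number }>()

function createSharedObjectUrl(key: string, makeBlob: () => Blob): string {
  const entry = refCounts.get(key)
  if (entry) {
    entry.count++
    return entry.url
  }
  const url = URL.createObjectURL(makeBlob())
  refCounts.set(key, { url, count: 1 })
  return url
}

function releaseSharedObjectUrl(key: string): void {
  const entry = refCounts.get(key)
  if (!entry) return
  if (--entry.count === 0) {
    // Last holder gone: safe to revoke without breaking other users.
    URL.revokeObjectURL(entry.url)
    refCounts.delete(key)
  }
}
```

Compared with a manual `revokeObjectURL` in a composable's teardown, this survives composable re-creation: a second consumer acquiring the same key keeps the URL alive.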
        

## Screenshot


https://github.com/user-attachments/assets/9623fa32-de39-4a3a-b8b3-28688851390b

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10349-Feat-glsl-live-preview-3296d73d3650814b83aef52ab1962a77)
by [Unito](https://www.unito.io)
2026-03-29 22:26:42 -04:00
Comfy Org PR Bot
c289640e99 1.43.10 (#10726)
Patch version increment to 1.43.10

**Base branch:** `main`

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10726-1-43-10-3336d73d36508179a69cf7affcc0070e)
by [Unito](https://www.unito.io)

---------

Co-authored-by: christian-byrne <72887196+christian-byrne@users.noreply.github.com>
Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Christian Byrne <cbyrne@comfy.org>
2026-03-29 17:47:48 -07:00
Christian Byrne
dc7c97c5ac feat: add Wave 3 homepage sections (11 Vue components) [3/3] (#10142)
## Summary
Adds all 11 homepage section components for the comfy.org marketing
site.

## Changes (incremental from #10141)
- HeroSection.vue: C monogram left, headline right, CTAs
- SocialProofBar.vue: 12 enterprise logos + metrics
- ProductShowcase.vue: PLACEHOLDER workflow demo
- ValuePillars.vue: Build/Customize/Refine/Automate/Run
- UseCaseSection.vue: PLACEHOLDER industries
- CaseStudySpotlight.vue: PLACEHOLDER bento grid
- TestimonialsSection.vue: Filterable by industry
- GetStartedSection.vue: 3-step flow
- CTASection.vue: Desktop/Cloud/API cards
- ManifestoSection.vue: Method Not Magic
- AcademySection.vue: Learning paths CTA
- Updated index.astro + zh-CN/index.astro

## Stack (via Graphite)
- #10140 [1/3] Scaffold
- #10141 [2/3] Layout Shell
- **[3/3] Homepage Sections** ← this PR (top of stack)

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10142-feat-add-Wave-3-homepage-sections-11-Vue-components-3-3-3266d73d36508194aa8ee9385733ddb9)
by [Unito](https://www.unito.io)
2026-03-29 17:30:49 -07:00
Christian Byrne
8340d7655f refactor: extract auth-routing from workspaceApi to auth domain (#10484)
## Summary

Extract auth-routing logic (`getAuthHeaderOrThrow`,
`getFirebaseAuthHeaderOrThrow`) from `workspaceApi.ts` into
`authStore.ts`, eliminating a layering violation where the workspace API
re-implemented auth header resolution.

## Changes

- **What**: Moved `getAuthHeaderOrThrow` and
`getFirebaseAuthHeaderOrThrow` from `workspaceApi.ts` to `authStore.ts`.
`workspaceApi.ts` now calls through `useAuthStore()` instead of
re-implementing token resolution. Added tests for the new methods in
`authStore.test.ts`. Updated `authStoreMock.ts` with the new methods.
- **Files**: 4 files changed

## Review Focus

- The `getAuthHeaderOrThrow` / `getFirebaseAuthHeaderOrThrow` methods
throw `AuthStoreError` (auth domain error) — callers in workspace can
catch and re-wrap if needed
- `workspaceApi.ts` is simplified by ~19 lines
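A minimal sketch of the moved helper: `getAuthHeaderOrThrow` and `AuthStoreError` follow the PR text, but the token lookup and store wiring are simplified stand-ins for the real Pinia store.

```typescript
class AuthStoreError extends Error {}

// Factory standing in for useAuthStore(); getIdToken abstracts the
// real token resolution (Firebase/API key) behind the auth domain.
function createAuthStore(getIdToken: () => string | null) {
  return {
    getAuthHeaderOrThrow(): Record<string, string> {
      const token = getIdToken()
      if (!token) {
        // Auth-domain error; workspace callers can catch and re-wrap.
        throw new AuthStoreError('No authenticated user')
      }
      return { Authorization: `Bearer ${token}` }
    }
  }
}
```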

## Stack

PR 2/5: #10483 → **This PR** → #10485 → #10486 → #10487
2026-03-29 17:18:49 -07:00
Christian Byrne
1ffd92f910 config: add vitest coverage include pattern + lcov reporter (#10575)
## What

- Add `include: ['src/**/*.{ts,vue}']` to vitest coverage config so ALL
source files appear in reports (previously only imported files showed
up)
- Add `lcov` reporter for CI integration and VS Code coverage gutter
- Add `exclude` patterns for test files, locales, litegraph, assets,
declarations, stories
- Add `test:coverage` npm script
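Put together, the resulting coverage block looks roughly like this (a sketch: the exclude globs shown are illustrative, not the exact patterns in the repo's vitest config):

```typescript
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    coverage: {
      // lcov for CI integration and IDE coverage gutters
      reporter: ['text', 'lcov'],
      // Report ALL source files, not just those imported during tests
      include: ['src/**/*.{ts,vue}'],
      // Illustrative excludes for non-source files
      exclude: [
        'src/**/*.{test,spec}.ts',
        'src/locales/**',
        'src/**/*.d.ts',
        'src/**/*.stories.ts'
      ]
    }
  }
})
```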

## Why

Coverage reports currently only show files that are imported during test
runs. Adding the `include` pattern reveals the true gap — files with
zero coverage that were previously invisible. The lcov reporter enables
IDE integration and future CI coverage comments (Codecov/Coveralls).

## Testing

`npx tsc --noEmit` passes. No behavioral changes — this only affects
coverage reporting configuration.

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10575-config-add-vitest-coverage-include-pattern-lcov-reporter-32f6d73d365081c8b59ad2316dd2b198)
by [Unito](https://www.unito.io)
2026-03-29 16:05:45 -07:00
Christian Byrne
81d3ef22b0 refactor: extract comfyExpect and makeMatcher from ComfyPage (#10652)
## Summary

Extract `makeMatcher` and `comfyExpect` from `ComfyPage.ts` into the
standalone `browser_tests/fixtures/utils/customMatchers.ts` module,
reducing the page-object file by ~50 lines.

## Changes

- **What**: Removed duplicate `makeMatcher`/`comfyExpect` definitions
from `ComfyPage.ts`; the canonical implementation now lives in
`customMatchers.ts`. A backward-compatible re-export keeps all existing
imports working.

## Review Focus

- The re-export ensures `import { comfyExpect } from
'../fixtures/ComfyPage'` continues to resolve for all ~25 spec files.

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10652-refactor-extract-comfyExpect-and-makeMatcher-from-ComfyPage-3316d73d365081bf8e7cd7fa324bf9a6)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Alexander Brown <drjkl@comfy.org>
Co-authored-by: GitHub Action <action@github.com>
2026-03-29 16:05:10 -07:00
Christian Byrne
2d99fb446c test: add QueueClearHistoryDialog E2E tests (DLG-02) (#10586)
## Summary
Adds Playwright E2E tests for the QueueClearHistoryDialog component.

## Tests added
- Dialog opens from queue panel history actions menu
- Dialog shows confirmation message with title, description, and assets
note
- Cancel button closes dialog without clearing history
- Close (X) button closes dialog without clearing history
- Confirm clear action triggers queue history clear API call
- Dialog state resets properly after close/reopen

## Task
Part of Test Coverage Q2 Overhaul (DLG-02).

## Conventions
- Uses Vue nodes with new menu enabled (`Comfy.UseNewMenu: 'Top'`)
- Tests read as user stories
- No full-page screenshots
- Proper waits, no sleeps

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10586-test-add-QueueClearHistoryDialog-E2E-tests-DLG-02-3306d73d36508174a07bd9782340a0f7)
by [Unito](https://www.unito.io)

---------

Co-authored-by: GitHub Action <action@github.com>
2026-03-29 16:02:19 -07:00
Christian Byrne
dee236cd60 test: comprehensive properties panel E2E tests (PNL-01) (#10548)
## Summary
Comprehensive Playwright E2E tests for the properties panel (right
sidebar).

Part of the **Test Coverage Q2 Overhaul** initiative (Phase 2: PNL-01).

## What's included
- **PropertiesPanelHelper** page object in `browser_tests/helpers/` —
locators + action methods for all panel elements
- **35 test cases** covering:
  - Open/close via actionbar toggle
- Workflow Overview (no selection): tabs, title, nodes list, global
settings
  - Single node selection: title, parameters, info tab, widgets display
  - Multi-node selection: item count, node listing, hidden Info tab
  - Title editing: pencil icon, edit mode, rename, visibility rules
  - Search filtering: query, clear, empty state
- Settings tab: Normal/Bypass/Mute state, color swatches, pinned toggle
  - Selection transitions: no-selection ↔ single ↔ multi
  - Nodes tab: list all, search filter
  - Tab label changes based on selection count
  - **Errors tab scaffold** (for @jaeone94 ADD-03)

## Testing
- All tests use Vue nodes with new menu enabled
- Zero flaky tests (proper waits, no sleeps)
- Screenshots scoped to panel elements

## Unblocks
- **ADD-03** (error systems by @jaeone94) — errors tab scaffold ready to
extend

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10548-test-comprehensive-properties-panel-E2E-tests-PNL-01-32f6d73d36508199a216fd8d953d8e18)
by [Unito](https://www.unito.io)

---------

Co-authored-by: GitHub Action <action@github.com>
2026-03-29 15:57:42 -07:00
Christian Byrne
b12b20b5ab test: add 12 workflow persistence playwright tests (#10547)
## What

12 regression tests covering 10 workflow persistence bug gaps, including
the **critical data corruption fix in PR #9531** (pythongosssss) which
previously had ZERO test coverage.

## Why

Deep scan of 37 workflow persistence bugs found 12 E2E-testable gaps
with no regression tests. Workflow persistence is a core reliability
concern — data corruption bugs are the highest risk category.

## Tests

### 🔴 Critical
| Bug | PR | Tests | Description |
|-----|----|-------|-------------|
| Data corruption | #9531 | 2 | checkState during graph loading corrupts workflow data |
| State desync | #9533 | 2 | Rapid tab switching desyncs workflow/graph state |

### 🟡 Medium
| Bug | PR/Commit | Tests | Description |
|-----|-----------|-------|-------------|
| Lost previews | #9380 | 1 | Node output previews lost on tab switch |
| Stale canvas | 44bb6f13 | 1 | Canvas not cleared before loading new workflow |
| Widget loss | #7648 | 1 | Widget values lost on graph change |
| API format | #9694 | 1 | API format workflows fail with missing nodes |
| Paste duplication | #8259 | 1 | Middle-click paste duplicates workflow |
| Blob URLs | #8715 | 1 | Transient blob: URLs in serialization |

### 🟢 Low
| Bug | PR/Commit | Tests | Description |
|-----|-----------|-------|-------------|
| Locale break | #8963 | 1 | Locale change breaks workflows |
| Panel drift | — | 1 | Splitter panel size drift |

## Conventions
- All tests use Vue nodes + new menu enabled
- Each test documents which PR/commit it regresses
- Proper waits (no sleeps)
- Screenshots scoped to relevant elements
- Tests read like user stories

## 🎉 Shoutout
PR #9531 by @pythongosssss was a critical data corruption fix that now
has regression test coverage for the first time.

Part of: Test Coverage Q2 Overhaul (REG-01)

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10547-test-12-workflow-persistence-regression-tests-incl-critical-PR-9531-32f6d73d3650818796c6c5950c77f6d1)
by [Unito](https://www.unito.io)

---------

Co-authored-by: GitHub Action <action@github.com>
2026-03-29 15:57:13 -07:00
Christian Byrne
04f90b7a05 test: add mock data fixtures for backend API responses (#10662)
## Summary

Add deterministic mock data fixtures for browser tests so they can use
`page.route()` to intercept API calls without depending on a live
backend.

## Changes

- **`browser_tests/fixtures/data/nodeDefinitions.ts`** — Mock
`ComfyNodeDef` objects for KSampler, CheckpointLoaderSimple, and
CLIPTextEncode
- **`browser_tests/fixtures/data/systemStats.ts`** — Mock `SystemStats`
with realistic RTX 4090 GPU info
- **`browser_tests/fixtures/data/README.md`** — Usage guide for
`page.route()` interception

All fixtures are typed against the Zod schemas in `src/schemas/` and
pass `pnpm typecheck:browser`.

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10662-test-add-mock-data-fixtures-for-backend-API-responses-3316d73d3650813ea5c8c1faa215db63)
by [Unito](https://www.unito.io)

---------

Co-authored-by: dante01yoon <bunggl@naver.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: GitHub Action <action@github.com>
2026-03-29 15:55:50 -07:00
Christian Byrne
1e7c8d9889 test: add SignIn dialog E2E tests (DLG-04) (#10587)
## Summary
Adds Playwright E2E tests for the SignIn dialog component and its
sub-forms.

## Tests added
- Dialog opens from login button in topbar
- Sign In form is the default view with email/password fields
- Toggle between Sign In and Sign Up forms
- API Key form navigation (forward and back)
- Terms of Service and Privacy Policy links present
- Form field presence verification
- Dialog close behavior (close button and Escape key)
- Forgot password link presence
- 'Or continue with' divider and API key button

## Notes
- Tests focus on UI navigation and element presence (no real Firebase
auth in test env)
- Dialog opened via `extensionManager.dialog.showSignInDialog()` API
- All selectors use stable IDs from the component source
(`#comfy-org-sign-in-email`, etc.)

## Task
Part of Test Coverage Q2 Overhaul (DLG-04).

## Conventions
- Uses Vue nodes with new menu enabled (`Comfy.UseNewMenu: 'Top'`)
- Tests read as user stories
- No full-page screenshots
- Proper waits, no sleeps

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10587-test-add-SignIn-dialog-E2E-tests-DLG-04-3306d73d3650815db171f8c5228e2cf3)
by [Unito](https://www.unito.io)

---------

Co-authored-by: GitHub Action <action@github.com>
2026-03-29 15:37:37 -07:00
Christian Byrne
367f810702 feat: expose renderMarkdownToHtml on ExtensionManager (#10700)
## Summary

Expose `renderMarkdownToHtml()` on the `ExtensionManager` interface so
custom node extensions can render markdown to sanitized HTML without
bundling their own copies of `marked`/`DOMPurify`.

## Motivation

Multiple custom node packs (KJNodes, comfy_mtb, rgthree-comfy) bundle
their own markdown rendering libraries to implement help popups on
nodes. This causes:

- **Cloud breakage**: KJNodes uses a `kjweb_async` pattern (custom
aiohttp static route) to lazily load `marked.min.js` and
`purify.min.js`. This 404s on Cloud because the custom route is not
registered.
- **Redundant bundling**: Both `marked` (^15.0.11) and `dompurify`
(^3.2.5) are already direct dependencies of the frontend, used
internally by `markdownRendererUtil.ts`, `NodePreview.vue`,
`WhatsNewPopup.vue`, etc.
- **XSS risk**: Custom nodes using raw `marked` without `DOMPurify`
could introduce XSS vulnerabilities.

By exposing the existing `renderMarkdownToHtml()` through the official
`ExtensionManager` API, custom nodes can:
```js
const html = app.extensionManager.renderMarkdownToHtml(nodeData.description)
```
...instead of bundling and loading their own copies.

## Changes

- **`src/types/extensionTypes.ts`**: Add `renderMarkdownToHtml(markdown:
string, baseUrl?: string): string` to the `ExtensionManager` interface
with JSDoc.
- **`src/stores/workspaceStore.ts`**: Import and re-export
`renderMarkdownToHtml` from `@/utils/markdownRendererUtil`.

## Impact

- **Zero bundle size increase** — the function and its dependencies are
already bundled in the `vendor-markdown` chunk.
- **No breaking changes** — purely additive to the `ExtensionManager`
interface.
- **Follows existing pattern** — same approach as `toast`, `dialog`,
`command`, `setting` on `ExtensionManager`.

Related: #TBD (long-term plan for custom node extension library
dependencies)

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10700-feat-expose-renderMarkdownToHtml-on-ExtensionManager-3326d73d36508149bc1dc6bb45e7c077)
by [Unito](https://www.unito.io)
2026-03-29 14:51:45 -07:00
Kelly Yang
798f6de4a9 fix: image compare node displays wrong height with mismatched resolut… (#10714)
## Summary

Revert `object-cover` to `object-contain` so images are never cropped
when the container is short, and add imagecompare to `EXPANDING_TYPES`
so the widget row grows to fill the full node body instead of collapsing
to `min-content`.


## Screenshots
before
<img width="2674" height="2390" alt="image"
src="https://github.com/user-attachments/assets/8fa5cf41-f393-4a7d-a767-75ce944d00d4"
/>

after




https://github.com/user-attachments/assets/46e1fffc-5f65-4b69-9303-fe6255d9de79

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10714-fix-image-compare-node-displays-wrong-height-with-mismatched-resolut-3326d73d3650818293d3c716cb8fafb5)
by [Unito](https://www.unito.io)

---------

Co-authored-by: github-actions <github-actions@github.com>
2026-03-29 14:45:56 -07:00
Terry Jia
752641cc67 chore: add @jtydhr88 as code owner for image crop, image compare, painter, mask editor, and 3D (#10713)
## Summary
add myself as owner to the components I worked on

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10713-chore-add-jtydhr88-as-code-owner-for-image-crop-image-compare-painter-mask-editor--3326d73d365081a5aaedf67168a32c7e)
by [Unito](https://www.unito.io)
2026-03-29 14:45:09 -07:00
Christian Byrne
af0f7cb945 refactor: extract assetPath as standalone pure function (#10651)
## Summary

Extract `assetPath` from a `ComfyPage` method to a standalone pure
function, removing unnecessary coupling to the page object.

## Changes

- **What**: Moved `assetPath` to
`browser_tests/fixtures/utils/paths.ts`. `DragDropHelper` and
`WorkflowHelper` import it directly instead of receiving it via
`ComfyPage`. `ComfyPage.assetPath` kept as thin delegate for backward
compat.

## Review Focus

Structural-only refactor — no behavioral changes. The function was
already pure (no `this`/`page` usage).
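A sketch of what the extracted function looks like; the real version in `browser_tests/fixtures/utils/paths.ts` may resolve against the repository root differently:

```typescript
import { join } from 'node:path'

// Pure path computation: no `this`/`page` state, which is what made
// the extraction from the ComfyPage page object safe.
function assetPath(assetName: string): string {
  return join('browser_tests', 'assets', assetName)
}
```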

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10651-refactor-extract-assetPath-as-standalone-pure-function-3316d73d365081c0b0e0ce6dde57ef8e)
by [Unito](https://www.unito.io)
2026-03-29 00:27:28 -07:00
Christian Byrne
ac0175aa6a docs: add convention for new assertions — prefer page objects over custom matchers (#10660)
## Summary

Add guidance to `docs/guidance/playwright.md` that new node-specific
assertions should be methods on page objects/helpers rather than new
`comfyExpect` custom matchers.

## Changes

- **What**: New "Custom Assertions" section in Playwright guidance
documenting that existing `comfyExpect` matchers are fine to use, but
new assertions should go on the page object for IntelliSense
discoverability.

## Review Focus

Documentation-only change. No code refactoring — this is a convention
for new code only.

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10660-docs-add-convention-for-new-assertions-prefer-page-objects-over-custom-matchers-3316d73d3650816d97a8fbbdc33f6b75)
by [Unito](https://www.unito.io)

---------

Co-authored-by: GitHub Action <action@github.com>
2026-03-29 00:27:13 -07:00
Christian Byrne
1e1b3884c5 refactor: include backend-mirrored endpoints in ingest-types codegen (#10697)
## Summary

Remove the exclusion filter that prevented backend-mirrored endpoint
types from being generated in `@comfyorg/ingest-types`.

## Changes

- **What**: The `openapi-ts.config.ts` excluded all endpoints shared
with the ComfyUI Python backend (system_stats, object_info, prompt,
queue, history, settings, userdata, etc.). Since the cloud ingest API
mirrors the backend, these types should be generated from the OpenAPI
spec as the canonical source. This adds ~250 new types and Zod schemas
covering previously excluded endpoints.
- **Breaking**: None. This only adds new exported types — no existing
types or imports are changed.

## Review Focus

- The cloud ingest API is designed to mirror the ComfyUI Python backend.
The original exclusion filter was added to avoid duplication with
`src/schemas/apiSchema.ts`, but the generated types should be the
canonical source since they are auto-generated from the OpenAPI spec.
- A follow-up PR will migrate imports in `src/` from `apiSchema.ts` to
`@comfyorg/ingest-types` where applicable.
- Webhooks and internal analytics endpoints remain excluded
(server-to-server, not frontend-relevant).

Related: #10662

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10697-refactor-include-backend-mirrored-endpoints-in-ingest-types-codegen-3326d73d365081569614f743ab6f074d)
by [Unito](https://www.unito.io)

---------

Co-authored-by: GitHub Action <action@github.com>
2026-03-29 00:21:05 -07:00
Dante
bce7a168de fix: type API mock responses in browser tests (#10668)
## Motivation

Browser tests mock API responses with `route.fulfill()` using untyped
inline JSON. When the OpenAPI spec changes, these mocks silently drift —
mismatches aren't caught at compile time and only surface as test
failures at runtime.

We already have auto-generated types from OpenAPI and manual Zod
schemas. This PR makes those types the source of truth for test mock
data.

From Mar 27 PR review session action item: "instruct agents to use
schemas and types when writing browser tests."

## Type packages and their API coverage

The frontend has two OpenAPI-generated type packages, each targeting a
different backend API with a different code generation tool:

| Package | Target API | Generator | TS types | Zod schemas |
|---------|-----------|-----------|----------|-------------|
| `@comfyorg/registry-types` | Registry API (node packages, releases, subscriptions, customers) | `openapi-typescript` | Yes | **No** |
| `@comfyorg/ingest-types` | Ingest API (hub workflows, asset uploads, workspaces) | `@hey-api/openapi-ts` | Yes | Yes |

Additionally, Python backend endpoints (`/api/queue`, `/api/features`,
`/api/settings`, etc.) are typed via manual Zod schemas in
`src/schemas/apiSchema.ts`.

This PR applies **compile-time type checking** using these existing
types. Runtime validation via Zod `.parse()` is not yet possible for all
endpoints because `registry-types` does not generate Zod schemas — this
requires a separate migration of `registry-types` to
`@hey-api/openapi-ts` (#10674).

## Summary

- Add "Typed API Mocks" guideline to `docs/guidance/playwright.md` with
a sources-of-truth table mapping endpoint categories to their type
packages
- Add rule to `AGENTS.md` Playwright section requiring typed mock data
- Refactor `releaseNotifications.spec.ts` to use `ReleaseNote` type
(from `registry-types`) via `createMockRelease()` factory
- Annotate template mock in `templates.spec.ts` with
`WorkflowTemplates[]` type

Refs #10656

## Example workflow: writing a new typed E2E test mock

When adding a new `route.fulfill()` mock, follow these steps:

### 1. Identify the type source

Check which API the endpoint belongs to:

| Endpoint category | Type source | Zod available |
|---|---|---|
| Ingest API (hub, billing, workflows) | `@comfyorg/ingest-types` | Yes — use `.parse()` |
| Registry API (releases, nodes, publishers) | `@comfyorg/registry-types` | Not yet (#10674) — TS type only |
| Python backend (queue, history, settings) | `src/schemas/apiSchema.ts` | Yes — use `z.infer` |
| Templates | `src/platform/workflow/templates/types/template.ts` | No — TS type only |

### 2. Create a typed factory (with Zod when available)

**Ingest API endpoints** — Zod schemas exist, use `.parse()` for runtime
validation:

```typescript
import { zBillingStatusResponse } from '@comfyorg/ingest-types/zod'
import type { BillingStatusResponse } from '@comfyorg/ingest-types'

function createMockBillingStatus(
  overrides?: Partial<BillingStatusResponse>
): BillingStatusResponse {
  return zBillingStatusResponse.parse({
    plan: 'free',
    credits_remaining: 100,
    renewal_date: '2026-04-28T00:00:00Z',
    ...overrides
  })
}
```

**Registry API endpoints** — TS type only (Zod not yet generated):

```typescript
import type { ReleaseNote } from '../../src/platform/updates/common/releaseService'

function createMockRelease(
  overrides?: Partial<ReleaseNote>
): ReleaseNote {
  return {
    id: 1,
    project: 'comfyui',
    version: 'v0.3.44',
    attention: 'medium',
    content: '## New Features',
    published_at: new Date().toISOString(),
    ...overrides
  }
}
```

### 3. Use in test

```typescript
test('should show upgrade banner for free plan', async ({ comfyPage }) => {
  await comfyPage.page.route('**/billing/status', async (route) => {
    await route.fulfill({
      status: 200,
      contentType: 'application/json',
      body: JSON.stringify(createMockBillingStatus({ plan: 'free' }))
    })
  })

  await comfyPage.setup()
  await expect(comfyPage.page.getByText('Upgrade')).toBeVisible()
})
```

The factory pattern keeps test bodies focused on **what varies** (the
override) rather than the full response shape.

## Scope decisions

| File | Decision | Reason |
|------|----------|--------|
| `releaseNotifications.spec.ts` | Typed | `ReleaseNote` type available from `registry-types` |
| `templates.spec.ts` | Typed | `WorkflowTemplates` type available in `src/platform/workflow/templates/types/` |
| `QueueHelper.ts` | Skipped | Dead code — instantiated but never called in any test |
| `FeatureFlagHelper.ts` | Skipped | Response type is inherently `Record<string, unknown>`, no stronger type exists |
| Fixture factories | Deferred | Coordinate with Ben's fixture restructuring work to avoid duplication |

## Follow-up work

Sub-issues of #10656:

- #10670 — Clean up dead `QueueHelper` or rewrite against `/api/jobs`
endpoint
- #10671 — Expand typed factory pattern to more endpoints
- #10672 — Evaluate OpenAPI generation for excluded Python backend
endpoints
- #10674 — Migrate `registry-types` from `openapi-typescript` to
`@hey-api/openapi-ts` to enable Zod schema generation

## Test plan

- [x] `pnpm typecheck:browser` passes
- [x] `pnpm lint` passes
- [ ] Existing `releaseNotifications` and `templates` tests pass in CI
2026-03-29 15:45:06 +09:00
Christian Byrne
e7c2cd04f4 perf: add FPS, p95 frame time, and target thresholds to CI perf report (#10516)
## Summary

Enhances the CI performance report with explicit FPS metrics, percentile
frame times, and milestone target thresholds.

### Changes

**PerformanceHelper** (data collection):
- `measureFrameDurations()` now returns individual frame durations
instead of just the average, enabling percentile computation
- Computes `p95FrameDurationMs` from sorted frame durations
- Strips `allFrameDurationsMs` from serialized JSON to avoid bloating
artifacts
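The percentile and FPS derivations can be sketched as below (nearest-rank percentile; the real `PerformanceHelper` may use a different interpolation, and the function names here are illustrative):

```typescript
// p95 frame duration from a list of per-frame durations (ms).
function p95FrameDurationMs(frameDurationsMs: number[]): number {
  if (frameDurationsMs.length === 0) return 0
  // Sort a copy ascending; nearest-rank percentile assumes sorted input.
  const sorted = [...frameDurationsMs].sort((a, b) => a - b)
  const idx = Math.min(
    sorted.length - 1,
    Math.ceil(0.95 * sorted.length) - 1
  )
  return sorted[idx]
}

// Average FPS from the mean frame duration (1000 ms per second).
// Note P5 FPS corresponds to the p95 frame time: 52 FPS ≈ 19.2 ms.
function avgFps(frameDurationsMs: number[]): number {
  const mean =
    frameDurationsMs.reduce((a, b) => a + b, 0) / frameDurationsMs.length
  return 1000 / mean
}
```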

**perf-report.ts** (report rendering):
- **Headline summary** at top of report with key metrics per test
scenario
- **FPS display**: derives avg FPS and P5 FPS from frame duration
metrics
- **Target thresholds**: shows the P5 FPS ≥ 52 target with a pass/fail
indicator
- **p95 frame time**: added as a tracked metric in the comparison table
- Metrics reordered to show frame time/FPS first (what people look for)

### Target

From the Nodes 2.0 Perf milestone: **P5 ≥ 52 FPS** on 245-node workflow
(equivalent to P95 frame time ≤ 19.2ms).

### Example headline output

```
> **vue-large-graph-pan**: 60 avg FPS · 58 P5 FPS  (target: ≥52) · 12ms TBT · 45.2 MB heap
> **canvas-zoom-sweep**: 45 avg FPS · 38 P5 FPS  (target: ≥52) · 85ms TBT · 52.1 MB heap
```

Follow-up to #10477 (merged).

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10516-perf-add-FPS-p95-frame-time-and-target-thresholds-to-CI-perf-report-32e6d73d365081a2a2a6ceae7d6e9be5)
by [Unito](https://www.unito.io)

---------

Co-authored-by: GitHub Action <action@github.com>
2026-03-28 23:29:19 -07:00
Terry Jia
391a6db056 test: add minimap e2e tests for close button, viewport, and pan (#10596)
## Summary
add more basic tests for minimap

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10596-test-add-minimap-e2e-tests-for-close-button-viewport-and-pan-3306d73d365081b9bf64dc7a3951d65f)
by [Unito](https://www.unito.io)
2026-03-28 23:19:04 -07:00
Christian Byrne
4d4dca2a46 docs: document fixture/page-object separation in browser tests (#10645)
## Summary

Document the agreed-upon architectural separation for browser test
fixtures:

- `fixtures/data/` — Static test data (mock API responses, workflow
JSONs, node definitions)
- `fixtures/components/` — Page object components (locators, user
interactions)
- `fixtures/helpers/` — Focused helper classes (domain-specific actions)
- `fixtures/utils/` — Pure utility functions (no page dependency)

## Changes

- **`browser_tests/AGENTS.md`** — Added architectural separation section
with clear rules for each directory
- **`browser_tests/fixtures/data/README.md`** (new) — Explains the data
directory purpose and what belongs here vs `assets/`


---------

Co-authored-by: GitHub Action <action@github.com>
2026-03-28 23:11:34 -07:00
Christian Byrne
ba9f3481fb test(infra): cloud Playwright project with @cloud/@oss tagging (#10546)
## What

Adds a `cloud` Playwright project so E2E tests can run against
`DISTRIBUTION=cloud` builds, with `@cloud` / `@oss` test tagging.

## Why

100+ usages of `isCloud` / `DISTRIBUTION` across 9 categories (API
routing, UI visibility, settings, auth). Zero cloud test infrastructure
existed — cloud-specific UI components (LoginButton, SubscribeButton,
etc.) had no E2E coverage path.

## Investigation: Runtime Toggle

Investigated whether `isCloud` could be made runtime-toggleable in
dev/test mode (via `window.__FORCE_CLOUD__`). **Not feasible** —
`__DISTRIBUTION__` is a Vite `define` compile-time constant used for
dead-code elimination. Runtime override would break tree-shaking in
production.

Full investigation:
`research/architecture/cloud-runtime-toggle-investigation.md`

## What's included

### Playwright Config
- New `cloud` project alongside existing `chromium`
- Cloud project: `grep: /@cloud/` — only runs `@cloud` tagged tests
- Chromium project: `grepInvert: /@cloud/` — excludes cloud tests

### Build Script
- `npm run build:cloud` → `DISTRIBUTION=cloud vite build`

### Test Tagging Convention
```typescript
test('works in both', async () => { ... });
test('subscription button visible @cloud', async () => { ... });
test('install manager prompt @oss', async () => { ... });
```
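The tag-based partition between the two projects boils down to regex matching on test titles. A hedged sketch of that logic — `runsInProject` is hypothetical (Playwright applies `grep`/`grepInvert` internally), it just mirrors the split described above:

```typescript
// Illustration of how the two Playwright projects partition tests by tag.
const cloudTag = /@cloud/;

function runsInProject(testTitle: string, project: 'cloud' | 'chromium'): boolean {
  return project === 'cloud'
    ? cloudTag.test(testTitle) // cloud project: grep /@cloud/
    : !cloudTag.test(testTitle); // chromium project: grepInvert /@cloud/
}
```

Untagged and `@oss` tests run only in the chromium project; `@cloud` tests run only in the cloud project.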

### Example Tests
- 2 cloud-only tests validating cloud UI visibility

## NOT included (future work)
- CI workflow job for cloud tests (separate PR)
- Cloud project is opt-in — not run by default locally

## Unblocks
- Cloud-specific E2E tests for entire team
- TB-03 LoginButton, TB-04 SubscribeButton (@Kaili Yang)
- DLG-04 SignIn, DLG-06 CancelSubscription

Part of: Test Coverage Q2 Overhaul


---------

Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: Alexander Brown <drjkl@comfy.org>
2026-03-28 22:34:37 -07:00
Comfy Org PR Bot
7cbd61aaea 1.43.9 (#10693)
Patch version increment to 1.43.9

**Base branch:** `main`


---------

Co-authored-by: christian-byrne <72887196+christian-byrne@users.noreply.github.com>
Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Christian Byrne <cbyrne@comfy.org>
2026-03-28 22:18:45 -07:00
Christian Byrne
b09562a1bf docs: document Playwright fixture injection pattern for new helpers (#10653)
## Summary

Document the recommended pattern for adding new domain-specific test
helpers as Playwright fixtures via `base.extend()` instead of attaching
them to `ComfyPage`.

## Changes

- **What**: Added "Creating New Test Helpers" section to
`docs/guidance/playwright.md` with fixture extension example and rules

## Review Focus

Documentation-only change. Verify the example code matches the existing
pattern in `browser_tests/fixtures/ComfyPage.ts`.


---------

Co-authored-by: Alexander Brown <drjkl@comfy.org>
2026-03-28 21:37:02 -07:00
Christian Byrne
cc8ef09d28 docs: add arrange/act/assert pattern guidance for browser tests (#10657)
## Summary

Document the arrange/act/assert pattern for Playwright browser tests to
keep mock setup out of test bodies.

## Changes

- **What**: Added "Test Structure: Arrange/Act/Assert" section to
`docs/guidance/playwright.md` documenting that mock setup belongs in
`beforeEach`/fixtures, test bodies should only act and assert, and
`clearAllMocks` should never be called mid-test. Includes good/bad
examples.

## Review Focus

Docs-only change — no code impact.


---------

Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: Alexander Brown <drjkl@comfy.org>
2026-03-28 21:23:06 -07:00
Christian Byrne
64917e5b6c feat: add Playwright E2E agent check for reviewing browser tests (#10684)
Adds `.agents/checks/playwright-e2e.md` — a reviewer-focused agent check
for Playwright E2E tests.

**19 checks across 4 severity tiers:**

- **Major (1-7)** — Flakiness risks: `waitForTimeout`, missing
`nextFrame()`, unfocused keyboard, coordinate fragility, shared state,
server cleanup, double-click timing
- **Medium (8-11)** — Fixture/API misuse: reimplementing helpers, wrong
imports, programmatic graph building, missing `TestIds`
- **Minor (12-16)** — Convention violations: missing tags, `as any`,
unmasked screenshots, missing cleanup, debug helpers
- **Nitpick (17-19)** — Test design: screenshot-over-functional, large
workflows, Vue/LiteGraph mismatch

Hyperlinks to existing docs (`browser_tests/README.md`, `AGENTS.md`,
`docs/guidance/playwright.md`, writing skill) rather than duplicating
content. Scoped to reviewer concerns (not writer guidance).

2026-03-28 21:14:16 -07:00
Alexander Brown
0e7cab96b7 test: reorganize subgraph E2E tests into domain-organized directory (#10695)
## Summary

From the primordial entropy of 17 scattered spec files — a formless
sprawl of mixed concerns and inconsistent naming — emerges a clean,
domain-organized hierarchy. Order triumphs over chaos.

## Changes

- **What**: Reorganize all subgraph E2E tests from 17 flat files in
`browser_tests/tests/` into 10 domain-grouped files under
`browser_tests/tests/subgraph/`.

| File | Tests | Domain |
|------|-------|--------|
| `subgraphSlots` | 16 | I/O slot CRUD, rename, alignment, promoted slot position |
| `subgraphPromotion` | 22 | Auto-promote, visibility, reactivity, context menu, cleanup |
| `subgraphSerialization` | 16 | Hydration, round-trip, legacy formats, ID remapping |
| `subgraphNavigation` | 10 | Breadcrumb, viewport, hotkeys, progress state |
| `subgraphNested` | 9 | Configure order, duplicate names, pack values, stale proxies |
| `subgraphLifecycle` | 7 | Source removal cleanup, pseudo-preview lifecycle |
| `subgraphPromotionDom` | 6 | DOM widget persistence, cleanup, positioning |
| `subgraphCrud` | 5 | Create, delete, copy, unpack |
| `subgraphSearch` | 3 | Search aliases, description, persistence |
| `subgraphOperations` | 2 | Copy/paste inside, undo/redo inside |

Where once the monolith `subgraph.spec.ts` (856 lines) mixed slot CRUD
with hotkeys, DOM widgets with navigation, and copy/paste with undo/redo
— now each behavioral domain has its sovereign territory.

Where once `subgraph-rename-dialog.spec.ts`,
`subgraphInputSlotRename.spec.ts`, and
`subgraph-promoted-slot-position.spec.ts` scattered rename concerns
across three kingdoms — now they answer to one crown:
`subgraphSlots.spec.ts`.

Where once `kebab-case` and `camelCase` warred for dominion — now a
single convention reigns.

All 96 test cases preserved. Zero test logic changes. Purely structural.

## Review Focus

- Verify no tests were lost in the consolidation
- Confirm import paths all resolve correctly at the new depth
(`../../fixtures/`)
- The `import.meta.dirname` asset path in `subgraphSlots.spec.ts` (slot
alignment test) updated for new directory depth


Co-authored-by: Amp <amp@ampcode.com>
2026-03-28 21:06:00 -07:00
Christian Byrne
e0d16b7ee9 docs: add Fixture Data & Schemas section to Playwright test guidance (#10642)
## Summary

Add a "Fixture Data & Schemas" section to `docs/guidance/playwright.md`
so agents reference existing Zod schemas and TypeScript types when
creating test fixture data.

## Changes

- **What**: New section listing key schema/type locations (`apiSchema`,
`nodeDefSchema`, `jobTypes`, `workflowSchema`, etc.) to keep test
fixtures in sync with production types.

## Review Focus

Documentation-only change; no runtime impact.

2026-03-28 19:03:06 -07:00
Christian Byrne
8eb1525171 feat: add assertHasItems and openFor to ContextMenu page object (#10659)
## Summary

Add composite assertion and scoped opening methods to the `ContextMenu`
Playwright page object.

## Changes

- **What**: Added `assertHasItems(items: string[])` using
`expect.soft()` per item, and `openFor(locator: Locator)` which
right-clicks and waits for menu visibility. Fully backward-compatible.

## Review Focus

Both methods reuse existing locators (`primeVueMenu`, `litegraphMenu`,
`getByRole("menuitem")`). `openFor` uses `.or()` to handle both menu
types.
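The soft-assertion semantics can be pictured with a plain failure collector — a sketch only (the real method uses Playwright's `expect.soft()` per item; `collectMissingItems` is a hypothetical stand-in):

```typescript
// Sketch of assertHasItems' soft-check semantics: verify every expected
// menu item and report all misses together, rather than failing on the
// first mismatch the way a hard assertion would.
function collectMissingItems(menuItems: string[], expected: string[]): string[] {
  const missing: string[] = [];
  for (const item of expected) {
    if (!menuItems.includes(item)) missing.push(item); // soft: keep checking
  }
  return missing; // empty array ⇒ every expected item is present
}
```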

2026-03-28 18:58:16 -07:00
Christian Byrne
48219109d3 [chore] Update Comfy Registry API types from comfy-api@2d2ea96 (#10690)
## Automated API Type Update

This PR updates the Comfy Registry API types from the latest comfy-api
OpenAPI specification.

- API commit: 2d2ea96
- Generated on: 2026-03-28T20:41:08Z

These types are automatically generated using openapi-typescript.

2026-03-28 17:31:55 -07:00
Alexander Brown
81e6282599 Chore: pnpm build ignores and version centralization (#10687)
## Summary

Just pnpm pieces. Centralize the pnpm version for corepack/actions.
Ignore builds from some recent deps.
2026-03-28 16:38:02 -07:00
Dante
b8480f889e feat: add Tag component from design system and rename SquareChip (#10650)
## Summary
- Add `Tag` component based on Figma design system with CVA variants
  - `square` (rounded-sm) and `rounded` (pill) shapes
  - `overlay` shape for tags on image thumbnails (pending Figma confirmation)
  - `default`, `unselected`, `selected` states matching Figma
  - `removable` prop with X close button and `remove` event
  - Icon slot support
- Rename `SquareChip` → `Tag` across all consumers
(WorkflowTemplateSelectorDialog, SampleModelSelector)
- Update all Storybook stories (Tag, Card, BaseModalLayout)
- Delete old `SquareChip.vue` and `SquareChip.stories.ts`
- Add E2E screenshot test for template card overlay tags

Foundation for migrating PrimeVue `Chip` and `Tag` components in
follow-up PRs.

## Test plan
- [x] Unit tests pass (5 tests: rendering, removable, icon slot)
- [x] E2E screenshot test: template cards with overlay tags
- [x] Typecheck passes
- [x] Lint passes
- [ ] Verify Tag stories render correctly in Storybook
- [ ] Verify WorkflowTemplateSelectorDialog tags display correctly
- [ ] Verify SampleModelSelector chips display correctly

## Follow-up work
- **PR 4** (#10673): Migrate PrimeVue `Chip` → custom `Tag`
(SearchFilterChip, NodeSearchItem, DownloadItem)
- **PR 5** (planned): Migrate PrimeVue `Tag` → custom `Tag` (~14 files)
2026-03-29 08:27:53 +09:00
Christian Byrne
b49ea9fabd feat: add getNodesByTitle and getNodeByTitleNth helpers to VueNodeHelpers (#10666)
## Summary

Add helpers for safely interacting with nodes that share the same title
without hitting Playwright strict mode.

## Changes

- **What**: Added `getNodesByTitle(title)` and `getNodeByTitleNth(title,
index)` to `VueNodeHelpers`. Updated `docs/guidance/playwright.md` with
a gotcha note about duplicate node names.

## Review Focus

These are purely additive helpers — no existing behavior changes.
`getNodesByTitle` returns all matching nodes (callers use `.nth()` to
pick), and `getNodeByTitleNth` is a convenience wrapper. The existing
`selectNodes(nodeIds)` by-ID method is unchanged.

2026-03-28 16:09:18 -07:00
Christian Byrne
8da4640a76 docs: add assertion best practices to Playwright guide (#10663)
## Summary

Document custom expect messages and soft assertions as Playwright best
practices.

## Changes

- **What**: Added "Assertion Best Practices" section to
`docs/guidance/playwright.md` covering custom messages, `expect.soft()`,
and guidelines for when to use each.

## Review Focus

Documentation-only change — no code impact.


---------

Co-authored-by: GitHub Action <action@github.com>
2026-03-28 16:00:07 -07:00
Christian Byrne
65f18d17af feat: ban useVirtualList from @vueuse/core via ESLint (#10643)
## Summary

Add ESLint `no-restricted-imports` rule to prevent usage of
`useVirtualList` from `@vueuse/core`.

## Changes

- **What**: New ESLint config block banning `useVirtualList` in
`**/*.{ts,vue}` files. The team standardized on TanStack Virtual (via
Reka UI virtualizer or `@tanstack/vue-virtual`) for all virtualization.
`useVirtualList` requires uniform item heights and is no longer desired.
This is a preventive ban — no existing usage exists.

## Review Focus

Straightforward lint rule addition following the existing
`no-restricted-imports` pattern in `eslint.config.ts`.


---------

Co-authored-by: Benjamin Lu <benjaminlu1107@gmail.com>
2026-03-28 15:36:41 -07:00
pythongosssss
54a00aac75 test/refactor: App mode - Refactor & Save As tests (#10680)
## Summary

Adds e2e Save As tests for #10679.
Refactors tests to remove getByX and other locators in tests to instead
be in fixtures.

## Changes

- **What**: 
- extract app mode fixtures
- add test ids where required
- add new tests

2026-03-28 15:02:23 -07:00
Alexander Brown
d2358c83e8 test: extract shared subgraph E2E test utilities (#10629)
## Summary

Extract repeated patterns from 12 subgraph Playwright spec files into
shared test utilities, reducing duplication by ~142 lines.

## Changes

- **What**: New shared helpers for common subgraph test operations:
- `SubgraphHelper`: `getSlotCount()`, `getSlotLabel()`, `removeSlot()`,
`findSubgraphNodeId()`
  - `NodeReference`: `delete()`
- `subgraphTestUtils`: `serializeAndReload()`,
`convertDefaultKSamplerToSubgraph()`, `expectWidgetBelowHeader()`,
`collectConsoleWarnings()`, `packAllInteriorNodes()`
- Replaced ~72 inline `page.evaluate` blocks and multi-line sequences
with single helper calls across 12 spec files

## Review Focus

- Behavioral equivalence: every replacement is a mechanical extraction
with no test logic changes
- API surface of new helpers: naming, parameter types, placement in
existing utility classes
- Whether any remaining inline patterns in the spec files would benefit
from further extraction


---------

Co-authored-by: Amp <amp@ampcode.com>
2026-03-28 21:12:26 +00:00
Christian Byrne
b2f848893a test: add perf test for viewport pan sweep GC churn (#10479)
## Summary

Adds a `@perf` test to establish a baseline for viewport panning GC
churn on large graphs.

## Changes

- **What**: New `large graph viewport pan sweep` perf test that pans
aggressively back and forth across a 245-node graph, forcing many nodes
to cross the viewport boundary. Measures style recalcs, forced layouts,
task duration, heap delta, and DOM node count.

## Review Focus

This is **PR 1 of 2** (perf-fix-with-proof pattern). The fix (viewport
culling) will follow in a separate PR once this baseline is established
on main. CI will then show the delta proving the improvement.

The test uses 120 steps out + 120 steps back at 8px/step = ~960px total
displacement, enough to sweep across a significant portion of the large
graph layout.

2026-03-28 13:54:26 -07:00
Christian Byrne
5c0e15f403 feat: add layout shell — BaseLayout, SiteNav, SiteFooter [2/3] (#10141)
## Summary
Adds the layout shell for the marketing site: SEO head, analytics, nav,
and footer.

## Changes (incremental from #10140)
- BaseLayout.astro: SEO meta (OG/Twitter), GTM (GTM-NP9JM6K7), Vercel
Analytics, ClientRouter, i18n
- SiteNav.vue: Fixed nav with logo, Enterprise/Gallery/About/Careers
links, COMFY CLOUD + COMFY HUB CTAs, mobile hamburger with ARIA
- SiteFooter.vue: Product/Resources/Company/Legal columns, social icons

## Stack (via Graphite)
- #10140 [1/3] Scaffold ← merge first
- **[2/3] Layout Shell** ← this PR
- #10142 [3/3] Homepage Sections

2026-03-28 13:29:16 -07:00
Christian Byrne
dc09eb60e4 fix: add deprecation warning for widget.inputEl on STRING multiline widgets (#9808)
## Summary

Add a deprecation warning when custom nodes access `widget.inputEl` on
STRING multiline widgets, directing them to use `widget.element`
instead.

## Changes

- **What**: Add a reusable `defineDeprecatedProperty` helper in
`feedback.ts` that creates an `Object.defineProperty` getter/setter proxy from a deprecated
property to its replacement, logging via the existing `warnDeprecated`
utility (deduplicates: warns once per unique message per session). Use
it to deprecate `widget.inputEl` → `widget.element`.

## Review Focus

- `defineDeprecatedProperty` is generic and can be reused for future
property deprecations across the codebase.
- `warnDeprecated` already handles deduplication via a `Set`, so heavy
access patterns (e.g. custom nodes reading `widget.inputEl` in tight
loops) won't spam.
- `enumerable: false` keeps the deprecated alias out of `Object.keys()`
/ `for...in` / `JSON.stringify`.
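A minimal sketch of how such a helper could work — assumed shape only, not the actual `feedback.ts` implementation:

```typescript
// Warn-once deduplication: one warning per unique message per session,
// so tight-loop access to a deprecated property can't spam the console.
const warnedMessages = new Set<string>();

function warnDeprecated(message: string): void {
  if (warnedMessages.has(message)) return;
  warnedMessages.add(message);
  console.warn(`[DEPRECATED] ${message}`);
}

// Alias a deprecated property to its replacement via a getter/setter pair.
function defineDeprecatedProperty<T extends object, K extends keyof T>(
  target: T,
  deprecatedName: string,
  replacement: K,
  message: string
): void {
  Object.defineProperty(target, deprecatedName, {
    enumerable: false, // keep the alias out of Object.keys() / JSON.stringify
    configurable: true,
    get: () => {
      warnDeprecated(message);
      return target[replacement];
    },
    set: (value: T[K]) => {
      warnDeprecated(message);
      target[replacement] = value;
    },
  });
}
```

Usage along the lines of `defineDeprecatedProperty(widget, 'inputEl', 'element', 'widget.inputEl is deprecated; use widget.element')`: reads and writes through `inputEl` forward to `element`, warning once.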

Fixes Comfy-Org/ComfyUI#12893

<!-- Pipeline-Ticket: 6b291ba2-694c-42d6-ac0c-fcbdcba9373a -->

---------

Co-authored-by: Dante <bunggl@naver.com>
2026-03-28 13:24:49 -07:00
Christian Byrne
30b17407db fix: use v-show for frequently toggled canvas overlay components (#9401)
## What
Replace `v-if` with `v-show` on SelectionRectangle and NodeTooltip
components.

## Why
Firefox profiler shows 687 Vue `insert` markers from mount/unmount
cycling during canvas interaction. These components toggle frequently
during drag and mouse move events.

## How
- **SelectionRectangle**: `v-if` → `v-show` (single element, safe to
keep in DOM)
- **NodeTooltip**: `v-if` → `v-show` + no-op guard on `hideTooltip()` to
skip redundant reactivity triggers

## Perf Impact
Expected reduction: ~687 Vue insert/remove operations per profiling
session

2026-03-28 13:23:20 -07:00
Alexander Brown
5b4ebf4d99 test: audit skipped tests — prune stale, re-enable stable, remove dead code (#10312)
## Summary

Audit all skipped/fixme tests: delete stale tests whose underlying
features were removed, re-enable tests that pass with minimal fixes, and
remove orphaned production code that only the deleted tests exercised.
Net result: **−2,350 lines** across 50 files.

## Changes

- **Pruned stale skipped tests** (entire files deleted):
- `LGraph.configure.test.ts`, `LGraph.constructor.test.ts` — tested
removed LGraph constructor paths
- `LGraphCanvas.ghostAutoPan.test.ts`,
`LGraphCanvas.linkDragAutoPan.test.ts`, `useAutoPan.test.ts`,
`useSlotLinkInteraction.autoPan.test.ts` — tested removed auto-pan
feature
- `useNodePointerInteractions.test.ts` — single skipped test for removed
callback
  - `ImageLightbox.test.ts` — component replaced by `MediaLightbox`
- `appModeWidgetRename.spec.ts` (E2E) — feature removed; helper
`AppModeHelper.ts` also deleted
- `domWidget.spec.ts`, `widget.spec.ts` (E2E) — tested removed widget
behavior

- **Removed orphaned production code** surfaced by test pruning:
- `useAutoPan.ts` — composable + 93 lines of auto-pan logic in
`LGraphCanvas.ts`
  - `ImageLightbox.vue` — replaced by `MediaLightbox`
- Auto-pan integration in `useSlotLinkInteraction.ts` and
`useNodeDrag.ts`
- Dead settings (`LinkSnapping.AutoPanSpeed`,
`LinkSnapping.AutoPanMargin`) in `coreSettings.ts` and
`useLitegraphSettings.ts`
- Unused subgraph methods (`SubgraphNode.getExposedInput`,
`SubgraphInput.getParentInput`)
- Dead i18n key, dead API schema field, dead fixture exports
(`dirtyTest`, `basicSerialisableGraph`)
  - Dead test utility `litegraphTestUtils.ts`

- **Re-enabled skipped tests with minimal fixes**:
  - `useBrowserTabTitle.test.ts` — removed skip, test passes as-is
- `eventUtils.test.ts` — replaced MSW dependency with direct `fetch`
mock
- `SubscriptionPanel.test.ts` — stabilized button selectors,
timezone-safe date assertion
- `LinkConnector.test.ts` — removed stale describe blocks, kept passing
suite
- `widgetUtil.test.ts` — removed skipped tests for deleted functionality
- `comfyManagerStore.test.ts` — removed skipped `isPackInstalling` /
`action buttons` / `loading states` blocks

- **Re-enabled then re-skipped 3 flaky E2E tests** (fail in CI for
pre-existing reasons):
- `browserTabTitle.spec.ts` — canvas click timeout (element not visible)
  - `groupNode.spec.ts` — screenshot diff (stale golden image)
  - `nodeSearchBox.spec.ts` — `p-dialog-mask` intercepts pointer events

- **Simplified production code** alongside test cleanup:
- `useNodeDrag.ts` — removed auto-pan integration, simplified from
170→100 lines
- `DropZone.vue` — refactored URL-drop handling, removed unused code
path
- `ToInputFromIoNodeLink.ts`, `SubgraphInputEventMap.ts` — removed dead
subgraph wiring

- **Dependencies**: none
- **Breaking**: none (all removed code was internal/unused)

## Review Focus

- Confirm deleted production code (`useAutoPan`, `ImageLightbox`,
subgraph methods) has no remaining callers
- Validate that simplified `useNodeDrag.ts` preserves drag behavior
without auto-pan
- Check that re-skipped E2E tests have clear skip reasons for future
triage

## Screenshots (if applicable)

N/A

---------

Co-authored-by: Amp <amp@ampcode.com>
Co-authored-by: github-actions <github-actions@github.com>
2026-03-28 13:08:52 -07:00
pythongosssss
6836419e96 fix: App mode - Save as not using correct extension or persisting mode on change (#10679)
## Summary

With a previously saved workflow, selecting "Save as" in app mode would
not correctly change the file extension to the chosen mode, and would
require an additional save after to persist the actual mode change.

Reproduction:
- Build app
- Save as worklow X, app mode
- Select Save as from builder footer [Save | v] chevron button
- Select node graph
- Save
- Check workflow on disk - it's still called X.app.json and doesn't have
linearMode: false <-- bug

## Changes

- **What**: 
- pass isApp to save workflow
- ensure active graph & initialMode are correctly set when calling
saveAs BEFORE the actual saveWorkflow call
- add linearMode to workflowSchema to prevent casts
- tests

## Review Focus
e2e tests coming in a follow up PR along with some refactoring of the
browser tests (left this PR focused to the actual fix)

2026-03-28 12:08:35 -07:00
Christian Byrne
4c59a5e424 [chore] Update Ingest API types from cloud@0125ed6 (#10677)
## Automated Ingest API Type Update

This PR updates the Ingest API TypeScript types and Zod schemas from the
latest cloud OpenAPI specification.

- Cloud commit: 0125ed6
- Generated using @hey-api/openapi-ts with Zod plugin

These types cover cloud-only endpoints (workspaces, billing, secrets,
assets, tasks, etc.).
Overlapping endpoints shared with the local ComfyUI Python backend are
excluded.


---------

Co-authored-by: dante01yoon <6510430+dante01yoon@users.noreply.github.com>
Co-authored-by: GitHub Action <action@github.com>
2026-03-28 22:51:04 +09:00
Dante
82242f1b00 refactor: add Badge component and fix twMerge font-size detection (#10580)
## Summary
- Rename `text-xxxs`/`text-xxs` to `text-3xs`/`text-2xs` in design
system CSS — fixes `tailwind-merge` incorrectly classifying custom
font-size utilities as color classes, which clobbered text color
- Add `Badge` component with updated severity colors matching Figma
design (white text on colored backgrounds)
- Add Badge stories under `Components/Badges/Badge`
- Add unit tests including twMerge regression coverage

Split from #10438 per review feedback — this PR contains the
foundational Badge component; migration of consumers follows in a
separate PR.

## Test plan
- [x] Unit tests pass (`Badge.test.ts` — 12 tests)
- [x] Typecheck passes
- [x] Lint passes
- [ ] Verify Badge stories render correctly in Storybook
- [ ] Verify existing components using `text-2xs`/`text-3xs` render
unchanged

Fixes #10438 (partial)

2026-03-27 19:23:59 -07:00
Comfy Org PR Bot
f9c334092c 1.43.8 (#10635)
Patch version increment to 1.43.8

**Base branch:** `main`


---------

Co-authored-by: christian-byrne <72887196+christian-byrne@users.noreply.github.com>
Co-authored-by: github-actions <github-actions@github.com>
2026-03-27 19:19:07 -07:00
Christian Byrne
04aee0308b feat: add SHA-256 hashed email to GTM dataLayer for sign_up/login events (#10591)
## Summary

Adds SHA-256 hashed user email to GTM dataLayer `sign_up` and `login`
events to improve Meta/LinkedIn Conversions API (CAPI) match rate via
Stape server-side tracking.

## Privacy

- Email is SHA-256 hashed client-side before being pushed to dataLayer —
the raw email never enters the analytics pipeline.
- Email is normalized (trimmed + lowercased) before hashing per
Google/Meta requirements.
- If email is absent (e.g., GitHub OAuth without public email), no
`user_data` entry is pushed.
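The normalize-then-hash flow can be sketched as follows (assumed shape, not the PR's actual code; in the browser this would use `crypto.subtle.digest` rather than `node:crypto`):

```typescript
import { createHash } from 'node:crypto';

// Normalize (trim + lowercase) before hashing, per Google/Meta matching
// rules, so casing/whitespace variants of the same address hash identically.
// Only the hex digest would ever be pushed to the dataLayer.
function hashEmail(email: string): string {
  const normalized = email.trim().toLowerCase();
  return createHash('sha256').update(normalized, 'utf8').digest('hex');
}
```

`hashEmail(' Foo@Example.com ')` and `hashEmail('foo@example.com')` yield the same 64-character hex digest, which is what keeps the CAPI match rate up.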

## Testing

2026-03-27 19:18:05 -07:00
Dante
caa6f89436 test(assets): add property-based tests for asset utility functions (#10619)
## Summary

Add property-based tests (using `fast-check`) for asset-related pure
utility functions, complementing existing example-based unit tests with
algebraic invariant checks across thousands of randomized inputs.

Fixes #10617

## Changes

- **What**: 4 new `*.property.test.ts` files covering
`assetFilterUtils`, `assetSortUtils`, `useAssetSelection`, and
`useOutputStacks` — 32 property-based tests total

## Why property-based testing (fast-check)?

### Gap in existing tests

The existing example-based unit tests (53 tests across 3 files) verify
behavior for **hand-picked inputs** — specific category names, known
sort orderings, fixed asset lists. This leaves two blind spots:

1. **Edge-case discovery**: Example tests only cover cases the author
anticipates. Property tests generate hundreds of randomized inputs per
run, probing boundaries the author didn't consider (e.g., empty strings,
single-char names, deeply nested tag paths, assets with `undefined`
metadata fields).

2. **Algebraic invariants**: Certain guarantees should hold for **all**
inputs, not just the handful tested. For example:
- "Filtering always produces a subset" — impossible to violate with 5
examples, easy to violate in production with unexpected metadata shapes
- "Sorting is idempotent" — an unstable sort bug would only surface with
specific duplicate patterns
- "Reconciled selection IDs are always within visible assets" — a
set-intersection bug might only appear with specific overlap patterns
between selection and visible sets

3. **No test coverage for `useOutputStacks`**: The composable had zero
tests before this PR.
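Two of the invariants above can be sketched without the `fast-check` dependency — plain `Math.random` stands in for `fc.Arbitrary` generators here, so this is only an illustration of the idea, not the PR's test code:

```typescript
// Randomized check of two invariants called out above:
//   1. sorting is idempotent: sort(sort(xs)) = sort(xs)
//   2. filtering always produces a subset of the input
function checkInvariants(runs: number): boolean {
  const byValue = (a: number, b: number) => a - b;
  for (let i = 0; i < runs; i++) {
    const input = Array.from({ length: Math.floor(Math.random() * 50) }, () =>
      Math.floor(Math.random() * 100)
    );

    const once = [...input].sort(byValue);
    const twice = [...once].sort(byValue);
    if (JSON.stringify(once) !== JSON.stringify(twice)) return false; // not idempotent

    const filtered = input.filter((x) => x % 2 === 0);
    if (!filtered.every((x) => input.includes(x))) return false; // not a subset
  }
  return true;
}
```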

### What these tests verify (invariant catalog)

| Module | # Properties | Key invariants |
|--------|-------------|----------------|
| `assetFilterUtils` | 10 | Filter result ⊆ input; `"all"` is identity; ownership partitions into disjoint my/public; empty constraint is identity |
| `assetSortUtils` | 8 | Never mutates input; output is permutation of input; idempotent (sort∘sort = sort); adjacent pairs satisfy comparator; `"default"` preserves order |
| `useAssetSelection` | 7 | After reconcile: selected ⊆ visible; reconcile never adds new IDs; superset preserves all; empty visible clears; `getOutputCount` ≥ 1; `getTotalOutputCount` ≥ len(assets) |
| `useOutputStacks` | 7 | Collapsed count = input count; items reference input assets; unique keys; selectableAssets length = assetItems length; no collapsed child flags; reactive ref updates |

### Quantitative impact

Each property runs 100 iterations by default → **3,200 randomized inputs
per test run** vs 53 hand-picked examples in existing tests.

**Coverage delta** (v8, measured against target modules):

| Module | Metric | Before (53 tests) | After (+32 property) | Delta |
|--------|--------|-------------------|---------------------|-------|
| `useAssetSelection.ts` | Branch | 76.92% | 94.87% | **+17.95pp** |
| `useAssetSelection.ts` | Stmts | 82.50% | 90.00% | **+7.50pp** |
| `useAssetSelection.ts` | Lines | 81.69% | 88.73% | **+7.04pp** |
| `useOutputStacks.ts` | Stmts | 0% | 37.50% | **+37.50pp** (new) |
| `useOutputStacks.ts` | Funcs | 0% | 75.00% | **+75.00pp** (new) |
| `assetFilterUtils.ts` | All | 97.5%+ | 97.5%+ | maintained |
| `assetSortUtils.ts` | All | 100% | 100% | maintained |

### Prior art

Follows the established pattern from
`src/platform/workflow/persistence/base/draftCacheV2.property.test.ts`.

## Review Focus

- Are the chosen invariants correct and meaningful (not just
change-detector tests)?
- Are the `fc.Arbitrary` generators representative of real-world asset
data shapes?

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10619-test-assets-add-property-based-tests-for-asset-utility-functions-3306d73d3650816985ebcd611bbe0837)
by [Unito](https://www.unito.io)
2026-03-28 10:26:55 +09:00
Dante
c4d0b3c97a feat: fetch publish tag suggestions from hub labels API (#10497)
<img width="1305" height="730" alt="Screenshot 2026-03-28, 10:17:30 AM"
src="https://github.com/user-attachments/assets/316fcb72-e749-40da-b29f-05af91f30610"
/>

## Summary
- Replace hardcoded `COMFY_HUB_TAG_OPTIONS` with dynamic fetch from `GET
/hub/labels?type=tag`
- Falls back to the existing static tag list when the API call fails
- Adds `zHubLabelListResponse` Zod schema and `fetchTagLabels` service
method
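A dependency-free sketch of the fetch-with-fallback pattern (the real service validates the response with the `zHubLabelListResponse` Zod schema; the fallback list and response shape here are illustrative):

```typescript
// Hypothetical static fallback list standing in for COMFY_HUB_TAG_OPTIONS.
const FALLBACK_TAG_OPTIONS = ['portrait', 'landscape', 'anime']

type Fetcher = (url: string) => Promise<{ ok: boolean; json(): Promise<unknown> }>

async function fetchTagLabels(fetcher: Fetcher): Promise<string[]> {
  try {
    const res = await fetcher('/hub/labels?type=tag')
    if (!res.ok) return FALLBACK_TAG_OPTIONS
    const body = await res.json()
    // Minimal structural validation standing in for the Zod schema
    if (Array.isArray(body) && body.every((l) => typeof l === 'string')) {
      return body
    }
    return FALLBACK_TAG_OPTIONS
  } catch {
    // Network failure or missing hub API → keep the static list
    return FALLBACK_TAG_OPTIONS
  }
}
```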

## Test plan
- [ ] Open publish wizard → verify tag suggestions load from API
- [ ] Disconnect network / use env without hub API → verify hardcoded
fallback tags appear
- [ ] Select and deselect tags → verify behavior unchanged
- [ ] Unit tests pass (`pnpm vitest run` on affected files)

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10497-feat-fetch-publish-tag-suggestions-from-hub-labels-API-32e6d73d3650815fb113cf591030d4e8)
by [Unito](https://www.unito.io)
2026-03-28 10:19:21 +09:00
Terry Jia
3eb7c29ea4 test: add image compare widget basic e2e tests (#10597)
## Summary
test: add image compare widget basic e2e tests

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10597-test-add-image-compare-widget-basic-e2e-tests-3306d73d365081699125e86b6caa7188)
by [Unito](https://www.unito.io)

---------

Co-authored-by: github-actions <github-actions@github.com>
2026-03-27 18:15:11 -07:00
Christian Byrne
cc2cb7e89f [chore] Update Ingest API types from cloud@d4d0319 (#10625)
## Automated Ingest API Type Update

This PR updates the Ingest API TypeScript types and Zod schemas from the
latest cloud OpenAPI specification.

- Cloud commit: d4d0319
- Generated using @hey-api/openapi-ts with Zod plugin

These types cover cloud-only endpoints (workspaces, billing, secrets,
assets, tasks, etc.).
Overlapping endpoints shared with the local ComfyUI Python backend are
excluded.

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10625-chore-Update-Ingest-API-types-from-cloud-d4d0319-3306d73d365081509b1cc5cc9727e4f4)
by [Unito](https://www.unito.io)

---------

Co-authored-by: MillerMedia <7741082+MillerMedia@users.noreply.github.com>
Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: Alexander Brown <drjkl@comfy.org>
2026-03-27 18:13:29 -07:00
Arthur R Longbottom
d2f4d41960 test: make SubscriptionPanel refill date test timezone-agnostic (#10618)
## Summary

Fix timezone-dependent test failure in SubscriptionPanel and add a local
CI script.

## Changes

- **What**: The `renders refill date with literal slashes` test
hardcoded `12/31/24` but the component renders using local timezone
`Date` methods. In UTC-negative timezones, `2024-12-31T00:00:00Z`
renders as Dec 30. Now computes the expected string the same way the
component does.
- **What**: Added `pnpm test:ci:local` script
(`scripts/test-ci-local.sh`) that builds the frontend, starts a ComfyUI
backend with `--multi-user --front-end-root dist`, runs vitest +
Playwright, then cleans up. One command for full local CI.
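The timezone fix follows a general pattern: derive the expected string from the same `Date` with the same local-time methods the component uses, rather than hardcoding a literal. A sketch (the component's actual formatting code may differ):

```typescript
// The refill date as a backend might send it (midnight UTC)
const refillDate = new Date('2024-12-31T00:00:00Z')

// WRONG: asserting the literal '12/31/24' fails in UTC-negative timezones,
// where local Date methods report Dec 30 for this instant.

// RIGHT: compute the expectation with the same local-time methods the
// component (hypothetically) uses, so test and component always agree.
function formatRefillDate(d: Date): string {
  const mm = d.getMonth() + 1
  const dd = d.getDate()
  const yy = String(d.getFullYear()).slice(-2)
  return `${mm}/${dd}/${yy}`
}

const expected = formatRefillDate(refillDate)
// The assertion now checks shape + agreement, not a timezone-dependent literal
if (!/^\d{1,2}\/\d{1,2}\/\d{2}$/.test(expected)) throw new Error('bad format')
```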

## Review Focus

This is a test-only change — no production code modified. The
SubscriptionPanel component itself is unchanged; only the test assertion
is made timezone-agnostic.

## E2E Regression Test

Not applicable — this PR fixes a unit test assertion, not a production
bug. No user-facing behavior changed.
2026-03-27 17:31:25 -07:00
Terry Jia
070a5f59fe add basic mask editor tests (#10574)
## Summary
add basic mask editor tests

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10574-add-basic-mask-editor-tests-32f6d73d36508170b8b2c684be56cd26)
by [Unito](https://www.unito.io)
2026-03-27 16:04:36 -04:00
pythongosssss
7864e780e7 feat: App mode - Rework save flow (#10439)
## Summary

Users were finding the final step of the builder flow
confusing/misleading, with the "choose default mode" not actually saving
the workflow and people losing changes. This updates it to remove
"save"/"set default" as a step in the builder, and changes it to a
distinct action.

## Changes

- **What**: 
- add mode selection tab on footer toolbar
- extract reusable radio group component
- remove setting default mode dialog
- add save/save as/saved dialogs

## Screenshots (if applicable)


https://github.com/user-attachments/assets/c7439c2e-a917-4f2b-b176-f8bb8c10026d

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10439-feat-App-mode-Rework-save-flow-32d6d73d3650814781b6c7bbea685a97)
by [Unito](https://www.unito.io)
2026-03-27 12:53:09 -07:00
Alexander Brown
db1257fdb3 test: migrate 8 hard-case component tests from VTU to VTL (Phase 3) (#10493)
## Summary

Phase 3 of the VTL migration: migrate 8 hard-case component tests from
@vue/test-utils to @testing-library/vue (68 tests).

Stacked on #10490.

## Changes

- **What**: Migrate SignInForm, CurrentUserButton, NodeSearchBoxPopover,
BaseThumbnail, JobAssetsList, SelectionToolbox, QueueOverlayExpanded,
PackVersionSelectorPopover from VTU to VTL
- **`wrapper.vm` elimination**: 13 instances across 4 files (5 in
SignInForm, 3 in CurrentUserButton, 3 in PackVersionSelectorPopover, 2
in BaseThumbnail) replaced with user interactions or removed
- **`vm.$emit()` on stubs**: Interactive stubs with `setup(_, { emit })`
expose buttons or closure-based emit functions (QueueOverlayExpanded,
NodeSearchBoxPopover, JobAssetsList)
- **Removed**: 6 change-detector/redundant tests, 3 `@ts-expect-error`
annotations, `PackVersionSelectorVM` interface, `getVM` helper
- **BaseThumbnail**: Removed `useEventListener` mock — real event
handler attaches, `fireEvent.error(img)` triggers error state

## Review Focus

- Interactive stub patterns: `JobAssetsListStub` and `NodeSearchBoxStub`
use closure-based emit functions to trigger parent event handlers
without `vm.$emit`
- SignInForm form submission test fills PrimeVue Form fields via
`userEvent.type` and submits via button click (replaces `vm.onSubmit()`
direct call)
- CurrentUserButton Popover stub tracks open/close state reactively
- JobAssetsList: file-level `eslint-disable` for
`no-container`/`no-node-access`/`prefer-user-event` since stubs lack
ARIA roles and hover tests need `fireEvent`

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10493-test-migrate-8-hard-case-component-tests-from-VTU-to-VTL-Phase-3-32e6d73d365081f88097df634606d7e3)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Amp <amp@ampcode.com>
2026-03-27 12:37:09 -07:00
Arthur R Longbottom
7e7e2d5647 fix: persist subgraph viewport across navigation and tab switches (#10247)
## Summary

Fix subgraph viewport (zoom + position) drifting when navigating in/out
of subgraphs and switching workflow tabs.

## Problem

Three root causes:
1. **First visit**: `restoreViewport()` silently returned on cache miss,
leaving canvas at stale position
2. **Cross-workflow leakage**: Cache keyed by bare `graphId` — two
workflows with the same subgraph or unsaved workflows shared cache
entries
3. **Stale save on tab switch**: `loadGraphData` and
`changeTracker.restore()` overwrite `canvas.ds` before the async watcher
could save the old viewport

## Solution

1. **Workflow-scoped cache keys**: `${path}#${instanceId}:${graphId}` —
WeakMap assigns unique IDs per workflow object, handling unsaved
workflows with identical paths
2. **`flush: 'sync'` on activeSubgraph watcher**: Fires immediately
during `setGraph()`, BEFORE `loadGraphData`/`changeTracker` can corrupt
`canvas.ds`
3. **Cache miss → rAF fitToBounds**: On first visit, computes bounds
from `graph._nodes` and calls `ds.fitToBounds()` after the browser has
rendered
4. **Workflow switch watcher** (`flush: 'sync'`): Pre-saves viewport
under old workflow identity, suppresses `onNavigated` saves during load
cycle

Key architectural insight: `setGraph()` never touches `canvas.ds`, but
`loadGraphData` and `changeTracker.restore()` both write to it. By using
`flush: 'sync'`, the save happens during `setGraph` (before the
overwrites).
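The workflow-scoped key scheme can be sketched as follows (names and types are illustrative; the real `subgraphNavigationStore` uses an in-memory LRU and different APIs):

```typescript
// Illustrative sketch of workflow-scoped viewport cache keys.
interface Workflow {
  path: string
}

let nextId = 0
const workflowIds = new WeakMap<Workflow, number>()

// Assign a stable per-object ID so two unsaved workflows with the same
// path still get distinct cache keys.
function instanceId(workflow: Workflow): number {
  let id = workflowIds.get(workflow)
  if (id === undefined) {
    id = nextId++
    workflowIds.set(workflow, id)
  }
  return id
}

function viewportCacheKey(workflow: Workflow, graphId: string): string {
  return `${workflow.path}#${instanceId(workflow)}:${graphId}`
}

const a: Workflow = { path: 'Unsaved Workflow' }
const b: Workflow = { path: 'Unsaved Workflow' } // same path, different object
// Keys differ even though paths collide, so viewports stay isolated
if (viewportCacheKey(a, 'g1') === viewportCacheKey(b, 'g1')) throw new Error('keys collide')
```

Because the map is a `WeakMap`, closed workflow objects can be garbage-collected without the key registry leaking.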

## Review Focus

- `subgraphNavigationStore.ts` — the three fixes and their interaction
- `flush: 'sync'` watchers — critical for correct save timing
- `suppressNavigatedSave` flag — prevents stale saves during async
workflow load

## Breaking Changes

None. Viewport cache is session-only (in-memory LRU). Existing workflows
unaffected.

## Demo Video of Fix


https://github.com/user-attachments/assets/71dd4107-a030-4e68-aa11-47fe00101b25

## Test plan

- [x] Unit: save/restore with workflow-scoped keys
- [x] Unit: cache miss doesn't mutate canvas synchronously
- [x] Unit: navigation integration (enter/exit preserves viewport)
- [x] E2E: first subgraph visit has visible nodes
- [x] Manual: enter subgraph → zoom/pan → exit → re-enter → viewport
restored
- [x] Manual: tab with subgraph → different tab → back → viewport
restored
- [x] Manual: two unsaved workflows → switch between → viewports
isolated

- Fixes #10246
- Related: #8173
<!-- QA_REPORT_SECTION -->
---
## 🔍 Automated QA Report

| | |
|---|---|
| **Status** |  Complete |
| **Report** |
[sno-qa-10247.comfy-qa.pages.dev](https://sno-qa-10247.comfy-qa.pages.dev/)
|
| **CI Run** | [View
workflow](https://github.com/Comfy-Org/ComfyUI_frontend/actions/runs/23373279990)
|

Before/after video recordings with **Behavior Changes** and **Timeline
Comparison** tables.
2026-03-27 19:32:57 +00:00
pythongosssss
dabfc6521e test: Add test to prevent regression of workflow corruption during graph loading (#10623)
## Summary

Adds regression test for
https://github.com/Comfy-Org/ComfyUI_frontend/pull/9531

## Changes

- **What**:  
- registers extension that triggers checkState during
afterConfigureGraph (at this point the workflow data and active graph
are not in sync), previously causing it to overwrite the workflow data
- switches between tabs
- ensures they are not corrupted

Line 35 can be uncommented to cause the test to fail by clearing the
isLoadingGraph flag

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10623-test-Add-test-to-prevent-regression-of-workflow-corruption-during-graph-loading-3306d73d3650815fbf02ef23dfdcddce)
by [Unito](https://www.unito.io)
2026-03-27 11:54:07 -07:00
Jin Yi
2f9431c6dd fix: stop Escape key propagation in Select components (#10397) 2026-03-27 18:50:04 +09:00
Christian Byrne
62979e3818 refactor: rename firebaseAuthStore to authStore with shared test fixtures (#10483)
## Summary

Rename `useFirebaseAuthStore` → `useAuthStore` and
`FirebaseAuthStoreError` → `AuthStoreError`. Introduce shared mock
factory (`authStoreMock.ts`) to replace 16 independent bespoke mocks.

## Changes

- **What**: Mechanical rename of store, composable, class, and store ID
(`firebaseAuth` → `auth`). Created
`src/stores/__tests__/authStoreMock.ts` — a shared mock factory with
reactive controls, used by all consuming test files. Migrated all 16
test files from ad-hoc mocks to the shared factory.
- **Files**: 62 files changed (rename propagation + new test infra)

## Review Focus

- Mock factory API design in `authStoreMock.ts` — covers all store
properties with reactive `controls` for per-test customization
- Self-test in `authStoreMock.test.ts` validates computed reactivity

Fixes #8219

## Stack

This is PR 1/5 in a stacked refactoring series:
1. **→ This PR**: Rename + shared test fixtures
2. #10484: Extract auth-routing from workspaceApi
3. #10485: Auth token priority tests
4. #10486: Decompose MembersPanelContent
5. #10487: Consolidate SubscriptionTier type

---------

Co-authored-by: Alexander Brown <drjkl@comfy.org>
2026-03-27 00:31:11 -07:00
Kelly Yang
6e249f2e05 fix: prevent canvas zoom when scrolling image history dropdown (#10550)
## Summary
 
Fix #10549 where using the mouse wheel over the image history dropdown
(e.g., in "Load Image" nodes) would trigger canvas zooming instead of
scrolling the list.

## Changes
Added `data-capture-wheel="true"` to the root container. This attribute
is used by the `TransformPane` to identify elements that should consume
wheel events.

## Screenshots
 
after


https://github.com/user-attachments/assets/8935a1ca-9053-4ef1-9ab8-237f43eabb35

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10550-fix-prevent-canvas-zoom-when-scrolling-image-history-dropdown-32f6d73d365081c4ad09f763481ef8c2)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Terry Jia <terryjia88@gmail.com>
Co-authored-by: Alexander Brown <drjkl@comfy.org>
2026-03-27 06:43:24 +00:00
Jin Yi
a1c46d7086 fix: replace hardcoded font-size 10px/11px with text-2xs Tailwind token (#10604)
## Summary

Replace all hardcoded `text-[10px]`, `text-[11px]`, and `font-size:
10px` with a new `text-2xs` Tailwind theme token (0.625rem / 10px).

## Changes

- **What**: Add `--text-2xs` custom theme token to design system CSS and
replace 14 hardcoded font-size occurrences across 12 files with
`text-2xs`.

## Review Focus

Consistent use of design tokens instead of arbitrary values for small
font sizes.

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10604-fix-replace-hardcoded-font-size-10px-11px-with-text-2xs-Tailwind-token-3306d73d365081dfa1ebdc278e0a20b7)
by [Unito](https://www.unito.io)

Co-authored-by: Amp <amp@ampcode.com>
2026-03-26 23:35:05 -07:00
Benjamin Lu
dd89b74ca5 fix: wait for settings before cloud desktop promo (#10526)
this fixes two issues, setting store race did not await load, and it
only cleared shown on clear not on show

## Summary

Wait for settings to load before deciding whether to show the one-time
macOS desktop cloud promo so the persisted dismissal state is respected
on launch.

## Changes

- **What**: Await `settingStore.load()` before checking
`Comfy.Desktop.CloudNotificationShown`, keep the promo gated to macOS
desktop, and persist the shown flag before awaiting dialog close.
- **Dependencies**: None

## Review Focus

- Launch-time settings race for `Comfy.Desktop.CloudNotificationShown`
- One-time modal behavior if the app closes before the dialog is
dismissed
- Regression coverage in `src/App.test.ts`

## Screenshots (if applicable)

- N/A

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10526-fix-wait-for-settings-before-cloud-desktop-promo-32e6d73d365081939fc3ca5b4346b873)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Alexander Brown <drjkl@comfy.org>
2026-03-26 22:31:31 -07:00
Christian Byrne
e809d74192 perf: disable Sentry event target wrapping to reduce DOM event overhead (#10472)
## Summary

Disable Sentry `browserApiErrorsIntegration` event target wrapping for
cloud builds to eliminate 231.7ms of `sentryWrapped` overhead during
canvas interaction.

## Changes

- **What**: Configure `browserApiErrorsIntegration({ eventTarget: false
})` in the cloud Sentry init path. This prevents Sentry from wrapping
every `addEventListener` callback in try/catch, which was the #1 hot
function during multi-cluster panning (100 profiling samples). Error
capturing still works via `window.onerror` and `unhandledrejection`.

## Review Focus

- Confirm that disabling event target wrapping is acceptable for cloud
error monitoring — Sentry still captures unhandled errors, just not
errors thrown inside individual event handler callbacks.
- Non-cloud builds already had `integrations: []` /
`defaultIntegrations: false`, so no change there.

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10472-perf-disable-Sentry-event-target-wrapping-to-reduce-DOM-event-overhead-32d6d73d365081cdb455e47aee34dcc6)
by [Unito](https://www.unito.io)
2026-03-26 22:06:47 -07:00
Kelly Yang
47c9a027a7 fix: use try/finally for loading state in TeamWorkspacesDialogContent… (#10601)
… onCreate

## Summary
Wrap the function body in try/finally in the
`src/platform/workspace/components/dialogs/TeamWorkspacesDialogContent.vue`
to avoid staying in a permanent loading state if an unexpected error
happens.

Fix #10458 

```typescript
async function onCreate() {
  if (!isValidName.value || loading.value) return
  loading.value = true

  try {
    const name = workspaceName.value.trim()
    try {
      await workspaceStore.createWorkspace(name)
    } catch (error) {
      toast.add({
        severity: 'error',
        summary: t('workspacePanel.toast.failedToCreateWorkspace'),
        detail: error instanceof Error ? error.message : t('g.unknownError')
      })
      return
    }
    try {
      await onConfirm?.(name)
    } catch (error) {
      toast.add({
        severity: 'error',
        summary: t('teamWorkspacesDialog.confirmCallbackFailed'),
        detail: error instanceof Error ? error.message : t('g.unknownError')
      })
    }
    dialogStore.closeDialog({ key: DIALOG_KEY })
  } finally {
    loading.value = false
  }
}
```

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10601-fix-use-try-finally-for-loading-state-in-TeamWorkspacesDialogContent-3306d73d365081dcb97bf205d7be9ca7)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-26 21:38:11 -07:00
Christian Byrne
6fbc5723bd fix: hide inaccurate resolution subtitle on cloud asset cards (#10602)
## Summary

Hide image resolution subtitle on cloud asset cards because thumbnails
are downscaled to max 512px, causing `naturalWidth`/`naturalHeight` to
report incorrect dimensions.

## Changes

- **What**: Gate the dimension display in `MediaAssetCard.vue` behind
`!isCloud` so resolution is only shown on local (where full-res images
are loaded). Added TODO referencing #10590 for re-enabling once
`/assets` API returns original dimensions in metadata.

## Review Focus

One-line conditional change — the `isCloud` import from
`@/platform/distribution/types` follows the established pattern used
across the repo.

Fixes #10590

## Screenshots (if applicable)

N/A — this removes a subtitle that was displaying wrong values (e.g.,
showing 512x512 for a 1024x1024 image on cloud).

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10602-fix-hide-inaccurate-resolution-subtitle-on-cloud-asset-cards-3306d73d36508186bd3ad704bd83bf14)
by [Unito](https://www.unito.io)
2026-03-27 13:32:11 +09:00
Benjamin Lu
db6e5062f2 test: add assets sidebar empty-state coverage (#10595)
## Summary

Add the first user-centric Playwright coverage for the assets sidebar
empty state and introduce a small assets-specific test helper/page
object surface.

## Changes

- **What**: add `AssetsSidebarTab`, add `AssetsHelper`, and cover
generated/imported empty states in a dedicated browser spec

## Review Focus

This is intentionally a small first slice for assets-sidebar coverage.
The new helper still mocks the HTTP boundary in Playwright for now
because current OSS job history and input files are global backend
state, which makes true backend-seeded parallel coverage a separate
backend change.

Long-term recommendation: add backend-owned, user-scoped test seeding
for jobs/history and input assets so browser tests can hit the real
routes on a shared backend. Follow-up: COM-307.

Fixes COM-306

## Screenshots (if applicable)

Not applicable.

## Validation

- `pnpm typecheck:browser`
- `pnpm exec playwright test browser_tests/tests/sidebar/assets.spec.ts
--project=chromium` against an isolated preview env

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10595-test-add-assets-sidebar-empty-state-coverage-3306d73d365081d1b34fdd146ae6c5c6)
by [Unito](https://www.unito.io)
2026-03-26 21:19:38 -07:00
Terry Jia
6da5d26980 test: add painter widget e2e tests (#10599)
## Summary
add painter widget e2e tests

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10599-test-add-painter-widget-e2e-tests-3306d73d365081899b3ec3e1d7c6f57c)
by [Unito](https://www.unito.io)

---------

Co-authored-by: github-actions <github-actions@github.com>
2026-03-27 00:19:17 -04:00
Johnpaul Chiwetelu
9b6b762a97 test: add browser tests for zoom controls (#10589)
## Summary
- Add E2E Playwright tests for zoom controls: default zoom level, zoom
to fit, zoom out with clamping at 10% minimum, manual percentage input,
and toggle visibility
- Add `data-testid` attributes to `ZoomControlsModal.vue` for stable
test selectors
- Add new TestId entries to `selectors.ts`

## Test plan
- [x] All 6 new tests pass locally
- [x] Existing minimap and graphCanvasMenu tests still pass
- [ ] CI passes

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10589-test-add-browser-tests-for-zoom-controls-3306d73d36508177ae19e16b3f62b8e7)
by [Unito](https://www.unito.io)
2026-03-27 05:18:46 +01:00
Kelly Yang
00c8c11288 fix: derive payment redirect URLs from getComfyPlatformBaseUrl() (#10600)
Replaces hardcoded `https://www.comfy.org/payment/...` URLs with
`getComfyPlatformBaseUrl()` so staging deploys no longer redirect users
to production after payment.

fix #10456
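A sketch of the derive-don't-hardcode pattern (the helper body and the staging host below are hypothetical; only the production `https://www.comfy.org/payment/...` URL comes from the PR):

```typescript
// Illustrative sketch; getComfyPlatformBaseUrl's real implementation differs.
function getComfyPlatformBaseUrl(env: { STAGING?: boolean } = {}): string {
  return env.STAGING ? 'https://staging.comfy.org' : 'https://www.comfy.org'
}

// Deriving the redirect from the base URL keeps staging users on staging
// after payment, instead of bouncing them to production.
function paymentRedirectUrl(path: string, env?: { STAGING?: boolean }): string {
  return `${getComfyPlatformBaseUrl(env)}/payment/${path}`
}
```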

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10600-fix-derive-payment-redirect-URLs-from-getComfyPlatformBaseUrl-3306d73d365081679ef4da840337bb81)
by [Unito](https://www.unito.io)
2026-03-26 20:51:07 -07:00
Comfy Org PR Bot
668f7e48e7 1.43.7 (#10583)
Patch version increment to 1.43.7

**Base branch:** `main`

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10583-1-43-7-3306d73d3650810f921bf97fc447e402)
by [Unito](https://www.unito.io)

Co-authored-by: christian-byrne <72887196+christian-byrne@users.noreply.github.com>
2026-03-26 19:56:32 -07:00
Alexander Brown
3de387429a test: migrate 11 interactive component tests from VTU to VTL (Phase 2) (#10490)
## Summary

Phase 2 of the VTL migration: migrate 11 interactive component tests
from @vue/test-utils to @testing-library/vue (69 tests).

Stacked on #10471.

## Changes

- **What**: Migrate BatchCountEdit, BypassButton, BuilderFooterToolbar,
ComfyActionbar, SidebarIcon, EditableText, UrlInput, SearchInput,
TagsInput, TreeExplorerTreeNode, ColorCustomizationSelector from VTU to
VTL
- **Pattern transforms**: `trigger('click')` → `userEvent.click()`,
`setValue()` → `userEvent.type()`, `findComponent().props()` →
`getByRole/getByText/getByTestId`, `emitted()` → callback props
- **Removed**: 4 `@ts-expect-error` annotations, 1 change-detector test
(SearchInput `vm.focus`)
- **PrimeVue**: `data-pc-name` selectors + `aria-pressed` for
SelectButton, container queries for ColorPicker/InputIcon

## Review Focus

- PrimeVue escape hatches in ColorCustomizationSelector
(SelectButton/ColorPicker lack standard ARIA roles)
- Teleport test in ComfyActionbar uses `container.querySelector`
intentionally (scoped to teleport target)
- SearchInput debounce tests use `fireEvent.update` instead of
`userEvent.type` due to fake timer interaction
- EditableText escape-then-blur test simplified:
`userEvent.keyboard('{Escape}')` already triggers blur internally

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10490-test-migrate-11-interactive-component-tests-from-VTU-to-VTL-Phase-2-32e6d73d3650817ca40fd61395499e3f)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Amp <amp@ampcode.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
2026-03-26 19:46:31 -07:00
Alexander Brown
08b1199265 test: migrate 13 component tests from VTU to VTL (Phase 1) (#10471)
## Summary

Migrate 13 component test files from @vue/test-utils to
@testing-library/vue as Phase 1 of incremental VTL adoption.

## Changes

- **What**: Rewrite 13 test files (88 tests) to use `render`/`screen`
queries, `userEvent` interactions, and `jest-dom` assertions. Add
`data-testid` attributes to 6 components for lint-clean icon/element
queries. Delete unused `src/utils/test-utils.ts`.
- **Dependencies**: `@testing-library/vue`,
`@testing-library/user-event`, `@testing-library/jest-dom` (installed in
Phase 0)

## Review Focus

- `data-testid` additions to component templates are minimal and
non-behavioral
- PrimeVue passthrough (`pt`) usage in UserAvatar.vue for icon testid
- 2 targeted `eslint-disable` in FormRadioGroup.test.ts where PrimeVue
places `aria-describedby` on wrapper div, not input

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10471-test-migrate-13-component-tests-from-VTU-to-VTL-Phase-1-32d6d73d36508159a33ffa285afb4c38)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Amp <amp@ampcode.com>
2026-03-26 18:15:11 -07:00
Dante
98a9facc7d tool: add CSS containment audit skill and Playwright diagnostic (#10026)
## Summary

- Add a Playwright-based diagnostic tool (`@audit` tagged) that
automatically detects DOM elements where CSS `contain: layout style`
would improve rendering performance
- Extend `ComfyPage` fixture and `playwright.config.ts` to support
`@audit` tag (excluded from CI, perf infra enabled)
- Add `/contain-audit` skill definition documenting the workflow

## How it works

1. Loads the 245-node workflow in a real browser
2. Walks the DOM tree and scores every element by subtree size and
sizing constraints
3. For each high-scoring candidate, applies `contain: layout style` via
JS
4. Measures rendering performance (style recalcs, layouts, task
duration) before and after
5. Takes before/after screenshots to detect visual breakage
6. Outputs a ranked report to console
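Step 2's scoring pass can be sketched over a generic element tree (a simplification; the real diagnostic inspects live DOM nodes and computed styles, and its heuristics are assumptions here):

```typescript
// Simplified sketch of the audit's candidate-scoring idea.
interface ElementNode {
  tag: string
  hasFixedSize: boolean // explicit width/height → containment is safe & useful
  children: ElementNode[]
}

function subtreeSize(node: ElementNode): number {
  return 1 + node.children.reduce((sum, c) => sum + subtreeSize(c), 0)
}

// Large subtrees behind a fixed-size boundary are the best candidates for
// `contain: layout style`: the browser can skip their layout from outside.
function score(node: ElementNode): number {
  const descendants = subtreeSize(node) - 1
  return node.hasFixedSize ? descendants : Math.floor(descendants / 4)
}

function rankCandidates(root: ElementNode): { tag: string; score: number }[] {
  const out: { tag: string; score: number }[] = []
  const walk = (n: ElementNode) => {
    out.push({ tag: n.tag, score: score(n) })
    n.children.forEach(walk)
  }
  walk(root)
  return out.sort((a, b) => b.score - a.score)
}
```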

## Test plan

- [ ] `pnpm typecheck` passes
- [ ] `pnpm typecheck:browser` passes
- [ ] `pnpm lint` passes
- [ ] Existing Playwright tests unaffected (`@audit` excluded from CI
via `grepInvert`)
- [ ] Run `pnpm exec playwright test
browser_tests/tests/containAudit.spec.ts --project=chromium` locally
with dev server

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10026-tool-add-CSS-containment-audit-skill-and-Playwright-diagnostic-3256d73d365081b29470df164f798f7d)
by [Unito](https://www.unito.io)

---------

Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-27 10:08:41 +09:00
jaeone94
8b1cf594d1 fix: improve error overlay design and indicator placement (#10564)
## Summary

- Move error red border from TopMenuSection/ComfyActionbar to
ErrorOverlay
- Add error indicator (outline + StatusBadge dot) on right side panel
toggle button when errors are present, the panel/overlay are closed, and
the errors tab setting is enabled
- Replace technical group titles (e.g. "Missing Node Packs") with
user-friendly i18n messages in ErrorOverlay
- Dynamically change action button label based on single error type
(e.g. "Show missing nodes" instead of "See Errors")
- Remove unused `hasAnyError` prop from ComfyActionbar
- Fix `type="secondary"` → `variant="secondary"` on panel toggle button
- Pre-wire `missing_media` error type support for #10309
- Migrate ErrorOverlay E2E selectors from `getByText`/`getByRole` to
`data-testid`
- Update E2E screenshot snapshots affected by TopMenuSection error state
design changes

## Test plan

- [x] Trigger execution error → verify red border on ErrorOverlay, no
red border on TopMenuSection/ComfyActionbar
- [x] With errors and right side panel closed → verify red outline + dot
on panel toggle button
- [x] Open right side panel or error overlay → verify indicator
disappears
- [x] Disable `Comfy.RightSidePanel.ShowErrorsTab` → verify no indicator
even with errors
- [x] Load workflow with only missing nodes → verify "Show missing
nodes" button label and friendly message
- [x] Load workflow with only missing models → verify "Show missing
models" button label and count message
- [x] Load workflow with mixed errors → verify "See Errors" fallback
label
- [x] E2E: `pnpm test:browser:local -- --grep "Error overlay"`

## Screenshots
<img width="498" height="381" alt="Screenshot 2026-03-26 230252"
src="https://github.com/user-attachments/assets/034f0f3f-e6a1-4617-b8f6-cd4c145e3a47"
/>
<img width="550" height="303" alt="Screenshot 2026-03-26 230525"
src="https://github.com/user-attachments/assets/2958914b-0ff0-461b-a6ea-7f2811bf33c2"
/>
<img width="551" height="87" alt="Screenshot 2026-03-26 230318"
src="https://github.com/user-attachments/assets/396e9cb1-667e-44c4-83fe-ab113b313d16"
/>

---------

Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Dante <bunggl@naver.com>
2026-03-27 09:11:26 +09:00
Dante
ff263fced0 fix: make splitter state key position-aware to prevent shared panel widths (#9525)
## Summary

Fix right-side sidebar panels and left-side panels sharing the same
PrimeVue Splitter state key, causing them to incorrectly apply each
other's saved widths.

## Changes

- **What**: Make `sidebarStateKey` position-aware by including
`sidebarLocation` and offside panel visibility in the localStorage key

## Problem

When sidebar location is set to **right**, all panels (both the
right-side sidebar like Job History and left-side panels like Workflow
overview) share a single PrimeVue Splitter `state-key`
(`unified-sidebar`). PrimeVue persists panel widths to localStorage
using this key, so any resize on one side gets applied to the other.

### AS-IS (before fix)

The `sidebarStateKey` is computed without any awareness of panel
position:

```typescript
// Always returns 'unified-sidebar' (when unified width enabled)
// or the active tab id — regardless of sidebar location or offside panel state
const sidebarStateKey = computed(() => {
  return unifiedWidth.value
    ? 'unified-sidebar'
    : (activeSidebarTabId.value ?? 'default-sidebar')
})
```

This produces a **single localStorage key** for all layout
configurations. The result:

1. Set sidebar to **right**, open **Job History** → resize it smaller →
saved to `unified-sidebar`
2. Open **Workflow overview** (appears on the left as an offside panel)
→ loads the same `unified-sidebar` key → gets the Job History width
applied to a completely different panel position
3. Both panels open simultaneously share the same persisted width, even
though they are on opposite sides of the screen

This is exactly the behavior shown in the [issue
screenshots](https://github.com/Comfy-Org/ComfyUI_frontend/issues/9440):
pulling the Workflow overview smaller also changes Job History to that
same size, and vice versa.

### TO-BE (after fix)

The `sidebarStateKey` now includes `sidebarLocation` (`left`/`right`)
and whether the offside panel is visible:

```typescript
const sidebarTabKey = computed(() => {
  return unifiedWidth.value
    ? 'unified-sidebar'
    : (activeSidebarTabId.value ?? 'default-sidebar')
})

const sidebarStateKey = computed(() => {
  const base = sidebarTabKey.value
  const suffix = showOffsideSplitter.value ? '-with-offside' : ''
  return `${base}-${sidebarLocation.value}${suffix}`
})
```

This produces **distinct localStorage keys** per layout configuration:

| Layout | Key |
|--------|-----|
| Sidebar left, no offside | `unified-sidebar-left` |
| Sidebar left, right panel open | `unified-sidebar-left-with-offside` |
| Sidebar right, no offside | `unified-sidebar-right` |
| Sidebar right, left panel open | `unified-sidebar-right-with-offside` |

Each configuration now persists and restores its own panel sizes
independently, so resizing Job History on the right no longer affects
Workflow overview on the left.

## Review Focus

- The offside suffix (`-with-offside`) is necessary because the Splitter
transitions from a 2-panel layout (sidebar + center) to a 3-panel layout
(sidebar + center + offside) — these are fundamentally different panel
configurations and should not share persisted sizes.

Fixes #9440

## Screenshots (if applicable)

See issue for reproduction screenshots:
https://github.com/Comfy-Org/ComfyUI_frontend/issues/9440

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-27 08:31:56 +09:00
Alexander Brown
3e197b5c57 docs: ADR 0008 — Entity Component System (#10420)
## Summary

Architecture documentation proposing an Entity Component System for the
litegraph layer.

```mermaid
graph LR
    subgraph Today["Today: Spaghetti"]
        God["🍝 God Objects"]
        Circ["🔄 Circular Deps"]
        Mut["💥 Render Mutations"]
    end

    subgraph Tomorrow["Tomorrow: ECS"]
        ID["🏷️ Branded IDs"]
        Comp["📦 Components"]
        Sys["⚙️ Systems"]
        World["🌍 World"]
    end

    God -->|"decompose"| Comp
    Circ -->|"flatten"| ID
    Mut -->|"separate"| Sys
    Comp --> World
    ID --> World
    Sys -->|"query"| World
```

## Changes

- **What**: ADR 0008 + 4 architecture docs (no code changes)
- `docs/adr/0008-entity-component-system.md` — entity taxonomy, branded
IDs, component decomposition, migration strategy
- `docs/architecture/entity-interactions.md` — as-is Mermaid diagrams of
all entity relationships
- `docs/architecture/entity-problems.md` — structural problems with
file:line evidence
- `docs/architecture/ecs-target-architecture.md` — target architecture
diagrams
- `docs/architecture/proto-ecs-stores.md` — analysis of existing Pinia
stores as proto-ECS patterns

## Review Focus

- Does the entity taxonomy (Node, Link, Subgraph, Widget, Slot, Reroute,
Group) cover all cases?
- Are the component decompositions reasonable starting points?
- Is the migration strategy (bridge layer, incremental extraction)
feasible?
- Are there entity interactions or problems we missed?

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10420-docs-ADR-0008-Entity-Component-System-32d6d73d365081feb048d16a5231d350)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: Amp <amp@ampcode.com>
Co-authored-by: Christian Byrne <cbyrne@comfy.org>
2026-03-26 16:14:44 -07:00
Deep Mehta
9573074ea6 feat: add model-to-node mappings for 10 node packs (#10560)
## Summary
Add `MODEL_NODE_MAPPINGS` entries for 10 model directories missing UI
backlinks.

### New mappings
- `BiRefNet` → `LayerMask: LoadBiRefNetModelV2` (`version`)
- `EVF-SAM` → `LayerMask: EVFSAMUltra` (`model`)
- `florence2` → `DownloadAndLoadFlorence2Model` (`model`)
- `interpolation` → `DownloadAndLoadGIMMVFIModel` (`model`)
- `rmbg` → `RMBG` (`model`)
- `smol` → `LayerUtility: LoadSmolLM2Model` (`model`)
- `transparent-background` → `LayerMask: TransparentBackgroundUltra`
(`model`)
- `yolo` → `LayerMask: ObjectDetectorYOLO8` (`yolo_model`)
- `mediapipe` → `LivePortraitLoadMediaPipeCropper` (auto-load)
- `superprompt-v1` → `Superprompt` (auto-load)

These mappings ensure the "Use" button in the model browser correctly
creates the appropriate loader node.
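The PR body does not show the entry format, so as a hypothetical illustration only, a mapping of this kind might look like the following (directory and node names are from the list above; the `ModelNodeMapping` shape is an assumption):

```typescript
// Illustrative shape only; the real MODEL_NODE_MAPPINGS format lives in
// the frontend source and may differ.
interface ModelNodeMapping {
  nodeType: string // loader node created by the "Use" button
  inputKey?: string // widget that receives the model filename (omitted for auto-load)
}

const MODEL_NODE_MAPPINGS: Record<string, ModelNodeMapping> = {
  BiRefNet: { nodeType: 'LayerMask: LoadBiRefNetModelV2', inputKey: 'version' },
  florence2: { nodeType: 'DownloadAndLoadFlorence2Model', inputKey: 'model' },
  mediapipe: { nodeType: 'LivePortraitLoadMediaPipeCropper' } // auto-load
}
```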

## Test plan
- [ ] Verify "Use" button works for models in each new directory

🤖 Generated with [Claude Code](https://claude.com/claude-code)

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10560-feat-add-model-to-node-mappings-for-10-node-packs-32f6d73d365081d18b6acda078e7fe0b)
by [Unito](https://www.unito.io)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-26 16:01:08 -07:00
Alexander Brown
68d47af075 fix: normalize legacy prefixed proxyWidget entries on configure (#10573)
## Summary

Normalize legacy prefixed proxyWidget entries during subgraph configure
so nested subgraph widgets resolve correctly.

## Changes

- **What**: Extract `normalizeLegacyProxyWidgetEntry` to strip legacy
`nodeId: innerNodeId: widgetName` prefixes from serialized proxyWidgets
and resolve the correct `disambiguatingSourceNodeId`. Write-back
comparison now checks serialized content (not just array length) so
stale formats are cleaned up even when the entry count is unchanged.

## Review Focus

- The iterative prefix-stripping loop in `resolveLegacyPrefixedEntry` —
it peels one `N: ` prefix per iteration and tries all disambiguator
candidates at each level.
- The write-back condition change from length comparison to
`JSON.stringify` equality.
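As a rough illustration of the peeling loop (assuming numeric node ids, and omitting the disambiguator-candidate resolution the real `resolveLegacyPrefixedEntry` performs at each level):

```typescript
// Sketch of iterative prefix stripping: peel one `N: ` prefix per
// iteration until no legacy prefix remains. Illustration only.
function stripLegacyPrefixes(entry: string): string {
  let current = entry
  for (;;) {
    const match = /^\d+:\s*(.*)$/.exec(current)
    if (!match) return current
    current = match[1]
  }
}

stripLegacyPrefixes('12: 34: seed') // 'seed'
```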

┆Issue is synchronized with this [Notion
page](https://www.notion.so/PR-10573-fix-normalize-legacy-prefixed-proxyWidget-entries-on-configure-32f6d73d365081e886e1c9b3939e3b9f)
by [Unito](https://www.unito.io)

---------

Co-authored-by: Amp <amp@ampcode.com>
2026-03-26 15:53:10 -07:00
832 changed files with 70155 additions and 8730 deletions

View File

@@ -0,0 +1,118 @@
---
name: adr-compliance
description: Checks code changes against Architecture Decision Records, with emphasis on ECS (ADR 0008) and command-pattern (ADR 0003) compliance
severity-default: medium
tools: [Read, Grep, Glob]
---
Check that code changes are consistent with the project's Architecture Decision Records in `docs/adr/`.
## Priority 1: ECS and Command-Pattern Compliance (ADR 0008 + ADR 0003)
These are the primary architectural guardrails. Every entity/litegraph change must be checked against them.
### Command Pattern (ADR 0003)
All entity state mutations MUST be expressible as **serializable, idempotent, deterministic commands**. This is required for CRDT sync, undo/redo, cross-environment portability, and gateway backends.
Flag:
- **Direct spatial mutation** — `node.pos = ...`, `node.size = ...`, `group.pos = ...` outside of a store or command. All spatial data flows through `layoutStore` commands.
- **Imperative fire-and-forget mutation** — Any new API that mutates entity state as a side effect rather than producing a serializable command object. Systems should produce command batches, not execute mutations directly.
- **Void-returning mutation APIs** — New entity mutation functions that return `void` instead of a result type (`{ status: 'applied' | 'rejected' | 'no-op' }`). Commands need error/rejection semantics.
- **Auto-incrementing IDs in new entity code** — New entity creation using auto-increment counters without acknowledging the CRDT collision problem. Concurrent environments need globally unique, stable identifiers.
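A minimal sketch of the shape these checks point toward; the `MoveNodeCommand` and `applyMoveNode` names are invented for illustration and are not an API that exists in the codebase:

```typescript
// A system produces serializable command objects; an applier returns a
// result instead of mutating as a void side effect. Illustration only.
type CommandResult = { status: 'applied' | 'rejected' | 'no-op' }

interface MoveNodeCommand {
  type: 'moveNode'
  nodeId: string // globally unique, stable id (no auto-increment counter)
  pos: [number, number]
}

function applyMoveNode(
  positions: Map<string, [number, number]>,
  cmd: MoveNodeCommand
): CommandResult {
  const prev = positions.get(cmd.nodeId)
  if (!prev) return { status: 'rejected' }
  if (prev[0] === cmd.pos[0] && prev[1] === cmd.pos[1]) return { status: 'no-op' }
  positions.set(cmd.nodeId, cmd.pos)
  return { status: 'applied' } // re-applying the same command yields 'no-op'
}
```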
### ECS Architecture (ADR 0008)
The graph domain model is migrating to ECS. New code must not make the migration harder.
Flag:
- **God-object growth** — New methods/properties added to `LGraphNode` (~4k lines), `LGraphCanvas` (~9k lines), `LGraph` (~3k lines), or `Subgraph`. Extract to systems, stores, or composables instead.
- **Mixed data and behavior** — New component-like data structures that contain methods or back-references to parent entities. ECS components are plain data objects.
- **New circular entity dependencies** — New circular imports between `LGraph` ↔ `Subgraph`, `LGraphNode` ↔ `LGraphCanvas`, or similar entity classes.
- **Direct `graph._version++`** — Mutating the private version counter directly instead of through a public API. Extensions already depend on this side-channel; it must become a proper API.
### Centralized Registries and ECS-Style Access
All entity data access should move toward centralized query patterns, not instance property access.
Flag:
- **New instance method/property patterns** — Adding `node.someProperty` or `node.someMethod()` for data that should be a component in the World, queried via `world.getComponent(entityId, ComponentType)`.
- **OOP inheritance for entity modeling** — Extending entity classes with new subclasses instead of composing behavior through components and systems.
- **Scattered state** — New entity state stored in multiple locations (class properties, stores, local variables) instead of being consolidated in the World or in a single store.
### Extension Ecosystem Impact
Entity API changes affect 40+ custom node repos. Changes to these patterns require an extension migration path.
Flag when changed without migration guidance:
- `onConnectionsChange`, `onRemoved`, `onAdded`, `onConfigure` callbacks
- `onConnectInput` / `onConnectOutput` validation hooks
- `onWidgetChanged` handlers
- `node.widgets.find(w => w.name === ...)` patterns
- `node.serialize` overrides
- `graph._version++` direct mutation
- `getNodeById` usage patterns
## Priority 2: General ADR Compliance
For all other ADRs, iterate through each file in `docs/adr/` and extract the core lesson. Ensure changed code does not contradict accepted ADRs. Flag contradictions with proposed ADRs as directional guidance.
### How to Apply
1. Read `docs/adr/README.md` to get the full ADR index
2. For each ADR, read the Decision and Consequences sections
3. Check the diff against each ADR's constraints
4. Only flag ACTUAL violations in changed code, not pre-existing patterns
### Skip List
These ADRs can be skipped for most reviews (they cover completed or narrow-scope decisions):
- **ADR 0004** (Rejected — Fork PrimeVue) — only relevant if someone proposes forking PrimeVue again
## How to Check
1. Identify changed files in the entity/litegraph layer: `src/lib/litegraph/`, `src/ecs/`, `src/platform/`, entity-related stores
2. For Priority 1 patterns, use targeted searches:
```
# Direct position mutation
Grep: pattern="\.pos\s*=" path="src/lib/litegraph"
Grep: pattern="\.size\s*=" path="src/lib/litegraph"
# God object growth (new methods)
Grep: pattern="(class LGraphNode|class LGraphCanvas|class LGraph\b)" path="src/lib/litegraph"
# Version mutation
Grep: pattern="_version\+\+" path="src/lib/litegraph"
# Extension callback changes
Grep: pattern="on(ConnectionsChange|Removed|Added|Configure|ConnectInput|ConnectOutput|WidgetChanged)" path="src/lib/litegraph"
```
3. For Priority 2, read `docs/adr/` files and check for contradictions
## Severity Guidelines
| Issue | Severity |
| -------------------------------------------------------- | -------- |
| Imperative mutation API without command-pattern wrapper | high |
| New god-object method on LGraphNode/LGraphCanvas/LGraph | high |
| Breaking extension callback without migration path | high |
| New circular entity dependency | high |
| Direct spatial mutation bypassing command pattern | medium |
| Mixed data/behavior in component-like structures | medium |
| New OOP inheritance pattern for entities | medium |
| Contradicts accepted ADR direction | medium |
| Contradicts proposed ADR direction without justification | low |
## Rules
- Only flag ACTUAL violations in changed code, not pre-existing patterns
- If a change explicitly acknowledges an ADR tradeoff in comments or PR description, lower severity
- Proposed ADRs carry less weight than accepted ones — flag as directional guidance
- Reference the specific ADR number in every finding

View File

@@ -0,0 +1,74 @@
---
name: playwright-e2e
description: Reviews Playwright E2E test code for ComfyUI-specific patterns, flakiness risks, and fixture misuse
severity-default: medium
tools: [Read, Grep]
---
You are reviewing Playwright E2E test code in `browser_tests/`. Focus on issues a **reviewer** would catch that an author might miss — flakiness risks, fixture misuse, test isolation problems, and convention violations.
Reference docs (read if you need full context):
- `browser_tests/README.md` — setup, patterns, screenshot workflow
- `browser_tests/AGENTS.md` — directory structure, fixture overview
- `docs/guidance/playwright.md` — type assertion rules, test tags, forbidden patterns
- `.claude/skills/writing-playwright-tests/SKILL.md` — anti-patterns, retry patterns, Vue Nodes vs LiteGraph decision guide
## Checks
### Flakiness Risks (Major)
1. **`waitForTimeout` usage** — Always wrong. Must use retrying assertions (`toBeVisible`, `toHaveText`), `expect.poll()`, or `expect().toPass()`. See retry patterns in `.claude/skills/writing-playwright-tests/SKILL.md`.
2. **Missing `nextFrame()` after canvas ops** — Any `drag`, `click` on canvas, `resizeNode`, `pan`, `zoom`, or programmatic graph mutation via `page.evaluate` that changes visual state needs `await comfyPage.nextFrame()` before assertions. `loadWorkflow()` does NOT need it. Prefer encapsulating `nextFrame()` calls inside Page Object methods so tests don't manage frame timing directly.
3. **Keyboard actions without prior focus** — `page.keyboard.press()` without a preceding `comfyPage.canvas.click()` or element `.focus()` will silently send keys to nothing.
4. **Coordinate-based interactions where node refs exist** — Raw `{ x, y }` clicks on canvas are fragile. If the test targets a node, use `comfyPage.nodeOps.getNodeRefById()` / `getNodeRefsByTitle()` / `getNodeRefsByType()` instead.
5. **Shared mutable state between tests** — Variables declared outside `test()` blocks, `let` state mutated across tests, or tests depending on execution order. Each test must be independently runnable.
6. **Missing cleanup of server-persisted state** — Settings changed via `comfyPage.settings.setSetting()` persist across tests. Must be reset in `afterEach` or at test start. Same for uploaded files or saved workflows. Prefer moving cleanup into [fixture options](https://playwright.dev/docs/test-fixtures#fixtures-options) so individual tests don't manage reset logic.
7. **Double-click without `{ delay }` option** — `dblclick()` without `{ delay: 5 }` or similar can be too fast for the canvas event handler.
### Fixture & API Misuse (Medium)
8. **Reimplementing existing fixture helpers** — Before flagging, grep `browser_tests/fixtures/` for the functionality. Common missed helpers:
- `comfyPage.command.executeCommand()` for menu/command actions
- `comfyPage.workflow.loadWorkflow()` for loading test workflows
- `comfyPage.canvasOps.resetView()` for view reset
- `comfyPage.settings.setSetting()` for settings
- Component page objects in `browser_tests/fixtures/components/`
9. **Building workflows programmatically when a JSON asset would work** — Complex `page.evaluate` chains to construct a graph should use a premade JSON workflow in `browser_tests/assets/` loaded via `comfyPage.workflow.loadWorkflow()`.
10. **Selectors not using `TestIds`** — Hard-coded `data-testid` strings should reference `browser_tests/fixtures/selectors.ts` when a matching entry exists. Check `selectors.ts` before flagging.
### Convention Violations (Minor)
11. **Missing test tags** — Every `test.describe` should have `tag` with at least one of: `@smoke`, `@slow`, `@screenshot`, `@canvas`, `@node`, `@widget`, `@mobile`, `@2x`. See `.claude/skills/writing-playwright-tests/SKILL.md` for when to use each.
12. **`as any` type assertions** — Forbidden in E2E tests. Use specific type assertions or test-local type helpers. See `docs/guidance/playwright.md` for acceptable patterns.
13. **Screenshot tests without masking dynamic content** — Timestamps, version numbers, or other non-deterministic content in screenshots will cause flakes. Use `mask` option.
14. **`test.describe` without `afterEach` cleanup when canvas state changes** — Tests that manipulate canvas view (drag, zoom, pan) should include `afterEach` with `comfyPage.canvasOps.resetView()`. Prefer moving canvas reset into the fixture so individual tests don't manage cleanup.
15. **Debug helpers left in committed code** — `debugAddMarker`, `debugAttachScreenshot`, `debugShowCanvasOverlay`, `debugGetCanvasDataURL` are for local debugging only.
### Test Design (Nitpick)
16. **Screenshot-only assertions where functional assertions are possible** — Prefer `expect(await node.isPinned()).toBe(true)` over screenshot comparison when testing non-visual behavior.
17. **Overly large test workflows** — Test should load the minimal workflow needed. If a test only needs one node, don't load the full default graph.
18. **Vue Nodes / LiteGraph mismatch** — If testing Vue-rendered node UI (DOM widgets, CSS states), should use `comfyPage.vueNodes.*`. If testing canvas interactions/connections, should use `comfyPage.nodeOps.*`. Mixing both in one test is a smell.
## Rules
- Only review `.spec.ts` files and supporting code in `browser_tests/`
- Do NOT flag patterns in fixture/helper code (`browser_tests/fixtures/`) — those are shared infrastructure with different rules
- "Major" for flakiness risks (items 1-7), "medium" for fixture misuse (8-10), "minor" for convention violations (11-15), "nitpick" for test design (16-18)
- When flagging missing fixture usage (item 8), confirm the helper exists by checking the fixture code — don't assume
- Existing tests that predate conventions are acceptable to modify but not required to fix

View File

@@ -0,0 +1,94 @@
# ADR Compliance Audit
Audit the current changes (or a specified PR) for compliance with Architecture Decision Records.
## Step 1: Gather the Diff
- If a PR number is provided, run: `gh pr diff $PR_NUMBER`
- Otherwise, run: `git diff origin/main...HEAD` (or `git diff --cached` for staged changes)
## Step 2: Priority 1 — ECS and Command-Pattern Compliance
Read these documents for context:
```
docs/adr/0003-crdt-based-layout-system.md
docs/adr/0008-entity-component-system.md
docs/architecture/ecs-target-architecture.md
docs/architecture/ecs-migration-plan.md
docs/architecture/appendix-critical-analysis.md
```
### Check A: Command Pattern (ADR 0003)
Every entity state mutation must be a **serializable, idempotent, deterministic command** — replayable, undoable, transmittable over CRDT.
Flag:
1. **Direct spatial mutation** — `node.pos = ...`, `node.size = ...`, `group.pos = ...` outside a store/command
2. **Imperative fire-and-forget APIs** — Functions that mutate entity state as side effects rather than producing serializable command objects. Systems should produce command batches, not execute mutations directly.
3. **Void-returning mutation APIs** — Entity mutations returning `void` instead of `{ status: 'applied' | 'rejected' | 'no-op' }`
4. **Auto-increment IDs** — New entity creation via counters without addressing CRDT collision. Concurrent environments need globally unique identifiers.
5. **Missing transaction semantics** — Multi-entity operations without atomic grouping (e.g., node removal = 10+ deletes with no rollback on failure)
### Check B: ECS Architecture (ADR 0008)
Flag:
1. **God-object growth** — New methods/properties on `LGraphNode`, `LGraphCanvas`, `LGraph`, `Subgraph`
2. **Mixed data/behavior** — Component-like structures with methods or back-references
3. **OOP instance patterns** — New `node.someProperty` or `node.someMethod()` for data that should be a World component
4. **OOP inheritance** — New entity subclasses instead of component composition
5. **Circular entity deps** — New `LGraph` ↔ `Subgraph`, `LGraphNode` ↔ `LGraphCanvas` circular imports
6. **Direct `_version++`** — Mutating private version counter instead of through public API
### Check C: Extension Ecosystem Impact
If any of these patterns are changed, flag and require migration guidance:
- `onConnectionsChange`, `onRemoved`, `onAdded`, `onConfigure` callbacks
- `onConnectInput` / `onConnectOutput` validation hooks
- `onWidgetChanged` handlers
- `node.widgets.find(w => w.name === ...)` access patterns
- `node.serialize` overrides
- `graph._version++` direct mutation
Reference: 40+ custom node repos depend on these (rgthree-comfy, ComfyUI-Impact-Pack, cg-use-everywhere, etc.)
## Step 3: Priority 2 — General ADR Compliance
1. Read `docs/adr/README.md` for the full ADR index
2. For each ADR (except skip list), read the Decision section
3. Check the diff for contradictions
4. Only flag ACTUAL violations in changed code
**Skip list**: ADR 0004 (Rejected — Fork PrimeVue)
## Step 4: Generate Report
```
## ADR Compliance Audit Report
### Summary
- Files audited: N
- Priority 1 findings: N (command-pattern: N, ECS: N, ecosystem: N)
- Priority 2 findings: N
### Priority 1: Command Pattern & ECS
(List each with ADR reference, file, line, description)
### Priority 1: Extension Ecosystem Impact
(List each changed callback/API with affected custom node repos)
### Priority 2: General ADR Compliance
(List each with ADR reference, file, line, description)
### Compliant Patterns
(Note changes that positively align with ADR direction)
```
## Severity
- **Must fix**: Contradicts accepted ADR, or introduces imperative mutation API without command-pattern wrapper, or breaks extension callback without migration path
- **Should discuss**: Contradicts proposed ADR direction — either align or propose ADR amendment
- **Note**: Surfaces open architectural question not yet addressed by ADRs

View File

@@ -0,0 +1,84 @@
---
name: adding-deprecation-warnings
description: 'Adds deprecation warnings for renamed or removed properties/APIs. Searches custom node ecosystem for usage, applies defineDeprecatedProperty helper, adds JSDoc. Triggers on: deprecate, deprecation warning, rename property, backward compatibility.'
---
# Adding Deprecation Warnings
Adds backward-compatible deprecation warnings for renamed or removed
properties using the `defineDeprecatedProperty` helper in
`src/lib/litegraph/src/utils/feedback.ts`.
## When to Use
- A property or API has been renamed and custom nodes still use the old name
- A property is being removed but needs a grace period
- Backward compatibility must be preserved while nudging adoption
## Steps
### 1. Search the Custom Node Ecosystem
Before implementing, assess impact by searching for usage of the
deprecated property across ComfyUI custom nodes:
```text
Use the comfy_codesearch tool to search for the old property name.
Search for both `widget.oldProp` and just `oldProp` to catch all patterns.
```
Document the usage patterns found (property access, truthiness checks,
caching to local vars, style mutation, etc.) — these all must continue
working.
### 2. Apply the Deprecation
Use `defineDeprecatedProperty` from `src/lib/litegraph/src/utils/feedback.ts`:
```typescript
import { defineDeprecatedProperty } from '@/lib/litegraph/src/utils/feedback'

/** @deprecated Use {@link obj.newProp} instead. */
defineDeprecatedProperty(
  obj,
  'oldProp',
  'newProp',
  'obj.oldProp is deprecated. Use obj.newProp instead.'
)
```
### 3. Checklist
- [ ] Ecosystem search completed — all usage patterns are compatible
- [ ] `defineDeprecatedProperty` call added after the new property is assigned
- [ ] JSDoc `@deprecated` tag added above the call for IDE support
- [ ] Warning message names both old and new property clearly
- [ ] `pnpm typecheck` passes
- [ ] `pnpm lint` passes
### 4. PR Comment
Add a PR comment summarizing the ecosystem search results: which repos
use the deprecated property, what access patterns were found, and
confirmation that all patterns are compatible with the
`Object.defineProperty` getter/setter.
## How `defineDeprecatedProperty` Works
- Creates an `Object.defineProperty` getter/setter on the target object
- Getter returns `this[currentKey]`, setter assigns `this[currentKey]`
- Both log via `warnDeprecated`, which deduplicates (once per unique
message per session via a `Set`)
- `enumerable: false` keeps the alias out of `Object.keys()` / `for...in`
/ `JSON.stringify`
- `configurable: true` allows further redefinition if needed
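A minimal re-implementation consistent with the description above (the actual helper in `feedback.ts` may differ in details):

```typescript
// Sketch of defineDeprecatedProperty + warnDeprecated as described above.
const warned = new Set<string>()

function warnDeprecated(message: string): void {
  if (warned.has(message)) return // deduplicate: once per unique message per session
  warned.add(message)
  console.warn(message)
}

function defineDeprecatedProperty(
  target: Record<string, unknown>,
  deprecatedKey: string,
  currentKey: string,
  message: string
): void {
  Object.defineProperty(target, deprecatedKey, {
    get() {
      warnDeprecated(message)
      return target[currentKey]
    },
    set(value: unknown) {
      warnDeprecated(message)
      target[currentKey] = value
    },
    enumerable: false, // hidden from Object.keys / for...in / JSON.stringify
    configurable: true // allows further redefinition if needed
  })
}
```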
## Edge Cases
- **Truthiness checks** (`if (widget.oldProp)`) — works, getter fires
- **Caching to local var** (`const el = widget.oldProp`) — works, warns
once then the cached ref is used directly
- **Style/property mutation** (`widget.oldProp.style.color = 'red'`) —
works, getter returns the real object
- **Serialization** (`JSON.stringify`) — `enumerable: false` excludes it
- **Heavy access in loops** — `warnDeprecated` deduplicates, only warns
once per session regardless of call count

View File

@@ -0,0 +1,99 @@
---
name: contain-audit
description: 'Detect DOM elements where CSS contain:layout+style would improve rendering performance. Runs a Playwright-based audit on a large workflow, scores candidates by subtree size and sizing constraints, measures performance impact, and generates a ranked report.'
---
# CSS Containment Audit
Automatically finds DOM elements where adding `contain: layout style` would reduce browser recalculation overhead.
## What It Does
1. Loads a large workflow (245 nodes) in a real browser
2. Walks the DOM tree and scores every element as a containment candidate
3. For each high-scoring candidate, applies `contain: layout style` via JavaScript
4. Measures rendering performance (style recalcs, layouts, task duration) before and after
5. Takes before/after screenshots to detect visual breakage
6. Generates a ranked report with actionable recommendations
## When to Use
- After adding new Vue components to the node rendering pipeline
- When investigating rendering performance on large workflows
- Before and after refactoring node DOM structure
- As part of periodic performance audits
## How to Run
```bash
# Start the dev server first
pnpm dev &
# Run the audit (uses the @audit tag, not included in normal CI runs)
pnpm exec playwright test browser_tests/tests/containAudit.spec.ts --project=audit
# View the HTML report
pnpm exec playwright show-report
```
## How to Read Results
The audit outputs a table to the console:
```text
CSS Containment Audit Results
=======================================================
Rank | Selector | Subtree | Score | ΔRecalcs | ΔLayouts | Visual
1 | [data-testid="node-inner-wrap"] | 18 | 72 | -34% | -12% | OK
2 | .node-body | 12 | 48 | -8% | -3% | OK
3 | .node-header | 4 | 16 | +1% | 0% | OK
```
- **Subtree**: Number of descendant elements (higher = more to skip)
- **Score**: Composite heuristic score (subtree size × sizing constraint bonus)
- **ΔRecalcs / ΔLayouts**: Change in style recalcs / layout counts vs baseline (negative = improvement)
- **Visual**: OK if no pixel change, DIFF if screenshot differs (may include subpixel noise — verify manually)
## Candidate Scoring
An element is a good containment candidate when:
1. **Large subtree** -- many descendants that the browser can skip recalculating
2. **Externally constrained size** -- width/height determined by CSS variables, flex, or explicit values (not by content)
3. **No existing containment** -- `contain` is not already applied
4. **Not a leaf** -- has at least a few child elements
Elements that should NOT get containment:
- Elements whose children overflow visually beyond bounds (e.g., absolute-positioned overlays with negative inset)
- Elements whose height is determined by content and affects sibling layout
- Very small subtrees (overhead of containment context outweighs benefit)
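The heuristic can be sketched as follows; the weights here are assumptions for illustration, and the audit's real scoring lives in `containAudit.spec.ts`:

```typescript
// Hypothetical containment-candidate scoring (weights are illustrative).
interface CandidateInfo {
  subtreeSize: number // descendant element count
  externallySized: boolean // width/height from flex/CSS vars, not content
  alreadyContained: boolean // `contain` already applied
  childCount: number
}

function containmentScore(c: CandidateInfo): number {
  // Skip leaves and elements that already have containment.
  if (c.alreadyContained || c.childCount < 2) return 0
  const sizingBonus = c.externallySized ? 4 : 1
  return c.subtreeSize * sizingBonus
}

containmentScore({
  subtreeSize: 18,
  externallySized: true,
  alreadyContained: false,
  childCount: 6
}) // 72
```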
## Limitations
- Cannot fully guarantee `contain` safety -- visual review of screenshots is required
- Performance measurements have natural variance; run multiple times for confidence
- Only tests idle and pan scenarios; widget interactions may differ
- The audit modifies styles at runtime via JS, which doesn't account for Tailwind purging or build-time optimizations
## Example PR
[#9946 — fix: add CSS contain:layout contain:style to node inner wrapper](https://github.com/Comfy-Org/ComfyUI_frontend/pull/9946)
This PR added `contain-layout contain-style` to the node inner wrapper div in `LGraphNode.vue`. The audit tool would have flagged this element as a high-scoring candidate because:
- **Large subtree** (18+ descendants: header, slots, widgets, content, badges)
- **Externally constrained size** (`w-(--node-width)`, `flex-1` — dimensions set by CSS variables and flex parent)
- **Natural isolation boundary** between frequently-changing content (widgets) and infrequently-changing overlays (selection outlines, borders)
The actual change was a single line: adding `'contain-layout contain-style'` to the inner wrapper's class list at `src/renderer/extensions/vueNodes/components/LGraphNode.vue:79`.
## Reference
| Resource | Path |
| ----------------- | ------------------------------------------------------- |
| Audit test | `browser_tests/tests/containAudit.spec.ts` |
| PerformanceHelper | `browser_tests/fixtures/helpers/PerformanceHelper.ts` |
| Perf tests | `browser_tests/tests/performance.spec.ts` |
| Large workflow | `browser_tests/assets/large-graph-workflow.json` |
| Example PR | https://github.com/Comfy-Org/ComfyUI_frontend/pull/9946 |

View File

@@ -28,3 +28,21 @@ reviews:
3. The PR description includes a concrete, non-placeholder explanation of why an end-to-end regression test was not added.
Fail otherwise. When failing, mention which bug-fix signal you found and ask the author to either add or update a Playwright regression test under `browser_tests/` or add a concrete explanation in the PR description of why an end-to-end regression test is not practical.
- name: ADR compliance for entity/litegraph changes
mode: warning
instructions: |
Use only PR metadata already available in the review context: the changed-file list relative to the PR base, the PR description, and the diff content. Do not rely on shell commands.
This check applies ONLY when the PR modifies files under `src/lib/litegraph/`, `src/ecs/`, or files related to graph entities (nodes, links, widgets, slots, reroutes, groups, subgraphs).
If none of those paths appear in the changed files, pass immediately.
When applicable, check for:
1. **Command pattern (ADR 0003)**: Entity state mutations must be serializable, idempotent, deterministic commands — not imperative fire-and-forget side effects. Flag direct spatial mutation (`node.pos =`, `node.size =`, `group.pos =`) outside of a store or command, and any new void-returning mutation API that should produce a command object.
2. **God-object growth (ADR 0008)**: New methods/properties added to `LGraphNode`, `LGraphCanvas`, `LGraph`, or `Subgraph` that add responsibilities rather than extracting/migrating existing ones.
3. **ECS data/behavior separation (ADR 0008)**: Component-like data structures that contain methods or back-references to parent entities. ECS components must be plain data. New OOP instance patterns (`node.someProperty`, `node.someMethod()`) for data that should be a World component.
4. **Extension ecosystem (ADR 0008)**: Changes to extension-facing callbacks (`onConnectionsChange`, `onRemoved`, `onAdded`, `onConfigure`, `onConnectInput/Output`, `onWidgetChanged`), `node.widgets` access, `node.serialize` overrides, or `graph._version++` without migration guidance. These affect 40+ custom node repos.
Pass if none of these patterns are found in the diff.
When warning, reference the specific ADR by number and link to `docs/adr/` for context. Frame findings as directional guidance since ADR 0003 and 0008 are in Proposed status.

View File

@@ -13,8 +13,6 @@ runs:
# Install pnpm, Node.js, build frontend
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- name: Setup Node.js
uses: actions/setup-node@v6

View File

@@ -17,8 +17,6 @@ jobs:
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- name: Setup Node.js
uses: actions/setup-node@v6

View File

@@ -22,8 +22,6 @@ jobs:
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- name: Setup Node.js
uses: actions/setup-node@v6

View File

@@ -21,8 +21,6 @@ jobs:
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- name: Setup Node.js
uses: actions/setup-node@v6


@@ -20,8 +20,6 @@ jobs:
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- name: Use Node.js
uses: actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # v6.2.0


@@ -21,8 +21,6 @@ jobs:
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- name: Use Node.js
uses: actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # v6.2.0
@@ -76,8 +74,6 @@ jobs:
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- name: Use Node.js
uses: actions/setup-node@6044e13b5dc448c55e2357c09f80417699197238 # v6.2.0
@@ -99,7 +95,7 @@ jobs:
if npx license-checker-rseidelsohn@4 \
--production \
--summary \
--excludePackages '@comfyorg/comfyui-frontend;@comfyorg/design-system;@comfyorg/registry-types;@comfyorg/shared-frontend-utils;@comfyorg/tailwind-utils;@comfyorg/comfyui-electron-types' \
--excludePackages '@comfyorg/comfyui-frontend;@comfyorg/design-system;@comfyorg/ingest-types;@comfyorg/registry-types;@comfyorg/shared-frontend-utils;@comfyorg/tailwind-utils;@comfyorg/comfyui-electron-types' \
--clarificationsFile .github/license-clarifications.json \
--onlyAllow 'MIT;MIT*;Apache-2.0;BSD-2-Clause;BSD-3-Clause;ISC;0BSD;BlueOak-1.0.0;Python-2.0;CC0-1.0;Unlicense;(MIT OR Apache-2.0);(MIT OR GPL-3.0);(Apache-2.0 OR MIT);(MPL-2.0 OR Apache-2.0);CC-BY-4.0;CC-BY-3.0;GPL-3.0-only'; then
echo ''


@@ -33,6 +33,20 @@ jobs:
path: dist/
retention-days: 1
# Build cloud distribution for @cloud tagged tests
# NX_SKIP_NX_CACHE=true is required because `nx build` was already run
# for the OSS distribution above. Without skipping cache, Nx returns
# the cached OSS build since env vars aren't part of the cache key.
- name: Build cloud frontend
run: NX_SKIP_NX_CACHE=true pnpm build:cloud
- name: Upload cloud frontend
uses: actions/upload-artifact@v6
with:
name: frontend-dist-cloud
path: dist/
retention-days: 1
# Sharded chromium tests
playwright-tests-chromium-sharded:
needs: setup
@@ -97,14 +111,14 @@ jobs:
strategy:
fail-fast: false
matrix:
browser: [chromium-2x, chromium-0.5x, mobile-chrome]
browser: [chromium-2x, chromium-0.5x, mobile-chrome, cloud]
steps:
- name: Checkout repository
uses: actions/checkout@v6
- name: Download built frontend
uses: actions/download-artifact@v7
with:
name: frontend-dist
name: ${{ matrix.browser == 'cloud' && 'frontend-dist-cloud' || 'frontend-dist' }}
path: dist/
- name: Start ComfyUI server


@@ -1,4 +1,4 @@
# Description: Unit and component testing with Vitest
# Description: Unit and component testing with Vitest + coverage reporting
name: 'CI: Tests Unit'
on:
@@ -23,5 +23,12 @@ jobs:
- name: Setup frontend
uses: ./.github/actions/setup-frontend
- name: Run Vitest tests
run: pnpm test:unit
- name: Run Vitest tests with coverage
run: pnpm test:coverage
- name: Upload coverage to Codecov
if: always()
uses: codecov/codecov-action@1af58845a975a7985b0beb0cbe6fbbb71a41dbad # v5.5.3
with:
files: coverage/lcov.info
fail_ci_if_error: false

.github/workflows/hub-ci.yaml (new file, 182 lines)

@@ -0,0 +1,182 @@
name: Hub CI
on:
push:
branches: [main]
paths:
- 'apps/hub/**'
- '.github/workflows/hub-ci.yaml'
pull_request:
branches: [main]
paths:
- 'apps/hub/**'
- '.github/workflows/hub-ci.yaml'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: read
pull-requests: write
jobs:
lint:
name: Lint & Check
runs-on: ubuntu-latest
defaults:
run:
working-directory: apps/hub
steps:
- name: Checkout
uses: actions/checkout@v6
- name: Setup frontend
uses: ./.github/actions/setup-frontend
- name: Astro Check
run: pnpm run check
- name: Unit Tests
run: pnpm test
- name: Validate Templates
run: pnpm run validate:templates
continue-on-error: true
build:
name: Build Hub
runs-on: ubuntu-latest
defaults:
run:
working-directory: apps/hub
steps:
- name: Checkout
uses: actions/checkout@v6
- name: Setup frontend
uses: ./.github/actions/setup-frontend
- name: Build site
run: pnpm run build
env:
HUB_SKIP_SYNC: 'true'
SKIP_AI_GENERATION: 'true'
- name: Upload build artifact
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: hub-build
path: apps/hub/dist
retention-days: 1
seo-audit:
name: SEO Audit
needs: build
runs-on: ubuntu-latest
defaults:
run:
working-directory: apps/hub
steps:
- name: Checkout
uses: actions/checkout@v6
- name: Setup frontend
uses: ./.github/actions/setup-frontend
- name: Download build artifact
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: hub-build
path: apps/hub/dist
- name: Validate sitemap
id: sitemap
continue-on-error: true
run: |
echo "## Sitemap Validation" >> $GITHUB_STEP_SUMMARY
if pnpm run validate:sitemap 2>&1 | tee sitemap-output.txt; then
echo "✅ Sitemap validation passed" >> $GITHUB_STEP_SUMMARY
echo "status=passed" >> $GITHUB_OUTPUT
else
echo "❌ Sitemap validation failed" >> $GITHUB_STEP_SUMMARY
echo "status=failed" >> $GITHUB_OUTPUT
fi
- name: Run SEO audit
id: seo
continue-on-error: true
run: |
echo "## SEO Audit" >> $GITHUB_STEP_SUMMARY
if pnpm run audit:seo 2>&1 | tee seo-output.txt; then
echo "✅ SEO audit passed" >> $GITHUB_STEP_SUMMARY
echo "status=passed" >> $GITHUB_OUTPUT
else
echo "⚠️ SEO audit found issues" >> $GITHUB_STEP_SUMMARY
echo "status=issues" >> $GITHUB_OUTPUT
fi
- name: Check internal links
id: links
continue-on-error: true
run: |
echo "## Link Check" >> $GITHUB_STEP_SUMMARY
DIST_DIR="dist"
if [ ! -d "$DIST_DIR" ]; then
echo "⚠️ No build output found at $DIST_DIR" >> $GITHUB_STEP_SUMMARY
echo "status=skipped" >> $GITHUB_OUTPUT
exit 0
fi
BROKEN_FILE="broken-links.txt"
: > "$BROKEN_FILE"
BROKEN_COUNT=0
TOTAL_COUNT=0
for htmlfile in $(find "$DIST_DIR" -name '*.html' \
-not -path "$DIST_DIR/ar/*" -not -path "$DIST_DIR/es/*" -not -path "$DIST_DIR/fr/*" \
-not -path "$DIST_DIR/ja/*" -not -path "$DIST_DIR/ko/*" -not -path "$DIST_DIR/pt-BR/*" \
-not -path "$DIST_DIR/ru/*" -not -path "$DIST_DIR/tr/*" -not -path "$DIST_DIR/zh/*" \
-not -path "$DIST_DIR/zh-TW/*" | head -500); do
hrefs=$(grep -oP 'href="(/[^"]*)"' "$htmlfile" | sed 's/href="//;s/"$//' || true)
for href in $hrefs; do
TOTAL_COUNT=$((TOTAL_COUNT + 1))
clean="${href%%#*}"
clean="${clean%%\?*}"
if [ -z "$clean" ] || [ "$clean" = "/" ]; then continue; fi
found=false
if [[ "$clean" =~ \.[a-zA-Z0-9]+$ ]]; then
[ -f "${DIST_DIR}${clean}" ] && found=true
else
base="${clean%/}"
[ -f "${DIST_DIR}${base}/index.html" ] && found=true
[ "$found" = false ] && [ -f "${DIST_DIR}${base}.html" ] && found=true
[ "$found" = false ] && [ -f "${DIST_DIR}${clean}" ] && found=true
[ "$found" = false ] && [ -d "${DIST_DIR}${base}" ] && found=true
fi
if [ "$found" = false ]; then
BROKEN_COUNT=$((BROKEN_COUNT + 1))
echo "- \`${href}\` (in ${htmlfile#${DIST_DIR}/})" >> "$BROKEN_FILE"
fi
done
done
if [ "$BROKEN_COUNT" -eq 0 ]; then
echo "✅ All internal links valid ($TOTAL_COUNT checked)" >> $GITHUB_STEP_SUMMARY
echo "status=passed" >> $GITHUB_OUTPUT
else
echo "❌ Found $BROKEN_COUNT broken internal links out of $TOTAL_COUNT" >> $GITHUB_STEP_SUMMARY
head -n 50 "$BROKEN_FILE" >> $GITHUB_STEP_SUMMARY
echo "status=failed" >> $GITHUB_OUTPUT
fi
- name: Upload SEO reports
if: always()
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: hub-seo-reports
path: |
apps/hub/seo-output.txt
apps/hub/seo-summary.json
apps/hub/broken-links.txt
if-no-files-found: ignore

.github/workflows/hub-cron-rebuild.yaml (new file, 68 lines)

@@ -0,0 +1,68 @@
name: Hub Cron Rebuild
on:
schedule:
# Every 15 minutes — rebuilds the site to pick up new UGC workflows
# for search index, sitemap, filter pages, and pre-rendered detail pages.
- cron: '*/15 * * * *'
workflow_dispatch:
concurrency:
group: hub-deploy-prod
cancel-in-progress: false
permissions:
contents: read
jobs:
rebuild:
runs-on: ubuntu-latest
env:
SKIP_AI_GENERATION: 'true'
PUBLIC_POSTHOG_KEY: ${{ secrets.HUB_POSTHOG_KEY }}
PUBLIC_GA_MEASUREMENT_ID: ${{ secrets.HUB_GA_MEASUREMENT_ID }}
steps:
- name: Checkout
uses: actions/checkout@v6
- name: Setup frontend
uses: ./.github/actions/setup-frontend
- name: Checkout templates data
uses: actions/checkout@v6
with:
repository: Comfy-Org/workflow_templates
path: _workflow_templates
sparse-checkout: templates
token: ${{ secrets.GH_TOKEN }}
- name: Restore content cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: apps/hub/.content-cache
key: hub-content-cache-cron-prod-${{ hashFiles('_workflow_templates/templates/**', 'apps/hub/src/**') }}
restore-keys: |
hub-content-cache-cron-prod-
- name: Sync templates
run: pnpm run sync
working-directory: apps/hub
env:
HUB_TEMPLATES_DIR: ${{ github.workspace }}/_workflow_templates/templates
- name: Build Astro site
run: pnpm run build
working-directory: apps/hub
env:
PUBLIC_HUB_API_URL: ${{ secrets.HUB_API_URL_PRODUCTION }}
PUBLIC_COMFY_CLOUD_URL: ${{ secrets.COMFY_CLOUD_URL_PRODUCTION }}
PUBLIC_APPROVED_ONLY: 'true'
- name: Deploy to Vercel
uses: amondnet/vercel-action@16e87c0a08142b0d0d33b76aeaf20823c381b9b9 # v25.2.0
with:
vercel-token: ${{ secrets.VERCEL_TOKEN }}
vercel-org-id: ${{ secrets.VERCEL_ORG_ID }}
vercel-project-id: ${{ secrets.HUB_VERCEL_PROJECT_ID }}
working-directory: apps/hub
vercel-args: '--prebuilt --prod'

.github/workflows/hub-deploy.yaml (new file, 80 lines)

@@ -0,0 +1,80 @@
name: Deploy Hub
on:
workflow_dispatch:
inputs:
skip_ai:
description: 'Skip AI content generation'
type: boolean
default: false
force_regenerate:
description: 'Force regenerate all content (ignore cache)'
type: boolean
default: false
template_filter:
description: 'Regenerate specific template only (e.g. "flux_schnell")'
type: string
default: ''
concurrency:
group: hub-deploy-prod
cancel-in-progress: false
permissions:
contents: read
jobs:
build-deploy:
runs-on: ubuntu-latest
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
PUBLIC_POSTHOG_KEY: ${{ secrets.HUB_POSTHOG_KEY }}
PUBLIC_GA_MEASUREMENT_ID: ${{ secrets.HUB_GA_MEASUREMENT_ID }}
SKIP_AI_GENERATION: ${{ inputs.skip_ai && 'true' || '' }}
FORCE_AI_REGENERATE: ${{ inputs.force_regenerate && 'true' || '' }}
AI_TEMPLATE_FILTER: ${{ inputs.template_filter }}
steps:
- name: Checkout
uses: actions/checkout@v6
- name: Setup frontend
uses: ./.github/actions/setup-frontend
- name: Checkout templates data
uses: actions/checkout@v6
with:
repository: Comfy-Org/workflow_templates
path: _workflow_templates
sparse-checkout: templates
token: ${{ secrets.GH_TOKEN }}
- name: Restore content cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: apps/hub/.content-cache
key: hub-content-cache-${{ hashFiles('_workflow_templates/templates/**', 'apps/hub/src/**') }}
restore-keys: |
hub-content-cache-
- name: Sync templates
run: pnpm run sync
working-directory: apps/hub
env:
HUB_TEMPLATES_DIR: ${{ github.workspace }}/_workflow_templates/templates
- name: Build Astro site
run: pnpm run build
working-directory: apps/hub
env:
PUBLIC_HUB_API_URL: ${{ secrets.HUB_API_URL_PRODUCTION }}
PUBLIC_COMFY_CLOUD_URL: ${{ secrets.COMFY_CLOUD_URL_PRODUCTION }}
PUBLIC_APPROVED_ONLY: 'true'
- name: Deploy to Vercel
uses: amondnet/vercel-action@16e87c0a08142b0d0d33b76aeaf20823c381b9b9 # v25.2.0
with:
vercel-token: ${{ secrets.VERCEL_TOKEN }}
vercel-org-id: ${{ secrets.VERCEL_ORG_ID }}
vercel-project-id: ${{ secrets.HUB_VERCEL_PROJECT_ID }}
working-directory: apps/hub
vercel-args: '--prebuilt --prod'

.github/workflows/hub-preview-cron.yaml (new file, 134 lines)

@@ -0,0 +1,134 @@
name: Hub Preview Cron
on:
schedule:
- cron: '*/15 * * * *'
workflow_dispatch:
permissions:
contents: read
pull-requests: write
jobs:
discover:
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.targets.outputs.matrix }}
steps:
- uses: actions/checkout@v6
- name: Build rebuild targets
id: targets
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
targets='[]'
# Main with production API (all workflows, no approved filter)
targets=$(echo "$targets" | jq -c '. + [{"ref": "main", "is_main": true, "pr": 0, "api_env": "production"}]')
# Main with test API
targets=$(echo "$targets" | jq -c '. + [{"ref": "main", "is_main": true, "pr": 0, "api_env": "test"}]')
# Find open PRs with the "preview-cron" label
prs=$(gh pr list --label "preview-cron" --state open --json number,headRefName)
for row in $(echo "$prs" | jq -c '.[]'); do
ref=$(echo "$row" | jq -r '.headRefName')
num=$(echo "$row" | jq -r '.number')
targets=$(echo "$targets" | jq -c \
--arg ref "$ref" --argjson num "$num" \
'. + [{"ref": $ref, "is_main": false, "pr": $num, "api_env": "test"}]')
done
echo "matrix={\"include\":$targets}" >> "$GITHUB_OUTPUT"
echo "### Rebuild targets" >> "$GITHUB_STEP_SUMMARY"
echo "$targets" | jq '.' >> "$GITHUB_STEP_SUMMARY"
rebuild:
needs: discover
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.discover.outputs.matrix) }}
concurrency:
group: hub-preview-cron-${{ matrix.ref }}-${{ matrix.api_env }}
cancel-in-progress: true
env:
SKIP_AI_GENERATION: 'true'
steps:
- name: Checkout
uses: actions/checkout@v6
with:
ref: ${{ matrix.ref }}
- name: Setup frontend
uses: ./.github/actions/setup-frontend
- name: Checkout templates data
uses: actions/checkout@v6
with:
repository: Comfy-Org/workflow_templates
path: _workflow_templates
sparse-checkout: templates
token: ${{ secrets.GH_TOKEN }}
- name: Restore content cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: apps/hub/.content-cache
key: hub-content-cache-cron-${{ matrix.ref }}-${{ matrix.api_env }}-${{ hashFiles('_workflow_templates/templates/**', 'apps/hub/src/**') }}
restore-keys: |
hub-content-cache-cron-${{ matrix.ref }}-${{ matrix.api_env }}-
- name: Sync templates
run: pnpm run sync:en-only
working-directory: apps/hub
env:
HUB_TEMPLATES_DIR: ${{ github.workspace }}/_workflow_templates/templates
- name: Build Astro site
run: pnpm run build
working-directory: apps/hub
env:
PUBLIC_HUB_API_URL: ${{ matrix.api_env == 'test' && secrets.HUB_API_URL_PREVIEW || secrets.HUB_API_URL_PRODUCTION }}
PUBLIC_COMFY_CLOUD_URL: ${{ matrix.api_env == 'test' && secrets.COMFY_CLOUD_URL_PREVIEW || secrets.COMFY_CLOUD_URL_PRODUCTION }}
- name: Deploy to Vercel
id: deploy
uses: amondnet/vercel-action@16e87c0a08142b0d0d33b76aeaf20823c381b9b9 # v25.2.0
with:
vercel-token: ${{ secrets.VERCEL_TOKEN }}
vercel-org-id: ${{ secrets.VERCEL_ORG_ID }}
vercel-project-id: ${{ secrets.HUB_VERCEL_PROJECT_ID }}
working-directory: apps/hub
vercel-args: '--prebuilt'
- name: Alias main preview (prod API)
if: matrix.is_main && matrix.api_env == 'production' && secrets.HUB_PREVIEW_ALIAS
env:
PREVIEW_URL: ${{ steps.deploy.outputs.preview-url }}
ALIAS: ${{ secrets.HUB_PREVIEW_ALIAS }}
VERCEL_TOKEN_VAL: ${{ secrets.VERCEL_TOKEN }}
VERCEL_SCOPE: ${{ secrets.VERCEL_ORG_ID }}
run: |
npx vercel alias "$PREVIEW_URL" "$ALIAS" --token="$VERCEL_TOKEN_VAL" --scope="$VERCEL_SCOPE"
- name: Alias main preview (test API)
if: matrix.is_main && matrix.api_env == 'test' && secrets.HUB_PREVIEW_TEST_ALIAS
env:
PREVIEW_URL: ${{ steps.deploy.outputs.preview-url }}
ALIAS: ${{ secrets.HUB_PREVIEW_TEST_ALIAS }}
VERCEL_TOKEN_VAL: ${{ secrets.VERCEL_TOKEN }}
VERCEL_SCOPE: ${{ secrets.VERCEL_ORG_ID }}
run: |
npx vercel alias "$PREVIEW_URL" "$ALIAS" --token="$VERCEL_TOKEN_VAL" --scope="$VERCEL_SCOPE"
- name: Comment preview URL on PR
if: matrix.pr > 0
uses: marocchino/sticky-pull-request-comment@773744901bac0e8cbb5a0dc842800d45e9b2b405 # v2.9.4
with:
number: ${{ matrix.pr }}
header: hub-preview-cron
message: |
🔄 **Hub preview cron rebuilt:** ${{ steps.deploy.outputs.preview-url }}
_Last rebuild: ${{ github.event.head_commit.timestamp || 'manual trigger' }}_

.github/workflows/hub-preview.yaml (new file, 74 lines)

@@ -0,0 +1,74 @@
name: Hub Preview
on:
pull_request:
paths:
- 'apps/hub/**'
workflow_dispatch:
concurrency:
group: hub-preview-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
permissions:
contents: read
pull-requests: write
jobs:
preview:
runs-on: ubuntu-latest
env:
SKIP_AI_GENERATION: 'true'
steps:
- name: Checkout
uses: actions/checkout@v6
- name: Setup frontend
uses: ./.github/actions/setup-frontend
- name: Checkout templates data
uses: actions/checkout@v6
with:
repository: Comfy-Org/workflow_templates
path: _workflow_templates
sparse-checkout: templates
token: ${{ secrets.GH_TOKEN }}
- name: Restore content cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: apps/hub/.content-cache
key: hub-content-cache-preview-${{ hashFiles('_workflow_templates/templates/**', 'apps/hub/src/**') }}
restore-keys: |
hub-content-cache-preview-
- name: Sync templates
run: pnpm run sync:en-only
working-directory: apps/hub
env:
HUB_TEMPLATES_DIR: ${{ github.workspace }}/_workflow_templates/templates
- name: Build Astro site
run: pnpm run build
working-directory: apps/hub
env:
PUBLIC_HUB_API_URL: ${{ secrets.HUB_API_URL_PREVIEW }}
PUBLIC_COMFY_CLOUD_URL: ${{ secrets.COMFY_CLOUD_URL_PREVIEW }}
- name: Deploy preview to Vercel
id: deploy
uses: amondnet/vercel-action@16e87c0a08142b0d0d33b76aeaf20823c381b9b9 # v25.2.0
with:
vercel-token: ${{ secrets.VERCEL_TOKEN }}
vercel-org-id: ${{ secrets.VERCEL_ORG_ID }}
vercel-project-id: ${{ secrets.HUB_VERCEL_PROJECT_ID }}
working-directory: apps/hub
vercel-args: '--prebuilt'
- name: Comment preview URL
if: github.event_name == 'pull_request'
uses: marocchino/sticky-pull-request-comment@773744901bac0e8cbb5a0dc842800d45e9b2b405 # v2.9.4
with:
header: hub-vercel-preview
message: |
🚀 **Hub preview deployed:** ${{ steps.deploy.outputs.preview-url }}


@@ -30,8 +30,6 @@ jobs:
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- name: Setup Node.js
uses: actions/setup-node@v6


@@ -85,8 +85,6 @@ jobs:
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- name: Setup Node.js
uses: actions/setup-node@v6


@@ -76,8 +76,6 @@ jobs:
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- name: Setup Node.js
uses: actions/setup-node@v6
@@ -203,8 +201,6 @@ jobs:
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- uses: actions/setup-node@v6
with:


@@ -20,10 +20,10 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v6
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- uses: actions/setup-node@v6
with:
node-version-file: '.nvmrc'


@@ -76,8 +76,6 @@ jobs:
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- name: Setup Node.js
uses: actions/setup-node@v6


@@ -16,10 +16,10 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v6
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- uses: actions/setup-node@v6
with:
node-version-file: '.nvmrc'


@@ -144,8 +144,6 @@ jobs:
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- name: Setup Node.js
uses: actions/setup-node@v6


@@ -52,8 +52,6 @@ jobs:
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- name: Setup Node.js
uses: actions/setup-node@v6


@@ -30,8 +30,6 @@ jobs:
- name: Install pnpm
uses: pnpm/action-setup@fc06bc1257f339d1d5d8b3a19a8cae5388b55320 # v4.4.0
with:
version: 10
- name: Setup Node.js
uses: actions/setup-node@v6


@@ -12,6 +12,8 @@
"playwright-report/*",
"src/extensions/core/*",
"src/scripts/*",
"apps/hub/scripts/**/*",
"apps/hub/src/scripts/*",
"src/types/generatedManagerTypes.ts",
"src/types/vue-shim.d.ts",
"test-results/*",


@@ -208,7 +208,7 @@ See @docs/testing/\*.md for detailed patterns.
3. Keep your module mocks contained
Do not use global mutable state within the test file
Use `vi.hoisted()` if necessary to allow for per-test Arrange phase manipulation of deeper mock state
4. For Component testing, use [Vue Test Utils](https://test-utils.vuejs.org/) and especially follow the advice [about making components easy to test](https://test-utils.vuejs.org/guide/essentials/easy-to-test.html)
4. For Component testing, prefer [@testing-library/vue](https://testing-library.com/docs/vue-testing-library/intro/) with `@testing-library/user-event` for user-centric, behavioral tests. [Vue Test Utils](https://test-utils.vuejs.org/) is also accepted, especially for tests that need direct access to the component wrapper (e.g., `findComponent`, `emitted()`). Follow the advice [about making components easy to test](https://test-utils.vuejs.org/guide/essentials/easy-to-test.html)
5. Aim for behavioral coverage of critical and new features
### Playwright / Browser / E2E Tests
@@ -216,6 +216,7 @@ See @docs/testing/\*.md for detailed patterns.
1. Follow the Best Practices described [in the Playwright documentation](https://playwright.dev/docs/best-practices)
2. Do not use waitForTimeout, use Locator actions and [retrying assertions](https://playwright.dev/docs/test-assertions#auto-retrying-assertions)
3. Tags like `@mobile`, `@2x` are respected by config and should be used for relevant tests
4. Type all API mock responses in `route.fulfill()` using generated types or schemas from `packages/ingest-types`, `packages/registry-types`, `src/workbench/extensions/manager/types/generatedManagerTypes.ts`, or `src/schemas/` — see `docs/guidance/playwright.md` for the full source-of-truth table
## External Resources
@@ -231,6 +232,18 @@ See @docs/testing/\*.md for detailed patterns.
- Nx: <https://nx.dev/docs/reference/nx-commands>
- [Practical Test Pyramid](https://martinfowler.com/articles/practical-test-pyramid.html)
## Architecture Decision Records
All architectural decisions are documented in `docs/adr/`. Code changes must be consistent with accepted ADRs. Proposed ADRs indicate design direction and should be treated as guidance. See `.agents/checks/adr-compliance.md` for automated validation rules.
### Entity Architecture Constraints (ADR 0003 + ADR 0008)
1. **Command pattern for all mutations**: Every entity state change must be a serializable, idempotent, deterministic command — replayable, undoable, and transmittable over CRDT. No imperative fire-and-forget mutation APIs. Systems produce command batches, not direct side effects.
2. **Centralized registries and ECS-style access**: Entity data lives in the World (centralized registry), queried via `world.getComponent(entityId, ComponentType)`. Do not add new instance properties/methods to entity classes. Do not use OOP inheritance for entity modeling.
3. **No god-object growth**: Do not add methods to `LGraphNode`, `LGraphCanvas`, `LGraph`, or `Subgraph`. Extract to systems, stores, or composables.
4. **Plain data components**: ECS components are plain data objects — no methods, no back-references to parent entities. Behavior belongs in systems (pure functions).
5. **Extension ecosystem impact**: Changes to entity callbacks (`onConnectionsChange`, `onRemoved`, `onAdded`, `onConnectInput/Output`, `onConfigure`, `onWidgetChanged`), `node.widgets` access, `node.serialize`, or `graph._version++` affect 40+ custom node repos and require migration guidance.
## Project Philosophy
- Follow good software engineering principles


@@ -41,12 +41,49 @@
/src/components/templates/ @Myestery @christian-byrne @comfyui-wiki
# Mask Editor
/src/extensions/core/maskeditor.ts @trsommer @brucew4yn3rp
/src/extensions/core/maskEditorLayerFilenames.ts @trsommer @brucew4yn3rp
/src/extensions/core/maskeditor.ts @trsommer @brucew4yn3rp @jtydhr88
/src/extensions/core/maskEditorLayerFilenames.ts @trsommer @brucew4yn3rp @jtydhr88
/src/components/maskeditor/ @trsommer @brucew4yn3rp @jtydhr88
/src/composables/maskeditor/ @trsommer @brucew4yn3rp @jtydhr88
/src/stores/maskEditorStore.ts @trsommer @brucew4yn3rp @jtydhr88
/src/stores/maskEditorDataStore.ts @trsommer @brucew4yn3rp @jtydhr88
# Image Crop
/src/extensions/core/imageCrop.ts @jtydhr88
/src/components/imagecrop/ @jtydhr88
/src/composables/useImageCrop.ts @jtydhr88
/src/lib/litegraph/src/widgets/ImageCropWidget.ts @jtydhr88
# Image Compare
/src/extensions/core/imageCompare.ts @jtydhr88
/src/renderer/extensions/vueNodes/widgets/components/WidgetImageCompare.vue @jtydhr88
/src/renderer/extensions/vueNodes/widgets/components/WidgetImageCompare.test.ts @jtydhr88
/src/renderer/extensions/vueNodes/widgets/components/WidgetImageCompare.stories.ts @jtydhr88
/src/renderer/extensions/vueNodes/widgets/composables/useImageCompareWidget.ts @jtydhr88
/src/lib/litegraph/src/widgets/ImageCompareWidget.ts @jtydhr88
# Painter
/src/extensions/core/painter.ts @jtydhr88
/src/components/painter/ @jtydhr88
/src/composables/painter/ @jtydhr88
/src/renderer/extensions/vueNodes/widgets/composables/usePainterWidget.ts @jtydhr88
/src/lib/litegraph/src/widgets/PainterWidget.ts @jtydhr88
# GLSL
/src/renderer/glsl/ @jtydhr88 @pythongosssss @christian-byrne
# 3D
/src/extensions/core/load3d.ts @jtydhr88
/src/extensions/core/load3dLazy.ts @jtydhr88
/src/extensions/core/load3d/ @jtydhr88
/src/components/load3d/ @jtydhr88
/src/composables/useLoad3d.ts @jtydhr88
/src/composables/useLoad3d.test.ts @jtydhr88
/src/composables/useLoad3dDrag.ts @jtydhr88
/src/composables/useLoad3dDrag.test.ts @jtydhr88
/src/composables/useLoad3dViewer.ts @jtydhr88
/src/composables/useLoad3dViewer.test.ts @jtydhr88
/src/services/load3dService.ts @jtydhr88
# Manager
/src/workbench/extensions/manager/ @viva-jinyi @christian-byrne @ltdrdata

apps/hub/.gitignore (new file, 9 lines)

@@ -0,0 +1,9 @@
dist/
.astro/
.content-cache/
src/content/templates/
public/workflows/thumbnails/
public/workflows/avatars/
public/previews/
public/search-index.json
knowledge/tutorials/

apps/hub/astro.config.mjs (new file, 254 lines)

@@ -0,0 +1,254 @@
import { defineConfig } from 'astro/config'
import sitemap from '@astrojs/sitemap'
import vercel from '@astrojs/vercel'
import tailwindcss from '@tailwindcss/vite'
import fs from 'node:fs'
import path from 'node:path'
import os from 'node:os'
import vue from '@astrojs/vue'
// Build template date lookup at config time
const templatesDir = path.join(process.cwd(), 'src/content/templates')
const templateDates = new Map()
if (fs.existsSync(templatesDir)) {
const files = fs.readdirSync(templatesDir).filter((f) => f.endsWith('.json'))
for (const file of files) {
try {
const content = JSON.parse(
fs.readFileSync(path.join(templatesDir, file), 'utf-8')
)
if (content.name && content.date) {
templateDates.set(content.name, content.date)
}
} catch {
// Skip invalid JSON files
}
}
}
// Build timestamp used as lastmod fallback for pages without a specific date
const buildDate = new Date().toISOString()
// Supported locales (matches src/i18n/config.ts)
const locales = [
'en',
'zh',
'zh-TW',
'ja',
'ko',
'es',
'fr',
'ru',
'tr',
'ar',
'pt-BR'
]
const nonDefaultLocales = locales.filter((l) => l !== 'en')
// Custom sitemap pages for ISR routes not discovered at build time
const siteOrigin = (
process.env.PUBLIC_SITE_ORIGIN || 'https://www.comfy.org'
).replace(/\/$/, '')
// Creator profile pages — extract unique usernames from synced templates
const creatorUsernames = new Set()
if (fs.existsSync(templatesDir)) {
const files = fs.readdirSync(templatesDir).filter((f) => f.endsWith('.json'))
for (const file of files) {
try {
const content = JSON.parse(
fs.readFileSync(path.join(templatesDir, file), 'utf-8')
)
if (content.username) creatorUsernames.add(content.username)
} catch {
// Skip invalid JSON
}
}
}
const creatorPages = [...creatorUsernames].map(
(u) => `${siteOrigin}/workflows/${u}/`
)
const localeCustomPages = nonDefaultLocales.map(
(locale) => `${siteOrigin}/${locale}/workflows/`
)
const customPages = [...creatorPages, ...localeCustomPages]
// https://astro.build/config
export default defineConfig({
site: (process.env.PUBLIC_SITE_ORIGIN || 'https://www.comfy.org').replace(
/\/$/,
''
),
prefetch: {
prefetchAll: false,
defaultStrategy: 'hover'
},
i18n: {
defaultLocale: 'en',
locales: locales,
routing: {
prefixDefaultLocale: false // English at root, others prefixed (/zh/, /ja/, etc.)
}
},
integrations: [
sitemap({
// Use custom filename to avoid collision with Framer's /sitemap.xml
filenameBase: 'sitemap-workflows',
// Include Framer's marketing sitemap in the index
customSitemaps: ['https://www.comfy.org/sitemap.xml'],
// Include on-demand locale pages that aren't discovered at build time
customPages: customPages,
serialize(item) {
const url = new URL(item.url)
const pathname = url.pathname
// Template detail pages: /workflows/{slug}/ or /{locale}/workflows/{slug}/
const templateMatch = pathname.match(
/^(?:\/([a-z]{2}(?:-[A-Z]{2})?))?\/workflows\/([^/]+)\/?$/
)
if (templateMatch) {
const slug = templateMatch[2]
const date = templateDates.get(slug)
item.lastmod = date ? new Date(date).toISOString() : buildDate
// @ts-expect-error - sitemap types are stricter than actual API
item.changefreq = 'monthly'
item.priority = 0.8
return item
}
// Homepage
if (pathname === '/' || pathname === '') {
item.lastmod = buildDate
// @ts-expect-error - sitemap types are stricter than actual API
item.changefreq = 'daily'
item.priority = 1.0
return item
}
// Workflows index (including localized versions)
if (pathname.match(/^(?:\/[a-z]{2}(?:-[A-Z]{2})?)?\/workflows\/?$/)) {
item.lastmod = buildDate
// @ts-expect-error - sitemap types are stricter than actual API
item.changefreq = 'daily'
item.priority = 0.9
return item
}
// Category pages: /workflows/category/{type}/ or /{locale}/workflows/category/{type}/
if (
pathname.match(
/^(?:\/[a-z]{2}(?:-[A-Z]{2})?)?\/workflows\/category\//
)
) {
// @ts-expect-error - sitemap types are stricter than actual API
item.changefreq = 'weekly'
item.priority = 0.7
return item
}
// Model pages: /workflows/model/{model}/ or /{locale}/workflows/model/{model}/
if (
pathname.match(/^(?:\/[a-z]{2}(?:-[A-Z]{2})?)?\/workflows\/model\//)
) {
// @ts-expect-error - sitemap types are stricter than actual API
item.changefreq = 'weekly'
item.priority = 0.6
return item
}
// Tag pages: /workflows/tag/{tag}/ or /{locale}/workflows/tag/{tag}/
if (
pathname.match(/^(?:\/[a-z]{2}(?:-[A-Z]{2})?)?\/workflows\/tag\//)
) {
// @ts-expect-error - sitemap types are stricter than actual API
item.changefreq = 'weekly'
item.priority = 0.6
return item
}
// Default for other pages
// @ts-expect-error - sitemap types are stricter than actual API
item.changefreq = 'weekly'
item.priority = 0.5
return item
},
// Exclude OG image routes and legacy redirect pages from sitemap.
// Legacy redirects are /workflows/{slug}/ without a 12-char hex share_id suffix.
// Canonical detail pages are /workflows/{slug}-{shareId}/ (shareId = 12 hex chars).
filter: (page) => {
if (
page.includes('/workflows/og/') ||
page.includes('/workflows/og.png')
)
return false
// Check if this is a workflow detail path (not category/tag/model/creators)
const match = page.match(/\/workflows\/([^/]+)\/$/)
if (match) {
const segment = match[1]
// Skip known sub-paths
if (
['category', 'tag', 'model', 'creators'].some((p) =>
page.includes(`/workflows/${p}/`)
)
)
return true
// Include if it has a share_id suffix (12 hex chars after last hyphen)
const lastHyphen = segment.lastIndexOf('-')
if (lastHyphen === -1) return false // No hyphen = legacy redirect
const candidate = segment.slice(lastHyphen + 1)
if (candidate.length === 12 && /^[0-9a-f]+$/.test(candidate))
return true
return false // Has hyphen but not a valid share_id = legacy redirect
}
return true
}
}),
vue()
],
output: 'static',
adapter: vercel({
webAnalytics: { enabled: true },
skewProtection: true
}),
// Build performance optimizations
build: {
// Increase concurrency for faster builds on multi-core systems
concurrency: Math.max(1, os.cpus().length),
// Inline small stylesheets automatically
inlineStylesheets: 'auto'
},
// HTML compression
compressHTML: true,
// Image optimization settings
image: {
service: {
entrypoint: 'astro/assets/services/sharp',
config: {
// Limit input pixels to prevent memory issues with large images
limitInputPixels: 268402689 // ~16384x16384
}
}
},
// Responsive images for automatic srcset generation (now stable in Astro 5)
// Note: responsiveImages was moved from experimental to stable in Astro 5.x
vite: {
plugins: [tailwindcss()],
build: {
chunkSizeWarningLimit: 1000
},
optimizeDeps: {
include: ['web-vitals']
},
css: {
devSourcemap: false
}
}
})


@@ -0,0 +1,22 @@
# 3D Generation
3D generation creates three-dimensional models — meshes, point clouds, or multi-view images — from text or image inputs. This enables rapid prototyping of 3D assets without manual modeling. In ComfyUI, several approaches exist: image-to-3D (lifting a single photo into a mesh), text-to-3D (generating a 3D object from a description), and multi-view generation (producing consistent views of an object that can be reconstructed into 3D).
## How It Works in ComfyUI
- Key nodes involved: Model-specific loaders (`TripoSR`, `InstantMesh`, `StableZero123`), `LoadImage`, `Save3D` / `Preview3D`, `CRM` nodes
- Typical workflow pattern: Load image → Load 3D model → Run inference → Preview 3D result → Export mesh
## Key Settings
- **Inference steps**: Number of denoising/reconstruction steps. More steps generally improve quality but increase generation time.
- **Elevation angle**: Camera elevation for multi-view generation, controlling the vertical viewing angle of the generated views.
- **Guidance scale**: How closely the model follows the input image or text. Higher values increase fidelity to the input but may reduce diversity.
- **Output format**: Export format for the 3D mesh — OBJ, GLB, and PLY are common options, each suited to different downstream tools.
## Tips
- Clean single-object images on white or simple backgrounds work best for image-to-3D conversion.
- Multi-view approaches (like Zero123) often produce better geometry than single-view methods.
- Post-process generated meshes in Blender for cleanup, retopology, or texturing before production use.
- Start with TripoSR for quick results — it generates meshes in seconds and is a good baseline to compare against other methods.


@@ -0,0 +1,374 @@
{
"text-to-image": [
"01_get_started_text_to_image",
"api_bfl_flux2_max_sofa_swap",
"api_bfl_flux_1_kontext_max_image",
"api_bfl_flux_1_kontext_multiple_images_input",
"api_bfl_flux_1_kontext_pro_image",
"api_bfl_flux_pro_t2i",
"api_bytedance_seedream4",
"api_flux2",
"api_from_photo_2_miniature",
"api_google_gemini_image",
"api_grok_text_to_image",
"api_ideogram_v3_t2i",
"api_kling_omni_image",
"api_luma_photon_i2i",
"api_luma_photon_style_ref",
"api_nano_banana_pro",
"api_openai_dall_e_2_inpaint",
"api_openai_dall_e_2_t2i",
"api_openai_dall_e_3_t2i",
"api_openai_fashion_billboard_generator",
"api_openai_image_1_i2i",
"api_openai_image_1_inpaint",
"api_openai_image_1_multi_inputs",
"api_openai_image_1_t2i",
"api_recraft_image_gen_with_color_control",
"api_recraft_image_gen_with_style_control",
"api_recraft_style_reference",
"api_recraft_vector_gen",
"api_runway_reference_to_image",
"api_runway_text_to_image",
"api_stability_ai_i2i",
"api_stability_ai_sd3.5_i2i",
"api_stability_ai_sd3.5_t2i",
"api_stability_ai_stable_image_ultra_t2i",
"api_wan_text_to_image",
"default",
"flux1_dev_uso_reference_image_gen",
"flux1_krea_dev",
"flux_canny_model_example",
"flux_depth_lora_example",
"flux_dev_checkpoint_example",
"flux_dev_full_text_to_image",
"flux_fill_inpaint_example",
"flux_fill_outpaint_example",
"flux_redux_model_example",
"flux_schnell",
"flux_schnell_full_text_to_image",
"hidream_e1_1",
"hidream_e1_full",
"hidream_i1_dev",
"hidream_i1_fast",
"hidream_i1_full",
"image-qwen_image_edit_2511_lora_inflation",
"image_anima_preview",
"image_chroma1_radiance_text_to_image",
"image_chroma_text_to_image",
"image_flux2",
"image_flux2_fp8",
"image_flux2_klein_image_edit_4b_base",
"image_flux2_klein_image_edit_4b_distilled",
"image_flux2_klein_image_edit_9b_base",
"image_flux2_klein_image_edit_9b_distilled",
"image_flux2_klein_text_to_image",
"image_flux2_text_to_image",
"image_flux2_text_to_image_9b",
"image_kandinsky5_t2i",
"image_lotus_depth_v1_1",
"image_netayume_lumina_t2i",
"image_newbieimage_exp0_1-t2i",
"image_omnigen2_image_edit",
"image_omnigen2_t2i",
"image_ovis_text_to_image",
"image_qwen_Image_2512",
"image_qwen_image",
"image_qwen_image_2512_with_2steps_lora",
"image_qwen_image_controlnet_patch",
"image_qwen_image_instantx_controlnet",
"image_qwen_image_instantx_inpainting_controlnet",
"image_qwen_image_union_control_lora",
"image_z_image",
"image_z_image_turbo",
"image_z_image_turbo_fun_union_controlnet",
"sd3.5_large_blur",
"sd3.5_large_canny_controlnet_example",
"sd3.5_large_depth",
"sd3.5_simple_example",
"sdxl_refiner_prompt_example",
"sdxl_revision_text_prompts",
"sdxl_simple_example",
"sdxlturbo_example",
"templates-9grid_social_media-v2.0"
],
"img2img": [
"02_qwen_Image_edit_subgraphed",
"api_luma_photon_i2i",
"api_meshy_multi_image_to_model",
"api_openai_image_1_i2i",
"api_runway_reference_to_image",
"api_stability_ai_i2i",
"api_stability_ai_sd3.5_i2i",
"flux1_dev_uso_reference_image_gen",
"flux_canny_model_example",
"flux_depth_lora_example",
"flux_fill_inpaint_example",
"flux_fill_outpaint_example",
"flux_kontext_dev_basic",
"flux_redux_model_example",
"image_chrono_edit_14B",
"image_qwen_image_edit",
"image_qwen_image_edit_2509",
"image_qwen_image_instantx_controlnet",
"image_qwen_image_instantx_inpainting_controlnet",
"sd3.5_large_blur",
"sd3.5_large_canny_controlnet_example",
"sd3.5_large_depth"
],
"inpainting": [
"api_openai_dall_e_2_inpaint",
"api_openai_image_1_inpaint",
"api_stability_ai_audio_inpaint",
"flux_fill_inpaint_example",
"flux_fill_outpaint_example",
"image_flux.1_fill_dev_OneReward",
"image_qwen_image_instantx_inpainting_controlnet",
"video_wan2_2_14B_fun_inpaint",
"video_wan2_2_5B_fun_inpaint",
"video_wan_vace_inpainting",
"wan2.1_fun_inp"
],
"outpainting": [
"api_bria_image_outpainting",
"flux_fill_outpaint_example",
"image_flux.1_fill_dev_OneReward",
"video_wan_vace_outpainting"
],
"controlnet": [
"02_qwen_Image_edit_subgraphed",
"flux_canny_model_example",
"flux_depth_lora_example",
"flux_redux_model_example",
"image_lotus_depth_v1_1",
"image_qwen_image_controlnet_patch",
"image_qwen_image_edit_2509",
"image_qwen_image_instantx_controlnet",
"image_qwen_image_instantx_inpainting_controlnet",
"image_qwen_image_union_control_lora",
"image_z_image_turbo_fun_union_controlnet",
"sd3.5_large_canny_controlnet_example",
"sd3.5_large_depth",
"utility-depthAnything-v2-relative-video",
"utility-frame_interpolation-film",
"utility-lineart-video",
"utility-normal_crafter-video",
"utility-openpose-video",
"video_ltx2_canny_to_video",
"video_ltx2_depth_to_video",
"video_ltx2_pose_to_video",
"wan2.1_fun_control"
],
"upscaling": [
"api_topaz_image_enhance",
"api_topaz_video_enhance",
"api_wavespeed_flshvsr_video_upscale",
"api_wavespped_image_upscale",
"api_wavespped_seedvr2_ai_image_fix",
"ultility_hitpaw_general_image_enhance",
"ultility_hitpaw_video_enhance",
"utility-gan_upscaler",
"utility-topaz_landscape_upscaler",
"utility_interpolation_image_upscale",
"utility_nanobanana_pro_ai_image_fix",
"utility_nanobanana_pro_illustration_upscale",
"utility_nanobanana_pro_product_upscale",
"utility_recraft_creative_image_upscale",
"utility_recraft_crisp_image_upscale",
"utility_seedvr2_image_upscale",
"utility_seedvr2_video_upscale",
"utility_topaz_illustration_upscale",
"utility_video_upscale"
],
"video-generation": [
"03_video_wan2_2_14B_i2v_subgraphed",
"api_bytedace_seedance1_5_flf2v",
"api_bytedace_seedance1_5_image_to_video",
"api_bytedace_seedance1_5_text_to_video",
"api_bytedance_flf2v",
"api_bytedance_image_to_video",
"api_bytedance_text_to_video",
"api_grok_video",
"api_grok_video_edit",
"api_hailuo_minimax_i2v",
"api_hailuo_minimax_t2v",
"api_hailuo_minimax_video",
"api_kling2_6_i2v",
"api_kling2_6_t2v",
"api_kling_effects",
"api_kling_flf",
"api_kling_i2v",
"api_kling_motion_control",
"api_kling_omni_edit_video",
"api_kling_omni_i2v",
"api_kling_omni_t2v",
"api_kling_omni_v2v",
"api_ltxv_image_to_video",
"api_ltxv_text_to_video",
"api_luma_i2v",
"api_luma_t2v",
"api_moonvalley_image_to_video",
"api_moonvalley_text_to_video",
"api_moonvalley_video_to_video_motion_transfer",
"api_moonvalley_video_to_video_pose_control",
"api_openai_sora_video",
"api_pixverse_i2v",
"api_pixverse_t2v",
"api_pixverse_template_i2v",
"api_runway_first_last_frame",
"api_runway_gen3a_turbo_image_to_video",
"api_runway_gen4_turo_image_to_video",
"api_topaz_video_enhance",
"api_veo2_i2v",
"api_veo3",
"api_vidu_image_to_video",
"api_vidu_q2_flf2v",
"api_vidu_q2_i2v",
"api_vidu_q2_r2v",
"api_vidu_q2_t2v",
"api_vidu_q3_image_to_video",
"api_vidu_q3_text_to_video",
"api_vidu_reference_to_video",
"api_vidu_start_end_to_video",
"api_vidu_text_to_video",
"api_vidu_video_extension",
"api_wan2_6_i2v",
"api_wan2_6_t2v",
"api_wan_image_to_video",
"api_wan_r2v",
"api_wan_text_to_video",
"api_wavespeed_flshvsr_video_upscale",
"gsc_starter_2",
"hunyuan_video_text_to_video",
"image_to_video_wan",
"ltxv_image_to_video",
"ltxv_text_to_video",
"template-Animation_Trajectory_Control_Wan_ATI",
"templates-3D_logo_texture_animation",
"templates-6-key-frames",
"templates-car_product",
"templates-photo_to_product_vid",
"templates-sprite_sheet",
"templates-stitched_vid_contact_sheet",
"templates-textured_logo_elements",
"templates-textured_logotype-v2.1",
"text_to_video_wan",
"txt_to_image_to_video",
"ultility_hitpaw_video_enhance",
"utility-depthAnything-v2-relative-video",
"utility-frame_interpolation-film",
"utility-gan_upscaler",
"utility-lineart-video",
"utility-normal_crafter-video",
"utility-openpose-video",
"utility_seedvr2_video_upscale",
"utility_video_upscale",
"video-wan21_scail",
"video_humo",
"video_hunyuan_video_1.5_720p_i2v",
"video_hunyuan_video_1.5_720p_t2v",
"video_kandinsky5_i2v",
"video_kandinsky5_t2v",
"video_ltx2_canny_to_video",
"video_ltx2_depth_to_video",
"video_ltx2_i2v",
"video_ltx2_i2v_distilled",
"video_ltx2_pose_to_video",
"video_ltx2_t2v",
"video_ltx2_t2v_distilled",
"video_wan2.1_alpha_t2v_14B",
"video_wan2.1_fun_camera_v1.1_1.3B",
"video_wan2.1_fun_camera_v1.1_14B",
"video_wan2_1_infinitetalk",
"video_wan2_2_14B_animate",
"video_wan2_2_14B_flf2v",
"video_wan2_2_14B_fun_camera",
"video_wan2_2_14B_fun_control",
"video_wan2_2_14B_fun_inpaint",
"video_wan2_2_14B_i2v",
"video_wan2_2_14B_s2v",
"video_wan2_2_14B_t2v",
"video_wan2_2_5B_fun_control",
"video_wan2_2_5B_fun_inpaint",
"video_wan2_2_5B_ti2v",
"video_wan_ati",
"video_wan_vace_14B_ref2v",
"video_wan_vace_14B_t2v",
"video_wan_vace_14B_v2v",
"video_wan_vace_flf2v",
"video_wan_vace_inpainting",
"video_wan_vace_outpainting",
"video_wanmove_480p",
"video_wanmove_480p_hallucination",
"wan2.1_flf2v_720_f16",
"wan2.1_fun_control",
"wan2.1_fun_inp"
],
"audio-generation": [
"05_audio_ace_step_1_t2a_song_subgraphed",
"api_kling2_6_i2v",
"api_kling2_6_t2v",
"api_stability_ai_audio_inpaint",
"api_stability_ai_audio_to_audio",
"api_stability_ai_text_to_audio",
"api_vidu_q3_image_to_video",
"api_vidu_q3_text_to_video",
"audio-chatterbox_tts",
"audio-chatterbox_tts_dialog",
"audio-chatterbox_tts_multilingual",
"audio-chatterbox_vc",
"audio_ace_step_1_5_checkpoint",
"audio_ace_step_1_5_split",
"audio_ace_step_1_5_split_4b",
"audio_ace_step_1_m2m_editing",
"audio_ace_step_1_t2a_instrumentals",
"audio_ace_step_1_t2a_song",
"audio_stable_audio_example",
"utility-audioseparation",
"video_wan2_1_infinitetalk",
"video_wan2_2_14B_s2v"
],
"3d-generation": [
"04_hunyuan_3d_2.1_subgraphed",
"3d_hunyuan3d-v2.1",
"3d_hunyuan3d_image_to_model",
"3d_hunyuan3d_multiview_to_model",
"3d_hunyuan3d_multiview_to_model_turbo",
"api_from_photo_2_miniature",
"api_hunyuan3d_image_to_model",
"api_hunyuan3d_text_to_model",
"api_meshy_image_to_model",
"api_meshy_multi_image_to_model",
"api_meshy_text_to_model",
"api_rodin_gen2",
"api_rodin_image_to_model",
"api_rodin_multiview_to_model",
"api_tripo3_0_image_to_model",
"api_tripo3_0_text_to_model",
"api_tripo_image_to_model",
"api_tripo_multiview_to_model",
"api_tripo_text_to_model",
"templates-3D_logo_texture_animation",
"templates-qwen_multiangle"
],
"lora": [
"flux_depth_lora_example",
"image-qwen_image_edit_2511_lora_inflation",
"image_qwen_image_2512_with_2steps_lora",
"image_qwen_image_union_control_lora"
],
"embeddings": [],
"ip-adapter": [
"api_kling_omni_i2v",
"api_kling_omni_image",
"api_kling_omni_v2v",
"api_magnific_image_style_transfer",
"api_recraft_style_reference",
"api_vidu_q2_r2v",
"api_wan_r2v",
"templates-product_ad-v2.0"
],
"samplers": [],
"cfg": [],
"vae": []
}


@@ -0,0 +1,22 @@
# Audio Generation
Audio generation in ComfyUI covers creating speech (text-to-speech), music, and sound effects from text prompts or reference audio. Dedicated audio models run within ComfyUI's node graph, letting you integrate audio creation into larger multimedia workflows — for example, generating a video and its soundtrack in a single pipeline.
## How It Works in ComfyUI
- Key nodes involved: Model-specific nodes (`CosyVoice` nodes for TTS, `StableAudio` nodes for music/SFX), audio preview and save nodes, `AudioScheduler`
- Typical workflow pattern: Load audio model → Provide text/reference input → Generate audio → Preview/save audio
## Key Settings
- **Sample rate**: Output audio quality, typically 24000–48000 Hz. Higher rates capture more detail but produce larger files.
- **Duration**: Length of generated audio in seconds. Longer durations may reduce quality or coherence depending on the model.
- **Voice reference**: For voice cloning, a short audio clip of the target voice (3–10 seconds of clean speech works best).
- **Text input**: The text to be spoken (TTS) or the description of the desired audio (music/SFX generation).
## Tips
- CosyVoice and F5-TTS are popular choices for text-to-speech in ComfyUI, each with dedicated custom nodes.
- Stable Audio Open handles music and sound effect generation from text descriptions.
- Use clean, noise-free reference audio clips for voice cloning to get the best results.
- Keep text inputs short and well-punctuated for the highest quality speech output — long paragraphs may degrade in naturalness.


@@ -0,0 +1,23 @@
# CFG / Guidance Scale
Classifier-Free Guidance (CFG) controls how strongly the model follows your text prompt versus generating freely. Higher CFG values produce outputs that adhere more closely to the prompt but can cause oversaturation and artifacts, while lower values yield more natural-looking images at the cost of reduced prompt control. Finding the right balance is essential for every workflow.
## How It Works in ComfyUI
- Key nodes: `KSampler` (the `cfg` parameter), `ModelSamplingDiscrete` (for advanced noise schedule configurations)
- During each sampling step, the model generates both a conditioned prediction (with your prompt) and an unconditioned prediction (without it). CFG scales the difference between the two — higher values push the output further toward the conditioned prediction, amplifying prompt influence.
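The two-prediction update described above reduces to a one-line formula. The sketch below uses scalars in place of latent tensors — a schematic of the guidance math, not sampler internals:

```python
def cfg_blend(uncond: float, cond: float, cfg: float) -> float:
    """Classifier-free guidance: start from the unconditioned
    prediction and scale the (cond - uncond) difference."""
    return uncond + cfg * (cond - uncond)

# cfg = 1.0 reproduces the conditioned prediction unchanged;
# higher values push the output further toward the prompt.
print(cfg_blend(1.0, 2.0, 1.0))  # 2.0
print(cfg_blend(1.0, 2.0, 7.5))  # 8.5
```

At `cfg = 0.0` the prompt is ignored entirely, which is why very low values feel "free-form" and very high values overshoot into oversaturation.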
## Key Settings
- **cfg** (1.0–30.0): The guidance scale value. Recommended ranges vary by model architecture:
  - SD 1.5 / SDXL: 7–8 is the standard starting point
  - Flux: 1.0–4.0 (Flux uses much lower guidance)
  - Video models (e.g., Wan, HunyuanVideo): 3.5–5.0
## Tips
- Start at 7 for SD-based models and 3.5 for Flux, then adjust based on results
- Values above ~12 for SD models typically cause color oversaturation, harsh contrast, and visible artifacts
- Values below ~3 for SD models tend to produce blurry or incoherent results
- Some models like Flux Schnell use a guidance embedding baked into the model rather than traditional CFG — for these, the `cfg` parameter may have little or no effect
- When experimenting, change CFG in increments of 0.5–1.0 to see its impact clearly


@@ -0,0 +1,28 @@
# ControlNet
ControlNet guides image generation using structural conditions extracted from reference images — such as edge maps, depth information, or human poses. Instead of relying solely on text prompts for composition, ControlNet lets you specify the spatial layout precisely. This bridges the gap between text-to-image flexibility and the structural precision needed for professional workflows.
## How It Works in ComfyUI
- Key nodes involved: `ControlNetLoader`, `ControlNetApplyAdvanced`, preprocessor nodes (`CannyEdgePreprocessor`, `DepthAnythingPreprocessor`, `DWPosePreprocessor`, `LineartPreprocessor`)
- Typical workflow pattern: Load reference image → preprocess to extract condition (edges/depth/pose) → load ControlNet model → apply condition to sampling → generate image with structural guidance
## ControlNet Types
- **Canny**: Detects edges to preserve outlines and shapes
- **Depth**: Captures spatial depth for accurate foreground/background placement
- **OpenPose**: Extracts human body and hand poses for character positioning
- **Normal Map**: Encodes surface orientation for consistent lighting and geometry
- **Lineart**: Follows line drawings and illustrations as generation guides
- **Scribble**: Uses rough sketches as loose compositional guides
## Key Settings
- **Strength**: Controls how strongly the condition guides generation (0.0–1.0). Values of 0.5–1.0 are typical. Higher values enforce the structure more rigidly; lower values allow the model more creative freedom.
- **start_percent / end_percent**: Controls when the ControlNet activates during the sampling process. Starting at 0.0 and ending at 1.0 applies guidance throughout. Ending earlier (e.g., 0.8) lets the model refine fine details freely in final steps.
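How start_percent / end_percent map onto discrete sampling steps can be sketched as follows; `controlnet_active_steps` is a hypothetical helper for illustration, not a ComfyUI API:

```python
def controlnet_active_steps(steps: int, start_percent: float,
                            end_percent: float) -> list[int]:
    """0-indexed sampling steps during which the condition is applied."""
    return [s for s in range(steps) if start_percent <= s / steps < end_percent]

# With 20 steps and end_percent = 0.8, guidance stops after step 15,
# leaving the final steps free for unguided detail refinement.
print(len(controlnet_active_steps(20, 0.0, 0.8)))  # 16
print(len(controlnet_active_steps(20, 0.0, 1.0)))  # 20
```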
## Tips
- Always preprocess your input image with the appropriate preprocessor node before feeding it to ControlNet. Raw images will not produce correct conditioning.
- Combine multiple ControlNets for precise control — for example, Depth for spatial layout plus OpenPose for character positioning. Stack them by chaining `ControlNetApplyAdvanced` nodes.
- If your generation looks distorted or overcooked, lower the ControlNet strength. Values above 0.8 can fight with the text prompt and produce artifacts.


@@ -0,0 +1,19 @@
# Textual Embeddings
Textual embeddings are learned text representations that encode specific concepts, styles, or objects into the CLIP text encoder's vocabulary. These tiny files (~10–100 KB) effectively add new "words" to your prompt vocabulary, letting you reference complex visual concepts — a particular art style, a specific character, or a set of undesirable artifacts — with a single token. Because they operate at the text-encoding level, embeddings integrate seamlessly with your existing prompts and require no changes to the model itself.
## How It Works in ComfyUI
- Key nodes: `CLIPTextEncode` — reference embeddings directly in your prompt text using the syntax `embedding:name_of_embedding`
- Typical workflow pattern: Place embedding files in `ComfyUI/models/embeddings/` → type `embedding:name_of_embedding` inside your positive or negative prompt in a `CLIPTextEncode` node → connect to sampler as usual
## Key Settings
- **Prompt weighting**: Embeddings have no dedicated strength slider, but you can adjust their influence with prompt weighting syntax, e.g., `(embedding:name_of_embedding:1.2)` to increase strength or `(embedding:name_of_embedding:0.6)` to soften it
- **Placement**: Add embeddings to the negative prompt to suppress unwanted features, or to the positive prompt to invoke a learned concept
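The weighting syntax above is plain text, so it is easy to see what a prompt parser has to extract. A minimal sketch of matching `(embedding:name:weight)` tokens — illustrative only, not ComfyUI's actual prompt parser:

```python
import re

# Matches weighted-embedding syntax like "(embedding:EasyNegative:1.2)".
WEIGHTED = re.compile(r"\((embedding:[\w.-]+):([\d.]+)\)")

def parse_weighted_embeddings(prompt: str) -> list[tuple[str, float]]:
    """Extract (token, weight) pairs from a prompt string."""
    return [(m.group(1), float(m.group(2))) for m in WEIGHTED.finditer(prompt)]

print(parse_weighted_embeddings("a photo, (embedding:EasyNegative:1.2)"))
# [('embedding:EasyNegative', 1.2)]
```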
## Tips
- Embeddings are commonly used in negative prompts (e.g., `embedding:EasyNegative`, `embedding:bad-hands-5`) to reduce common artifacts like malformed hands or distorted faces
- Make sure the embedding matches your base model version — an SD 1.5 embedding will not work correctly with an SDXL checkpoint
- You can combine multiple embeddings with regular text in the same prompt for fine-grained control


@@ -0,0 +1,20 @@
# Image-to-Image
Image-to-image (img2img) transforms an existing image using a text prompt while preserving the original structure and composition. Instead of starting from pure noise, the source image is encoded into latent space and partially noised, then the sampler denoises it guided by your prompt. This lets you restyle photos, refine AI-generated images, or apply creative modifications while keeping the overall layout intact.
## How It Works in ComfyUI
- Key nodes involved: `LoadImage`, `VAEEncode`, `CLIPTextEncode` (positive + negative), `KSampler`, `VAEDecode`, `SaveImage`
- Typical workflow pattern: Load source image → encode to latent with VAE → encode text prompts → sample with partial denoise → decode latent to image → save
## Key Settings
- **Denoise Strength**: The most important setting, ranging from 0.0 to 1.0. Lower values (0.2–0.4) preserve more of the original image with subtle changes. Higher values (0.6–0.8) allow more creative freedom but deviate further from the source. A value of 1.0 effectively ignores the input image entirely.
- **Steps**: Number of sampling steps. 20–30 is typical. Fewer steps may be sufficient at low denoise values since less transformation is needed.
- **CFG Scale**: Controls prompt adherence, same as text-to-image. 7–8 is a standard starting point.
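A rough way to see why fewer steps suffice at low denoise: the sampler only runs the tail of the noise schedule. This is illustrative arithmetic, not ComfyUI's exact scheduling:

```python
def effective_steps(steps: int, denoise: float) -> int:
    """Approximate number of steps actually sampled: denoise=1.0 runs
    the full schedule; denoise=0.3 runs only the last ~30% of it."""
    return round(steps * denoise)

print(effective_steps(20, 1.0))  # 20
print(effective_steps(20, 0.5))  # 10
print(effective_steps(20, 0.3))  # 6
```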
## Tips
- Start with a denoise strength of 0.5 and adjust up or down based on how much change you want. This gives a balanced mix of original structure and new content.
- Your input image resolution should match the model's training resolution. Resize or crop your source image to 512×512 (SD 1.5) or 1024×1024 (SDXL) before loading to avoid quality issues.
- Use img2img iteratively: generate an initial text-to-image result, then refine it with img2img at low denoise to fix details without losing the overall composition.


@@ -0,0 +1,21 @@
# Inpainting
Inpainting selectively regenerates parts of an image using a mask while leaving the rest untouched. You paint a mask over the area you want to change, provide a text prompt describing the desired replacement, and the model fills in only the masked region. This is essential for fixing defects, replacing objects, or refining specific details in an otherwise finished image.
## How It Works in ComfyUI
- Key nodes involved: `LoadImage`, `VAEEncodeForInpainting`, `CLIPTextEncode` (positive + negative), `KSampler`, `VAEDecode`, `SaveImage`
- Typical workflow pattern: Load image + mask → encode with inpainting-aware VAE node → encode text prompts → sample → decode → save
- The mask can be created using ComfyUI's built-in mask editor or loaded from an external image
## Key Settings
- **grow_mask_by**: Expands the mask by a number of pixels, helping the regenerated area blend smoothly with the surrounding image. 6–8 pixels is typical. Too little causes visible seams; too much affects areas you wanted to keep.
- **Denoise Strength**: For inpainting, higher values (0.7–1.0) generally work best since you want the masked region to be fully regenerated. Lower values may produce inconsistent blending.
- **Checkpoint**: Dedicated inpainting models like `512-inpainting-ema` produce significantly better edge blending than standard checkpoints.
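The grow_mask_by behavior amounts to a binary dilation. A pure-Python sketch on a tiny grid (illustrative, not the actual node implementation):

```python
def grow_mask(mask: list[list[int]], pixels: int) -> list[list[int]]:
    """Expand a binary mask: a cell becomes 1 if any original 1
    lies within `pixels` cells of it (Chebyshev distance)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(mask[yy][xx]
                   for yy in range(max(0, y - pixels), min(h, y + pixels + 1))
                   for xx in range(max(0, x - pixels), min(w, x + pixels + 1))):
                out[y][x] = 1
    return out

mask = [[0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0]]
# Growing by 1 pixel turns the single masked cell into a 3x3 block.
print(sum(map(sum, grow_mask(mask, 1))))  # 9
```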
## Tips
- Always expand your mask slightly beyond the target area. Tight masks create hard edges that look unnatural against the surrounding image.
- Describe what you want to appear in the masked region, not what you want to remove. For example, prompt "a clear blue sky" rather than "remove the bird."
- Use inpainting-specific checkpoints whenever possible. Standard models can inpaint but often struggle with seamless blending at mask boundaries.


@@ -0,0 +1,21 @@
# IP-Adapter
IP-Adapter (Image Prompt Adapter) uses reference images to guide generation style, composition, or subject instead of — or alongside — text prompts. Rather than describing what you want in words, you show the model an image, enabling "image prompting." This is especially powerful for transferring artistic style, maintaining character consistency across generations, or conveying visual concepts that are difficult to express in text.
## How It Works in ComfyUI
- Key nodes: `IPAdapterModelLoader`, `IPAdapterApply` (or `IPAdapterAdvanced`), `CLIPVisionLoader`, `CLIPVisionEncode`, `PrepImageForClipVision`
- Typical workflow pattern: Load IP-Adapter model + CLIP Vision model → prepare and encode reference image → apply adapter to the main model → connect to sampler → decode
## Key Settings
- **weight** (0.0–1.0): Controls the influence of the reference image on the output. A range of 0.5–0.8 is typical; higher values make the output closer to the reference
- **weight_type**: Determines how the reference is interpreted — `standard` for general use, `style transfer` for artistic style without copying content, `composition` for layout guidance
- **start_at / end_at** (0.0–1.0): Controls when the adapter is active during sampling. Limiting the range (e.g., 0.0–0.8) can improve prompt responsiveness while retaining reference influence
## Tips
- Use the `style transfer` weight type when you want to borrow an artistic style without reproducing the reference image's content
- Combine IP-Adapter with a text prompt for the best results — the text adds detail and specificity on top of the visual guidance
- Face-specific IP-Adapter models (e.g., `ip-adapter-faceid`) exist for portrait consistency across multiple generations
- Lower the weight if your output looks too similar to the reference image


@@ -0,0 +1,20 @@
# LoRA
LoRA (Low-Rank Adaptation) is a technique for fine-tuning a base model's behavior using a small add-on file rather than retraining the entire model. LoRAs adjust a model's style, teach it specific subjects, or introduce new concepts — all in a file typically just 10–200 MB, compared to multi-gigabyte full checkpoints. This makes them easy to share, swap, and combine. In ComfyUI, you load LoRAs on top of a checkpoint and control how strongly they influence the output.
## How It Works in ComfyUI
- Key nodes involved: `LoraLoader` (loads one LoRA and applies it to both MODEL and CLIP), `LoraLoaderModelOnly` (applies to MODEL only, skipping CLIP for faster loading)
- Typical workflow pattern: Load checkpoint → LoraLoader (attach LoRA) → CLIP Text Encode → KSampler → VAE Decode. Chain multiple `LoraLoader` nodes to stack LoRAs.
## Key Settings
- **strength_model**: Controls how much the LoRA affects the diffusion model. Range 0.0–1.0; typical values are 0.6–1.0. Higher values apply the LoRA effect more strongly.
- **strength_clip**: Controls how much the LoRA affects text encoding. Usually set to the same value as strength_model, but can be adjusted independently for fine control.
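The "small add-on file" works because the weight delta is low-rank: W' = W + strength * (B @ A), where B and A are narrow matrices. A pure-Python sketch with tiny matrices (illustrative of the math, not ComfyUI internals):

```python
def matmul(X, Y):
    """Naive matrix multiply for small nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def apply_lora(W, A, B, strength):
    """W' = W + strength * (B @ A), the rank-r LoRA update.
    W is d_out x d_in, B is d_out x r, A is r x d_in."""
    delta = matmul(B, A)
    return [[w + strength * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]  # 2x2 base weight
B = [[1.0], [2.0]]            # d_out x r (rank 1)
A = [[0.5, 0.5]]              # r x d_in
print(apply_lora(W, A, B, 1.0))  # [[1.5, 0.5], [1.0, 2.0]]
```

At strength 0.0 the base weights come back unchanged, which is why lowering strength smoothly fades the LoRA's effect out.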
## Tips
- Start with strength 0.7 and adjust up or down based on results — too high can cause oversaturation or artifacts.
- Stacking too many LoRAs simultaneously can cause visual artifacts or conflicting styles; two or three is usually a safe limit.
- Ensure the LoRA matches your base model architecture — SD 1.5 LoRAs will not work with SDXL checkpoints, and vice versa.
- Many LoRAs require specific trigger words in your prompt to activate; always check the LoRA's documentation or model card.


@@ -0,0 +1,20 @@
# Outpainting
Outpainting extends an image beyond its original borders, generating new content that seamlessly continues the existing scene. Unlike inpainting which replaces content within an image, outpainting adds content outside the frame — expanding the canvas in any direction. This is useful for changing aspect ratios, adding environmental context, or creating panoramic compositions from a single image.
## How It Works in ComfyUI
- Key nodes involved: `LoadImage`, `ImagePadForOutpaint`, `VAEEncodeForInpainting`, `CLIPTextEncode` (positive + negative), `KSampler`, `VAEDecode`, `SaveImage`
- Typical workflow pattern: Load image → pad image with transparent/noised borders → encode with inpainting VAE node (padded area becomes the mask) → encode text prompts → sample → decode → save
## Key Settings
- **Padding Pixels**: The number of pixels to extend on each side, typically 64–256. Smaller increments produce more coherent results since the model has more context relative to the new area.
- **Denoise Strength**: Use high values (0.8–1.0) for outpainted regions since the padded area is essentially blank and needs full generation.
- **Feathering**: Controls the gradient blend between the original image and the new content. Higher feathering values create smoother transitions and reduce visible seams.
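Feathering is essentially a linear alpha ramp across the seam. A minimal sketch of that ramp (`feather_weights` is a hypothetical helper, not a ComfyUI node parameter):

```python
def feather_weights(width: int) -> list[float]:
    """Blend weights across the seam: 0.0 keeps the original pixel,
    1.0 takes fully new content, ramping linearly over `width` pixels."""
    if width <= 1:
        return [1.0] * max(width, 0)
    return [i / (width - 1) for i in range(width)]

# A 5-pixel feather ramps in quarter steps.
print(feather_weights(5))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```

Wider ramps hide the seam better but let more new content bleed into the original image.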
## Tips
- Outpaint in stages rather than all at once. Extending by 128 pixels at a time and iterating produces far more coherent results than trying to add 512 pixels in a single pass.
- Use a lower CFG scale (5–6) for outpainting. This allows the model to generate more natural, context-aware extensions rather than forcing strict prompt adherence that may clash with the existing image.
- Include scene context in your prompt that matches the original image. If the source shows an indoor room, describe the room's style and lighting so the extension feels continuous.


@@ -0,0 +1,21 @@
# Samplers & Schedulers
Samplers are the algorithms that iteratively denoise a random latent into a coherent image, while schedulers control the noise schedule — how much noise is removed at each step. Together they determine the image's quality, speed, and visual character. Choosing the right combination is one of the most impactful decisions in any generation workflow.
## How It Works in ComfyUI
- Key nodes: `KSampler` (main sampling node), `KSamplerAdvanced` (provides control over start/end steps for multi-pass workflows)
- Typical workflow pattern: Load model → connect conditioning → configure sampler/scheduler/steps → sample → decode
## Key Settings
- **sampler_name**: The denoising algorithm. Common choices include `euler` (fast, good baseline), `euler_ancestral` (more creative variation), `dpmpp_2m` (balanced quality and speed), `dpmpp_2m_sde` (high quality, slightly slower), `dpmpp_3m_sde` (very high quality), and `uni_pc` (fast convergence)
- **scheduler**: Controls the noise reduction curve. `normal` is linear, `karras` front-loads noise reduction for better detail, `exponential` and `sgm_uniform` (recommended for SDXL) are also available
- **steps** (1–100): Number of denoising iterations. 20–30 is typical; more steps give diminishing returns. Flux and LCM models need far fewer (4–8 steps)
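The `karras` curve's front-loading follows the rho-schedule from Karras et al. (2022). A sketch of the sigma spacing — the sigma_min/sigma_max bounds below are illustrative defaults, not values read from any particular model:

```python
def karras_sigmas(n: int, sigma_min: float = 0.03,
                  sigma_max: float = 14.6, rho: float = 7.0) -> list[float]:
    """Noise levels from sigma_max down to sigma_min, spaced so that
    early steps drop noise faster than a linear schedule would."""
    ramp = [i / (n - 1) for i in range(n)]
    inv_rho = 1.0 / rho
    lo, hi = sigma_min ** inv_rho, sigma_max ** inv_rho
    return [(hi + t * (lo - hi)) ** rho for t in ramp]

sigmas = karras_sigmas(10)
# Strictly decreasing, from sigma_max down to sigma_min.
print(round(sigmas[0], 2), round(sigmas[-1], 2))
```

Larger `rho` concentrates even more of the steps at low noise levels, which is where fine detail is resolved.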
## Tips
- `euler` + `normal` is the safest starting combination for any model
- `dpmpp_2m` + `karras` is a popular choice when you want higher quality with minimal speed cost
- Ancestral samplers (`euler_ancestral`, any `_sde` variant) inject fresh noise at every step, so their outputs never fully converge as the step count changes and small setting tweaks can shift the result noticeably — useful for exploration, less so for precise reproducibility
- Flux and LCM models converge much faster; using 20+ steps with them wastes time without improving quality


@@ -0,0 +1,21 @@
# Text-to-Image Generation
Text-to-image is the foundational workflow in ComfyUI: you provide a text description (prompt) and the system generates an image from scratch. This is the starting point for most generative AI art. A diffusion model iteratively denoises a random latent image, guided by your text prompt encoded through CLIP, to produce a coherent image matching your description.
## How It Works in ComfyUI
- Key nodes involved: `CheckpointLoaderSimple`, `CLIPTextEncode` (positive + negative), `EmptyLatentImage`, `KSampler`, `VAEDecode`, `SaveImage`
- Typical workflow pattern: Load checkpoint → encode text prompts → create empty latent → sample → decode latent to image → save
## Key Settings
- **Resolution**: Must match the model's training resolution. Use 512×512 for SD 1.5, 1024×1024 for SDXL and Flux models. Mismatched resolutions produce artifacts like duplicated limbs or distorted compositions.
- **Steps**: Number of denoising iterations. 20-30 steps is a good balance between quality and speed. More steps refine details but with diminishing returns beyond 30.
- **CFG Scale**: Controls how strongly the sampler follows your prompt. 7-8 is the typical range. Higher values increase prompt adherence but can introduce oversaturation or artifacts.
- **Seed**: Determines the initial random noise. A fixed seed produces reproducible results, which is useful for iterating on prompts while keeping composition consistent.
## Tips
- Start with simple, descriptive prompts before adding stylistic modifiers. Complex prompts can conflict and produce muddy results.
- Use the negative prompt `CLIPTextEncode` to specify what you want to avoid (e.g., "blurry, low quality, deformed hands") — this significantly improves output quality.
- Always match your `EmptyLatentImage` resolution to the model you loaded. A 768×768 image on an SD 1.5 checkpoint will produce noticeably worse results than 512×512.
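The resolution rule can be made concrete: `EmptyLatentImage` allocates a tensor at 1/8 of the pixel resolution, because the VAE downscales by 8×. A minimal sketch (the 4-channel count applies to SD 1.5/SDXL; Flux uses a different latent channel count):

```python
def latent_shape(width, height, batch=1, channels=4, downscale=8):
    """Tensor shape EmptyLatentImage produces for a given pixel size."""
    assert width % downscale == 0 and height % downscale == 0, \
        "pick resolutions divisible by 8"
    return (batch, channels, height // downscale, width // downscale)

print(latent_shape(512, 512))    # SD 1.5: (1, 4, 64, 64)
print(latent_shape(1024, 1024))  # SDXL:   (1, 4, 128, 128)
```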


@@ -0,0 +1,21 @@
# Upscaling
Upscaling increases image resolution while adding detail, turning a small generated image into a large, sharp result. In ComfyUI, there are two main approaches: model-based upscaling, which uses trained AI models (like RealESRGAN or 4x-UltraSharp) to intelligently enlarge an image in one pass, and latent-based upscaling, which works in latent space with a KSampler to add new detail during the enlargement process. Model-based is faster, while latent-based offers more creative control.
## How It Works in ComfyUI
- Key nodes involved: `UpscaleModelLoader`, `ImageUpscaleWithModel`, `ImageScaleBy`, `LatentUpscale`, `VAEDecodeTiled`
- Typical workflow pattern: Generate image → Upscale model loader → ImageUpscaleWithModel → Save image (model-based), or Generate latent → LatentUpscale → KSampler (lower denoise) → VAEDecode → Save image (latent-based)
## Key Settings
- **Upscale model**: The AI model used for model-based upscaling. `RealESRGAN_x4plus` is a reliable general-purpose choice; `4x-UltraSharp` excels at photo-realistic detail.
- **Scale factor**: How much to enlarge — 2x and 4x are typical. Higher factors increase VRAM usage significantly.
- **tile_size**: For tiled decoding/encoding of very large images. Range 512-1024; smaller tiles use less VRAM but take longer.
## Tips
- Model-based upscaling is faster but less creative; latent upscaling paired with a KSampler adds genuinely new detail.
- Use `VAEDecodeTiled` for very large images to avoid out-of-memory errors.
- Chain two 2x upscales instead of one 4x for better overall quality.
- When using latent upscaling, set KSampler denoise to 0.3-0.5 to add detail without changing the composition.
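The chained-2x tip comes down to simple arithmetic: two 2x passes land at the same resolution as one 4x pass, but each pass works at a gentler magnification and the intermediate image is smaller, keeping peak VRAM lower. A sketch of the numbers only:

```python
def upscale_dims(w, h, factor):
    """Output resolution after a single upscale pass."""
    return int(w * factor), int(h * factor)

start = (832, 1216)
one_pass = upscale_dims(*start, 4)
two_pass = upscale_dims(*upscale_dims(*start, 2), 2)
print(one_pass, two_pass)  # same final size either way

# Pixel count (and thus decode VRAM) grows with the square of the
# factor: the 2x intermediate is only a quarter of the 4x result.
```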


@@ -0,0 +1,20 @@
# VAE (Variational Autoencoder)
The VAE encodes pixel images into a compact latent representation and decodes latents back into pixel images. All diffusion in Stable Diffusion and Flux happens in latent space — the VAE is the bridge between the images you see and the mathematical space where the model actually works. Every generation workflow ends with a VAE decode step to produce a viewable image.
## How It Works in ComfyUI
- Key nodes: `VAEDecode` (latent → image), `VAEEncode` (image → latent), `VAEDecodeTiled` (for large images to avoid out-of-memory errors), `VAELoader` (load a standalone VAE file)
- Typical workflow pattern: Most checkpoints include a built-in VAE, so the `VAEDecode` node can pull directly from the loaded checkpoint. To use a different VAE, add a `VAELoader` node and connect it to `VAEDecode` instead.
## Key Settings
- **tile_size** (for `VAEDecodeTiled`): Size of each tile when decoding in chunks. Default is 512; reduce if you still encounter memory issues
- **VAE choice**: VAE files are model-specific. Use `sdxl_vae.safetensors` for SDXL, `ae.safetensors` for Flux. Place files in `ComfyUI/models/vae/`
## Tips
- If colors look washed out or slightly off, try loading an external VAE — the VAE baked into a checkpoint is not always optimal, especially for community fine-tunes
- Use `VAEDecodeTiled` for images larger than ~2048 px on either side to prevent out-of-memory crashes
- SDXL and Flux each have their own VAE architecture — using the wrong one will produce corrupted output
- When doing img2img or inpainting, the `VAEEncode` node converts your input image into the latent space the model expects
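The ~2048 px tip can be expressed as a small helper. This is a heuristic sketch, not a ComfyUI node: real tiled decoding also overlaps tiles to hide seams, so the tile count below is a lower bound:

```python
import math

def decode_plan(width, height, threshold=2048, tile_size=512):
    """Decode tiled once either side exceeds ~2048 px;
    otherwise a plain VAEDecode is fine."""
    if max(width, height) <= threshold:
        return ("VAEDecode", 1)
    tiles = math.ceil(width / tile_size) * math.ceil(height / tile_size)
    return ("VAEDecodeTiled", tiles)

print(decode_plan(1024, 1024))  # small image: plain decode
print(decode_plan(4096, 4096))  # large image: tiled, 8x8 grid
```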


@@ -0,0 +1,22 @@
# Video Generation
Video generation creates video content from text prompts (T2V), reference images (I2V), or existing video (V2V) using specialized video diffusion models. Unlike image generation, video models must maintain temporal coherence across frames, ensuring smooth motion and consistent subjects. ComfyUI supports several leading open-source video models including WAN 2.1 and HunyuanVideo, each with its own loader and latent nodes.
## How It Works in ComfyUI
- Key nodes involved: Model-specific loaders (e.g. `WAN` video nodes, `HunyuanVideo` nodes, `LTXVLoader`), `EmptyHunyuanLatentVideo` / `EmptyLTXVLatentVideo`, `KSampler`, `VHS_VideoCombine` (from Video Helper Suite)
- Typical workflow pattern: Load video model → Create empty video latent → KSampler (with video-aware scheduling) → VAE decode → VHS_VideoCombine → Save video
## Key Settings
- **Frame count**: Number of frames to generate. Typically 16-81 frames depending on the model; more frames require more VRAM and time.
- **Resolution**: Often 512×320 or 848×480 for T2V. Higher resolutions need significantly more resources.
- **FPS**: Frames per second for output, typically 8-24. Higher FPS gives smoother motion but requires more frames for the same duration.
- **Motion scale/strength**: Controls the amount of movement in the generated video. Lower values produce subtle motion; higher values produce more dynamic scenes.
## Tips
- Start with fewer frames and lower resolution to test your prompt and settings before committing to a full-quality render.
- Image-to-video (I2V) typically gives better coherence than text-to-video (T2V) because the model has a visual anchor.
- Video Helper Suite (VHS) nodes are essential for loading, previewing, and saving video — install this custom node pack first.
- WAN 2.1 and HunyuanVideo are currently the leading open models for quality video generation in ComfyUI.
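Frame count, FPS, and duration are linked by simple arithmetic worth keeping in mind when budgeting VRAM:

```python
def frame_count(duration_s, fps):
    """Frames required for a clip of a given length: doubling FPS
    doubles the frame count (and the VRAM/time cost) at the same duration."""
    return round(duration_s * fps)

print(frame_count(5, 16))  # 5-second clip at 16 fps -> 80 frames
print(frame_count(5, 24))  # same clip at 24 fps     -> 120 frames
```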


@@ -0,0 +1,88 @@
{
"Wan": "wan",
"Wan2.1": "wan",
"Wan2.2": "wan",
"Wan2.5": "wan",
"Wan2.6": "wan",
"Wan-Move": "wan",
"Motion Control": "wan",
"Flux": "flux",
"Flux.2": "flux",
"Flux.2 Dev": "flux",
"Flux.2 Klein": "flux",
"Kontext": "flux",
"BFL": "flux",
"SDXL": "sdxl",
"SD1.5": "sdxl",
"Stability": "sdxl",
"Reimagine": "sdxl",
"SD3.5": "sd3-5",
"SVD": "svd",
"Stable Audio": "stable-audio",
"Google": "gemini",
"Google Gemini": "gemini",
"Google Gemini Image": "gemini",
"Gemini3 Pro Image Preview": "gemini",
"Gemini-2.5-Flash": "gemini",
"Veo": "veo",
"Nano Banana Pro": "nano-banana-pro",
"nano-banana": "nano-banana-pro",
"OpenAI": "gpt-image-1",
"GPT-Image-1": "gpt-image-1",
"GPT-Image-1.5": "gpt-image-1",
"Qwen": "qwen",
"Qwen-Image": "qwen",
"Qwen-Image-Edit": "qwen",
"Qwen-Image-Layered": "qwen",
"Qwen-Image 2512": "qwen",
"Hunyuan Video": "hunyuan",
"Hunyuan3D": "hunyuan",
"Tencent": "hunyuan",
"LTX-2": "ltx-video",
"LTXV": "ltx-video",
"Lightricks": "ltx-video",
"ByteDance": "seedance",
"Seedance": "seedance",
"Seedream": "seedream",
"Seedream 4.0": "seedream",
"SeedVR2": "seedvr2",
"Vidu": "vidu",
"Vidu Q2": "vidu",
"Vidu Q3": "vidu",
"Kling": "kling",
"Kling O1": "kling",
"Kling2.6": "kling",
"ACE-Step": "ace-step",
"Chatter Box": "chatterbox",
"Recraft": "recraft",
"Runway": "runway",
"Luma": "luma",
"HiDream": "hidream",
"Tripo": "tripo",
"MiniMax": "minimax",
"Z-Image-Turbo": "z-image",
"Z-Image": "z-image",
"Grok": "grok",
"Moonvalley": "moonvalley",
"Topaz": "topaz",
"Kandinsky": "kandinsky",
"OmniGen": "omnigen",
"Magnific": "magnific",
"PixVerse": "pixverse",
"Meshy": "meshy",
"Rodin": "rodin",
"WaveSpeed": "wavespeed",
"Chroma": "chroma",
"BRIA": "bria",
"HitPaw": "hitpaw",
"NewBie": "newbie",
"Ovis-Image": "ovis-image",
"Ideogram": "ideogram",
"Anima": "anima",
"ChronoEdit": "chronoedit",
"Nvidia": "chronoedit",
"HuMo": "humo",
"FlashVSR": "flashvsr",
"Real-ESRGAN": "real-esrgan",
"Depth Anything\u00a0v2": "depth-anything-v2"
}
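The table above maps display names to slug identifiers; note that one key contains a non-breaking space (`\u00a0`). A hypothetical consumer of this file might normalize that character so plain-space lookups still match (excerpted mapping, illustrative helper only):

```python
import json

# Excerpt of the mapping above (display name -> slug).
MAPPING = json.loads('{"Wan2.1": "wan", "Flux.2 Dev": "flux", '
                     '"Depth Anything\\u00a0v2": "depth-anything-v2"}')

def slug_for(name):
    """Look up a display name, folding U+00A0 to a regular space."""
    table = {k.replace("\u00a0", " "): v for k, v in MAPPING.items()}
    return table.get(name.replace("\u00a0", " "))

print(slug_for("Depth Anything v2"))  # matches despite the NBSP key
```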


@@ -0,0 +1,47 @@
# ACE-Step
ACE-Step is a foundation model for music generation developed by ACE Studio and StepFun. It uses diffusion-based generation with a Deep Compression AutoEncoder (DCAE) and a lightweight linear transformer to achieve state-of-the-art speed and musical coherence.
## Model Variants
### ACE-Step (3.5B)
- 3.5B parameter diffusion model
- DCAE encoder with linear transformer conditioning
- 27 or 60 inference steps recommended
- Apache 2.0 license
## Key Features
- 15x faster than LLM-based baselines (20 seconds for a 4-minute song on A100)
- Full-song generation with lyrics and structure
- Duration control for variable-length output
- Music remixing and style transfer
- Lyrics editing and vocal synthesis
- Supports 16+ languages including English, Chinese, Japanese, Korean, French, German, Spanish, and more
- Text-to-music from natural language descriptions
## Hardware Requirements
- RTX 3090: 12.76x real-time factor at 27 steps
- RTX 4090: 34.48x real-time factor at 27 steps
- NVIDIA A100: 27.27x real-time factor at 27 steps
- Apple M2 Max: 2.27x real-time factor at 27 steps
- Higher step counts (60) reduce speed by roughly half
## Common Use Cases
- Original music generation from text descriptions
- Song remixing and style transfer
- Lyrics-based music creation
- Multi-language vocal music generation
- Rapid music prototyping for content creators
- Background music and soundtrack generation
## Key Parameters
- **steps**: Inference steps (27 for speed, 60 for quality)
- **duration**: Target audio length in seconds (up to ~5 minutes)
- **lyrics**: Song lyrics text input for vocal generation
- **prompt**: Natural language description of desired music style and mood
- **seed**: Random seed for reproducible generation (results are seed-sensitive)
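The real-time factors above convert to wall-clock generation time as duration divided by RTF:

```python
def generation_time(audio_seconds, rtf):
    """Real-time factor = audio duration / wall-clock time,
    so wall-clock time = audio duration / RTF."""
    return audio_seconds / rtf

# A 4-minute (240 s) song at the quoted 27-step RTFs:
for gpu, rtf in [("RTX 3090", 12.76), ("RTX 4090", 34.48), ("A100", 27.27)]:
    print(f"{gpu}: {generation_time(240, rtf):.1f} s")
```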


@@ -0,0 +1,46 @@
# Anima
Anima is an API-based AI video generation platform that creates animated video content from text prompts, supporting character consistency and storyboard-driven workflows.
## Model Variants
### Anima Video Generation
- Cloud-based video generation service
- Supports multiple underlying AI models (Runway, Kling, Minimax, Luma)
- Integrated text, image, and audio generation pipeline
## Key Features
- AI character generation with persistent identity across scenes
- Storyboard-based workflow: script to visual scenes with narration
- Multi-model integration (GPT-4, Claude, Gemini for text; FLUX, MidJourney for images)
- Voice generation via ElevenLabs integration
- Music composition via Suno integration
- Autopilot mode for fully automated video creation
- Prompt enhancement for optimized output quality
- Template library for rapid content creation
- Scene-by-scene generation with character consistency
## Hardware Requirements
- No local hardware required (cloud-based service)
- Runs entirely through web API
- Browser-based interface for interactive use
## Common Use Cases
- Animated story series production
- Movie trailer and concept video creation
- Kids bedtime story animation
- Lofi music video generation
- Marketing and explainer video content
- Storyboard visualization
## Key Parameters
- **prompt**: Text description of the scene or story
- **character**: Selected or generated character for identity consistency
- **style**: Visual style preset (animation, cinematic, etc.)
- **duration**: Target video length
- **resolution**: Output video resolution


@@ -0,0 +1,48 @@
# BRIA AI
BRIA AI is an enterprise-focused visual generative AI platform that trains its models exclusively on licensed, ethically sourced data, ensuring commercially safe outputs with full IP indemnification.
## Model Variants
### BRIA Fibo
- Flagship hyper-controllable text-to-image model
- JSON-based control framework with 100+ disentangled visual attributes
- Supports lighting, depth, color, composition, and camera control
- Ideal for agentic workflows and enterprise-scale creative automation
### BRIA Text-to-Image Lite
- Fully private, self-hosted deployment of the Fibo pipeline
- Designed for regulated industries requiring total data control
- Runs on-premises with no external data transfer
## Key Features
- Trained on 100% licensed data from 20+ partners including Getty Images
- Full IP indemnification for commercial use
- Tri-layer content moderation for brand-safe outputs
- Patented attribution engine compensating data owners by usage
- ControlNet support for canny, depth, recoloring, and IP Adapter
- Multilingual prompt support
- Fine-tuning API for brand-specific customization
## Hardware Requirements
- Cloud-hosted API available (no local GPU required)
- Self-hosted Lite version supports deployment on AWS and Azure
- Open-source weights available on Hugging Face for local inference
## Common Use Cases
- Enterprise marketing and advertising content
- E-commerce product photography
- Brand-consistent visual asset generation
- Storyboarding and concept art for media production
## Key Parameters
- **prompt**: Text description of desired image
- **style**: Photorealistic, illustrative, or custom styles
- **guidance_methods**: ControlNet canny, depth, recoloring, IP Adapter
- **resolution**: Multiple aspect ratios supported


@@ -0,0 +1,52 @@
# Chatterbox
Chatterbox is a family of state-of-the-art open-source text-to-speech models developed by Resemble AI, featuring zero-shot voice cloning and emotion control.
## Model Variants
### Chatterbox Turbo
- 350M parameters, single-step mel decoding for low latency
- Paralinguistic tags for non-speech sounds ([laugh], [cough], [chuckle])
- English only, optimized for voice agents and production use
### Chatterbox (Original)
- 500M parameter Llama backbone, English only
- CFG and exaggeration control for emotion intensity
### Chatterbox Multilingual
- 500M parameters, 23 languages (Arabic, Chinese, French, German, Hindi, Japanese, Korean, Spanish, and more)
- Zero-shot voice cloning across languages
## Key Features
- Zero-shot voice cloning from a few seconds of reference audio
- Emotion exaggeration control (first open-source model with this feature)
- Built-in PerTh neural watermarking for responsible AI
- Sub-200ms latency for real-time applications
- Trained on 500K hours of cleaned speech data
- MIT license (free for commercial use)
- Outperforms ElevenLabs in subjective evaluations
## Hardware Requirements
- Minimum: NVIDIA GPU with CUDA support
- Turbo model requires less VRAM than original due to smaller architecture
- Runs on consumer GPUs (RTX 3060 and above)
- CPU inference possible but significantly slower
## Common Use Cases
- Voice cloning for content creation
- AI voice agents and assistants
- Audiobook narration
- Game and media dialogue generation
## Key Parameters
- **exaggeration**: Emotion intensity control (0.0 to 1.0, default 0.5)
- **cfg_weight**: Classifier-free guidance weight (0.0 to 1.0, default 0.5)
- **audio_prompt_path**: Path to reference audio clip for voice cloning
- **language_id**: Language code for multilingual model (e.g., "fr", "zh", "ja")


@@ -0,0 +1,50 @@
# Chroma
Chroma is an open-source 8.9 billion parameter text-to-image model based on the FLUX.1-schnell architecture, developed by Lodestone Rock and the community. It is fully Apache 2.0 licensed.
## Model Variants
### Chroma
- 8.9B parameter model based on FLUX.1-schnell
- Trained on a curated 5M sample dataset (from 20M candidates)
- Apache 2.0 license for unrestricted use
- Supports both tag-based and natural language prompting
### Chroma XL
- Experimental merge and fine-tune based on NoobAI-XL (SDXL architecture)
- Low CFG (2.5-3.0) and low step count (8-12 steps)
- Optimized for fast generation on consumer hardware
## Key Features
- Fully open-source with Apache 2.0 licensing
- Diverse training data spanning anime, artistic, and photographic styles
- Community-driven development with public training logs
- Compatible with FLUX ecosystem (VAE, T5 text encoder)
- ComfyUI workflow support
- LoRA and fine-tuning compatible
- GGUF quantized versions available for lower VRAM
## Hardware Requirements
- Base model: 24GB VRAM recommended (BF16)
- Q8_0 quantized: ~13GB VRAM
- Q4_0 quantized: ~7GB VRAM
- Requires FLUX.1 VAE and T5 text encoder
## Common Use Cases
- Open-source text-to-image generation
- Artistic and stylized image creation
- Community model fine-tuning and experimentation
- LoRA training for custom styles and characters
## Key Parameters
- **prompt**: Text description or tag-based prompt
- **steps**: Inference steps (15-30 recommended)
- **cfg_scale**: Guidance scale (1-4, model uses low CFG)
- **resolution**: Output resolution (1024x1024 default)
- **guidance**: Flux-style guidance parameter (around 4)
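The VRAM figures can be sanity-checked from the parameter count. This estimates the weight footprint only; activations, the FLUX VAE, and the T5 encoder add several GB on top, which is why the quoted VRAM numbers are higher. The ~8.5 and ~4.5 bits per weight for Q8_0/Q4_0 are approximate GGUF block sizes (an assumption here, not from this page):

```python
def weight_gb(params_billion, bits_per_weight):
    """Approximate weight footprint in GB for a quantized model."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(f"BF16: {weight_gb(8.9, 16):.1f} GB")   # weights alone
print(f"Q8_0: {weight_gb(8.9, 8.5):.1f} GB")
print(f"Q4_0: {weight_gb(8.9, 4.5):.1f} GB")
```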


@@ -0,0 +1,58 @@
# ChronoEdit
ChronoEdit is an image editing framework by NVIDIA that reframes editing as a video generation task, using temporal reasoning to ensure physically plausible and consistent edits.
## Model Variants
### ChronoEdit-14B
- Full 14 billion parameter model for maximum quality
- Built on pretrained video diffusion model architecture
- Requires ~34GB VRAM (38GB with temporal reasoning enabled)
### ChronoEdit-2B
- Compact 2 billion parameter variant for efficiency
- Maintains core temporal reasoning capabilities
- Lower VRAM requirements for broader hardware compatibility
### ChronoEdit-14B 8-Step Distilled LoRA
- Distilled variant requiring only 8 inference steps
- Faster generation with minimal quality loss
- Uses flow-shift 2.0 and guidance-scale 1.0
## Key Features
- Treats image editing as a video generation task for temporal consistency
- Temporal reasoning tokens simulate intermediate editing trajectories
- Ensures physically plausible edits (object interactions, lighting, shadows)
- Two-stage pipeline: temporal reasoning stage followed by editing frame generation
- Prompt enhancer integration for improved editing instructions
- LoRA fine-tuning support via DiffSynth-Studio
- Upscaler LoRA available for super-resolution editing
- PaintBrush LoRA for sketch-to-object editing
- Apache-2.0 license
## Hardware Requirements
- 14B model: 34GB VRAM minimum (38GB with temporal reasoning)
- 2B model: 12GB+ VRAM estimated
- Supports model offloading to reduce peak VRAM
- Linux only (not supported on Windows/macOS)
## Common Use Cases
- Physically consistent image editing (add/remove/modify objects)
- World simulation for autonomous driving and robotics
- Visualizing editing trajectories and reasoning
- Image super-resolution via upscaler LoRA
- Sketch-to-object conversion via PaintBrush LoRA
## Key Parameters
- **prompt**: Text description of the desired edit
- **num_inference_steps**: Denoising steps (default ~50, or 8 with distilled LoRA)
- **guidance_scale**: Prompt adherence strength (default ~7.5, or 1.0 with distilled LoRA)
- **flow_shift**: Flow matching shift parameter (2.0 for distilled LoRA)
- **enable_temporal_reasoning**: Toggle temporal reasoning stage for better consistency


@@ -0,0 +1,60 @@
# Depth Anything V2
Depth Anything V2 is a monocular depth estimation model trained on 595K synthetic labeled images and 62M+ real unlabeled images, providing robust relative depth maps from single images.
## Model Variants
### Depth-Anything-V2-Small
- Lightweight variant for fast inference
- ViT-S (Small) encoder backbone
- Suitable for real-time applications
### Depth-Anything-V2-Base
- Mid-range variant balancing speed and accuracy
- ViT-B (Base) encoder backbone
### Depth-Anything-V2-Large
- High-accuracy variant for detailed depth maps
- ViT-L (Large) encoder backbone with 256 output features
- Recommended for most production use cases
### Depth-Anything-V2-Giant
- Maximum accuracy variant
- ViT-G (Giant) encoder backbone
- Highest computational requirements
## Key Features
- More fine-grained depth detail than Depth Anything V1
- More robust than V1 and Stable Diffusion-based alternatives (Marigold, Geowizard)
- 10× faster than SD-based depth estimation models
- Trained on large-scale synthetic + real data mixture
- Produces relative (not metric) depth maps by default
- DPT (Dense Prediction Transformer) decoder architecture
## Hardware Requirements
- Small: 2GB VRAM minimum
- Base: 4GB VRAM minimum
- Large: 6GB VRAM recommended
- Giant: 12GB+ VRAM recommended
- CPU inference supported for smaller variants
## Common Use Cases
- Depth map generation for compositing and VFX
- ControlNet depth conditioning for image generation
- 3D scene understanding and reconstruction
- Foreground/background separation
- Augmented reality occlusion
- Video depth estimation for parallax effects
## Key Parameters
- **encoder**: Model size variant (vits, vitb, vitl, vitg)
- **input_size**: Processing resolution (higher = more detail, more VRAM)
- **output_type**: Raw depth array or normalized visualization
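Because the output is relative depth, downstream use (e.g. a ControlNet depth map) typically min-max normalizes it; only the ordering of values carries meaning, not their absolute scale. A minimal sketch over a flat list of depth values:

```python
def normalize_depth(depth):
    """Min-max normalize a relative depth map to [0, 1] for visualization."""
    lo, hi = min(depth), max(depth)
    if hi == lo:
        return [0.0 for _ in depth]  # degenerate flat map
    return [(d - lo) / (hi - lo) for d in depth]

print(normalize_depth([2.0, 4.0, 8.0]))  # ordering preserved, scale gone
```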


@@ -0,0 +1,50 @@
# FlashVSR
FlashVSR is a diffusion-based streaming video super-resolution framework that achieves near real-time 4× upscaling through one-step inference with locality-constrained sparse attention.
## Model Variants
### FlashVSR v1
- Initial release of the one-step streaming VSR model
- Built on Wan2.1 1.3B video diffusion backbone
- 4× super-resolution optimized
### FlashVSR v1.1
- Enhanced stability and fidelity over v1
- Improved artifact handling across different aspect ratios
- Recommended for production use
## Key Features
- One-step diffusion inference (no multi-step denoising required)
- Streaming architecture with KV cache for sequential frame processing
- Locality-Constrained Sparse Attention (LCSA) prevents artifacts at high resolutions
- Tiny Conditional Decoder (TC Decoder) achieves 7× faster decoding than standard WanVAE
- Three-stage distillation pipeline from multi-step to single-step inference
- Runs at ~17 FPS for 768×1408 videos on a single A100 GPU
- Up to 12× speedup over prior one-step diffusion VSR models
- Scales reliably to ultra-high resolutions
## Hardware Requirements
- Minimum: 24GB VRAM (A100 or similar recommended)
- Optimized for NVIDIA A100 GPUs
- Significant VRAM required for high-resolution video processing
- Multi-GPU inference not required but beneficial for throughput
## Common Use Cases
- Real-world video upscaling to 4K
- AI-generated video enhancement and artifact removal
- Long video super-resolution with temporal consistency
- Streaming video quality improvement
- Restoring compressed or low-resolution video footage
## Key Parameters
- **scale**: Upscaling factor (4× recommended for best results)
- **tile_size**: Spatial tiling for memory management (0 = auto)
- **input_resolution**: Source video resolution (outputs 4× larger)
- **model_version**: v1 or v1.1 checkpoint selection


@@ -0,0 +1,98 @@
# Flux
Flux is a family of state-of-the-art text-to-image and image editing models developed by Black Forest Labs (BFL).
## Model Variants
### Flux.1 Schnell
- Ultra-fast inference (1-4 steps)
- 12B parameter rectified flow transformer
- Apache 2.0 license (open source)
- Best for rapid prototyping and real-time applications
### Flux.1 Dev
- High-quality 12B parameter development model
- 20-50 steps for best results
- Non-commercial license for research
- Guidance-distilled for efficient generation
### Flux.1 Pro
- Highest quality Flux.1 outputs via commercial API
- Best prompt adherence and detail
### Flux.2 Dev
- 32B parameter rectified flow transformer
- Unified text-to-image, single-reference editing, and multi-reference editing
- No fine-tuning needed for character/object/style reference
- Up to 4MP photorealistic output with improved autoencoder
- Non-commercial license; quantized versions available for consumer GPUs
### Flux.2 Klein
- Fastest Flux model family — sub-second inference on modern hardware
- **Klein 4B**: ~8GB VRAM, Apache 2.0 license, ideal for edge deployment
- **Klein 9B**: Best quality-to-latency ratio, non-commercial license
- Base (undistilled) variants available for fine-tuning and LoRA training
- Supports text-to-image, single-reference editing, and multi-reference editing
### Flux.1 Kontext
- In-context image generation and editing via text instructions
- Available as Kontext Max (premium), Pro (API), and Dev (open-weights, 12B)
- Character consistency across multiple scenes without fine-tuning
- Typography manipulation and local editing within images
### Flux.1 Fill
- Dedicated inpainting and outpainting model
- Maintains consistency with surrounding image context
- Available as Fill Pro (API) and Fill Dev (open-weights)
### Flux Redux / Canny / Depth
- **Redux**: Image variation generation from reference images
- **Canny**: Edge-detection-based structural conditioning
- **Depth**: Depth-map-based structural conditioning for pose/layout control
## Key Features
- Excellent text rendering in images
- Strong prompt following and instruction adherence
- High resolution output (up to 4MP with Flux.2)
- Multi-reference editing: combine up to 6 reference images
- Consistent style and quality across generations
## Hardware Requirements
- Flux.2 Klein 4B: ~8GB VRAM (consumer GPUs like RTX 4070)
- Flux.2 Klein 9B: ~20GB VRAM
- Flux.1 models: 12GB VRAM minimum (fp16), 24GB recommended
- Flux.2 Dev: 64GB+ VRAM native, FP8 quantized ~40GB
- Quantized and weight-streaming options available for lower VRAM cards
## Common Use Cases
- Text-to-image generation
- Iterative image editing via text instructions
- Character-consistent multi-scene generation
- Inpainting and outpainting
- Style transfer and image variation
- Structural conditioning (canny, depth)
## Key Parameters
- **steps**: 1-4 (Schnell/Klein distilled), 20-50 (Dev/Base)
- **guidance_scale**: 3.5-4.0 typical for Flux.2, 3.5 for Flux.1
- **resolution**: Up to 2048x2048 (Flux.1), up to 4MP (Flux.2)
- **seed**: For reproducible generation
- **prompt_upsampling**: Optional LLM-based prompt enhancement (Flux.2)
## Blog References
- [FLUX.2 Day-0 Support in ComfyUI](../blog/flux2-day-0-support.md) — FLUX.2 with 4MP output, multi-reference consistency, professional text rendering
- [FLUX.2 [klein] 4B & 9B](../blog/flux2-klein-4b.md) — Fastest Flux models, sub-second inference, unified generation and editing
- [The Complete AI Upscaling Handbook](../blog/upscaling-handbook.md) — Benchmarks for upscaling workflows
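The hardware figures above suggest a simple selection rule: pick the largest variant whose listed minimum fits your VRAM. A sketch using the numbers from this page (treat them as rough minimums, not guarantees):

```python
# VRAM minimums (GB) transcribed from the Hardware Requirements above.
FLUX_VRAM = [
    ("Flux.2 Dev (native)", 64),
    ("Flux.2 Dev (FP8)", 40),
    ("Flux.2 Klein 9B", 20),
    ("Flux.1 Dev/Schnell (fp16)", 12),
    ("Flux.2 Klein 4B", 8),
]

def runnable(vram_gb):
    """Variants whose listed minimum fits the given VRAM, largest first."""
    return [name for name, need in sorted(FLUX_VRAM, key=lambda x: -x[1])
            if need <= vram_gb]

print(runnable(24))  # what a 24 GB card can hold without quantizing further
```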


@@ -0,0 +1 @@
Flux is Black Forest Labs' family of text-to-image and image editing models. The lineup includes Flux.1 Schnell (ultra-fast, 1-4 steps, Apache 2.0), Flux.1 Dev (high-quality, 20-50 steps, non-commercial), Flux.1 Pro (commercial API), and the newer Flux.2 Dev (32B parameters, up to 4MP output, multi-reference editing without fine-tuning). Flux.2 Klein offers sub-second inference in 4B (~8GB VRAM, Apache 2.0) and 9B variants. Specialized models include Kontext (in-context editing, character consistency), Fill (inpainting/outpainting), Redux (image variations), and Canny/Depth (structural conditioning). Flux excels at text rendering in images, strong prompt adherence, and consistent multi-scene generation. VRAM ranges from ~8GB (Klein 4B) to 64GB+ (Flux.2 Dev native), with quantized options available. Key parameters: guidance_scale 3.5-4.0, resolution up to 4MP for Flux.2. Primary uses include text-to-image, iterative editing, style transfer, and structural conditioning.


@@ -0,0 +1,75 @@
# Gemini
Gemini is Google DeepMind's multimodal AI model family with native image generation, editing, and video generation capabilities, accessible in ComfyUI through API nodes.
## Model Variants
### Gemini 3 Pro Image Preview
- Most capable Gemini image model with advanced reasoning
- Complex multi-turn image generation and editing
- Up to 14 input images, native 4K output
- Also known as Nano Banana Pro
- Model ID: `gemini-3-pro-image-preview`
### Gemini 2.5 Flash Image
- Cost-effective image generation optimized for speed and low latency
- Character consistency, multi-image fusion, and prompt-based editing
- $0.039 per image (1290 output tokens per image)
- Model ID: `gemini-2.5-flash-image`
### Google Gemini (General)
- Multimodal model for text, image understanding, and generation
- Interleaved text-and-image output in conversational context
- Supports image input for analysis and editing tasks
### Veo 2
- Text-to-video and image-to-video generation
- 8-second video clips at 720p resolution
- Realistic physics simulation and cinematic styles
- Supports 16:9 and 9:16 aspect ratios
- Model ID: `veo-2.0-generate-001`
### Veo 3 / 3.1
- Latest video generation with native audio (dialogue, SFX, ambient)
- Up to 1080p and 4K resolution (Veo 3.1)
- Style reference images for aesthetic control
- 4, 6, or 8-second video duration options
## Key Features
- Native multimodal generation: text, images, and video in one model family
- World knowledge from Google Search for factually accurate image generation
- SynthID invisible watermarking on all generated content
- Multi-image fusion and character consistency across generations
- Clean text rendering across multiple languages
- Prompt-based image editing without masks or complex workflows
## Hardware Requirements
- No local GPU required — all models accessed via cloud API
- Available through ComfyUI API nodes, Google AI Studio, and Vertex AI
- Requires API key and network access
## Common Use Cases
- Text-to-image and image editing via API nodes
- Multi-turn conversational image generation
- Video generation from text prompts or reference images
- Product animation and social media video content
- Style-consistent character and brand asset generation
- Text rendering and translation in images
## Key Parameters
- **prompt**: Text description for generation or editing
- **aspect_ratio**: 1:1, 3:4, 4:3, 9:16, 16:9, 21:9 (images); 16:9, 9:16 (video)
- **temperature**: 0.0-2.0 (default 1.0 for image models)
- **durationSeconds**: 4-8 seconds for Veo models
- **sampleCount**: 1-4 output videos per request
- **seed**: Integer for reproducible generation
- **personGeneration**: Safety control — `allow_adult`, `dont_allow`, or `allow_all`


@@ -0,0 +1,62 @@
# GPT-Image-1
GPT-Image-1 is OpenAI's natively multimodal image generation model, capable of generating and editing images from text and image inputs. It is accessed in ComfyUI through API nodes.
## Model Variants
### GPT-Image-1.5
- Latest and most advanced GPT Image model
- Best overall quality with superior instruction following
- High input fidelity for the first 5 input images
- Supports generate vs. edit action control
- Multi-turn editing via the Responses API
### GPT-Image-1
- Production-grade image generation and editing
- High input fidelity for the first input image
- Supports up to 16 input images for editing
- Up to 10 images per generation request
### GPT-Image-1-Mini
- Cost-effective variant for lower quality requirements
- Same API surface as GPT-Image-1
- Suitable for rapid prototyping and high-volume workloads
## Key Features
- Superior text rendering in generated images
- Real-world knowledge for accurate depictions
- Transparent background support (PNG and WebP)
- Mask-based inpainting with prompt guidance
- Multi-image editing: combine up to 16 reference images
- Streaming partial image output during generation
- Content moderation with adjustable strictness
## Hardware Requirements
- No local GPU required — cloud API service via OpenAI
- Accessed through ComfyUI API nodes
- Requires OpenAI API key and organization verification
## Common Use Cases
- Text-to-image generation with detailed prompts
- Image editing and compositing from multiple references
- Product photography and mockup generation
- Inpainting with mask-guided editing
- Transparent asset generation (stickers, logos, icons)
- Multi-turn iterative image refinement
## Key Parameters
- **prompt**: Text description up to 32,000 characters
- **size**: `1024x1024`, `1536x1024` (landscape), `1024x1536` (portrait), or `auto`
- **quality**: `low`, `medium`, `high`, or `auto` (affects cost and detail)
- **n**: Number of images to generate (1-10)
- **background**: `transparent`, `opaque`, or `auto`
- **output_format**: `png`, `jpeg`, or `webp`
- **moderation**: `auto` (default) or `low` (less restrictive)
- **input_fidelity**: `low` (default) or `high` for preserving input image details
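The constraints above combine into a request body like the following. This is an illustrative sketch of the fields, not the official OpenAI SDK; real calls go through OpenAI's images endpoint:

```python
# Hedged sketch of a GPT-Image-1 request body built from the documented limits.
SIZES = {"1024x1024", "1536x1024", "1024x1536", "auto"}
QUALITIES = {"low", "medium", "high", "auto"}

def build_gpt_image_request(prompt, size="auto", quality="auto", n=1,
                            background="auto", output_format="png",
                            moderation="auto", input_fidelity="low"):
    if len(prompt) > 32_000:
        raise ValueError("prompt is limited to 32,000 characters")
    if size not in SIZES or quality not in QUALITIES:
        raise ValueError("unsupported size or quality")
    if not 1 <= n <= 10:
        raise ValueError("n must be between 1 and 10")
    if background == "transparent" and output_format == "jpeg":
        raise ValueError("transparent background requires png or webp output")
    return {
        "model": "gpt-image-1", "prompt": prompt, "size": size,
        "quality": quality, "n": n, "background": background,
        "output_format": output_format, "moderation": moderation,
        "input_fidelity": input_fidelity,
    }
```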


@@ -0,0 +1,56 @@
# Grok (Aurora)
Aurora is xAI's autoregressive image generation model integrated into Grok, excelling at photorealistic rendering and precise text instruction following.
## Model Variants
### grok-2-image-1212
- API-accessible image generation model
- Generates multiple images from text prompts
- $0.07 per generated image
- OpenAI and Anthropic SDK compatible
### Aurora (Consumer)
- Autoregressive mixture-of-experts network
- Trained on billions of text and image examples
- Available via Grok on X platform, web, iOS, and Android
### Grok Imagine
- Video and image generation model
- State-of-the-art quality at competitive cost and latency
- API available since January 2026
## Key Features
- Photorealistic image generation from text prompts
- Precise text rendering within images
- Accurate rendering of real-world entities, logos, and text
- Image editing via uploaded photos with text instructions
- Multi-image generation per request
- Native multimodal input support
## Hardware Requirements
- Cloud API-based (no local GPU required)
- All generation runs on xAI infrastructure
- API access via console.x.ai
## Common Use Cases
- Photorealistic image generation
- Text and logo rendering in images
- Image editing and style transfer
- Meme and social media content creation
- Product visualization
- Character and portrait generation
## Key Parameters
- **prompt**: Text description of desired image
- **model**: Model identifier (grok-2-image-1212)
- **n**: Number of images to generate
- **response_format**: Output format (url or b64_json)
- **size**: Image dimensions
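Because grok-2-image-1212 is OpenAI-SDK compatible, a request body mirrors OpenAI's images endpoint. A minimal sketch of that body (illustrative, not the official SDK):

```python
# Hypothetical builder for an xAI image request using OpenAI-compatible fields.
def build_grok_image_request(prompt, n=1, response_format="url"):
    if response_format not in {"url", "b64_json"}:
        raise ValueError("response_format must be 'url' or 'b64_json'")
    return {"model": "grok-2-image-1212", "prompt": prompt,
            "n": n, "response_format": response_format}
```

With the `openai` Python package, these same fields would (assuming xAI's documented base URL) be passed to `client.images.generate()` against the xAI API host.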


@@ -0,0 +1,55 @@
# HiDream-I1
HiDream-I1 is a 17B parameter image generation foundation model by HiDream.ai that achieves state-of-the-art quality using a sparse diffusion transformer architecture.
## Model Variants
### HiDream-I1 Full
- Full 17B parameter sparse diffusion transformer
- Uses Llama-3.1-8B-Instruct and T5-XXL as text encoders
- VAE from FLUX.1 Schnell, MIT license
### HiDream-I1 Dev
- Distilled variant, faster inference with minor quality tradeoff
### HiDream-I1 Fast
- Further distilled for maximum speed, best for rapid prototyping
### HiDream-E1
- Instruction-based image editing model
## Key Features
- State-of-the-art HPS v2.1 score (33.82), surpassing Flux.1-dev, DALL-E 3, and Midjourney V6
- Best-in-class prompt following on GenEval (0.83) and DPG-Bench (85.89)
- Multiple output styles: photorealistic, cartoon, artistic, and more
- Dual text encoding with Llama-3.1-8B-Instruct and T5-XXL for strong prompt adherence
- MIT license for commercial use
- Requires Flash Attention for optimal performance
## Hardware Requirements
- Minimum: 24GB VRAM for the Full model; the Dev and Fast variants run on less
- Recommended: 40GB+ VRAM for Full model at high resolution
- CUDA 12.4+ recommended for Flash Attention
- Llama-3.1-8B-Instruct weights downloaded automatically
## Common Use Cases
- High-fidelity text-to-image generation
- Photorealistic image creation
- Artistic and stylized illustrations
- Instruction-based image editing (E1 variant)
- Commercial image generation
## Key Parameters
- **model_type**: Variant selection (full, dev, fast)
- **steps**: Inference steps (varies by variant; fewer for fast/dev)
- **cfg_scale**: Guidance scale for prompt adherence
- **resolution**: Output image dimensions
- **prompt**: Detailed text description of desired image
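The variant choice drives the other parameters, since the distilled models need fewer steps. A small illustrative lookup, where the concrete step and guidance values are assumptions in line with "fewer steps for fast/dev", not official presets:

```python
# Illustrative per-variant defaults for HiDream-I1 (values are assumptions).
VARIANTS = {
    "full": {"steps": 50, "cfg_scale": 5.0},  # highest quality, most VRAM
    "dev":  {"steps": 28, "cfg_scale": 1.0},  # distilled: minor quality tradeoff
    "fast": {"steps": 16, "cfg_scale": 1.0},  # most distilled, fastest
}

def hidream_config(model_type, resolution=(1024, 1024), prompt=""):
    if model_type not in VARIANTS:
        raise ValueError(f"model_type must be one of {sorted(VARIANTS)}")
    cfg = dict(VARIANTS[model_type])
    cfg.update(model_type=model_type, resolution=resolution, prompt=prompt)
    return cfg
```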


@@ -0,0 +1,51 @@
# HitPaw
HitPaw is an AI-powered visual enhancement platform providing image and video upscaling, restoration, and denoising through dedicated API services and desktop applications.
## Model Variants
### HitPaw Image Enhancer
- AI-powered photo enhancement with super-resolution up to 8x
- Face Clear Model: dual-model portrait upscaling (2x and 4x)
- Face Natural Model: texture-preserving portrait enhancement
- General Enhance Model: super-resolution for scenes and objects
- High Fidelity Model: premium upscaling for DSLR and AIGC images
- Generative Portrait/Enhance Models: diffusion-based restoration for heavily compressed images
### HitPaw Video Enhancer (VikPea)
- Frame-aware video restoration and ultra HD upscaling
- Face Soft Model: face-optimized noise and blur reduction
- Portrait Restore Model: multi-frame fusion for facial detail
- General Restore Model: GAN-based restoration for broad scenarios
- Ultra HD Model: premium upscaling from HD to ultra HD
- Generative Model: diffusion-driven repair for low-resolution video
## Key Features
- One-click portrait and scene enhancement
- Dual-model face and background processing pipelines
- Batch processing and API access for automated workflows
- Support for 30+ video input formats and 5 export formats
- Multi-frame face restoration for temporal consistency in video
- Denoising models for mobile and camera images
## Hardware Requirements
- Cloud API available (no local GPU required)
- Desktop apps for Windows, Mac, Android, and iOS
- API integration via HTTP-based interface
## Common Use Cases
- Upscaling AI-generated images to publication quality
- Restoring old or low-resolution photos and videos
- Enhancing portrait and landscape photography
- Video quality improvement for content creators
## Key Parameters
- **model**: Select enhancement model per content type
- **scale**: 2x or 4x super-resolution options
- **format**: Output format selection (mp4, mov, mkv, m4v, avi for video)
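An enhancement job can be checked against these options before submission. A hedged sketch, where the function and job shape are illustrative (the model names and format list follow this card):

```python
# Hypothetical validation for a HitPaw-style enhancement job.
VIDEO_FORMATS = {"mp4", "mov", "mkv", "m4v", "avi"}  # the 5 export formats

def enhancement_job(model, scale=2, fmt="mp4"):
    if scale not in (2, 4):
        raise ValueError("scale must be 2x or 4x")
    if fmt not in VIDEO_FORMATS:
        raise ValueError(f"export format must be one of {sorted(VIDEO_FORMATS)}")
    return {"model": model, "scale": scale, "format": fmt}
```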


@@ -0,0 +1,47 @@
# HuMo
HuMo is a human-centric video generation model by ByteDance that produces videos from collaborative multi-modal conditioning using text, image, and audio inputs.
## Model Variants
### HuMo (Wan2.1-T2V-1.3B based)
- Built on the Wan2.1-T2V-1.3B video foundation model
- Supports Text+Image (TI), Text+Audio (TA), and Text+Image+Audio (TIA) modes
- Two-stage training: subject preservation then audio-visual sync
## Key Features
- Multi-modal conditioning: text, reference images, and audio simultaneously
- Subject identity preservation from reference images across frames
- Audio-driven lip synchronization with facial expression alignment
- Focus-by-predicting strategy for facial region attention during audio sync
- Time-adaptive guidance dynamically adjusts input weights across denoising steps
- Minimal-invasive image injection maintains base model prompt understanding
- Progressive two-stage training separates identity learning from audio sync
- Supports text-controlled appearance editing while preserving identity
## Hardware Requirements
- Minimum: 24GB VRAM (RTX 3090/4090 or similar)
- Multi-GPU inference supported via FSDP and sequence parallelism
- Whisper-large-v3 audio encoder required for audio modes
- Optional audio separator for cleaner speech input
## Common Use Cases
- Digital avatar and virtual presenter creation
- Audio-driven talking head generation
- Character-consistent video clips from reference photos
- Lip-synced dialogue video from audio tracks
- Prompted reenactment with identity preservation
- Text-controlled outfit and style changes on consistent subjects
## Key Parameters
- **mode**: Generation mode (TI, TA, or TIA)
- **scale_t**: Text guidance strength (default: 7.5)
- **scale_a**: Audio guidance strength (default: 2.0)
- **frames**: Number of output frames (97 at 25 FPS = ~4 seconds)
- **height/width**: Output resolution (480p or 720p supported)
- **steps**: Denoising steps (30-50 recommended)
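The mode selection and the frame arithmetic above (97 frames at 25 FPS ≈ 4 seconds) can be sketched as follows. The helper is illustrative, including the choice to zero audio guidance in the audio-free TI mode:

```python
# Hypothetical settings helper for HuMo's three conditioning modes.
MODES = {"TI", "TA", "TIA"}  # Text+Image, Text+Audio, Text+Image+Audio

def humo_settings(mode, frames=97, fps=25, scale_t=7.5, scale_a=2.0):
    if mode not in MODES:
        raise ValueError(f"mode must be one of {sorted(MODES)}")
    if mode == "TI":
        scale_a = 0.0  # no audio stream to guide in Text+Image mode
    return {"mode": mode, "frames": frames,
            "seconds": round(frames / fps, 2),  # 97 / 25 = 3.88 s (~4 s)
            "scale_t": scale_t, "scale_a": scale_a}
```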


@@ -0,0 +1,75 @@
# Hunyuan
Hunyuan is Tencent's family of open-source generative models spanning text-to-image, text-to-video, and 3D asset generation.
## Model Variants
### Hunyuan-DiT
- Text-to-image diffusion transformer with native Chinese and English support
- 1.5B parameter DiT architecture, native 1024x1024 resolution
- Bilingual text encoder for strong CJK text rendering in images
- v1.2 is the latest version with improved quality
### HunyuanVideo
- Large-scale text-to-video and image-to-video generation model
- 13B+ parameters, the largest open-source video generation model
- Dual-stream to single-stream transformer architecture with full attention
- MLLM text encoder (decoder-only LLM) for better instruction following
- Causal 3D VAE with 4x temporal, 8x spatial, 16x channel compression
- Generates 720p video (1280x720) at up to 129 frames (~5s at 24fps)
- FP8 quantized weights available to reduce memory by ~10GB
- Outperforms Runway Gen-3, Luma 1.6 in professional evaluations
- 3 workflow templates available
### Hunyuan3D 2.0
- Image-to-3D and text-to-3D asset generation system
- Two-stage pipeline: Hunyuan3D-DiT (shape) + Hunyuan3D-Paint (texture)
- Flow-based diffusion transformer for geometry generation
- High-resolution texture synthesis with geometric and diffusion priors
- Outputs textured meshes in GLB/OBJ format
- Outperforms both open and closed-source 3D generation models
- 7 workflow templates available
## Key Features
- Native bilingual support (Chinese and English) across the family
- Strong text rendering in generated images (Hunyuan-DiT)
- State-of-the-art video generation quality (HunyuanVideo)
- End-to-end 3D asset creation with texturing (Hunyuan3D)
- Multi-resolution generation across all model types
- Prompt rewrite system for improved generation quality (HunyuanVideo)
## Hardware Requirements
- Hunyuan-DiT: 11GB VRAM minimum (fp16), 16GB recommended
- HunyuanVideo 540p (544x960): 45GB VRAM minimum
- HunyuanVideo 720p (720x1280): 60GB VRAM minimum, 80GB recommended
- HunyuanVideo FP8: Saves ~10GB compared to fp16 weights
- Hunyuan3D 2.0: 16-24GB VRAM for shape + texture pipeline
## Common Use Cases
- Bilingual content creation and marketing materials
- Asian-style artwork and illustrations
- Text-in-image generation (Chinese/English)
- High-quality video generation from text or image prompts
- 3D asset creation for games, design, and prototyping
- Textured mesh generation from reference images
## Key Parameters
- **steps**: 25-50 for Hunyuan-DiT (default 40), 50 for HunyuanVideo
- **cfg_scale**: 5-8 for DiT (6 typical), 6.0 embedded for HunyuanVideo
- **flow_shift**: 7.0 for HunyuanVideo flow matching scheduler
- **video_length**: 129 frames for HunyuanVideo (~5s at 24fps)
- **resolution**: 1024x1024 for DiT, 720x1280 or 544x960 for video
- **negative_prompt**: Recommended for Hunyuan-DiT quality control
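The per-model defaults above can be summarized in one lookup table. This is an illustrative digest of the card, not an official config file; the model keys are assumed names:

```python
# Illustrative defaults per Hunyuan model, taken from the parameter list above.
HUNYUAN_DEFAULTS = {
    "hunyuan-dit":  {"steps": 40, "cfg_scale": 6.0,
                     "resolution": (1024, 1024)},
    "hunyuanvideo": {"steps": 50, "cfg_scale": 6.0, "flow_shift": 7.0,
                     "video_length": 129,  # ~5 s at 24 fps
                     "resolution": (720, 1280)},
}

def hunyuan_defaults(model):
    try:
        return dict(HUNYUAN_DEFAULTS[model])  # copy so callers can mutate
    except KeyError:
        raise ValueError(f"unknown model {model!r}") from None
```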
## Blog References
- [HunyuanVideo Native Support](../blog/hunyuanvideo-native-support.md) — 13B parameter video model, dual-stream transformer, MLLM text encoder
- [HunyuanVideo 1.5 Native Support](../blog/hunyuanvideo-15-native-support.md) — Lightweight 8.3B model, 720p output, runs on 24GB consumer GPUs
- [Hunyuan3D 2.0 and MultiView Native Support](../blog/hunyuan3d-20-native-support.md) — 3D model generation with PBR materials, 1.1B parameter multi-view model


@@ -0,0 +1 @@
Hunyuan is Tencent's open-source generative model family spanning text-to-image, text-to-video, and 3D generation. Hunyuan-DiT is a 1.5B parameter text-to-image model with native Chinese and English support and strong CJK text rendering at 1024x1024 (11-16GB VRAM). HunyuanVideo is the largest open-source video model at 13B+ parameters, generating 720p video up to 129 frames (~5s at 24fps) using a dual-stream transformer with MLLM text encoder; it requires 45-80GB VRAM depending on resolution (FP8 saves ~10GB). Hunyuan3D 2.0 handles image-to-3D and text-to-3D generation via a two-stage pipeline producing textured GLB/OBJ meshes (16-24GB VRAM). Key strengths: bilingual content creation, state-of-the-art video quality surpassing Runway Gen-3, and end-to-end 3D asset creation. Typical parameters: 25-50 steps for DiT, 50 steps for video, cfg_scale 5-8.


@@ -0,0 +1,52 @@
# Ideogram
Ideogram is an AI image generation platform founded by former Google Brain researchers, known for industry-leading text rendering accuracy in generated images. It achieves approximately 90% text rendering accuracy compared to roughly 30% for competing tools.
## Model Variants
### Ideogram 3.0
- Latest generation released March 2025
- Highest ELO rating in human evaluations across diverse prompts
- Style References support with up to 3 reference images
- Random style feature with 4.3 billion style presets
- Batch generation for scaled content production
### Ideogram 2.0
- Previous generation model
- Available as alternative option in the platform
- Solid text rendering and general image quality
## Key Features
- Best-in-class text rendering with accurate typography and spelling
- Handles complex, multi-line text compositions and curved surfaces
- Style modes: Realistic, Anime, 3D, Watercolor, Typography
- Magic Prompt for automatic prompt enhancement
- Canvas editing for post-generation refinement
- Upscaler up to 8K resolution in 2x increments
- Color palette control for brand consistency
- API available for programmatic integration
## Hardware Requirements
- Cloud API only (no local GPU required)
- API pricing at approximately $0.06 per image
- Web interface with credit-based subscription plans
## Common Use Cases
- Marketing materials with branded text and logos
- Social media graphics with text overlays
- Product packaging and label design
- Event posters, flyers, and invitations
- Book covers and editorial design
## Key Parameters
- **prompt**: Text description with quoted text for typography
- **model**: Version selection (2.0 or 3.0)
- **style**: Realistic, Anime, 3D, Watercolor, Typography
- **aspect_ratio**: 16 aspect ratio options available
- **magic_prompt**: Toggle for automatic prompt enhancement
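Since typography works best when the desired text is quoted inside the prompt, a small helper can assemble the request. The function and field names are assumptions for illustration, not Ideogram's official API schema:

```python
# Hypothetical request builder that quotes rendered text inside the prompt.
def ideogram_request(scene, rendered_text=None, model="3.0",
                     style="Realistic", magic_prompt=True):
    prompt = scene
    if rendered_text is not None:
        prompt = f'{scene} with the text "{rendered_text}"'  # quoted typography
    return {"prompt": prompt, "model": model, "style": style,
            "magic_prompt": magic_prompt}
```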


@@ -0,0 +1,51 @@
# Kandinsky
Kandinsky is a family of open-source diffusion models for video and image generation, developed by Kandinsky Lab (Sber AI, Russia). The models support both English and Russian text prompts.
## Model Variants
### Kandinsky 5.0 Video Pro (19B)
- HD video at 1280x768, 24fps (5 or 10 seconds)
- Controllable camera motion via LoRA
- Top-1 open-source T2V model on LMArena
### Kandinsky 5.0 Video Lite (2B)
- Lightweight model, #1 among open-source in its class
- CFG-distilled (2x faster) and diffusion-distilled (6x faster) variants
- Best Russian concept understanding in open source
### Kandinsky 5.0 Image Lite (6B)
- HD image output (1280x768, 1024x1024)
- Strong text rendering; image editing variant available
## Key Features
- Bilingual support (English and Russian prompts)
- Flow Matching architecture with MIT license
- Camera control via trained LoRAs
- ComfyUI and Diffusers integration
- MagCache acceleration for faster inference
## Hardware Requirements
- Video Lite: 12GB VRAM minimum with optimizations
- Video Pro: 24GB+ VRAM recommended
- NF4 quantization and FlashAttention 2/3 or SDPA supported
## Common Use Cases
- Open-source video generation research
- Russian and English bilingual content creation
- Camera-controlled video synthesis
- Image generation with text rendering
- Fine-tuning with custom LoRAs
## Key Parameters
- **prompt**: Text description in English or Russian
- **num_frames**: Number of output frames (matching a 5s or 10s clip)
- **resolution**: Output resolution (up to 1280x768)
- **steps**: Inference steps (varies by distillation level)
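At the 24 fps rate quoted above, the two supported clip lengths translate directly into frame counts. A hedged sketch of that arithmetic (the helper is illustrative, not part of the Kandinsky API):

```python
# Illustrative conversion of Kandinsky 5.0 clip lengths to frame counts.
def kandinsky_num_frames(seconds, fps=24):
    if seconds not in (5, 10):
        raise ValueError("Kandinsky 5.0 video supports 5 or 10 second clips")
    return seconds * fps  # e.g. 5 s * 24 fps = 120 frames
```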


@@ -0,0 +1,64 @@
# Kling
Kling is a video and image generation platform developed by Kuaishou Technology. It offers text-to-video, image-to-video, video editing, audio generation, and virtual try-on capabilities through both a creative studio and a developer API.
## Model Variants
### Kling O1
- First unified multimodal video model combining generation and editing
- Built on Multimodal Visual Language (MVL) framework
- Accepts text, image, video, and subject inputs in a single prompt
- Supports video inpainting, outpainting, style re-rendering, and shot extension
- Character and scene consistency via Element Library with director-like memory
- Generates 3-10 second videos at up to 2K resolution
### Kling 2.6
- Simultaneous audio-visual generation in a single pass
- Produces video with speech, sound effects, and ambient sounds together
- Supports Chinese and English voice generation
- Video content up to 10 seconds with synchronized audio
- Deep semantic alignment between audio and visual dynamics
### Kling (Base Models)
- Text-to-video and image-to-video with Standard and Professional modes
- Multi-image-to-video with multiple reference inputs
- Camera control with 6 basic movements and 4 master shots
- Video extension, lip-sync, and avatar generation
- Start and end frame generation for controlled transitions
## Key Features
- Unified generation and editing in a single model (O1)
- Simultaneous audio-visual generation (2.6)
- Multi-subject consistency across shots and angles
- Conversational editing via natural language prompts
- Video effects center for special effects and transformations
- Virtual try-on and image recognition capabilities
- DeepSeek integration for prompt optimization
## Hardware Requirements
- Cloud API only; no local hardware required
- Accessed via klingai.com creative studio or API platform
- Standard and Professional generation modes (speed vs. quality tradeoff)
## Common Use Cases
- Film and television pre-production and shot generation
- Social media content creation with audio
- E-commerce product videos and virtual try-on
- Advertising with one-click ad generation
- Video post-production editing via text prompts
- Multi-character narrative video creation
## Key Parameters
- **prompt**: Text description with positive and negative prompts
- **mode**: Standard (fast) or Professional (high quality)
- **duration**: Video length (3-10 seconds for O1, up to 10s for 2.6)
- **aspect_ratio**: Width-to-height ratio for output
- **camera_control**: Predefined camera movements and master shots
- **creativity_strength**: Balance between reference fidelity and creative variation
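A request can be validated against the duration ranges above before submission. The field names and mode labels here are illustrative assumptions, not the official Kling API schema:

```python
# Hypothetical validation of a Kling generation request.
def kling_request(prompt, mode="std", duration=5, model="o1"):
    if mode not in {"std", "pro"}:  # Standard (fast) vs Professional (quality)
        raise ValueError("mode must be 'std' or 'pro'")
    if model == "o1" and not 3 <= duration <= 10:
        raise ValueError("Kling O1 generates 3-10 second videos")
    if model == "2.6" and duration > 10:
        raise ValueError("Kling 2.6 supports up to 10 seconds with audio")
    return {"prompt": prompt, "mode": mode,
            "duration": duration, "model": model}
```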


@@ -0,0 +1,68 @@
# LTX-Video
LTX-Video is Lightricks' open-source DiT-based video generation model, the first capable of generating high-quality videos in real-time.
## Model Variants
### LTX-Video 2 (v0.9.7/v0.9.8)
- Major quality upgrade over the original release
- Available in 2B and 13B parameter sizes
- 13B dev: highest quality, requires more VRAM
- 13B distilled: faster inference, fewer steps needed, slight quality trade-off
- 2B distilled: lightweight option for lower VRAM usage
- FP8 quantized versions available for all sizes (13B-dev, 13B-distilled, 2B-distilled)
- Multi-condition generation: condition on multiple images or video segments at specific frames
- Spatial and temporal upscaler models for enhanced resolution and frame rate
- ICLoRA adapters for depth, pose, and canny edge conditioning
- 9 workflow templates available
### LTX-Video 0.9.1/0.9.6
- Original public releases with 2B parameter DiT architecture
- Text-to-video and image-to-video modes
- 768x512 native resolution at 24fps
- 0.9.6 distilled variant: 15x faster, real-time capable, no CFG required
- Foundation for community fine-tunes
## Key Features
- Real-time video generation on high-end GPUs (first DiT model to achieve this)
- Generates 30 FPS video at 1216x704 resolution faster than playback speed
- Multi-condition generation with per-frame image/video conditioning and strength control
- Temporal VAE for smooth, consistent motion
- Multi-scale rendering pipeline mixing dev and distilled models for speed-quality balance
- Latent upsampling pipeline for progressive resolution enhancement
## Hardware Requirements
- 2B model: 12GB VRAM minimum, 16GB recommended
- 2B distilled FP8: 8-10GB VRAM
- 13B model: 24-32GB VRAM (fp16)
- 13B FP8: 16-20GB VRAM
- 13B distilled: less VRAM than 13B dev, ideal for rapid iterations
- 32GB+ system RAM recommended for all variants
## Common Use Cases
- Short-form video content and social media clips
- Image-to-video animation from reference frames
- Video-to-video transformation and extension
- Multi-condition video generation (start/end frame, keyframes)
- Depth, pose, and edge-conditioned video generation via ICLoRA
- Rapid video prototyping and creative experimentation
## Key Parameters
- **num_frames**: Output frame count (a multiple of 8 plus 1, e.g. 97, 161, 257)
- **steps**: 30-50 for dev models, 8-15 for distilled variants
- **cfg_scale**: 3-5 typical for dev, not required for distilled
- **width/height**: Divisible by 32, best under 720x1280 for 13B
- **denoise_strength**: 0.3-0.5 when using latent upsampler refinement pass
- **conditioning_strength**: Per-condition strength for multi-condition generation (default 1.0)
- **seed**: For reproducible generation
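The 8n + 1 frame-count rule above can be enforced with a small helper that snaps an arbitrary target to the nearest valid count. This is an illustrative utility, not part of the model's API:

```python
# Illustrative helper: snap a target frame count to LTX-Video's 8n + 1 form.
def valid_ltx_frames(target):
    n = max(1, round((target - 1) / 8))  # nearest non-zero multiple of 8
    return 8 * n + 1
```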
## Blog References
- [LTX-Video 0.9.5 Day-1 Support](../blog/ltx-video-095-support.md) — Commercial license (OpenRail-M), multi-frame control, improved quality
- [LTX-2: Open Source Audio-Video AI](../blog/ltx-2-open-source-audio-video.md) — Synchronized audio-video generation, NVFP4 for 3x speed / 60% less VRAM


@@ -0,0 +1 @@
LTX-Video is Lightricks' open-source DiT-based video generation model, the first to achieve real-time video generation. LTX-Video 2 (v0.9.7/0.9.8) is available in 2B and 13B parameter sizes, with dev, distilled, and FP8 quantized variants. It supports multi-condition generation with per-frame image/video conditioning, spatial and temporal upscalers, and ICLoRA adapters for depth, pose, and canny conditioning. The 2B model needs 12-16GB VRAM (8-10GB FP8), while the 13B model requires 24-32GB (16-20GB FP8). It generates 30fps video at 1216x704 faster than playback speed. Earlier versions (0.9.1/0.9.6) established the 2B foundation with a 15x faster distilled variant. Primary uses: short-form video, image-to-video animation, video extension, and multi-condition keyframe generation. Key parameters: 30-50 steps for dev, 8-15 for distilled, cfg_scale 3-5, frame counts of the form 8n+1.


@@ -0,0 +1,50 @@
# Luma
Luma AI develops video and image generation models through its Dream Machine platform, powered by the Ray model family and Photon image model.
## Model Variants
### Ray3 / Ray3.14
- Native 1080p video with reasoning-driven generation
- World's first native 16-bit HDR video generation
- Character reference, Modify Video, and Draft Mode (5x faster)
### Ray2
- Production-ready text-to-video and image-to-video
- 5-9 second output at 24fps with coherent motion
### Photon
- Image generation with strong prompt following
- Character and visual reference support
- 1080p output at $0.016 per image
## Key Features
- Reasoning capability for understanding creative intent
- Visual annotation for precise layout and motion control
- HDR generation with 16-bit EXR export for pro workflows
- Keyframe control, video extension, looping, and camera control
## Hardware Requirements
- API-only access via Luma AI API
- No local hardware requirements
- Available through Dream Machine web and iOS app
## Common Use Cases
- Cinematic video production and storytelling
- Commercial advertising and product videos
- Visual effects with Modify Video workflows
- HDR content for professional post-production
## Key Parameters
- **prompt**: Text description for video generation
- **keyframes**: Start and/or end frame images
- **aspect_ratio**: Output dimensions and ratio
- **loop**: Enable seamless looping
- **camera_control**: Camera movement via text instructions
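A generation body with optional start/end keyframes might be assembled as below. The `frame0`/`frame1` keyframe shape is an assumption for illustration, not a guaranteed Dream Machine schema:

```python
# Hypothetical builder for a Luma generation body with optional keyframes.
def luma_request(prompt, start_frame=None, end_frame=None,
                 aspect_ratio="16:9", loop=False):
    payload = {"prompt": prompt, "aspect_ratio": aspect_ratio, "loop": loop}
    keyframes = {}
    if start_frame:
        keyframes["frame0"] = {"type": "image", "url": start_frame}
    if end_frame:
        keyframes["frame1"] = {"type": "image", "url": end_frame}
    if keyframes:  # omit the key entirely for pure text-to-video
        payload["keyframes"] = keyframes
    return payload
```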


@@ -0,0 +1,47 @@
# Magnific
Magnific is an AI-powered image upscaler and enhancer that uses generative AI to hallucinate new details and textures during the upscaling process.
## Model Variants
### Magnific Creative Upscaler
- Generative upscaling up to 16x (max 10,000px per dimension)
- AI engines: Illusio (illustration), Sharpy (photography), Sparkle (balanced)
- Adds hallucinated details guided by text prompts
### Magnific Precision Upscaler
- Faithful high-fidelity upscaling without creative reinterpretation
- Clean enlargement that stays true to the source image
### Mystic Image Generator
- Photorealistic text-to-image/image-to-image with LoRA styles at up to 4K
## Key Features
- Creativity slider controls AI-hallucinated detail level
- HDR control for micro-contrast and crispness
- Resemblance slider to balance fidelity vs. creative enhancement
- Optimized modes for portraits, illustrations, video games, and film
- API hosted on Freepik with Skin Enhancer endpoint
## Hardware Requirements
- Cloud-only service with no local hardware requirements
- API available through Freepik's developer platform
- Subscription-based with credit system
## Common Use Cases
- Upscaling AI-generated images for print and production
- Enhancing low-resolution concept art and illustrations
- Restoring old or compressed photographs
## Key Parameters
- **Creativity**: Level of new detail hallucination (0-10)
- **HDR**: Micro-contrast and sharpness (-10 to 10)
- **Resemblance**: Fidelity to source image (-10 to 10)
- **Scale Factor**: 2x, 4x, 8x, or 16x magnification
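The slider ranges above can be enforced in one place before a job is sent. An illustrative helper with assumed lowercase field names, not Magnific's API:

```python
# Hypothetical validation of Magnific upscaling settings.
def magnific_settings(creativity=0, hdr=0, resemblance=0, scale=2):
    if not 0 <= creativity <= 10:
        raise ValueError("creativity runs 0-10")
    if not -10 <= hdr <= 10 or not -10 <= resemblance <= 10:
        raise ValueError("hdr and resemblance run -10 to 10")
    if scale not in (2, 4, 8, 16):
        raise ValueError("scale factor must be 2, 4, 8, or 16")
    return {"creativity": creativity, "hdr": hdr,
            "resemblance": resemblance, "scale": scale}
```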


@@ -0,0 +1,49 @@
# Meshy
Meshy is a popular AI 3D model generator enabling text-to-3D and image-to-3D creation with PBR textures and production-ready exports.
## Model Variants
### Meshy-6
- Latest generation with highest quality geometry
- Supports symmetry and pose control (A-pose, T-pose)
- Configurable polygon counts up to 300,000
### Meshy-5
- Previous generation with art style support
- Realistic and sculpture style options
## Key Features
- Text-to-3D with two-stage workflow (preview mesh, then refine textures)
- Image-to-3D from photos, sketches, or illustrations
- Multi-image input for multi-view reconstruction
- AI texturing with PBR maps (diffuse, roughness, metallic, normal)
- Automatic rigging and 500+ animation motion library
- Smart remesh with quad or triangle topology control
- Export in FBX, GLB, OBJ, STL, 3MF, USDZ, BLEND formats
## Hardware Requirements
- Cloud API-based (no local GPU required)
- All generation runs on Meshy servers
- API available on Pro tier and above
## Common Use Cases
- Game development asset creation
- 3D printing and prototyping
- Film and VFX previsualization
- VR/AR content development
- Product design and e-commerce
## Key Parameters
- **prompt**: Text description up to 600 characters
- **ai_model**: Model version (meshy-5, meshy-6, latest)
- **topology**: Mesh type (quad or triangle)
- **target_polycount**: 100 to 300,000 polygons
- **enable_pbr**: Generate PBR material maps
- **pose_mode**: Character pose (a-pose, t-pose, or none)
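A text-to-3D task can be checked against these limits before submission. A hedged sketch: the field names follow the list above, but the helper itself is illustrative rather than the official Meshy client:

```python
# Hypothetical validation of a Meshy text-to-3D task.
def meshy_task(prompt, ai_model="meshy-6", topology="triangle",
               target_polycount=30_000, enable_pbr=True, pose_mode=None):
    if len(prompt) > 600:
        raise ValueError("prompt is limited to 600 characters")
    if topology not in {"quad", "triangle"}:
        raise ValueError("topology must be 'quad' or 'triangle'")
    if not 100 <= target_polycount <= 300_000:
        raise ValueError("target_polycount must be 100-300,000")
    task = {"prompt": prompt, "ai_model": ai_model, "topology": topology,
            "target_polycount": target_polycount, "enable_pbr": enable_pbr}
    if pose_mode in {"a-pose", "t-pose"}:  # omit when no pose control wanted
        task["pose_mode"] = pose_mode
    return task
```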


@@ -0,0 +1,58 @@
# MiniMax
MiniMax is a multi-modal AI company known for the Hailuo video generation models and Image-01, offering API-based video and image creation.
## Model Variants
### Hailuo 2.3
- Latest video model with improved body movement and facial expressions
- Supports anime, illustration, ink-wash, and game-CG styles
- 768p or 1080p resolution, 6 or 10 second clips
- Available in Quality and Fast variants
### Hailuo 2.0 (Hailuo 02)
- Native 1080p with Noise-aware Compute Redistribution (NCR)
- 2.5x efficiency improvement over predecessors
- Last-frame conditioning support
### Image-01
- Text-to-image generation with multiple output sizes
### T2V-01-Director
- Enhanced camera control with natural language commands
- Pan, zoom, tracking shot, and shake directives
## Key Features
- Text-to-video and image-to-video generation
- Up to 1080p resolution at 25fps
- Video clips up to 10 seconds
- Camera control with natural language commands
- Subject consistency with reference images
- Text-to-image generation with Image-01
## Hardware Requirements
- Cloud API-based (no local GPU required)
- All generation runs on MiniMax servers
- API access via platform.minimax.io
## Common Use Cases
- Social media video content creation
- Cinematic short film production
- Product advertising and e-commerce videos
- Anime and illustrated content
- Character-driven narrative scenes
## Key Parameters
- **prompt**: Text description for generation
- **model**: Model selection (hailuo-2.3, hailuo-02, image-01)
- **resolution**: Output resolution (768p or 1080p)
- **duration**: Clip length (6 or 10 seconds for video)
- **first_frame_image**: Reference image for image-to-video
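The options above can be validated into a single request body. The field names are assumptions mirroring this list, not the official MiniMax schema:

```python
# Hypothetical builder for a Hailuo video request.
def hailuo_request(prompt, model="hailuo-2.3", resolution="768p",
                   duration=6, first_frame_image=None):
    if resolution not in {"768p", "1080p"}:
        raise ValueError("resolution must be 768p or 1080p")
    if duration not in (6, 10):
        raise ValueError("video clips are 6 or 10 seconds")
    payload = {"model": model, "prompt": prompt,
               "resolution": resolution, "duration": duration}
    if first_frame_image:
        payload["first_frame_image"] = first_frame_image  # image-to-video
    return payload
```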


@@ -0,0 +1,762 @@
{
"generated": "2026-02-07",
"totalModels": 87,
"categories": {
"specific_model": [
{
"name": "Wan",
"category": "specific_model",
"templateCount": 36,
"priority": 108,
"docFile": "wan",
"hasExistingDoc": true
},
{
"name": "Nano Banana Pro",
"category": "specific_model",
"templateCount": 29,
"priority": 87,
"docFile": "nano-banana-pro",
"hasExistingDoc": false
},
{
"name": "Flux",
"category": "specific_model",
"templateCount": 24,
"priority": 72,
"docFile": "flux",
"hasExistingDoc": true
},
{
"name": "SDXL",
"category": "specific_model",
"templateCount": 4,
"priority": 12,
"docFile": "sdxl",
"hasExistingDoc": true
},
{
"name": "ACE-Step",
"category": "specific_model",
"templateCount": 7,
"priority": 21,
"docFile": "ace-step",
"hasExistingDoc": false
},
{
"name": "Seedance",
"category": "specific_model",
"templateCount": 6,
"priority": 18,
"docFile": "seedance",
"hasExistingDoc": false
},
{
"name": "Seedream",
"category": "specific_model",
"templateCount": 5,
"priority": 15,
"docFile": "seedream",
"hasExistingDoc": false
},
{
"name": "HiDream",
"category": "specific_model",
"templateCount": 5,
"priority": 15,
"docFile": "hidream",
"hasExistingDoc": false
},
{
"name": "Stable Audio",
"category": "specific_model",
"templateCount": 4,
"priority": 12,
"docFile": "stable-audio",
"hasExistingDoc": false
},
{
"name": "Chatter Box",
"category": "specific_model",
"templateCount": 4,
"priority": 12,
"docFile": "chatterbox",
"hasExistingDoc": false
},
{
"name": "Z-Image-Turbo",
"category": "specific_model",
"templateCount": 4,
"priority": 12,
"docFile": "z-image",
"hasExistingDoc": false
},
{
"name": "Kandinsky",
"category": "specific_model",
"templateCount": 3,
"priority": 9,
"docFile": "kandinsky",
"hasExistingDoc": false
},
{
"name": "OmniGen",
"category": "specific_model",
"templateCount": 3,
"priority": 9,
"docFile": "omnigen",
"hasExistingDoc": false
},
{
"name": "SeedVR2",
"category": "specific_model",
"templateCount": 3,
"priority": 9,
"docFile": "seedvr2",
"hasExistingDoc": false
},
{
"name": "Chroma",
"category": "specific_model",
"templateCount": 2,
"priority": 6,
"docFile": "chroma",
"hasExistingDoc": false
},
{
"name": "ChronoEdit",
"category": "specific_model",
"templateCount": 1,
"priority": 3,
"docFile": "chronoedit",
"hasExistingDoc": false
},
{
"name": "HuMo",
"category": "specific_model",
"templateCount": 1,
"priority": 3,
"docFile": "humo",
"hasExistingDoc": false
},
{
"name": "NewBie",
"category": "specific_model",
"templateCount": 1,
"priority": 3,
"docFile": "newbie",
"hasExistingDoc": false
},
{
"name": "Ovis-Image",
"category": "specific_model",
"templateCount": 1,
"priority": 3,
"docFile": "ovis-image",
"hasExistingDoc": false
}
],
"provider_name": [
{
"name": "Google",
"category": "provider_name",
"templateCount": 29,
"priority": 0,
"mapsTo": ["gemini", "veo", "nano-banana-pro"],
"hasExistingDoc": false
},
{
"name": "BFL",
"category": "provider_name",
"templateCount": 28,
"priority": 0,
"mapsTo": ["flux"],
"hasExistingDoc": false
},
{
"name": "Stability",
"category": "provider_name",
"templateCount": 19,
"priority": 0,
"mapsTo": ["sdxl", "stable-audio", "reimagine"],
"hasExistingDoc": false
},
{
"name": "ByteDance",
"category": "provider_name",
"templateCount": 11,
"priority": 0,
"mapsTo": ["seedance", "seedvr2", "seedream"],
"hasExistingDoc": false
},
{
"name": "OpenAI",
"category": "provider_name",
"templateCount": 11,
"priority": 0,
"mapsTo": ["gpt-image-1"],
"hasExistingDoc": false
},
{
"name": "Lightricks",
"category": "provider_name",
"templateCount": 9,
"priority": 0,
"mapsTo": ["ltx-video"],
"hasExistingDoc": false
},
{
"name": "Tencent",
"category": "provider_name",
"templateCount": 5,
"priority": 0,
"mapsTo": ["hunyuan"],
"hasExistingDoc": false
},
{
"name": "Qwen",
"category": "provider_name",
"templateCount": 2,
"priority": 0,
"mapsTo": ["qwen"],
"hasExistingDoc": true
},
{
"name": "Nvidia",
"category": "provider_name",
"templateCount": 1,
"priority": 0,
"mapsTo": [],
"hasExistingDoc": false
}
],
"api_only": [
{
"name": "Vidu",
"category": "api_only",
"templateCount": 10,
"priority": 20,
"docFile": "vidu",
"hasExistingDoc": false
},
{
"name": "Kling",
"category": "api_only",
"templateCount": 9,
"priority": 18,
"docFile": "kling",
"hasExistingDoc": false
},
{
"name": "Recraft",
"category": "api_only",
"templateCount": 6,
"priority": 12,
"docFile": "recraft",
"hasExistingDoc": false
},
{
"name": "Runway",
"category": "api_only",
"templateCount": 5,
"priority": 10,
"docFile": "runway",
"hasExistingDoc": false
},
{
"name": "Tripo",
"category": "api_only",
"templateCount": 5,
"priority": 10,
"docFile": "tripo",
"hasExistingDoc": false
},
{
"name": "GPT-Image-1",
"category": "api_only",
"templateCount": 4,
"priority": 8,
"docFile": "gpt-image-1",
"hasExistingDoc": false
},
{
"name": "MiniMax",
"category": "api_only",
"templateCount": 4,
"priority": 8,
"docFile": "minimax",
"hasExistingDoc": false
},
{
"name": "Grok",
"category": "api_only",
"templateCount": 4,
"priority": 8,
"docFile": "grok",
"hasExistingDoc": false
},
{
"name": "Luma",
"category": "api_only",
"templateCount": 4,
"priority": 8,
"docFile": "luma",
"hasExistingDoc": false
},
{
"name": "Moonvalley",
"category": "api_only",
"templateCount": 4,
"priority": 8,
"docFile": "moonvalley",
"hasExistingDoc": false
},
{
"name": "Topaz",
"category": "api_only",
"templateCount": 4,
"priority": 8,
"docFile": "topaz",
"hasExistingDoc": false
},
{
"name": "PixVerse",
"category": "api_only",
"templateCount": 3,
"priority": 6,
"docFile": "pixverse",
"hasExistingDoc": false
},
{
"name": "Meshy",
"category": "api_only",
"templateCount": 3,
"priority": 6,
"docFile": "meshy",
"hasExistingDoc": false
},
{
"name": "Rodin",
"category": "api_only",
"templateCount": 3,
"priority": 6,
"docFile": "rodin",
"hasExistingDoc": false
},
{
"name": "Magnific",
"category": "api_only",
"templateCount": 3,
"priority": 6,
"docFile": "magnific",
"hasExistingDoc": false
},
{
"name": "WaveSpeed",
"category": "api_only",
"templateCount": 3,
"priority": 6,
"docFile": "wavespeed",
"hasExistingDoc": false
},
{
"name": "BRIA",
"category": "api_only",
"templateCount": 2,
"priority": 4,
"docFile": "bria",
"hasExistingDoc": false
},
{
"name": "Veo",
"category": "api_only",
"templateCount": 2,
"priority": 4,
"docFile": "veo",
"hasExistingDoc": false
},
{
"name": "HitPaw",
"category": "api_only",
"templateCount": 2,
"priority": 4,
"docFile": "hitpaw",
"hasExistingDoc": false
},
{
"name": "Z-Image",
"category": "api_only",
"templateCount": 1,
"priority": 2,
"docFile": "z-image",
"hasExistingDoc": false
},
{
"name": "Anima",
"category": "api_only",
"templateCount": 1,
"priority": 2,
"docFile": "anima",
"hasExistingDoc": false
},
{
"name": "Reimagine",
"category": "api_only",
"templateCount": 1,
"priority": 2,
"docFile": "reimagine",
"mapsTo": ["stability"],
"hasExistingDoc": false
},
{
"name": "Ideogram",
"category": "api_only",
"templateCount": 1,
"priority": 2,
"docFile": "ideogram",
"hasExistingDoc": false
},
{
"name": "Gemini3 Pro Image Preview",
"category": "api_only",
"templateCount": 16,
"priority": 32,
"docFile": "gemini",
"hasExistingDoc": false
}
],
"utility_model": [
{
"name": "SVD",
"category": "utility_model",
"templateCount": 1,
"priority": 1,
"docFile": "svd",
"hasExistingDoc": false
},
{
"name": "Real-ESRGAN",
"category": "utility_model",
"templateCount": 1,
"priority": 1,
"docFile": "real-esrgan",
"hasExistingDoc": false
},
{
"name": "Depth Anything v2",
"category": "utility_model",
"templateCount": 1,
"priority": 1,
"docFile": "depth-anything-v2",
"hasExistingDoc": false
},
{
"name": "FlashVSR",
"category": "utility_model",
"templateCount": 1,
"priority": 1,
"docFile": "flashvsr",
"hasExistingDoc": false
}
],
"variant": [
{
"name": "Wan2.1",
"category": "variant",
"templateCount": 21,
"priority": 0,
"mapsTo": "wan",
"hasExistingDoc": true
},
{
"name": "Wan2.2",
"category": "variant",
"templateCount": 15,
"priority": 0,
"mapsTo": "wan",
"hasExistingDoc": true
},
{
"name": "Qwen-Image-Edit",
"category": "variant",
"templateCount": 11,
"priority": 0,
"mapsTo": "qwen",
"hasExistingDoc": true
},
{
"name": "LTX-2",
"category": "variant",
"templateCount": 9,
"priority": 0,
"mapsTo": "ltx-video",
"hasExistingDoc": true
},
{
"name": "Qwen-Image",
"category": "variant",
"templateCount": 7,
"priority": 0,
"mapsTo": "qwen",
"hasExistingDoc": true
},
{
"name": "Hunyuan3D",
"category": "variant",
"templateCount": 7,
"priority": 0,
"mapsTo": "hunyuan",
"hasExistingDoc": true
},
{
"name": "Google Gemini Image",
"category": "variant",
"templateCount": 6,
"priority": 0,
"mapsTo": "gemini",
"hasExistingDoc": false
},
{
"name": "Flux.2 Klein",
"category": "variant",
"templateCount": 6,
"priority": 0,
"mapsTo": "flux",
"hasExistingDoc": true
},
{
"name": "Kling O1",
"category": "variant",
"templateCount": 5,
"priority": 0,
"mapsTo": "kling",
"hasExistingDoc": false
},
{
"name": "Vidu Q2",
"category": "variant",
"templateCount": 5,
"priority": 0,
"mapsTo": "vidu",
"hasExistingDoc": false
},
{
"name": "SD3.5",
"category": "variant",
"templateCount": 4,
"priority": 0,
"mapsTo": "sdxl",
"hasExistingDoc": false
},
{
"name": "Google Gemini",
"category": "variant",
"templateCount": 3,
"priority": 0,
"mapsTo": "gemini",
"hasExistingDoc": false
},
{
"name": "Flux.2 Dev",
"category": "variant",
"templateCount": 3,
"priority": 0,
"mapsTo": "flux",
"hasExistingDoc": true
},
{
"name": "Flux.2",
"category": "variant",
"templateCount": 3,
"priority": 0,
"mapsTo": "flux",
"hasExistingDoc": true
},
{
"name": "Wan2.5",
"category": "variant",
"templateCount": 3,
"priority": 0,
"mapsTo": "wan",
"hasExistingDoc": true
},
{
"name": "Kontext",
"category": "variant",
"templateCount": 3,
"priority": 0,
"mapsTo": "flux",
"hasExistingDoc": false
},
{
"name": "Wan2.6",
"category": "variant",
"templateCount": 3,
"priority": 0,
"mapsTo": "wan",
"hasExistingDoc": true
},
{
"name": "Hunyuan Video",
"category": "variant",
"templateCount": 3,
"priority": 0,
"mapsTo": "hunyuan",
"hasExistingDoc": true
},
{
"name": "Vidu Q3",
"category": "variant",
"templateCount": 2,
"priority": 0,
"mapsTo": "vidu",
"hasExistingDoc": false
},
{
"name": "LTXV",
"category": "variant",
"templateCount": 2,
"priority": 0,
"mapsTo": "ltx-video",
"hasExistingDoc": true
},
{
"name": "Qwen-Image-Layered",
"category": "variant",
"templateCount": 2,
"priority": 0,
"mapsTo": "qwen",
"hasExistingDoc": true
},
{
"name": "SD1.5",
"category": "variant",
"templateCount": 2,
"priority": 0,
"mapsTo": "sdxl",
"hasExistingDoc": false
},
{
"name": "Gemini-2.5-Flash",
"category": "variant",
"templateCount": 1,
"priority": 0,
"mapsTo": "gemini",
"hasExistingDoc": false
},
{
"name": "Qwen-Image 2512",
"category": "variant",
"templateCount": 1,
"priority": 0,
"mapsTo": "qwen",
"hasExistingDoc": true
},
{
"name": "Seedream 4.0",
"category": "variant",
"templateCount": 1,
"priority": 0,
"mapsTo": "seedream",
"hasExistingDoc": false
},
{
"name": "GPT-Image-1.5",
"category": "variant",
"templateCount": 1,
"priority": 0,
"mapsTo": "gpt-image-1",
"hasExistingDoc": false
},
{
"name": "Kling2.6",
"category": "variant",
"templateCount": 1,
"priority": 0,
"mapsTo": "kling",
"hasExistingDoc": false
},
{
"name": "Wan-Move",
"category": "variant",
"templateCount": 1,
"priority": 0,
"mapsTo": "wan",
"hasExistingDoc": true
},
{
"name": "Motion Control",
"category": "variant",
"templateCount": 1,
"priority": 0,
"mapsTo": "wan",
"hasExistingDoc": false
}
],
"skip": [
{
"name": "None",
"category": "skip",
"templateCount": 1,
"priority": 0,
"hasExistingDoc": false
},
{
"name": "nano-banana",
"category": "skip",
"templateCount": 1,
"priority": 0,
"note": "Duplicate of Nano Banana Pro",
"hasExistingDoc": false
}
]
},
"priorityOrder": [
"wan",
"nano-banana-pro",
"flux",
"gemini",
"ace-step",
"vidu",
"kling",
"seedance",
"seedream",
"hidream",
"sdxl",
"stable-audio",
"chatterbox",
"z-image",
"recraft",
"runway",
"tripo",
"kandinsky",
"omnigen",
"seedvr2",
"gpt-image-1",
"minimax",
"grok",
"luma",
"moonvalley",
"topaz",
"chroma",
"pixverse",
"meshy",
"rodin",
"magnific",
"wavespeed",
"bria",
"veo",
"hitpaw",
"newbie",
"ovis-image",
"chronoedit",
"humo",
"anima",
"reimagine",
"ideogram",
"svd",
"real-esrgan",
"depth-anything-v2",
"flashvsr"
]
}

View File

@@ -0,0 +1,53 @@
# Moonvalley (Marey)
Marey is Moonvalley's AI video generation model for professional filmmakers. It delivers studio-grade quality and was trained exclusively on licensed footage.
## Model Variants
### Marey Realism v1.5
- Latest production model with cinematic detail
- 1080p resolution at 24fps, up to 5-second clips
- Available via ComfyUI native nodes and fal.ai
### Marey Director Controls
- 3D-aware camera control from single images
- Motion transfer from reference videos
- Trajectory control for object path definition
- Pose transfer and keyframing with multi-image timeline
## Key Features
- Text-to-video and image-to-video generation
- Camera control with 3D scene understanding
- Motion transfer from reference video clips
- Trajectory control via drawn paths
- Pose transfer for expressive character animation
- Shot extension for seamless duration increase
- Commercially safe (trained on licensed data only)
## Hardware Requirements
- Cloud API-based (no local GPU required)
- Available via Moonvalley platform, ComfyUI, and fal.ai
- Subscription tiers starting at $14.99/month
## Common Use Cases
- Professional film and commercial production
- Cinematic B-roll generation
- Previsualization and storyboarding
- Music video and social media content
- Product advertising with dynamic camera
- Animation and character-driven storytelling
## Key Parameters
- **prompt**: Text description of desired video scene
- **image**: Reference image for image-to-video mode
- **camera_control**: Camera movement specification
- **motion_reference**: Video reference for motion transfer
- **trajectory**: Drawn path for object movement
- **duration**: Clip length (up to 5 seconds)
- **resolution**: Output resolution (up to 1080p at 24fps)

View File

@@ -0,0 +1,53 @@
# Nano Banana Pro
Nano Banana Pro is Google DeepMind's flagship image generation and editing model, accessed through ComfyUI's API nodes. Internally it is the Gemini 3 Pro Image model, designed for production-ready high-fidelity visuals.
## Model Variants
### Nano Banana Pro (Gemini 3 Pro Image)
- State-of-the-art reasoning-powered image generation
- Supports up to 14 reference image inputs
- Native 4K output resolution (up to 4096x4096)
- Complex multi-turn image generation and editing
- Model ID: `gemini-3-pro-image-preview`
### Gemini 2.5 Flash Image (Nano Banana)
- Cost-effective alternative optimized for speed
- Balanced price-to-performance for interactive workflows
- Character consistency and prompt-based editing
- Model ID: `gemini-2.5-flash-image`
## Key Features
- **World knowledge**: Generates accurate real-world images using Google Search's knowledge base
- **Text rendering**: Clean text generation with detection and translation across 10 languages
- **Multi-image fusion**: Blend up to 14 input images into a single coherent output
- **Studio controls**: Adjust angles, focus, color grading in generated images
- **Character consistency**: Maintain subject identity across multiple generations
- **Prompt-based editing**: Targeted transformations via natural language instructions
## Hardware Requirements
- No local GPU required — runs as a cloud API service
- Accessed via ComfyUI API nodes (requires ComfyUI login and network access)
- Available on Comfy Cloud or local ComfyUI with API node support
## Common Use Cases
- High-fidelity text-to-image generation
- Multi-reference style transfer and image blending
- Product visualization and mockups
- Sketch-to-image and blueprint-to-3D visualization
- Text rendering and translation in images
- Iterative prompt-based image editing
## Key Parameters
- **prompt**: Text description of desired image or edit
- **aspect_ratio**: Supported ratios include 1:1, 3:2, 4:3, 9:16, 16:9, 21:9
- **temperature**: 0.0-2.0 (default 1.0)
- **topP**: 0.0-1.0 (default 0.95)
- **max_output_tokens**: Up to 32,768 tokens per response
- **input images**: Up to 14 reference images per prompt
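The parameter constraints above can be checked client-side before a request is sent. The following is a minimal sketch of a hypothetical request-config builder; the field names mirror this page's parameter list, not a verified API schema.

```python
# Hypothetical helper: validate a Nano Banana Pro request config before
# sending it to the Gemini API. Ranges follow the parameter list above.
ASPECT_RATIOS = {"1:1", "3:2", "4:3", "9:16", "16:9", "21:9"}

def build_generation_config(prompt, aspect_ratio="1:1", temperature=1.0,
                            top_p=0.95, images=()):
    if aspect_ratio not in ASPECT_RATIOS:
        raise ValueError(f"unsupported aspect ratio: {aspect_ratio}")
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be in [0.0, 2.0]")
    if len(images) > 14:
        raise ValueError("at most 14 reference images per prompt")
    return {
        "model": "gemini-3-pro-image-preview",
        "prompt": prompt,
        "aspect_ratio": aspect_ratio,
        "temperature": temperature,
        "topP": top_p,
        "max_output_tokens": 32_768,
        "images": list(images),
    }

cfg = build_generation_config("a 4K product shot of a ceramic mug", "16:9")
```

Validating locally avoids burning an API round trip on a request the service would reject anyway.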

View File

@@ -0,0 +1,43 @@
# NewBie
NewBie image Exp0.1 is a 3.5B parameter open-source text-to-image model built on the Next-DiT architecture, developed by the NewBie-AI community. It is specifically pretrained on high-quality anime data for detailed and visually striking anime-style image generation.
## Model Variants
### NewBie image Exp0.1
- 3.5B parameter DiT model based on Next-DiT architecture
- Uses Gemma3-4B-it as primary text encoder with Jina CLIP v2 for pooled features
- FLUX.1-dev 16-channel VAE for rich color rendering and fine texture detail
- Supports natural language, tags, and XML structured prompts
- Non-commercial community license (Newbie-NC-1.0) for model weights
## Key Features
- Exceptional anime and ACG (Anime, Comics, Games) style generation
- XML structured prompting for improved attribute binding and element disentanglement
- Strong multi-character scene generation with accurate attribute assignment
- ComfyUI integration via dedicated custom nodes
- LoRA training support with community trainer
- Built on research from the Lumina architecture family
## Hardware Requirements
- Minimum: 12GB VRAM (bfloat16 or float16)
- Recommended: 24GB VRAM for comfortable generation
- Requires Gemma3-4B-it and Jina CLIP v2 text encoders
- Python 3.10, PyTorch 2.6.0+, Transformers 4.57.1+
## Common Use Cases
- Anime and illustration generation
- Character design with precise attribute control
- Multi-character scene composition
- Fan art and creative anime artwork
## Key Parameters
- **num_inference_steps**: 28 recommended
- **height/width**: 1024x1024 native resolution
- **prompt_format**: Natural language, tags, or XML structured
- **torch_dtype**: bfloat16 recommended (float16 fallback)
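XML structured prompting, mentioned above, is easiest to get right by composing the document programmatically instead of hand-writing tags. This sketch shows the idea; the tag names (`prompt`, `scene`, `character`) are illustrative and NewBie's actual schema may differ.

```python
# Compose an XML structured prompt for a multi-character scene, so each
# attribute is unambiguously bound to one character. Tag names are assumed.
import xml.etree.ElementTree as ET

def xml_prompt(scene, characters):
    root = ET.Element("prompt")
    ET.SubElement(root, "scene").text = scene
    for name, attrs in characters:
        c = ET.SubElement(root, "character", name=name)
        for key, value in attrs.items():
            ET.SubElement(c, key).text = value
    return ET.tostring(root, encoding="unicode")

p = xml_prompt("rooftop at dusk", [
    ("A", {"hair": "silver twin-tails", "outfit": "school uniform"}),
    ("B", {"hair": "short black", "outfit": "red jacket"}),
])
```

Structured prompts like this are what enables the accurate attribute assignment in multi-character scenes noted under Key Features.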

View File

@@ -0,0 +1,53 @@
# OmniGen2
OmniGen2 is a multimodal generation model with dual decoding pathways for text and image, built on the Qwen-VL-2.5 foundation by VectorSpaceLab.
## Model Variants
### OmniGen2
- 3B vision-language encoder (Qwen-VL-2.5) + 4B image decoder
- Dual decoding with unshared parameters for text and image
- Decoupled image tokenizer
- Apache 2.0 license
### OmniGen v1
- Earlier single-pathway architecture
- Fewer capabilities than OmniGen2
- Superseded by OmniGen2
## Key Features
- Text-to-image generation with high fidelity and aesthetics
- Instruction-guided image editing (state-of-the-art among open-source models)
- In-context generation combining multiple reference inputs (humans, objects, scenes)
- Visual understanding inherited from Qwen-VL-2.5
- CPU offload support reduces VRAM usage by nearly 50%
- Sequential CPU offload available for under 3GB VRAM (slower inference)
- Supports negative prompts and configurable guidance scales
## Hardware Requirements
- Minimum: NVIDIA RTX 3090 or equivalent (~17GB VRAM)
- With CPU offload: ~9GB VRAM
- With sequential CPU offload: under 3GB VRAM (significantly slower)
- Flash Attention optional but recommended for best performance
- CUDA 12.4+ recommended
- Default output resolution: 1024x1024
## Common Use Cases
- Text-to-image generation
- Instruction-based photo editing
- Subject-driven image generation from reference photos
- Multi-image composition and in-context editing
## Key Parameters
- **text_guidance_scale**: Controls adherence to text prompt (CFG)
- **image_guidance_scale**: Controls similarity to reference image (1.2-2.0 for editing, 2.5-3.0 for in-context)
- **num_inference_step**: Diffusion steps (default 50)
- **max_pixels**: Maximum total pixel count for input images (default 1024x1024)
- **negative_prompt**: Text describing undesired qualities (e.g., "blurry, low quality, watermark")
- **scheduler**: ODE solver choice (euler or dpmsolver++)
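Since the recommended `image_guidance_scale` range depends on the task, a small helper can encode the ranges above as defaults. This is a sketch of a reasonable starting-point policy, not part of the OmniGen2 API.

```python
# Pick a starting image_guidance_scale from the task-specific ranges above:
# 1.2-2.0 for instruction editing, 2.5-3.0 for in-context generation.
RANGES = {"edit": (1.2, 2.0), "in_context": (2.5, 3.0)}

def image_guidance_for(task: str) -> float:
    lo, hi = RANGES[task]
    return (lo + hi) / 2  # midpoint as a default; tune per image

edit_scale = image_guidance_for("edit")          # 1.6
context_scale = image_guidance_for("in_context")  # 2.75
```

Lower values let the model deviate more from the reference; raise the value if edits drift too far from the input image.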

View File

@@ -0,0 +1,43 @@
# Ovis-Image
Ovis-Image is a 7B text-to-image model by AIDC-AI, built on Ovis-U1, optimized for high-quality text rendering in generated images. It achieves state-of-the-art results on the CVTG-2K text rendering benchmark while remaining compact enough for single-GPU deployment.
## Model Variants
### Ovis-Image-7B
- 2B (Ovis2.5-2B) + 7B parameter architecture
- State-of-the-art on CVTG-2K benchmark for text rendering accuracy
- Competitive with 20B+ models (Qwen-Image) and GPT-4o on text-centric tasks
- Uses FLUX-based autoencoder for latent encoding
- Apache 2.0 license
## Key Features
- Excellent text rendering with correct spelling and consistent typography
- High fidelity on text-heavy, layout-sensitive prompts
- Handles posters, banners, logos, UI mockups, and infographics
- Supports diverse fonts, sizes, and aspect ratios
- Strong performance on both English and Chinese text generation
- Available via Diffusers library with OvisImagePipeline
## Hardware Requirements
- Minimum: 16GB VRAM (bfloat16)
- Recommended: 24GB VRAM for comfortable use
- Fits on a single high-end GPU
- Tested with Python 3.10, PyTorch 2.6.0, Transformers 4.57.1
## Common Use Cases
- Generating posters and banners with accurate text
- Logo and brand asset creation
- UI mockup and infographic generation
- Marketing materials with embedded typography
## Key Parameters
- **num_inference_steps**: 50 recommended
- **guidance_scale**: 5.0
- **resolution**: 1024x1024 native
- **negative_prompt**: Supported for quality control

View File

@@ -0,0 +1,46 @@
# PixVerse
PixVerse is an AI video generation platform founded in 2023 and backed by Alibaba, offering text-to-video and image-to-video capabilities with over 100 million registered users.
## Model Variants
### PixVerse V5.5
- Latest model with improved fidelity across text-to-video, image-to-video, and video modification
### PixVerse R1
- Real-time AI video generation model
- Interactive control where users direct character actions as video unfolds
### PixVerse V4.5 / V5
- Previous generation models with strong cinematic quality and trending effects
## Key Features
- Text-to-video generation from natural language prompts
- Image-to-video animation with realistic physics simulation
- Fusion mode combining up to 3 images into one video
- Key frame control and video extension with AI continuity
- AI Video Modify for text-prompt-based editing
## Hardware Requirements
- Cloud-based platform with no local GPU required
- Web app at app.pixverse.ai and mobile apps (iOS/Android)
- API at platform.pixverse.ai for developer integration
## Common Use Cases
- Social media content creation (AI Kiss, Hug, Dance effects)
- Marketing and promotional video production
- Old photo revival and animation
- Cinematic narrative and stylistic art generation
## Key Parameters
- **prompt**: Text description of the desired video content
- **duration**: Video length (typically 5s clips)
- **resolution**: Output quality (360p to 720p+)
- **aspect_ratio**: 16:9, 9:16, 1:1, and other ratios

View File

@@ -0,0 +1,77 @@
# Qwen
Qwen is Alibaba's family of vision-language and image generation models, spanning visual understanding, image editing, and image generation.
## Model Variants
### Qwen2.5-VL
- Multimodal vision-language model from the Qwen team
- Available in 3B, 7B, and 72B parameter sizes
- Image understanding, video comprehension (1+ hour videos), and visual localization
- Visual agent capabilities: computer use, phone use, dynamic tool calling
- Structured output generation for invoices, forms, and tables
- Dynamic resolution and frame rate training for video understanding
- Optimized ViT encoder with window attention, SwiGLU, and RMSNorm
### Qwen-Image-Edit
- Specialized image editing model with instruction-following
- Supports inpainting, outpainting, style transfer, and content-aware edits
- 11 workflow templates available
### Qwen-Image
- Text-to-image generation model from the Qwen family
- 7 workflow templates available
### Qwen-Image-Layered
- Layered image generation for composable outputs
- Generates images with separate foreground/background layers
- 2 workflow templates available
### Qwen-Image 2512
- Specific variant optimized for particular generation tasks
- 1 workflow template available
## Key Features
- Strong visual understanding with state-of-the-art benchmark results
- Native multi-language support including Chinese and English
- Visual agent capabilities for computer and phone interaction
- Video event capture with temporal segment pinpointing
- Bounding box and point-based visual localization
- Structured JSON output for document and table extraction
- Instruction-based image editing with precise control
## Hardware Requirements
- 3B model: 6-8GB VRAM
- 7B model: 16GB VRAM, flash_attention_2 recommended for multi-image/video
- 72B model: Multi-GPU setup required (80GB+ per GPU)
- Context length: 32,768 tokens default, extendable to 64K+ with YaRN
- Dynamic pixel budget: 256-1280 tokens per image (configurable min/max pixels)
## Common Use Cases
- Image editing based on text instructions
- Visual question answering and image description
- Long video comprehension and event extraction
- Document OCR and structured data extraction
- Visual agent tasks (screen interaction, UI navigation)
- Layered image generation for design workflows
- Text-to-image generation with strong prompt following
## Key Parameters
- **max_new_tokens**: Controls output length for VL model responses
- **min_pixels / max_pixels**: Control image token budget (e.g. 256x28x28 to 1280x28x28)
- **temperature**: Generation diversity for text outputs
- **resized_height / resized_width**: Direct image dimension control (rounded to nearest 28)
- **fps**: Frame rate for video input processing in Qwen2.5-VL
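The pixel-budget arithmetic behind `min_pixels` / `max_pixels` can be sketched directly: each 28x28 patch costs one visual token, sides are snapped to multiples of 28, and images outside the budget are rescaled into it. This is an illustrative approximation of the resizing logic, not the exact library implementation.

```python
# Estimate the visual token count for an image under Qwen2.5-VL's
# pixel budget: one token per 28x28 patch, sides snapped to multiples of 28.
import math

def image_tokens(height, width,
                 min_pixels=256 * 28 * 28, max_pixels=1280 * 28 * 28):
    h = max(28, round(height / 28) * 28)
    w = max(28, round(width / 28) * 28)
    if h * w > max_pixels:          # too large: shrink into the budget
        s = math.sqrt(max_pixels / (h * w))
        h = math.floor(h * s / 28) * 28
        w = math.floor(w * s / 28) * 28
    elif h * w < min_pixels:        # too small: grow to the floor
        s = math.sqrt(min_pixels / (h * w))
        h = math.ceil(h * s / 28) * 28
        w = math.ceil(w * s / 28) * 28
    return (h // 28) * (w // 28)
```

With the defaults, a tiny 28x28 input is upscaled to the 256-token floor, while a 1024x1024 image is scaled down to stay within the 1280-token ceiling.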
## Blog References
- [Qwen Image Edit 2511 & Qwen Image Layered](../blog/qwen-image-edit-2511.md) — Better character consistency, RGBA layer decomposition, built-in LoRA support

View File

@@ -0,0 +1 @@
Qwen is Alibaba's family of vision-language and image generation models. Qwen2.5-VL is a multimodal vision-language model available in 3B (6-8GB VRAM), 7B (16GB), and 72B (multi-GPU 80GB+) sizes, capable of image understanding, hour-long video comprehension, visual localization, visual agent tasks (computer/phone use), and structured JSON output for document extraction. Qwen-Image-Edit provides instruction-based image editing with inpainting, outpainting, and style transfer. Qwen-Image handles text-to-image generation, while Qwen-Image-Layered produces composable foreground/background layer outputs. The family features native Chinese/English support, strong prompt following, and state-of-the-art visual understanding benchmarks. Key parameters include dynamic pixel budgets (256-1280 tokens per image), configurable frame rates for video input, and temperature for text diversity. Primary uses: image editing, visual QA, video comprehension, document OCR, and layered image generation.

View File

@@ -0,0 +1,61 @@
# Real-ESRGAN
Real-ESRGAN is a practical image and video super-resolution model that extends ESRGAN with improved training on pure synthetic data for real-world restoration.
## Model Variants
### RealESRGAN_x4plus
- General-purpose 4× upscaling model for real-world images
- RRDB (Residual-in-Residual Dense Block) architecture
- Handles noise, blur, JPEG compression artifacts
### RealESRGAN_x4plus_anime_6B
- Optimized for anime and illustration images
- Smaller 6-block model for faster inference
- Better edge preservation for line art
### RealESRGAN_x2plus
- 2× upscaling variant for moderate enlargement
- Lower risk of hallucinated details
### realesr-animevideov3
- Lightweight model designed for anime video frames
- Temporal consistency for video processing
## Key Features
- Trained entirely on synthetic degradation data (no paired real-world data needed)
- Second-order degradation modeling simulates real-world compression chains
- GFPGAN integration for face enhancement during upscaling
- Tiling support for processing large images with limited VRAM
- FP16 (half precision) inference for faster processing
- NCNN Vulkan portable executables for cross-platform GPU support (Intel/AMD/NVIDIA)
- Supports 2×, 3×, and 4× upscaling with arbitrary output scale via LANCZOS4 resize
## Hardware Requirements
- Minimum: 2GB VRAM with tiling enabled
- Recommended: 4GB+ VRAM for comfortable use
- NCNN Vulkan build runs on any GPU with Vulkan support
- CPU inference supported but significantly slower
## Common Use Cases
- Upscaling old or low-resolution photographs
- Enhancing compressed web images
- Anime and manga image upscaling
- Video frame super-resolution
- Restoring degraded historical images
- Pre-processing for print from low-resolution sources
## Key Parameters
- **outscale**: Final upsampling scale factor (default: 4)
- **tile**: Tile size for memory management (0 = no tiling)
- **face_enhance**: Enable GFPGAN face enhancement (default: false)
- **model_name**: Select model variant (RealESRGAN_x4plus, anime_6B, etc.)
- **denoise_strength**: Balance noise removal vs detail preservation (realesr-general-x4v3)
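The `tile` and `outscale` parameters interact in a simple way: tiling splits the input into crops that are upscaled independently and stitched back together (the real implementation adds overlap padding to hide seams). A rough sketch of the arithmetic:

```python
# How the tile parameter trades VRAM for extra passes: the input is split
# into a grid of tile x tile crops, each upscaled independently.
import math

def tile_grid(width, height, tile):
    if tile == 0:                 # tile=0 disables tiling: one full-image pass
        return (1, 1)
    return (math.ceil(width / tile), math.ceil(height / tile))

def output_size(width, height, outscale=4):
    return (width * outscale, height * outscale)

cols, rows = tile_grid(1920, 1080, 512)   # a 1080p frame at tile=512
full = output_size(1920, 1080)            # 4x upscale target size
```

Smaller tiles fit in less VRAM but mean more passes; `tile=0` is fastest when the whole image fits on the GPU.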

View File

@@ -0,0 +1,50 @@
# Recraft
Recraft is an AI image generation platform known for its V3 model and unique ability to produce both raster and vector (SVG) images from text prompts.
## Model Variants
### Recraft V3
- Top-ranked model on Artificial Analysis Text-to-Image Leaderboard
- Supports raster image generation at $0.04 per image
- Supports vector SVG generation at $0.08 per image
- Accurate text rendering at any size in generated images
### Recraft 20B
- More cost-effective variant at $0.022 per raster image
- Vector generation at $0.044 per image
- Suitable for high-volume production workflows
## Key Features
- Native vector SVG image generation from text prompts
- Accurate text rendering (headlines, labels, signs) in images
- Custom brand style creation from reference images
- Generation in exact brand colors for brand consistency
- AI-powered image vectorization (PNG/JPG to SVG)
- Background removal, creative upscaling, and crisp upscaling
- Multiple style presets: photorealism, clay, retro-pop, hand-drawn, 80s
## Hardware Requirements
- API-only access via Recraft API
- No local hardware requirements
- Available through Recraft Studio web interface
## Common Use Cases
- Logo and icon design (SVG output)
- Brand-consistent marketing asset generation
- Poster and advertisement creation with text
- Scalable vector illustrations for web and print
- Product mockup generation
- SEO blog imagery at scale
## Key Parameters
- **prompt**: Text description of the desired image
- **style**: Visual style (realistic_image, digital_illustration, vector_illustration, icon)
- **colors**: Brand color palette for consistent output
- **format**: Output format (raster PNG/JPG or vector SVG)

View File

@@ -0,0 +1,57 @@
# Rodin
Rodin is a 3D generation API by Hyper3D (DeemosTech) that creates production-ready 3D models from text or images with PBR materials.
## Model Variants
### Rodin Gen-2
- Most advanced model with 10 billion parameters
- Built on the BANG architecture
- 4x improved geometric mesh quality over Gen-1
- Generation time approximately 90 seconds
### Rodin Gen-1.5 Regular
- Detailed 3D assets with customizable quality
- Adjustable polygon counts and 2K textures
- Generation time approximately 70 seconds
### Rodin Sketch
- Fast prototyping with basic geometry and 1K textures
- GLB format only, generation in approximately 20 seconds
## Key Features
- Text-to-3D and image-to-3D generation
- Multi-view image input (up to 5 images) with fuse and concat modes
- PBR and Shaded material options
- Quad and triangle mesh modes
- HighPack add-on for 4K textures and high-poly models
- Bounding box ControlNet for dimension constraints
- T/A pose control for humanoid models
## Hardware Requirements
- Cloud API-based (no local GPU required)
- All generation runs on Hyper3D servers
- API key required via hyper3d.ai dashboard
## Common Use Cases
- Game asset production
- VR/AR content creation
- Product visualization
- Character modeling with pose control
- Rapid 3D prototyping
## Key Parameters
- **prompt**: Text description for text-to-3D mode
- **images**: Up to 5 reference images for image-to-3D
- **quality**: Detail level (high, medium, low, extra-low)
- **mesh_mode**: Face type (Quad or Raw triangles)
- **material**: Material type (PBR, Shaded, or All)
- **geometry_file_format**: Output format (glb, fbx, obj, stl, usdz)
- **seed**: Randomization seed (0-65535)
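Because most of these parameters are enums with fixed ranges, a request payload can be validated before it is submitted. This is a hypothetical payload builder whose field names mirror the list above; it is not a verified Rodin API schema.

```python
# Hypothetical Rodin request builder, validating the enums and ranges
# documented above before the payload is sent to the API.
QUALITY = {"high", "medium", "low", "extra-low"}
MESH_MODES = {"Quad", "Raw"}
FORMATS = {"glb", "fbx", "obj", "stl", "usdz"}

def rodin_payload(prompt=None, images=(), quality="medium",
                  mesh_mode="Quad", material="PBR",
                  geometry_file_format="glb", seed=0):
    if not prompt and not images:
        raise ValueError("need a text prompt or reference images")
    if len(images) > 5:
        raise ValueError("at most 5 reference images")
    if quality not in QUALITY or mesh_mode not in MESH_MODES:
        raise ValueError("invalid quality or mesh_mode")
    if geometry_file_format not in FORMATS:
        raise ValueError("unsupported output format")
    if not 0 <= seed <= 65535:
        raise ValueError("seed must be in 0-65535")
    return {"prompt": prompt, "images": list(images), "quality": quality,
            "mesh_mode": mesh_mode, "material": material,
            "geometry_file_format": geometry_file_format, "seed": seed}

payload = rodin_payload(prompt="a mid-century lounge chair", quality="high")
```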

View File

@@ -0,0 +1,50 @@
# Runway
Runway is a generative AI company producing state-of-the-art video generation models, accessible via API and web interface.
## Model Variants
### Gen-3 Alpha
- Text-to-video and image-to-video at 1280x768, 24fps
- 5 or 10 second output, extendable up to 40 seconds
- Photorealistic human character generation
### Gen-3 Alpha Turbo
- Faster, lower-cost variant (5 credits/sec vs 10)
- Requires input image; supports first, middle, and last keyframes
- Video extension up to 34 seconds total
### Gen-4 Turbo
- Latest generation with improved motion and prompt adherence
- Image reference support and text-to-image (gen4_image)
## Key Features
- Advanced camera controls (Motion Brush, Director Mode)
- C2PA provenance metadata for content authenticity
- Expressive human characters with gestures and emotions
- Wide range of cinematic styles and terminology support
## Hardware Requirements
- API-only access via Runway developer portal
- No local hardware requirements
- Enterprise tier available for higher rate limits
## Common Use Cases
- Film pre-visualization and storyboarding
- Commercial advertisement production
- Social media video content
- Visual effects and motion graphics
- Music video and artistic video creation
## Key Parameters
- **prompt**: Text description guiding video generation
- **duration**: Output length (5 or 10 seconds)
- **ratio**: Aspect ratio (1280:768 or 768:1280)
- **keyframes**: Start, middle, and/or end frame images

View File

@@ -0,0 +1,63 @@
# Stable Diffusion 3.5
Stable Diffusion 3.5 is Stability AI's text-to-image model family based on the Multimodal Diffusion Transformer (MMDiT) architecture with rectified flow matching.
## Model Variants
### Stable Diffusion 3.5 Large
- 8.1 billion parameter MMDiT model
- Highest quality and prompt adherence in the SD family
- 1 megapixel native resolution (1024×1024)
- 28-50 inference steps recommended
### Stable Diffusion 3.5 Large Turbo
- Distilled version of SD 3.5 Large
- 4-step inference for fast generation
- Guidance scale of 0 (classifier-free guidance disabled)
- Comparable quality to the full model at a fraction of the time
### Stable Diffusion 3.5 Medium
- 2.5 billion parameter MMDiT-X architecture
- Designed for consumer hardware (9.9GB VRAM for transformer)
- Dual attention blocks in first 12 transformer layers
- Multi-resolution generation from 0.25 to 2 megapixels
- Skip Layer Guidance recommended for better coherency
## Key Features
- Three text encoders: CLIP ViT-L, OpenCLIP ViT-bigG (77 tokens each), T5-XXL (256 tokens)
- QK-normalization for stable training and easier fine-tuning
- Rectified flow matching replaces traditional DDPM/DDIM sampling
- Strong text rendering and typography in generated images
- Diverse output styles (photography, 3D, painting, line art)
- Highly customizable base for fine-tuning and LoRA training
- T5-XXL encoder optional (can be removed to save memory with minimal quality loss)
- Supports negative prompts for excluding unwanted elements
## Hardware Requirements
- Large: 24GB+ VRAM recommended (fp16), quantizable to fit smaller GPUs
- Large Turbo: 16GB+ VRAM recommended
- Medium: 10GB VRAM minimum (excluding text encoders)
- NF4 quantization available via bitsandbytes for low-VRAM GPUs
- CPU offloading supported via diffusers pipeline
## Common Use Cases
- Photorealistic image generation
- Artistic illustration and concept art
- Typography and text-heavy designs
- Product visualization
- Fine-tuning and LoRA development
- ControlNet-guided generation
## Key Parameters
- **steps**: 28-50 for Large, 4 for Large Turbo, 20-40 for Medium
- **guidance_scale**: 4.5-7.5 for Large/Medium, 0 for Large Turbo
- **max_sequence_length**: T5 token limit (77 or 256, higher = better prompt understanding)
- **resolution**: 1024×1024 native, flexible aspect ratios around 1MP
- **negative_prompt**: Text describing elements to exclude (not supported by Turbo)
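The recommended settings above diverge sharply between variants (Turbo in particular disables CFG entirely). A small lookup helper, hypothetical and not part of any official SDK, makes the table executable:

```python
# Recommended sampler settings per SD 3.5 variant, taken from the
# parameter list above (midpoints chosen where a range is given).
RECOMMENDED = {
    # variant: (steps, guidance_scale, supports_negative_prompt)
    "large":       (40, 5.5, True),   # 28-50 steps, guidance 4.5-7.5
    "large_turbo": (4,  0.0, False),  # distilled: 4 steps, CFG disabled
    "medium":      (30, 5.5, True),   # 20-40 steps
}

def sampler_settings(variant: str) -> dict:
    """Return generation settings for a given SD 3.5 variant."""
    try:
        steps, guidance, neg_ok = RECOMMENDED[variant]
    except KeyError:
        raise ValueError(f"unknown variant: {variant}") from None
    return {
        "num_inference_steps": steps,
        "guidance_scale": guidance,
        "supports_negative_prompt": neg_ok,
    }
```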

@@ -0,0 +1,75 @@
# Stable Diffusion
Stable Diffusion is Stability AI's family of open-source image and video generation models, spanning multiple architectures from U-Net to diffusion transformers.
## Model Variants
### SDXL (Stable Diffusion XL)
- Stability AI's flagship text-to-image model (3.5B parameter base model, 6.6B with the refiner ensemble)
- Native 1024x1024 resolution with flexible aspect ratios around 1MP
- Two text encoders (CLIP ViT-L + OpenCLIP ViT-bigG)
- Optional refiner model for second-stage detail enhancement
- Turbo and Lightning distilled variants for 1-4 step generation
- Largest ecosystem of LoRAs, fine-tunes, and community models
### SD3.5 (Stable Diffusion 3.5)
- Diffusion transformer (DiT) architecture, successor to SDXL
- Three text encoders (CLIP ViT-L, OpenCLIP ViT-bigG, T5-XXL) for stronger prompt following
- Available in Large (8B) and Medium (2B) parameter sizes
- Improved text rendering and compositional accuracy over SDXL
- 4 workflow templates available
### SD1.5 (Stable Diffusion 1.5)
- The classic 512x512 latent diffusion model
- Single CLIP ViT-L text encoder, 860M parameter U-Net
- Still widely used for its massive LoRA and checkpoint ecosystem
- Lower VRAM requirements make it accessible on consumer hardware
- 2 workflow templates available
### SVD (Stable Video Diffusion)
- Image-to-video generation model based on Stable Diffusion
- Generates short video clips (14 or 25 frames) from a single image
- Related model for motion generation from static inputs
### Stability API Products
- Reimagine: Stability's API-based image variation and transformation service
## Key Features
- Excellent composition, layout, and photorealism (SDXL/SD3.5)
- Large open-source ecosystem with thousands of community fine-tunes
- Flexible aspect ratios and multi-resolution support
- Dual/triple CLIP text encoding for nuanced prompt interpretation
- Strong text rendering in SD3.5 via T5-XXL encoder
## Hardware Requirements
- SD1.5: 4-6GB VRAM (fp16), runs on most consumer GPUs
- SDXL Base: 8GB VRAM minimum (fp16), 12GB recommended
- SDXL Base + Refiner: 16GB+ VRAM
- SD3.5 Medium: 8-12GB VRAM
- SD3.5 Large: 16-24GB VRAM (fp16), quantized versions for 12GB cards
## Common Use Cases
- Photorealistic image generation
- Artistic illustrations and concept art
- Product photography and design
- Character and portrait generation
- LoRA-based custom style and subject training
- Image-to-video with SVD
## Key Parameters
- **steps**: 20-40 for SDXL base, 15-25 for refiner, 28+ for SD3.5
- **cfg_scale**: 5-10 (7 default for SDXL), 3.5-7 for SD3.5
- **sampler**: DPM++ 2M Karras and Euler are popular for SDXL; Euler for SD3.5
- **resolution**: 1024x1024 native for SDXL/SD3.5, 512x512 for SD1.5
- **clip_skip**: Often set to 1-2; important for SD1.5 LoRA compatibility
- **denoise_strength**: 0.7-0.8 when using the SDXL refiner (img2img)
- **negative_prompt**: Supported across SD1.5/SDXL/SD3.5; ignored by distilled Turbo variants (guidance scale 0)
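The "flexible aspect ratios around 1MP" convention mentioned above can be sketched numerically: keep width times height near 1024x1024 total area while snapping both sides to a step size (64 is a common latent-grid convention here, not an official spec; actual bucket lists vary by tool):

```python
import math

# Approximate an SDXL/SD3.5-style resolution bucket: hold total area
# near 1024*1024 pixels and snap each side to a multiple of `step`.
# The step value of 64 is a common convention, not an official spec.

def dims_for_ratio(rw: int, rh: int, area: int = 1024 * 1024, step: int = 64):
    ratio = rw / rh
    w = round(math.sqrt(area * ratio) / step) * step
    h = round(math.sqrt(area / ratio) / step) * step
    return w, h
```

For example, a 16:9 request lands on 1344x768 (about 1.03MP), which matches a widely used SDXL bucket.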

@@ -0,0 +1 @@
Stable Diffusion is Stability AI's open-source image and video generation family. SDXL is the flagship text-to-image model (3.5B base model, 6.6B with the refiner ensemble, dual CLIP encoders) generating 1024x1024 images with the largest ecosystem of LoRAs and community fine-tunes; it requires 8-12GB VRAM with Turbo/Lightning variants for 1-4 step generation. SD3.5 is the DiT-based successor with triple text encoders (including T5-XXL) in Large (8B, 16-24GB) and Medium (2B, 8-12GB) sizes, offering improved text rendering and compositional accuracy. SD1.5 remains popular for its massive ecosystem at just 4-6GB VRAM (512x512). SVD handles image-to-video generation (14 or 25 frames). Key parameters: 20-40 steps for SDXL, cfg_scale 5-10 (7 default), DPM++ 2M Karras sampler. Primary uses: photorealistic generation, artistic illustration, product photography, character generation, and LoRA-based custom training.

@@ -0,0 +1,64 @@
# Seedance
Seedance is ByteDance's video generation model family, designed for cinematic, high-fidelity video creation from text and images. The 1.0 series established a standard for fluid motion and multi-shot consistency, while the 1.5 series adds native joint audio-visual generation.
## Model Variants
### Seedance 1.5 Pro
- Native audio-visual generation producing video and audio in a single pass
- Multilingual lip-sync supporting English, Mandarin, Japanese, Korean, and Spanish
- 1080p output with 5-12 second duration
- Advanced directorial camera controls (dolly zoom, tracking shots, whip pans)
- Captures micro-expressions, non-verbal cues, and emotional transitions
### Seedance 1.0 Pro
- Production-quality 1080p video generation
- Text-to-video and image-to-video with first and last frame control
- Native multi-shot storytelling with subject and style consistency across cuts
- Cinematic camera grammar interpretation (35mm film, noir lighting, drone shots)
- 2-12 second video duration at 24-30fps
### Seedance 1.0 Pro Fast
- Faster, more cost-effective version of 1.0 Pro
- Same capabilities with reduced generation time
### Seedance 1.0 Lite
- Optimized for speed and iteration at 720p or 1080p
- Lower cost per generation for rapid prototyping
## Key Features
- Smooth, stable motion with wide dynamic range for large-scale movements
- Native multi-shot storytelling maintaining consistency across transitions
- Diverse stylistic expression (photorealism, cyberpunk, illustration, pixel art)
- Precise prompt following for complex actions, multi-agent interactions, and camera work
- Joint audio-visual synthesis with environmental sounds and dialogue (1.5)
- Supports multiple aspect ratios (16:9, 9:16, 1:1, 4:3, 21:9, and more)
## Hardware Requirements
- Cloud API only; no local weights publicly available
- Accessed via seed.bytedance.com, Scenario, fal.ai, and other API providers
- 1080p 5-second video costs approximately $0.62 via fal.ai (Pro)
- Lite version available at lower cost ($0.18 per 720p 5-second video)
## Common Use Cases
- Cinematic shorts and scene previsualization
- Music video concept development
- Product demonstration and marketing videos
- Character-focused animation sequences
- Social media content with audio (1.5)
- Moodboard and style exploration for creative teams
## Key Parameters
- **prompt**: Text description of desired scene, action, and camera work
- **image_url**: Source image for image-to-video generation (first frame)
- **duration**: Video length (2-12 seconds for 1.0, 5-12 seconds for 1.5)
- **resolution**: 480p, 720p, or 1080p output
- **aspect_ratio**: 16:9, 9:16, 1:1, 4:3, 21:9, 9:21
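Since Seedance is API-only, a request is ultimately a JSON body over HTTP. The sketch below is illustrative only (the payload shape and field names are assumptions, not the official schema); the validated ranges are the ones listed above:

```python
# Hypothetical Seedance request builder; field names are illustrative.
# Duration ranges, resolutions, and aspect ratios follow the list above.

DURATION_RANGE = {"1.0": (2, 12), "1.5": (5, 12)}   # seconds per series
RESOLUTIONS = {"480p", "720p", "1080p"}
ASPECT_RATIOS = {"16:9", "9:16", "1:1", "4:3", "21:9", "9:21"}

def build_request(prompt, version="1.0", duration=5,
                  resolution="1080p", aspect_ratio="16:9", image_url=None):
    lo, hi = DURATION_RANGE[version]
    if not lo <= duration <= hi:
        raise ValueError(f"duration for {version} must be {lo}-{hi}s")
    if resolution not in RESOLUTIONS:
        raise ValueError(f"unsupported resolution: {resolution}")
    if aspect_ratio not in ASPECT_RATIOS:
        raise ValueError(f"unsupported aspect ratio: {aspect_ratio}")
    body = {"prompt": prompt, "duration": duration,
            "resolution": resolution, "aspect_ratio": aspect_ratio}
    if image_url:  # image-to-video: the image becomes the first frame
        body["image_url"] = image_url
    return body
```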

@@ -0,0 +1,50 @@
# Seedream
Seedream is ByteDance's text-to-image generation model, capable of producing high-quality images with strong text rendering, bilingual support (English and Chinese), and native high-resolution output.
## Model Variants
### Seedream 3.0
- Native 2K resolution output without post-processing
- Bilingual image generation (English and Chinese)
- 3-second end-to-end generation for 1K images
- Improved text rendering for small fonts and long text layouts
### Seedream 4.0
- Unified architecture for text-to-image and image editing
- Native output up to 4K resolution
- Multi-image reference input (up to 6 source images)
- 1.8-second inference for 2K images
- Batch input and output for multiple generations
- Natural language image editing capabilities
## Key Features
- Accurate text rendering in both English and Chinese
- Knowledge-driven generation for educational illustrations and charts
- Strong character consistency across multiple angles
- Prompt-based image editing without separate tools
- Versatile style support from photorealism to anime
- Leading scores on Artificial Analysis Image Arena
## Hardware Requirements
- API-only access via ByteDance Volcano Engine
- No local hardware requirements for end users
- Third-party API providers available (e.g., EvoLink)
## Common Use Cases
- Poster and advertisement design with embedded text
- E-commerce product photography
- Character design with multi-angle consistency
- Educational illustration and infographic generation
- Brand-consistent marketing materials
## Key Parameters
- **prompt**: Text description of the desired image
- **resolution**: Output resolution (up to 4K supported)
- **aspect_ratio**: Supports 16:9, 4:3, 1:1, and custom ratios

@@ -0,0 +1,47 @@
# SeedVR2
SeedVR2 is a one-step diffusion-based video restoration model developed by ByteDance Seed and NTU S-Lab, published at ICLR 2026.
## Model Variants
### SeedVR2-3B
- 3B parameter DiT with one-step inference for video and image upscaling
- Available in FP16, FP8, and GGUF quantized formats
### SeedVR2-7B
- 7B parameter model with Sharp variant for maximum detail
- Multi-GPU inference; supports 1080p and 2K on 4x H100-80GB
### SeedVR (Original)
- Multi-step diffusion model (CVPR 2025 Highlight)
- Arbitrary-resolution restoration without pretrained diffusion prior
## Key Features
- One-step inference achieving 10x speedup over multi-step methods
- Adaptive window attention with dynamic sizing for high-resolution inputs
- Adversarial post-training against real data for faithful detail recovery
- ComfyUI integration via official SeedVR2 Video Upscaler nodes
- Apache 2.0 open-source license
## Hardware Requirements
- Minimum: 8-12GB VRAM with GGUF quantization and tiled VAE
- Recommended: 24GB+ VRAM (RTX 4090) for 3B model at 1080p
- High-end: 4x H100-80GB for 7B model at 2K resolution
## Common Use Cases
- Upscaling AI-generated video to 1080p or 4K
- Restoring degraded or compressed video footage
- Image super-resolution and detail recovery
## Key Parameters
- **resolution**: Target shortest-edge resolution (720, 1080, 2160)
- **batch_size**: Frames per batch; must follow the 4n+1 rule (5, 9, 13, 17, 21)
- **seed**: Random seed for reproducible generation
- **color_fix_type**: Color correction method (wavelet, adain, hsv, or none)
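The 4n+1 batch-size rule is easy to get wrong when scripting; a small helper (this sketch's addition, not part of the official nodes) snaps any target to the nearest valid value:

```python
# Snap a desired frames-per-batch count to the 4n+1 rule (5, 9, 13, ...)
# required by SeedVR2, as noted in the parameter list above.

def valid_batch_size(target: int) -> int:
    if target < 5:
        return 5                      # smallest valid batch
    n = round((target - 1) / 4)
    return 4 * n + 1
```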

@@ -0,0 +1,53 @@
# Stable Audio Open
Stable Audio Open 1.0 is Stability AI's open-source text-to-audio model for generating sound effects, production elements, and short musical clips.
## Model Variants
### Stable Audio Open 1.0
- 1.2B parameter latent diffusion model
- Transformer-based diffusion (DiT) architecture
- T5-base text encoder for conditioning
- Variational autoencoder for audio compression
- Stability AI Community License (non-commercial)
### Stable Audio (Commercial)
- Full-length music generation up to 3 minutes with audio-to-audio and inpainting
- Available via Stability AI platform API, commercial license
## Key Features
- Generates up to 47 seconds of stereo audio at 44.1kHz
- Text-prompted sound effects, drum beats, ambient sounds, and foley
- Variable-length output with timing control
- Fine-tunable on custom audio datasets
- Trained exclusively on Creative Commons licensed audio (CC0, CC BY, CC Sampling+)
- Strong performance for sound effects and field recordings
- Compatible with both stable-audio-tools and diffusers libraries
## Hardware Requirements
- Minimum: 8GB VRAM (fp16)
- Recommended: 12GB+ VRAM for comfortable inference
- Half-precision (fp16) supported for reduced memory
- Chunked decoding available for memory-constrained setups
- Inference speed: 8-20 diffusion steps per second depending on GPU
## Common Use Cases
- Sound effect and foley generation
- Drum beats and instrument riff creation
- Ambient soundscapes and background audio
- Music production elements and samples
- Audio prototyping for film and game sound design
## Key Parameters
- **steps**: Number of inference steps (100-200 recommended)
- **cfg_scale**: Classifier-free guidance scale (typically 7)
- **seconds_total**: Target audio duration (up to 47 seconds)
- **seconds_start**: Start time offset for timing control
- **negative_prompt**: Text describing undesired audio qualities
- **sampler_type**: Diffusion sampler (dpmpp-3m-sde recommended)
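The two timing parameters can be expressed as a conditioning dict in the style stable-audio-tools uses (the field names mirror the list above; the clamp to the 47-second ceiling is this sketch's addition):

```python
# Build a timing-conditioning dict for Stable Audio Open, clamping the
# requested duration to the model's 47-second maximum noted above.

MAX_SECONDS = 47

def timing_conditioning(seconds_total: float, seconds_start: float = 0.0):
    return {
        "seconds_start": seconds_start,
        "seconds_total": min(seconds_total, MAX_SECONDS),
    }
```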

@@ -0,0 +1,55 @@
# Stable Video Diffusion
Stable Video Diffusion (SVD) is Stability AI's image-to-video diffusion model that generates short video clips from a single conditioning image. In user studies, SVD was preferred over GEN-2 and PikaLabs for video quality.
## Model Variants
### SVD-XT (25 frames)
- Generates 25 frames at 576x1024 resolution
- Finetuned from the 14-frame SVD base model
- Includes temporally consistent f8-decoder
- Standard frame-wise decoder also available
### SVD (14 frames)
- Original release generating 14 frames
- Foundation for community fine-tunes and extensions
- Same 576x1024 native resolution
## Key Features
- Image-to-video generation from a single still image
- Temporally consistent video output with finetuned decoder
- Preferred over GEN-2 and PikaLabs in human evaluation studies
- Invisible watermarking enabled by default
- Latent diffusion architecture for efficient generation
## Hardware Requirements
- Minimum: 16GB VRAM
- Recommended: A100 80GB for full quality (tested configuration)
- SVD generation ~100s, SVD-XT ~180s on A100 80GB
- Optimizations available for lower VRAM cards with quality tradeoffs
## Common Use Cases
- Animating still images into short video clips
- Product visualization and motion graphics
- Creative video experiments and art
- Research on generative video models
## Key Parameters
- **num_frames**: 14 (SVD) or 25 (SVD-XT)
- **resolution**: 576x1024 native
- **conditioning_frame**: Input image at same resolution
- **duration**: Up to ~4 seconds (25 frames)
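The "~4 seconds" figure above is simply frames divided by playback rate; SVD clips are commonly exported at around 6 fps (an assumption of this sketch; exporters vary):

```python
# Clip length in seconds = generated frames / playback fps.
# The 6 fps default is an assumption; adjust to your export settings.

def clip_seconds(num_frames: int, fps: float = 6.0) -> float:
    return num_frames / fps
```

At 6 fps, SVD-XT's 25 frames run just over four seconds, and the 14-frame model just over two.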
## Limitations
- Short videos only (4 seconds maximum)
- No text-based control (image conditioning only)
- Cannot render legible text in output
- Faces and people may not generate properly
- May produce videos without motion or with very slow camera pans

@@ -0,0 +1,45 @@
# Topaz
Topaz Labs provides AI-powered image and video enhancement software for upscaling, denoising, sharpening, and restoration.
## Model Variants
### Topaz Photo AI
- All-in-one image enhancement with 11 AI tools including Sharpen, Denoise, Recover Faces, and Upscale
- RAW image support and plugin integration with Photoshop and Lightroom
### Topaz Video AI
- 19 AI models: Proteus, Artemis, Gaia, Iris, Nyx, Starlight, and more
- Upscale video from SD to 4K/8K/16K with frame interpolation and stabilization
### Bloom
- Creative upscaler that removes the artificial look from AI-generated images
- Realism mode for natural skin, hair, and eyes on AI faces
## Key Features
- Multiple specialized AI models optimized for different content types
- Enterprise API with Face Realism, Colorization, and Video Colorization
- Local and cloud rendering with After Effects and DaVinci Resolve plugins
## Hardware Requirements
- Minimum: 8GB VRAM for GPU-accelerated processing
- Recommended: NVIDIA RTX 3080+ with 32GB+ RAM for video
- Available on Mac and Windows as standalone or plugin
## Common Use Cases
- Upscaling old or low-resolution footage to 4K+
- Denoising low-light photography and restoring archival video
- Enhancing AI-generated images for photorealism
## Key Parameters
- **AI Model Selection**: Choose specialized models per content type
- **Scale Factor**: 2x, 4x, or higher depending on tool
- **Denoise Strength**: Adjustable noise reduction level
- **Sharpen Amount**: Controls detail enhancement intensity

@@ -0,0 +1,50 @@
# Tripo
Tripo is an AI-powered 3D generation platform that creates production-ready 3D models from text or images in seconds, developed by VAST AI Research.
## Model Variants
### Tripo v3.0
- Sculpture-level geometry precision with sharp edges
- Best for high-fidelity production assets
### Tripo v2.0
- Industry-leading geometry with PBR material support
- High accuracy for detailed models
### Tripo v1.4
- Fast generation with realistic texture effects
- Best for rapid prototyping
## Key Features
- Text-to-3D and image-to-3D generation
- Multi-image input for high-fidelity reconstruction
- 4K PBR-ready texture generation
- Automatic rigging and animation
- Model segmentation for part-based editing
- Export in STL, OBJ, FBX, GLB, and USDZ formats
## Hardware Requirements
- Cloud API-based (no local GPU required)
- TripoSR open-source variant requires 8GB+ VRAM
## Common Use Cases
- Game asset creation
- 3D printing prototyping
- AR/VR content development
- Product visualization and e-commerce
- Character design and animation
## Key Parameters
- **prompt**: Text description of desired 3D model
- **image**: Reference image (JPG, PNG, WEBP, up to 5MB)
- **texture_resolution**: Up to 4K with PBR maps
- **format**: Output format (GLB, FBX, OBJ, STL, USDZ)
- **style**: Optional stylization (Lego, Voxel, Voronoi)

Some files were not shown because too many files have changed in this diff.