## Summary

Follow-up to the closed earlier attempt in #11646. This PR keeps the same user-facing goal but changes the implementation to reuse the existing missing model pipeline for refresh instead of maintaining a separate candidate-only recheck path.

Adds a missing model refresh action in the Errors tab so users can re-check models after downloading or manually placing files, without reloading the workflow.

## Changes

- **What**:
  - Adds `app.refreshMissingModels()` as a reusable refresh entry point for the current root graph.
  - Splits node definition reloading into `app.reloadNodeDefs()` so missing-model refresh can pull fresh `object_info` without showing the generic combo refresh success flow.
  - Reuses the existing missing model pipeline instead of adding a separate candidate-only checker. The refresh path serializes the current graph, reuses active workflow model metadata when available, falls back to current missing-model metadata, and then reruns the same candidate discovery/enrichment/surfacing flow used during workflow load.
  - Adds missing model refresh state and error handling to `missingModelStore`.
  - Adds a Refresh button next to Download all in the missing model card action bar.
  - Moves Download all from the Errors tab header into the missing model card, so the Download all and Refresh actions render or hide together.
  - Changes Download all visibility from "more than one downloadable model" to "at least one downloadable model."
  - Keeps the action bar hidden when there are no downloadable missing models; Cloud still does not render this action area.
  - Normalizes active workflow `pendingWarnings` updates so resolved missing model warnings do not get revived by stale empty warning objects.
  - Adds test IDs and coverage for the new action bar, refresh state, refresh delegation, pending warning sync, and E2E refresh behavior.
- **Breaking**: None.
- **Dependencies**: None.

## Review Focus

The main design choice is intentionally reusing the missing model pipeline for refresh instead of implementing a smaller candidate-only recheck. The earlier candidate-only approach was cheaper, but it created a separate source of truth for missing-model resolution and made edge cases harder to reason about. In particular, it could diverge from the behavior used when a workflow is loaded, and it did not naturally handle the case where a model becomes missing after the workflow is already open. This version pays the cost of refreshing node definitions and rerunning the missing-model scan for the current graph, but keeps the refresh behavior aligned with workflow load semantics.

Expected behavior by environment:

- OSS browser:
  - The action bar appears when at least one missing model has a downloadable URL and directory.
  - Download all uses the existing browser download path.
  - Refresh reloads `object_info`, refreshes node definitions/combo values, reruns missing-model detection for the current graph, and clears the error if the selected model is now available.
- OSS desktop:
  - The same action bar appears under the same downloadable-model condition.
  - Download all uses the existing Electron DownloadManager path.
  - Refresh uses the same missing-model pipeline as browser, so manually placed files or desktop-downloaded files can be rechecked without reloading the workflow.
- Cloud:
  - The action bar remains hidden because model download/import is not supported in this section for Cloud.
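To make the Review Focus concrete, here is a minimal sketch of the refresh flow. Apart from `refreshMissingModels()` and `reloadNodeDefs()`, every name below (store accessor, state fields, helpers) is an assumption for illustration, not the PR's actual code:

```typescript
// Illustrative sketch only; see the PR diff for the real implementation.
async function refreshMissingModels(app: ComfyApp): Promise<void> {
  const store = useMissingModelStore() // hypothetical store accessor
  store.isRefreshing = true // hypothetical refresh-state flag
  try {
    // Fetch fresh object_info and refresh node defs/combo values,
    // without the generic combo-refresh success toast
    await app.reloadNodeDefs()

    // Serialize the current root graph and rerun the same candidate
    // discovery/enrichment/surfacing pipeline used during workflow load,
    // preferring active workflow model metadata and falling back to the
    // current missing-model metadata
    const graph = app.graph.serialize()
    await runMissingModelPipeline(graph) // hypothetical load-time pipeline entry
  } catch (error) {
    store.refreshError = error // hypothetical error state
  } finally {
    store.isRefreshing = false
  }
}
```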
A few boundaries are intentional:

- This PR does not add automatic filesystem watching. Browser OSS cannot reliably observe local model folder changes, so the user-triggered Refresh button remains the cross-environment mechanism.
- This PR does not redesign the public `refreshComboInNodes` API beyond extracting `reloadNodeDefs()` for reuse. Further cleanup of toast behavior or a more explicit object-info reload API can be follow-up work.
- This PR keeps refresh scoped to missing-model validation; missing media and missing nodes continue to use their existing flows.

Linear: FE-417

## Screenshots (if applicable)

https://github.com/user-attachments/assets/2e02799f-1374-4377-b7b3-172241517772

## Validation

- `pnpm format`
- `pnpm lint` (passes; existing unrelated warning remains in `src/platform/workspace/composables/useWorkspaceBilling.test.ts`)
- `pnpm typecheck`
- `pnpm test:unit`
- `pnpm test:browser:local -- --project=chromium browser_tests/tests/propertiesPanel/errorsTabMissingModels.spec.ts`
- `pnpm build`
- `NX_SKIP_NX_CACHE=true DISTRIBUTION=desktop USE_PROD_CONFIG=true NODE_OPTIONS='--max-old-space-size=8192' pnpm exec nx build`
- Manual desktop verification through `~/Projects/desktop` after copying the desktop build into `assets/ComfyUI/web_custom_versions/desktop_app`:
  - confirmed the FE bundle is built with `DISTRIBUTION = "desktop"`
  - confirmed missing model Download uses the desktop download path instead of browser download
  - confirmed Refresh can clear the missing model error after the model is available
- Push hook: `pnpm knip --cache`
# Playwright Testing for ComfyUI_frontend
This document outlines the setup, usage, and common patterns for Playwright browser tests in the ComfyUI_frontend project.
## Prerequisites

**CRITICAL**: Start the ComfyUI backend with the `--multi-user` flag:

```bash
python main.py --multi-user
```

Without this flag, parallel tests will conflict and fail randomly.
## Setup

### ComfyUI devtools

ComfyUI_devtools is included in this repository under `tools/devtools/`. During CI/CD, these files are automatically copied to the `custom_nodes` directory.

ComfyUI_devtools adds additional API endpoints and nodes to ComfyUI for browser testing.

For local development, copy the devtools files to your ComfyUI installation:

```bash
cp -r tools/devtools/* /path/to/your/ComfyUI/custom_nodes/ComfyUI_devtools/
```
### Node.js & Playwright Prerequisites

Ensure you have the Node.js version specified in `.nvmrc` installed.

Then, set up the Chromium test driver:

```bash
pnpm exec playwright install chromium --with-deps
```
### Environment Configuration

Create `.env` from the template:

```bash
cp .env_example .env
```

Key settings for debugging:

```bash
# Remove Vue dev overlay that blocks UI elements
DISABLE_VUE_PLUGINS=true

# Test against dev server (recommended) or backend directly
PLAYWRIGHT_TEST_URL=http://localhost:5173 # Dev server
# PLAYWRIGHT_TEST_URL=http://localhost:8188 # Direct backend

# Path to ComfyUI for backing up user data/settings before tests
TEST_COMFYUI_DIR=/path/to/your/ComfyUI
```
### Common Setup Issues

#### Release API Mocking

By default, all tests mock the release API (`api.comfy.org/releases`) to prevent release notification popups from interfering with test execution. This is necessary because the release notifications can appear over UI elements and block test interactions.

To test with real release data, you can disable mocking:

```typescript
await comfyPage.setup({ mockReleases: false })
```

For tests that specifically need to test release functionality, see the example in `tests/releaseNotifications.spec.ts`.
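As a rough sketch of the pattern (the test title and body here are placeholders; the real example lives in `tests/releaseNotifications.spec.ts`):

```typescript
import { comfyPageFixture as test } from '@e2e/fixtures/ComfyPage'

test('release notifications with real API data', async ({ comfyPage }) => {
  // Opt out of the default release API mock for this test only
  await comfyPage.setup({ mockReleases: false })

  // ...assert against the real api.comfy.org/releases response here
})
```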
## Running Tests

Always use UI mode for development:

```bash
pnpm test:browser:local --ui
```

UI mode features:

- **Locator picker**: Click the target icon, then click any element to get the exact locator code to use in your test. The code appears in the Locator tab.
- **Step debugging**: Step through your test line by line via the Source tab
- **Time travel**: In the Actions panel, click any step to see the browser state at that moment
- **Console/Network tabs**: View logs and API calls at each step
- **Attachments tab**: View all snapshots with expected and actual images

For CI or headless testing:

```bash
pnpm test:browser:local                 # Run all tests
pnpm test:browser:local widget.spec.ts  # Run specific test file
```
## Test Structure

Browser tests in this project follow a specific organization pattern (see the layout sketch after this list):

- **Fixtures**: Located in `fixtures/` - These provide test setup and utilities
  - `ComfyPage.ts` - The main fixture for interacting with ComfyUI
  - `ComfyMouse.ts` - Utility for mouse interactions with the canvas
  - Component fixtures in `fixtures/components/` - Page object models for UI components
- **Tests**: Located in `tests/` - The actual test specifications
  - Organized by functionality (e.g., `widget.spec.ts`, `interaction.spec.ts`)
  - Snapshot directories (e.g., `widget.spec.ts-snapshots/`) contain reference screenshots
- **Utilities**: Located in `utils/` - Common utility functions
  - `litegraphUtils.ts` - Utilities for working with LiteGraph nodes
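As a rough illustration, the `browser_tests/` layout looks like this (only the files mentioned above are shown; the real tree contains more):

```
browser_tests/
├── fixtures/
│   ├── ComfyPage.ts
│   ├── ComfyMouse.ts
│   ├── components/
│   └── utils/
├── helpers/
├── tests/
│   ├── widget.spec.ts
│   └── widget.spec.ts-snapshots/
└── utils/
    └── litegraphUtils.ts
```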
## Writing Effective Tests

When writing new tests, follow these patterns:

### Test Structure

```typescript
// Import the test fixture
import { comfyPageFixture as test } from '@e2e/fixtures/ComfyPage'

test.describe('Feature Name', () => {
  // Set up test environment if needed
  test.beforeEach(async ({ comfyPage }) => {
    // Common setup
  })

  test('should do something specific', async ({ comfyPage }) => {
    // Test implementation
  })
})
```
### Leverage Existing Fixtures and Helpers

Always check for existing helpers and fixtures before implementing new ones:

- **ComfyPage**: Main fixture with methods for canvas interaction and node management
- **ComfyMouse**: Helper for precise mouse operations on the canvas
- **Helpers**: Check `browser_tests/helpers/` for specialized helpers like:
  - `actionbar.ts`: Interact with the action bar
  - `manageGroupNode.ts`: Group node management operations
  - `templates.ts`: Template workflow operations
- **Component Fixtures**: Check `browser_tests/fixtures/components/` for UI component helpers
- **Utility Functions**: Check `browser_tests/utils/` and `browser_tests/fixtures/utils/` for shared utilities

Most common testing needs are already addressed by these helpers, which will make your tests more consistent and reliable.
### Import Conventions

- Prefer `@e2e/*` for imports within `browser_tests/`
- Continue using `@/*` for imports from `src/`
- Avoid introducing new deep relative imports within `browser_tests/` when the alias is available
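For example (the `@/scripts/api` import is just an illustration of a `src/` module):

```typescript
// Preferred: use the configured aliases
import { comfyPageFixture as test } from '@e2e/fixtures/ComfyPage'
import { api } from '@/scripts/api' // illustrative src/ import

// Avoid: deep relative paths when an alias exists
// import { comfyPageFixture as test } from '../../fixtures/ComfyPage'
```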
### Key Testing Patterns

1. **Focus elements explicitly**: Canvas-based elements often need explicit focus before interaction:

   ```typescript
   // Click the canvas first to focus it before pressing keys
   await comfyPage.canvas.click()
   await comfyPage.page.keyboard.press('a')
   ```

2. **Mark canvas as dirty if needed**: Some interactions need explicit canvas updates:

   ```typescript
   // After programmatically changing node state, mark canvas dirty
   await comfyPage.page.evaluate(() => {
     window['app'].graph.setDirtyCanvas(true, true)
   })
   ```

3. **Use node references over coordinates**: Node references from `fixtures/utils/litegraphUtils.ts` provide stable ways to interact with nodes:

   ```typescript
   // Prefer this:
   const node = (await comfyPage.getNodeRefsByType('LoadImage'))[0]
   await node.click('title')

   // Over this:
   await comfyPage.canvas.click({ position: { x: 100, y: 100 } })
   ```

4. **Wait for canvas to render after UI interactions**:

   ```typescript
   await comfyPage.nextFrame()
   ```

5. **Clean up persistent server state**: While most state is reset between tests, anything stored on the server persists:

   ```typescript
   // Reset settings that affect other tests (these are stored on server)
   await comfyPage.setSetting('Comfy.ColorPalette', 'dark')
   await comfyPage.setSetting('Comfy.NodeBadge.NodeIdBadgeMode', 'None')

   // Clean up uploaded files if needed
   comfyPage.deleteFileAfterTest({ filename: 'image.png' })
   ```

6. **Prefer functional assertions over screenshots**: Use screenshots only when visual verification is necessary:

   ```typescript
   // Prefer this:
   await expect.poll(() => node.isPinned()).toBe(true)
   await expect.poll(() => node.getProperty('title')).toBe('Expected Title')

   // Over this - only use when needed:
   await expect(comfyPage.canvas).toHaveScreenshot('state.png')
   ```

7. **Use minimal test workflows**: When creating test workflows, keep them as minimal as possible:

   ```typescript
   // Include only the components needed for the test
   await comfyPage.loadWorkflow('single_ksampler')
   ```

8. **Debug helpers for visual debugging (remove before committing)**: ComfyPage includes temporary debug methods for troubleshooting:

   ```typescript
   test('debug failing interaction', async ({ comfyPage }, testInfo) => {
     // Add visual markers to see click positions
     await comfyPage.debugAddMarker({ x: 100, y: 200 })

     // Attach screenshot with markers to test report
     await comfyPage.debugAttachScreenshot(testInfo, 'node-positions', {
       element: 'canvas',
       markers: [{ position: { x: 100, y: 200 } }]
     })

     // Show canvas overlay for easier debugging
     await comfyPage.debugShowCanvasOverlay()

     // Remember to remove debug code before committing!
   })
   ```

   Available debug methods:

   - `debugAddMarker(position)` - Red circle at position
   - `debugAttachScreenshot(testInfo, name)` - Attach to test report
   - `debugShowCanvasOverlay()` - Show canvas as overlay
   - `debugGetCanvasDataURL()` - Get canvas as base64
## Common Patterns and Utilities

### Page Object Pattern

Tests use the Page Object pattern to create abstractions over the UI:

```typescript
// Using the ComfyPage fixture
test('Can toggle boolean widget', async ({ comfyPage }) => {
  await comfyPage.loadWorkflow('widgets/boolean_widget')
  const node = (await comfyPage.getFirstNodeRef())!
  const widget = await node.getWidget(0)
  await widget.click()
})
```
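Component fixtures in `browser_tests/fixtures/components/` follow the same idea: wrap locators and interactions behind a class so tests read as intent. A minimal sketch (the class name, selector, and shortcut below are invented for illustration):

```typescript
import type { Locator, Page } from '@playwright/test'

// Hypothetical page object for a settings dialog
export class SettingsDialog {
  readonly root: Locator

  constructor(readonly page: Page) {
    this.root = page.locator('.settings-dialog') // selector is an assumption
  }

  async open(): Promise<void> {
    await this.page.keyboard.press('Control+,') // shortcut is an assumption
    await this.root.waitFor({ state: 'visible' })
  }

  async setSearchText(text: string): Promise<void> {
    await this.root.getByRole('textbox').fill(text)
  }
}
```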
### Node References

The `NodeReference` class provides helpers for interacting with LiteGraph nodes:

```typescript
// Getting a node by type and interacting with it
const nodes = await comfyPage.getNodeRefsByType('LoadImage')
const loadImageNode = nodes[0]
const widget = await loadImageNode.getWidget(0)
await widget.click()
```
### Visual Regression Testing

Tests use screenshot comparisons to verify UI state:

```typescript
// Take a screenshot and compare to reference
await expect(comfyPage.canvas).toHaveScreenshot('boolean_widget_toggled.png')
```
### Waiting for Animations

Always call `nextFrame()` after actions that trigger animations:

```typescript
await comfyPage.canvas.click({ position: { x: 100, y: 100 } })
await comfyPage.nextFrame() // Wait for canvas to redraw
```
### Mouse Interactions

Canvas operations use special helpers to ensure proper timing:

```typescript
// Using ComfyMouse for drag and drop
await comfyMouse.dragAndDrop(
  { x: 100, y: 100 }, // From
  { x: 200, y: 200 } // To
)

// Standard ComfyPage helpers
await comfyPage.drag({ x: 100, y: 100 }, { x: 200, y: 200 })
await comfyPage.pan({ x: 200, y: 200 })
await comfyPage.zoom(-100) // Zoom in
```
### Workflow Management

Tests use workflows stored in `assets/` for consistent starting points:

```typescript
// Load a test workflow
await comfyPage.loadWorkflow('single_ksampler')

// Wait for workflow to load and stabilize
await comfyPage.nextFrame()
```
### Custom Assertions

The project includes custom Playwright assertions through `comfyExpect`:

```typescript
// Check if a node is in a specific state
await expect(node).toBePinned()
await expect(node).toBeBypassed()
await expect(node).toBeCollapsed()
```
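Custom matchers like these are typically registered through Playwright's `expect.extend`. A minimal sketch of how `toBePinned` could be wired up (the import path and implementation details are assumptions, not the project's actual code):

```typescript
import { expect as baseExpect } from '@playwright/test'
import type { NodeReference } from '@e2e/fixtures/utils/litegraphUtils' // path is an assumption

export const expect = baseExpect.extend({
  // Custom async matcher: passes when the node reports itself as pinned
  async toBePinned(node: NodeReference) {
    const pinned = await node.isPinned()
    return {
      pass: pinned,
      message: () =>
        pinned ? 'expected node not to be pinned' : 'expected node to be pinned'
    }
  }
})
```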
## Troubleshooting Common Issues

### Flaky Tests

- **Timing issues**: Always wait for animations to complete with `nextFrame()`
- **Coordinate sensitivity**: Canvas coordinates are viewport-relative; use node references when possible
- **Test isolation**: Tests run in parallel; avoid dependencies between tests
- **Screenshots vary**: Ensure your OS and browser match the reference environment (Linux)
- **Async / await**: Missing `await`s cause race conditions, a very common source of flakiness (see the sketch below)
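For the async/await point, the failure mode usually looks like this (illustrative):

```typescript
// Flaky: the click is not awaited, so the assertion can run before it lands
comfyPage.canvas.click({ position: { x: 100, y: 100 } }) // missing await!
await expect(comfyPage.canvas).toHaveScreenshot('after-click.png')

// Stable: await every action, then wait for the canvas to redraw
await comfyPage.canvas.click({ position: { x: 100, y: 100 } })
await comfyPage.nextFrame()
await expect(comfyPage.canvas).toHaveScreenshot('after-click.png')
```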
### Screenshot Testing

Due to variations in system font rendering, screenshot expectations are platform-specific. Please note:

- Do not commit local screenshot expectations to the repository
- We maintain Linux screenshot expectations because our GitHub Actions runner operates in a Linux environment
- While developing, you can generate local screenshots for your tests, but these will differ from CI-generated ones
### Working with Screenshots Locally

**Option 1** - Skip screenshot tests (add to `playwright.config.ts`):

```typescript
export default defineConfig({
  grep: process.env.CI ? undefined : /^(?!.*screenshot).*$/
})
```

**Option 2** - Generate local baselines for comparison:

```bash
pnpm test:browser:local --update-snapshots
```
### Creating New Screenshot Baselines

For PRs from Comfy-Org/ComfyUI_frontend branches:

1. Write a test with `toHaveScreenshot('filename.png')`
2. Create a PR and add the `New Browser Test Expectation` label
3. CI will generate and commit the Linux baseline screenshots

Note: Fork PRs cannot auto-commit screenshots. A maintainer will need to commit the screenshots manually for you (don't worry, they'll do it).
## Viewing Test Reports

### Automated Test Deployment

The project automatically deploys Playwright test reports to Cloudflare Pages for every PR and push to main branches.

### Accessing Test Reports

- **From PR comments**: Click the "View Report" links for each browser
- **Direct URLs**: Reports are available at `https://[branch].comfyui-playwright-[browser].pages.dev` (branch-specific deployments)
- **From GitHub Actions**: Download artifacts from failed runs
### How It Works

1. **Test execution**: All browser tests run in parallel across multiple browsers
2. **Report generation**: HTML reports are generated for each browser configuration
3. **Cloudflare deployment**: Each browser's report deploys to its own Cloudflare Pages project with branch isolation:
   - `comfyui-playwright-chromium` (with branch-specific URLs)
   - `comfyui-playwright-mobile-chrome` (with branch-specific URLs)
   - `comfyui-playwright-chromium-2x` (2x scale, with branch-specific URLs)
   - `comfyui-playwright-chromium-0-5x` (0.5x scale, with branch-specific URLs)
4. **PR comments**: GitHub automatically updates PR comments with:
   - ✅/❌ test status for each browser
   - Direct links to interactive test reports
   - Real-time progress updates as tests complete
## Resources

- [Playwright UI Mode](https://playwright.dev/docs/test-ui-mode) - Interactive test debugging
- [Playwright Debugging Guide](https://playwright.dev/docs/debug)
- [act](https://github.com/nektos/act) - Run GitHub Actions locally for CI debugging