Mirror of https://github.com/Comfy-Org/ComfyUI_frontend.git (synced 2026-04-20 14:30:41 +00:00)
ControlNet
ControlNet guides image generation using structural conditions extracted from reference images — such as edge maps, depth information, or human poses. Instead of relying solely on text prompts for composition, ControlNet lets you specify the spatial layout precisely. This bridges the gap between text-to-image flexibility and the structural precision needed for professional workflows.
How It Works in ComfyUI
- Key nodes involved: ControlNetLoader, ControlNetApplyAdvanced, and preprocessor nodes (CannyEdgePreprocessor, DepthAnythingPreprocessor, DWPosePreprocessor, LineartPreprocessor)
- Typical workflow pattern: Load reference image → preprocess to extract condition (edges/depth/pose) → load ControlNet model → apply condition to sampling → generate image with structural guidance
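The workflow pattern above can be sketched in ComfyUI's API (JSON) format. This is a partial graph, not a complete runnable workflow: the file names ("photo.png", "canny.safetensors") and the preprocessor parameters are placeholder assumptions, and the downstream sampler/decoder nodes are omitted.

```python
# Partial ControlNet graph in ComfyUI API format.
# Each node is {"class_type": ..., "inputs": ...}; an input of the form
# ["node_id", slot] wires in another node's output.
workflow = {
    "1": {"class_type": "LoadImage",               # load the reference image
          "inputs": {"image": "photo.png"}},
    "2": {"class_type": "CannyEdgePreprocessor",   # extract the edge condition
          "inputs": {"image": ["1", 0],
                     # threshold names follow comfyui_controlnet_aux; may vary
                     "low_threshold": 100, "high_threshold": 200}},
    "3": {"class_type": "ControlNetLoader",        # load the ControlNet model
          "inputs": {"control_net_name": "canny.safetensors"}},
    "4": {"class_type": "ControlNetApplyAdvanced", # apply condition to sampling
          "inputs": {"positive": ["pos_encode", 0],
                     "negative": ["neg_encode", 0],
                     "control_net": ["3", 0],
                     "image": ["2", 0],
                     "strength": 0.8,
                     "start_percent": 0.0, "end_percent": 1.0}},
    # ...KSampler, VAEDecode, and SaveImage nodes follow as in any
    # text-to-image graph.
}
```

Note that the conditioning from the text encoders flows *through* the ControlNetApplyAdvanced node, which is what lets the structural condition steer sampling alongside the prompt.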
ControlNet Types
- Canny: Detects edges to preserve outlines and shapes
- Depth: Captures spatial depth for accurate foreground/background placement
- OpenPose: Extracts human body and hand poses for character positioning
- Normal Map: Encodes surface orientation for consistent lighting and geometry
- Lineart: Follows line drawings and illustrations as generation guides
- Scribble: Uses rough sketches as loose compositional guides
Key Settings
- Strength: Controls how strongly the condition guides generation (0.0–1.0). Values of 0.5–1.0 are typical. Higher values enforce the structure more rigidly; lower values allow the model more creative freedom.
- start_percent / end_percent: Controls when the ControlNet activates during the sampling process. Starting at 0.0 and ending at 1.0 applies guidance throughout. Ending earlier (e.g., 0.8) lets the model refine fine details freely in final steps.
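These three settings are just numeric inputs on the ControlNetApplyAdvanced node. A minimal sketch of two common configurations, using a hypothetical helper (not part of ComfyUI) to package them:

```python
def controlnet_inputs(strength, start_percent=0.0, end_percent=1.0):
    """Package the three key ControlNet settings, clamping strength to 0-1."""
    assert 0.0 <= start_percent <= end_percent <= 1.0
    return {"strength": max(0.0, min(1.0, strength)),
            "start_percent": start_percent,
            "end_percent": end_percent}

# Rigid guidance for the entire sampling run:
full = controlnet_inputs(1.0)

# Looser guidance that stops at 80% of the steps, freeing the
# model to refine fine details in the final 20%:
loose = controlnet_inputs(0.6, end_percent=0.8)
```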
Tips
- Always preprocess your input image with the appropriate preprocessor node before feeding it to ControlNet. Raw images will not produce correct conditioning.
- Combine multiple ControlNets for precise control — for example, Depth for spatial layout plus OpenPose for character positioning. Stack them by chaining ControlNetApplyAdvanced nodes.
- If your generation looks distorted or overcooked, lower the ControlNet strength. Values above 0.8 can fight with the text prompt and produce artifacts.
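Chaining works because ControlNetApplyAdvanced both consumes and produces positive/negative conditioning: the second node takes the first node's conditioning outputs (slots 0 and 1) as its inputs. A sketch in API format, with the loader and preprocessor node IDs ("depth_loader", "pose_map", etc.) as placeholder assumptions:

```python
# Two stacked ControlNets: depth for spatial layout, pose for
# character positioning. Node "11" chains off node "10"'s outputs.
stacked = {
    "10": {"class_type": "ControlNetApplyAdvanced",  # depth guidance
           "inputs": {"positive": ["pos_encode", 0],
                      "negative": ["neg_encode", 0],
                      "control_net": ["depth_loader", 0],
                      "image": ["depth_map", 0],
                      "strength": 0.7,
                      "start_percent": 0.0, "end_percent": 1.0}},
    "11": {"class_type": "ControlNetApplyAdvanced",  # pose guidance
           "inputs": {"positive": ["10", 0],   # slot 0: positive conditioning
                      "negative": ["10", 1],   # slot 1: negative conditioning
                      "control_net": ["pose_loader", 0],
                      "image": ["pose_map", 0],
                      "strength": 0.7,
                      "start_percent": 0.0, "end_percent": 1.0}},
}
```

When stacking, consider lowering each ControlNet's strength below what you would use for a single one, since the guidance effects compound.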