IP-Adapter

IP-Adapter (Image Prompt Adapter) uses reference images to guide generation style, composition, or subject instead of — or alongside — text prompts. Rather than describing what you want in words, you show the model an image, enabling "image prompting." This is especially powerful for transferring artistic style, maintaining character consistency across generations, or conveying visual concepts that are difficult to express in text.

How It Works in ComfyUI

  • Key nodes: IPAdapterModelLoader, IPAdapterApply (or IPAdapterAdvanced), CLIPVisionLoader, CLIPVisionEncode, PrepImageForClipVision
  • Typical workflow pattern: Load IP-Adapter model + CLIP Vision model → prepare and encode reference image → apply adapter to the main model → connect to sampler → decode
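The node chain above can be sketched in ComfyUI's API-format workflow structure, shown here as a Python dict. This is an illustrative sketch, not a complete workflow: the node IDs, model filenames, and the base-model node (`"6"`) are placeholder assumptions, and the exact input names may vary between IP-Adapter node pack versions.

```python
# Sketch of the IP-Adapter node chain in ComfyUI's API-format workflow
# structure. Each node maps an ID to a class_type and its inputs; an input
# given as [node_id, output_index] is a link to another node's output.
# Filenames and IDs below are illustrative placeholders.
workflow = {
    "1": {"class_type": "IPAdapterModelLoader",
          "inputs": {"ipadapter_file": "ip-adapter_sd15.safetensors"}},
    "2": {"class_type": "CLIPVisionLoader",
          "inputs": {"clip_name": "clip_vision_vit_h.safetensors"}},
    "3": {"class_type": "LoadImage",
          "inputs": {"image": "reference.png"}},
    "4": {"class_type": "PrepImageForClipVision",
          "inputs": {"image": ["3", 0],            # crop/resize reference
                     "interpolation": "LANCZOS",   # for CLIP Vision input
                     "crop_position": "center",
                     "sharpening": 0.0}},
    "5": {"class_type": "IPAdapterApply",
          "inputs": {"ipadapter": ["1", 0],
                     "clip_vision": ["2", 0],
                     "image": ["4", 0],
                     "model": ["6", 0],            # "6" would be the base
                     "weight": 0.7,                # checkpoint loader;
                     "start_at": 0.0,              # the patched model from
                     "end_at": 1.0}},              # "5" feeds the KSampler
}
```

The patched model output of the apply node is what connects to the sampler; the reference image never touches the text-conditioning path.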

Key Settings

  • weight (0.0–1.0): Controls the influence of the reference image on the output. A range of 0.5–0.8 is typical; higher values make the output closer to the reference
  • weight_type: Determines how the reference is interpreted: standard for general use, style transfer for artistic style without copying content, composition for layout guidance
  • start_at / end_at (0.0–1.0): Control when the adapter is active during sampling, as fractions of the total steps. Limiting the range (e.g., 0.0–0.8) can improve prompt responsiveness while retaining reference influence
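As a rough illustration of the last setting, assuming start_at / end_at map linearly onto sampler steps, a hypothetical helper can show which steps the adapter would cover (the function name and rounding behavior here are assumptions for illustration, not ComfyUI internals):

```python
def active_step_range(total_steps: int, start_at: float, end_at: float) -> tuple[int, int]:
    """Illustrative mapping of fractional start_at/end_at to step indices.

    E.g. with 20 steps, start_at=0.0 and end_at=0.8 would cover steps 0-16,
    leaving the final steps free to follow the text prompt alone.
    """
    start = round(total_steps * start_at)
    end = round(total_steps * end_at)
    return start, end
```

So end_at=0.8 on a 20-step sample stops applying the reference after step 16, which is why a limited range leaves the closing steps free to refine toward the text prompt.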

Tips

  • Use the style_transfer weight type when you want to borrow an artistic style without reproducing the reference image's content
  • Combine IP-Adapter with a text prompt for the best results — the text adds detail and specificity on top of the visual guidance
  • Face-specific IP-Adapter models (e.g., ip-adapter-faceid) exist for portrait consistency across multiple generations
  • Lower the weight if your output looks too similar to the reference image