IP-Adapter
IP-Adapter (Image Prompt Adapter) uses reference images to guide generation style, composition, or subject instead of — or alongside — text prompts. Rather than describing what you want in words, you show the model an image, enabling "image prompting." This is especially powerful for transferring artistic style, maintaining character consistency across generations, or conveying visual concepts that are difficult to express in text.
How It Works in ComfyUI
- Key nodes: `IPAdapterModelLoader`, `IPAdapterApply` (or `IPAdapterAdvanced`), `CLIPVisionLoader`, `CLIPVisionEncode`, `PrepImageForClipVision`
- Typical workflow pattern: Load IP-Adapter model + CLIP Vision model → prepare and encode reference image → apply adapter to the main model → connect to sampler → decode
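The wiring above can be sketched as a fragment of ComfyUI's API-format workflow (a plain Python dict mirroring the exported JSON). The node class names come from the source; the input field names, node IDs, and model filenames here are illustrative assumptions, so check them against your installed nodes before use:

```python
# Sketch of the IP-Adapter wiring in ComfyUI's API (JSON) format.
# Each value ["N", 0] links an input to output slot 0 of node "N".
# Input names and filenames are assumptions for illustration only.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd15.safetensors"}},
    "2": {"class_type": "IPAdapterModelLoader",
          "inputs": {"ipadapter_file": "ip-adapter_sd15.safetensors"}},
    "3": {"class_type": "CLIPVisionLoader",
          "inputs": {"clip_name": "clip_vision.safetensors"}},
    "4": {"class_type": "LoadImage",
          "inputs": {"image": "reference.png"}},
    "5": {"class_type": "PrepImageForClipVision",
          "inputs": {"image": ["4", 0]}},       # prepare reference image
    "6": {"class_type": "IPAdapterApply",
          "inputs": {"ipadapter": ["2", 0],     # adapter weights
                     "clip_vision": ["3", 0],   # vision encoder
                     "image": ["5", 0],         # prepared reference
                     "model": ["1", 0],         # base diffusion model
                     "weight": 0.7}},
    # The patched model output of node "6" then feeds the KSampler as usual.
}
```

The key point is the flow: the adapter patches the model before sampling, so the sampler sees a model already conditioned on the reference image.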
Key Settings
- weight (0.0–1.0): Controls the influence of the reference image on the output. A range of 0.5–0.8 is typical; higher values make the output closer to the reference
- weight_type: Determines how the reference is interpreted: `standard` for general use, `style transfer` for artistic style without copying content, `composition` for layout guidance
- start_at / end_at (0.0–1.0): Controls when the adapter is active during sampling. Limiting the range (e.g., 0.0–0.8) can improve prompt responsiveness while retaining reference influence
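As a mental model for start_at / end_at: the adapter only contributes on sampling steps whose normalized progress falls inside the window. This toy function is my own illustration of that idea, not ComfyUI code:

```python
def adapter_active(step: int, total_steps: int,
                   start_at: float = 0.0, end_at: float = 1.0) -> bool:
    """Return True if the IP-Adapter influences this sampling step.

    Progress is the step's position normalized to 0.0-1.0; the adapter
    applies only while progress lies inside [start_at, end_at].
    """
    progress = step / max(total_steps - 1, 1)
    return start_at <= progress <= end_at

# With end_at=0.8 over 20 steps, the final steps run without the adapter,
# leaving the text prompt free to refine details near the end.
active = [adapter_active(s, 20, end_at=0.8) for s in range(20)]
```

This is why a window like 0.0–0.8 keeps the reference's influence on overall structure (decided early in sampling) while giving the prompt more say over fine detail.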
Tips
- Use the `style transfer` weight type when you want to borrow an artistic style without reproducing the reference image's content
- Combine IP-Adapter with a text prompt for the best results; the text adds detail and specificity on top of the visual guidance
- Face-specific IP-Adapter models (e.g., `ip-adapter-faceid`) exist for portrait consistency across multiple generations
- Lower the weight if your output looks too similar to the reference image