GPT-Image-1
GPT-Image-1 is OpenAI's natively multimodal image generation model, capable of generating and editing images from text and image inputs. It is accessed in ComfyUI through API nodes.
Model Variants
GPT-Image-1.5
- Latest and most advanced GPT Image model
- Best overall quality with superior instruction following
- High input fidelity for the first 5 input images
- Supports generate vs. edit action control
- Multi-turn editing via the Responses API
GPT-Image-1
- Production-grade image generation and editing
- High input fidelity for the first input image
- Supports up to 16 input images for editing
- Up to 10 images per generation request
GPT-Image-1-Mini
- Cost-effective variant for workloads with lower quality requirements
- Same API surface as GPT-Image-1
- Suitable for rapid prototyping and high-volume workloads
Key Features
- Superior text rendering in generated images
- Real-world knowledge for accurate depictions
- Transparent background support (PNG and WebP)
- Mask-based inpainting with prompt guidance
- Multi-image editing: combine up to 16 reference images
- Streaming partial image output during generation
- Content moderation with adjustable strictness
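The transparency and output-format features above map onto parameters of the OpenAI Images API. As a minimal sketch (assuming the official `openai` Python SDK with an `OPENAI_API_KEY` in the environment; the helper names and the prompt are illustrative, not part of any ComfyUI node), requesting a transparent-background PNG asset might look like:

```python
# Sketch: request a transparent-background PNG from GPT-Image-1.
# Assumes the official `openai` Python SDK; build_request() and
# generate_sticker() are illustrative helpers, not SDK functions.
import base64


def build_request(prompt: str) -> dict:
    """Assemble keyword arguments for the Images API generate call."""
    return {
        "model": "gpt-image-1",
        "prompt": prompt,
        "size": "1024x1024",
        "background": "transparent",  # needs an alpha-capable format
        "output_format": "png",       # png or webp for transparency
        "n": 1,
    }


def generate_sticker(prompt: str, path: str = "sticker.png") -> None:
    """Call the API and write the base64-decoded image to disk."""
    from openai import OpenAI  # deferred: requires the openai package

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    result = client.images.generate(**build_request(prompt))
    with open(path, "wb") as f:
        f.write(base64.b64decode(result.data[0].b64_json))
```

For example, `generate_sticker("a die-cut sticker of a red fox")` would write `sticker.png` with a transparent background.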
Hardware Requirements
- No local GPU required — cloud API service via OpenAI
- Accessed through ComfyUI API nodes
- Requires OpenAI API key and organization verification
Common Use Cases
- Text-to-image generation with detailed prompts
- Image editing and compositing from multiple references
- Product photography and mockup generation
- Inpainting with mask-guided editing
- Transparent asset generation (stickers, logos, icons)
- Multi-turn iterative image refinement
Key Parameters
- prompt: Text description up to 32,000 characters
- size: 1024x1024, 1536x1024 (landscape), 1024x1536 (portrait), or auto
- quality: low, medium, high, or auto (affects cost and detail)
- n: Number of images to generate (1-10)
- background: transparent, opaque, or auto
- output_format: png, jpeg, or webp
- moderation: auto (default) or low (less restrictive)
- input_fidelity: low (default) or high for preserving input image details
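The allowed values above can be collected into a small validation table. This is an illustrative sketch only (the table and `validate` helper are not part of any SDK); it mirrors the constraints listed for gpt-image-1:

```python
# Illustrative client-side validator for GPT-Image-1 parameters.
# The table mirrors the allowed values listed above; it is a sketch,
# not part of the OpenAI SDK or ComfyUI.
ALLOWED = {
    "size": {"1024x1024", "1536x1024", "1024x1536", "auto"},
    "quality": {"low", "medium", "high", "auto"},
    "background": {"transparent", "opaque", "auto"},
    "output_format": {"png", "jpeg", "webp"},
    "moderation": {"auto", "low"},
    "input_fidelity": {"low", "high"},
}

MAX_PROMPT_CHARS = 32_000  # prompt length cap
MAX_IMAGES = 10            # n must be between 1 and 10


def validate(params: dict) -> list[str]:
    """Return a list of human-readable problems; empty means valid."""
    problems = []
    if len(params.get("prompt", "")) > MAX_PROMPT_CHARS:
        problems.append("prompt exceeds 32,000 characters")
    if not 1 <= params.get("n", 1) <= MAX_IMAGES:
        problems.append("n must be between 1 and 10")
    for key, allowed in ALLOWED.items():
        if key in params and params[key] not in allowed:
            problems.append(f"{key} must be one of {sorted(allowed)}")
    # transparency requires a format that carries an alpha channel
    if (params.get("background") == "transparent"
            and params.get("output_format") == "jpeg"):
        problems.append("transparent background requires png or webp")
    return problems
```

Running such a check before submitting a request surfaces mistakes (an out-of-range `n`, a jpeg with a transparent background) without spending API credits.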