Vidu
Vidu is a video generation API developed by ShengShu Technology. It supports text-to-video, image-to-video, reference-to-video with multi-entity consistency, and start-end frame interpolation. Vidu is known for fast generation (videos in as little as 10 seconds) and strong anime-style output.
Model Variants
Vidu 2.0
- Extended 8-second video generation at up to 1080p
- Text-to-video and image-to-video modes
- First and last frame control for transitions
- Available via the Vidu API and third-party platforms
Vidu Q1
- Reference-to-video with multi-entity consistency
- Supports up to 7 reference images with semantic understanding
- Infers missing elements from text prompts and reference context
- Generates coherent scenes combining multiple characters, objects, and environments
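Vidu Q1's reference-to-video mode can be sketched as a request that pairs a prompt with a capped list of reference images. This is a minimal illustration only: the field names (`model`, `prompt`, `images`) and the model identifier are assumptions, not the documented Vidu schema, so verify them against the official API reference.

```python
# Illustrative sketch of a Vidu Q1 reference-to-video request body.
# Field names and the model identifier are assumptions, not the
# documented API schema.

MAX_REFERENCE_IMAGES = 7  # Vidu Q1 accepts up to 7 reference images


def build_reference_payload(prompt: str, reference_urls: list[str]) -> dict:
    """Build a reference-to-video payload keeping multiple subjects consistent."""
    if not reference_urls:
        raise ValueError("at least one reference image is required")
    if len(reference_urls) > MAX_REFERENCE_IMAGES:
        raise ValueError(
            f"Vidu Q1 supports at most {MAX_REFERENCE_IMAGES} reference images"
        )
    return {
        "model": "viduq1",         # assumed model identifier
        "prompt": prompt,          # text fills in elements missing from the references
        "images": reference_urls,  # characters/objects to keep consistent
    }


payload = build_reference_payload(
    "Two characters share an umbrella on a rainy neon street",
    ["https://example.com/char_a.png", "https://example.com/char_b.png"],
)
```

The cap on reference images mirrors the stated 7-image limit; the prompt supplies whatever the references do not show, matching the model's semantic inference behavior.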
Vidu Q2
- Optimized for quality and speed balance
- Supports 6-8 second generation at up to 1080p
- 1080p image generation included in higher tiers
Vidu Q3
- Latest generation model with improved output quality
- Available through the Vidu platform and API
Key Features
- Ultra-fast inference (videos generated in as little as 10 seconds)
- Multi-entity consistency across characters, objects, and scenes
- First and last frame control for precise transitions
- Superior anime and 2D animation quality
- Up to 1080p resolution output with multiple aspect ratios
- Optimized scene templates for interactive effects and e-commerce
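The first/last-frame feature above can be illustrated as a request carrying two keyframes that the model interpolates between. This is a hedged sketch: the field names, the two-image convention, and the model identifier are assumptions for illustration, not the documented Vidu API.

```python
# Hypothetical sketch of a start-end frame transition request: the model
# interpolates motion between two supplied keyframes. Field names and the
# model identifier are assumptions, not the documented schema.

def build_frame_transition_payload(
    first_frame_url: str, last_frame_url: str, prompt: str = ""
) -> dict:
    """Assemble a start-end frame interpolation payload (illustrative only)."""
    return {
        "model": "vidu2.0",  # assumed identifier for Vidu 2.0
        "images": [first_frame_url, last_frame_url],  # [first frame, last frame]
        "prompt": prompt,    # optional guidance for the in-between motion
        "duration": 8,       # Vidu 2.0 extends generation to 8 seconds
    }


transition = build_frame_transition_payload(
    "https://example.com/day.png",
    "https://example.com/night.png",
    prompt="Smooth day-to-night timelapse over the skyline",
)
```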
Hardware Requirements
- Cloud API only; no local hardware required
- Accessed via platform.vidu.com or third-party API providers
- Credit-based pricing with free tier available
Common Use Cases
- Anime and 2D animation series production
- E-commerce product video creation
- Social media content (TikTok, Reels, Shorts)
- Reference-based multi-character storytelling
- Marketing and advertising videos
- Start-end frame transitions and morphing effects
Key Parameters
- prompt: Text description of desired video content
- image_url: Source image for image-to-video generation
- duration: Video length (4-8 seconds depending on model)
- resolution: Output resolution (720p or 1080p)
- style: Visual style selection (realistic or animated)
- movement_amplitude: Controls intensity of motion in output