Textual Embeddings

Textual embeddings are learned text representations that encode specific concepts, styles, or objects into the CLIP text encoder's vocabulary. These tiny files (typically 10-100 KB) effectively add new "words" to your prompt vocabulary, letting you reference complex visual concepts (a particular art style, a specific character, or a set of undesirable artifacts) with a single token. Because they operate at the text-encoding level, embeddings integrate seamlessly with your existing prompts and require no changes to the model itself.

How It Works in ComfyUI

  • Key nodes: CLIPTextEncode — reference embeddings directly in your prompt text using the syntax embedding:name_of_embedding
  • Typical workflow pattern: Place embedding files in ComfyUI/models/embeddings/ → type embedding:name_of_embedding inside your positive or negative prompt in a CLIPTextEncode node → connect to sampler as usual
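As a concrete illustration, a CLIPTextEncode prompt pair using this syntax might look like the following (the positive-prompt embedding name my_art_style is a hypothetical placeholder; EasyNegative is a commonly distributed negative embedding):

```
Positive: portrait of a knight, oil painting, embedding:my_art_style
Negative: embedding:EasyNegative, lowres, blurry
```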

Key Settings

  • Prompt weighting: Embeddings have no dedicated strength slider, but you can adjust their influence with prompt weighting syntax, e.g., (embedding:name_of_embedding:1.2) to increase strength or (embedding:name_of_embedding:0.6) to soften it
  • Placement: Add embeddings to the negative prompt to suppress unwanted features, or to the positive prompt to invoke a learned concept
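Combining the two settings above, a negative prompt can both invoke an embedding and tune its influence via weighting syntax (the second embedding name is illustrative):

```
Negative: (embedding:EasyNegative:1.2), (embedding:my_style_fix:0.6), watermark
```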

Tips

  • Embeddings are commonly used in negative prompts (e.g., embedding:EasyNegative, embedding:bad-hands-5) to reduce common artifacts like malformed hands or distorted faces
  • Make sure the embedding matches your base model version — an SD 1.5 embedding will not work correctly with an SDXL checkpoint
  • You can combine multiple embeddings with regular text in the same prompt for fine-grained control
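To see which tokens your installed files make available, a small helper script can map filenames in the embeddings folder to their prompt tokens. This is a minimal sketch, not part of ComfyUI itself; it assumes embeddings live under ComfyUI/models/embeddings/ and that the token is simply the filename stem, which is how the embedding: syntax described above resolves names.

```python
from pathlib import Path

def embedding_tokens(embeddings_dir: str) -> list[str]:
    """Map embedding files to the prompt tokens that reference them.

    The token is the filename without its extension, prefixed with
    'embedding:' (the file extension is not written in the prompt).
    """
    exts = {".pt", ".safetensors", ".bin"}  # common embedding file formats
    return sorted(
        f"embedding:{p.stem}"
        for p in Path(embeddings_dir).iterdir()
        if p.suffix.lower() in exts
    )

# Example usage (path is an assumption about a default ComfyUI install):
# print(embedding_tokens("ComfyUI/models/embeddings"))
```

Each returned string can be pasted directly into a CLIPTextEncode prompt, optionally wrapped in weighting parentheses as shown under Key Settings.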