Mirror of https://github.com/Comfy-Org/ComfyUI_frontend.git
Synced 2026-05-05 21:54:50 +00:00

Compare commits: 24 commits (`bl/queue-b` ... `test/cov-l`)

| Author | SHA1 | Date |
|---|---|---|
|  | f7f237658c |  |
|  | f88492387b |  |
|  | c09f1d7d15 |  |
|  | 4a40c050d9 |  |
|  | d253d87c92 |  |
|  | 4033dde983 |  |
|  | 61a444ed99 |  |
|  | 385a1d421d |  |
|  | 341fef46a9 |  |
|  | 24b548aebc |  |
|  | 6ea278da30 |  |
|  | 560e53c68f |  |
|  | 1999b7fba0 |  |
|  | 285421a87c |  |
|  | 5523df1aea |  |
|  | 65876c635d |  |
|  | 04918360eb |  |
|  | af70d88860 |  |
|  | c955309b26 |  |
|  | 7abd9d12c8 |  |
|  | 35492bc530 |  |
|  | c30177a749 |  |
|  | cb443d455e |  |
|  | a378ebb5af |  |

---
name: bug-dump-ingest
description: 'Syncs the #bug-dump Slack channel into Linear as the system of record AND auto-fixes verified real bugs via red-green-fix. Every Linear operation (create, search, link, label) is performed by posting an @Linear mention in the bug-dump thread — no Linear MCP, no API key. Flow: fetch → mandatory dedupe gate (@Linear search + gh PR search) → false-defect verification → post @Linear create in thread (tool call) → parse bot card for FE-NNNN + URL → post :white_check_mark: confirmation reply → if candidate is a verified real bug with no dedupe hit and no open PR, invoke red-green-fix automatically to produce failing test + fix + PR. Respects team emoji scheme (:white_check_mark: ticket created, :pr-open: PR open, :question: needs context, :repeat: duplicate). Use when asked to sync #bug-dump to Linear, triage slack bugs, run a bug-dump sweep, or ingest bug reports. Triggers on: bug-dump, sync bug-dump, ingest bugs, triage slack bugs, bug sweep.'
---

# Bug Dump Ingest

**Primary job: sync `#bug-dump` (Slack: `C0A4XMHANP3`) into Linear as the source of truth, then auto-fix the verified real bugs.** Linear is where status, labels, and follow-up triage happen — this skill gets every bug into Linear with enough context that a downstream agent or human can work from Linear alone. **Every Linear action is performed by mentioning `@Linear` in the bug-dump thread**; there is no Linear MCP and no API key path. When pre-flight verification confirms a candidate is a real bug (not a dupe, not already in a PR, not out of scope), the skill then invokes `red-green-fix` automatically.

```text
fetch → pre-flight dedupe gate (@Linear search + gh) → verify false defects → present approvals
→ POST "@Linear create ..." thread reply via slack_send_message (mandatory tool call)
→ poll slack_read_thread → parse Linear bot card for FE-NNNN + URL
→ POST :white_check_mark: confirmation thread reply via slack_send_message
→ if verification = "real bug" AND no dedupe AND no open PR:
    invoke Skill(skill="red-green-fix") → POST :pr-open: thread reply
```

### Non-negotiable rules

1. **Linear actions are Slack tool calls.** The skill MUST drive Linear by calling `mcp__plugin_slack_slack__slack_send_message` with `thread_ts` set and text that mentions `@Linear`. There is no MCP-direct path and no API-key path. Printing `@Linear create ...` into the Claude CLI response is NOT a substitute — the Slack thread reply is what triggers the Linear bot, and its card is the canonical receipt.
2. **Dedupe is a gate, not a suggestion.** No candidate is proposed for creation until `@Linear search` AND `gh pr` search have been run and recorded. A hit short-circuits creation to `L` (link) or `pr-open`.
3. **Auto-fix real bugs.** When the dedupe gate is clean AND false-defect verification is clean AND the candidate isn't on the handoff-exclusion list (see § Handoff-Exclusion list), after Linear creation the skill invokes `red-green-fix` via the `Skill` tool — without waiting for an extra human prompt.

### What the skill cannot do

The Slack MCP exposes no `reactions.add` tool, so the skill cannot put a `:white_check_mark:` reaction on the parent message. The thread reply with the leading `:white_check_mark:` emoji is the skill's canonical marker; a human can additionally add the parent reaction for channel visibility (see § Parent reaction — optional visibility nudge). Both are respected by Processed Detection.

## Team emoji scheme

| Emoji | Meaning | Who adds it | Skill behavior |
| --- | --- | --- | --- |
| `:white_check_mark:` | Ticket created | Human on parent (after skill files); also in bot reply | Skip in future sweeps |
| `:pr-open:` | PR open | Human | Skip creation; include PR link in approval row |
| `:question:` | Needs more context | Human | Skip creation; agent may ask for clarification |
| `:repeat:` | Duplicate | Human | Skip creation; link existing Linear issue |

## Design Priority

Optimize for **coverage, label quality, and proven fixes** over fix-path cleverness. Linear is the downstream triage surface — once every bug is there with status, labels, and context, agents and humans can work from Linear alone. A Linear ticket with a wrong severity is cheap to fix; a Slack-only bug is invisible to downstream tooling; a "filed but not fixed" real regression wastes a human turn that the skill could have spent on a red-green PR.

## Quick Start

1. **Scope** — default window: messages in the last 48h. Override with `--since YYYY-MM-DD` or a Slack permalink list.
2. **Fetch** — `slack_read_channel` for `C0A4XMHANP3`; `slack_read_thread` per message with replies.
3. **Filter** — drop already-processed messages (see Processed Detection).
4. **Classify** — bug / discussion / meta (see Classification Rules).
5. **Pre-flight dedupe gate (MANDATORY)** — for every bug candidate, run `@Linear search` AND `gh pr` search BEFORE proposing (see § Pre-flight Dedupe Gate). A hit means the candidate goes into the batch as `L` (link) or `pr-open`, not as a new create.
6. **Verify false defects** — per candidate, run quick checks before proposing (see False-Defect Verification).
7. **Extract** — normalize to the ticket schema (see Ticket Schema).
8. **Human approval** — batch table, collect Y/N/?/S/L/R per candidate (see Interactive Approval). Default recommendation for clean candidates is `Y` (file + auto-fix).
9. **Post `@Linear create` thread reply — MANDATORY TOOL CALL** — for each approved `Y`/`L` row, call `mcp__plugin_slack_slack__slack_send_message` with `channel_id=C0A4XMHANP3`, `thread_ts=<parent-ts>`, and text starting with `@Linear create` (see § Linear Slack Bot Integration). Do NOT print the command into chat as a substitute.
10. **Capture the Linear bot card** — poll `slack_read_thread` up to 3× with ~3s spacing, parse the first Linear-app reply for the `FE-NNNN` identifier and `https://linear.app/...` URL. No URL = not ingested; never fabricate one.
11. **Post `:white_check_mark:` confirmation reply — MANDATORY TOOL CALL** — call `slack_send_message` again with text starting with `:white_check_mark: Filed to Linear: <URL>` so future sweeps can detect the marker via `has::white_check_mark: from:me`. Record both `ts` values in the session log.
12. **Auto-fix (clean candidates only)** — if the dedupe gate is clean AND false-defect verification is clean AND the candidate isn't on the Handoff-Exclusion list, immediately invoke the `red-green-fix` skill via the `Skill` tool. See § Fix Workflow for the exact call contract.
13. **Log** — append to the session log; update `processed.json` (a sketch of the registry shape follows this list).

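The registry's internal shape isn't pinned down anywhere in this skill; a minimal sketch, assuming a flat map keyed by the parent message `ts` — all field names here are hypothetical:

```typescript
// Hypothetical shape for ~/temp/bug-dump-ingest/processed.json.
// Keys are parent message timestamps in dotted form.
interface ProcessedEntry {
  outcome: 'created' | 'linked' | 'skipped' | 'draft-only'
  linearId?: string // e.g. "FE-4710"; absent for skips
  linearUrl?: string
  confirmationTs?: string // ts of the :white_check_mark: reply
  reason?: string // skip reason, mirrored in the session log
}

type ProcessedRegistry = Record<string, ProcessedEntry>

const example: ProcessedRegistry = {
  '1776639963.837519': {
    outcome: 'created',
    linearId: 'FE-4710',
    linearUrl: 'https://linear.app/comfy-org/issue/FE-4710',
    confirmationTs: '1776714531.990509'
  }
}
```
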
## System Context

| Item | Value |
| --- | --- |
| Source channel | `#bug-dump` (`C0A4XMHANP3`) |
| Destination | Linear `Frontend Engineering` team, via the Linear Slack app (`@Linear`). Team is named in every `@Linear create` message. |
| Default state | `Triage` — every `@Linear create` message includes `Status: Triage` |
| State dir | `~/temp/bug-dump-ingest/` |
| Processed registry | `~/temp/bug-dump-ingest/processed.json` |
| Session log | `~/temp/bug-dump-ingest/session-YYYY-MM-DD.md` |
| Drafts (failure) | `~/temp/bug-dump-ingest/drafts/*.md` — written only when `@Linear` never replies, so the human can retry manually |

## Label Taxonomy

Every created Linear issue MUST get the following labels, passed as a comma-separated list in the `Labels:` line of the `@Linear create` message. The Linear Slack app creates missing labels on first use:

| Label kind | Values | Source |
| --- | --- | --- |
| `source:` | `source:bug-dump` | Always (marks Slack sync) |
| `area:` | `area:ui`, `area:node-system`, `area:workflow`, `area:cloud`, `area:templates` | Area Heuristics |
| `env:` | `env:cloud-prod`, `env:cloud-dev`, `env:local`, `env:electron` | Env Heuristics |
| `severity:` | `sev:high`, `sev:medium`, `sev:low` | Severity Heuristics |
| `reporter:` | `reporter:<slack-handle>` (kebab-case) | From message author |
| Status flags | `needs-repro`, `needs-backend`, `regression`, `pr-open` | When applicable |

Label rules:

- Always include `source:bug-dump`, exactly one `area:`, at least one `env:` (or `env:unknown`), exactly one `severity:`, exactly one `reporter:`.
- `needs-repro` — set when repro steps were ambiguous; signals "human should confirm before fix".
- `needs-backend` — set when the fix is clearly in the ComfyUI backend, not this frontend repo.
- `regression` — set when the bug mentions a version/upgrade correlation.
- `pr-open` — set instead of creating a fresh ticket when a fix PR already exists; the Linear issue becomes a tracker.

Labels are the primary affordance for downstream triage — invest in getting them right, not just in the title.

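A short sketch of assembling the `Labels:` line from a normalized ticket — the helper and its input shape are illustrative, drawn from the Ticket Schema fields below:

```typescript
// Build the comma-separated Labels: line per the taxonomy above.
// Assumes the ticket has already been normalized (see Ticket Schema).
function buildLabels(t: {
  area: string
  env: string[]
  severity: 'low' | 'medium' | 'high'
  reporter: string // slack handle, kebab-cased by the caller
  flags?: string[] // e.g. ['needs-repro', 'regression']
}): string {
  const envs = t.env.length > 0 ? t.env : ['unknown'] // at least one env: label
  return [
    'source:bug-dump',
    `area:${t.area}`,
    ...envs.map((e) => `env:${e}`),
    `sev:${t.severity}`,
    `reporter:${t.reporter}`,
    ...(t.flags ?? [])
  ].join(', ')
}

// buildLabels({ area: 'cloud', env: ['cloud-prod'], severity: 'high',
//   reporter: 'denys-puziak', flags: ['regression'] })
// → "source:bug-dump, area:cloud, env:cloud-prod, sev:high, reporter:denys-puziak, regression"
```
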
## Processed Detection

A top-level message is considered already-handled (skip creation) if ANY of:

- Its timestamp appears in `processed.json`.
- It carries a `:white_check_mark:` reaction on the parent — ticket already created.
- It carries a `:pr-open:` reaction — fix PR is open; the skill records the PR link in the session log rather than creating a fresh Linear issue.
- It carries a `:repeat:` reaction — duplicate; the skill attempts to find the original Linear issue and link it in the session log.
- It carries a `:question:` reaction — needs more context; the skill skips creation and records it for follow-up.
- Its thread contains a reply with a `https://linear.app/` URL (fetch via `slack_read_thread`).
- Its thread contains a reply starting with `:white_check_mark:` from the skill's bot user.
- It is a system/meta message (`has joined the channel`, bot-only message).
- Its thread already contains resolution confirmation (`"solved"`, `"resolved"`, `:done:` reaction from the reporter) AND has no fix PR referenced — treat as "resolved without ticket, skip".

Never re-ingest a message already marked in any of the above ways. A minimal predicate combining these signals is sketched after the filter query below.

Filter query for Slack search-based sweeps:

```text
in:<#C0A4XMHANP3> -has::white_check_mark: -has::pr-open: -has::repeat: -has::question: after:YYYY-MM-DD
```

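A minimal sketch of the skip predicate, assuming the message and thread have already been fetched — the field names are illustrative, not the Slack MCP's actual payload shape:

```typescript
// Decide whether a top-level message is already handled.
// FetchedMessage is a hypothetical normalized view of the Slack payload.
interface FetchedMessage {
  ts: string
  reactions: string[] // e.g. ['white_check_mark', 'eyes']
  replies: { author: string; text: string }[]
}

const TERMINAL_REACTIONS = ['white_check_mark', 'pr-open', 'repeat', 'question']

function isAlreadyHandled(
  msg: FetchedMessage,
  processed: Record<string, unknown>,
  botUser: string
): boolean {
  if (msg.ts in processed) return true // registry hit
  if (msg.reactions.some((r) => TERMINAL_REACTIONS.includes(r))) return true
  return msg.replies.some(
    (r) =>
      r.text.includes('https://linear.app/') || // Linear card in thread
      (r.author === botUser && r.text.startsWith(':white_check_mark:'))
  )
}
```
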
## False-Defect Verification

Before a candidate hits the approval batch, run cheap checks to demote obvious non-bugs. Goal: keep the approval table high-signal. This is not a full repro — just fast heuristics that catch the top false-positive classes.

| Check | Command / Signal | Demote-to |
| --- | --- | --- |
| Reporter self-resolved in same msg | "no action needed", "solved", "nvm", "fixed it" | `resolved` |
| Reporter self-resolved in thread | `slack_read_thread` → reporter's last reply contains "solved" | `resolved` |
| Fix PR merged on main | `gh search prs "in:title <keyword>" --state merged --limit 3` | `fixed` |
| Fix PR open (already-filed) | `gh search prs "<keyword>" --state open --limit 3` | `pr-open` |
| Linear issue exists (open) | `@Linear search` on title keywords → any open match | `dedupe` |
| Behavior is documented / intended | grep `docs/` and `src/locales/en/*.json` for the feature | `expected` |
| Not reproducible — feature doesn't exist | grep `src/` for the mentioned component/feature → 0 hits | `stale` |
| Env drift only (local setup issue) | Thread contains "my machine", "my setup", "proxy" without other reports | `env` |

For each demoted candidate, record the demotion reason in the approval table as `Verify: <tag>` so the human can override if they disagree. Never hard-skip based on verification alone — always show the row with the demotion.

### Recommended verify commands

```bash
# 1. Search recent PRs for the feature in question
gh search prs "<keyword>" --repo Comfy-Org/ComfyUI_frontend --limit 5

# 2. Grep for the feature / component mentioned
rg -l "<ComponentOrFeatureName>" src/ apps/

# 3. Check if it's a known i18n / documented setting
rg "<setting-key>" src/locales/en/ docs/
```

Keep verification under ~30s per candidate. If it takes longer, propose a ticket and let the human decide — don't let verification become the bottleneck.

## Classification Rules

For each unprocessed top-level message, decide:

| Class | Signal | Action |
| --- | --- | --- |
| **bug** | Describes unexpected behavior, visual glitch, error, regression, crash. Usually has repro steps or media. | Propose Linear ticket |
| **discussion** | Design question, rollout thoughts, team chatter, PR planning (e.g. "how about we make a PR to do...") | Skip |
| **question** | User asking if something is expected or known | Skip unless answered = bug |
| **meta** | Channel joins, bot messages, cross-posts without content | Skip |
| **already-filed** | Thread shows PR already open OR existing Linear link | Skip, log with existing link |

When ambiguous, default to **bug** and let the human decide in the approval batch.

## Ticket Schema

Normalize each bug to this shape before presenting:

```json
{
  "slack_ts": "1776639963.837519",
  "slack_permalink": "https://comfy-organization.slack.com/archives/C0A4XMHANP3/p1776639963837519",
  "reporter": "Ali Ranjah (wavey)",
  "title": "Unet model dropdown missing selected model",
  "description": "Body with repro steps, env, attachments list, thread summary",
  "env": ["cloud prod"],
  "severity": "low | medium | high",
  "area": "ui | node-system | workflow | cloud | templates | unknown",
  "attachments": [{ "name": "...", "id": "F...", "type": "image/png" }],
  "thread_resolution": "solved | open | none"
}
```

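For typing the normalization step, a TypeScript mirror of the schema above can be useful — the literal unions are taken from the JSON; the interface name is illustrative:

```typescript
// TypeScript mirror of the ticket schema above.
interface BugTicket {
  slack_ts: string
  slack_permalink: string
  reporter: string
  title: string
  description: string
  env: string[]
  severity: 'low' | 'medium' | 'high'
  area: 'ui' | 'node-system' | 'workflow' | 'cloud' | 'templates' | 'unknown'
  attachments: { name: string; id: string; type: string }[]
  thread_resolution: 'solved' | 'open' | 'none'
}
```
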
Keep descriptions copy-paste friendly: lead with repro bullets, then env, then "See Slack: <permalink>". Attach a thread summary only if it adds context beyond the top-level message.

### Severity Heuristics

- **high** — crash, data loss, blocks a template or core feature, affects paying users broadly (e.g. "job ends in 30m on Pro", "widget values reset").
- **medium** — visible regression, template error, wrong pricing, broken UX on a common path.
- **low** — cosmetic, single-template edge case, minor tooltip/boundary issue.

When unsure, mark `medium` and flag for the human in the approval batch.

### Area Heuristics

- `ui` — visual glitches, palette issues, popover clipping, dropdown styling.
- `node-system` — canvas perf, reroute, node drag, widget rendering, undo.
- `workflow` — template failures, save/load, refresh regressions.
- `cloud` — jobs, pricing, assets, auth, queue.
- `templates` — specific template errors.

## Pre-flight Dedupe Gate (MANDATORY)

Before any candidate enters the approval table, run BOTH checks below and record the result in the row's `Dedup` and `PR` columns. This is a hard gate — no candidate may be proposed for creation without a verdict.

### Check 1 — Open Linear issues (via `@Linear search`)

Extract 3-5 keyword terms from the proposed title (strip stopwords). Post a search command to the bug-dump thread — use a scratch thread if no parent `ts` is available yet, but prefer the candidate's own parent thread so the search card becomes part of that thread's audit trail:

```text
mcp__plugin_slack_slack__slack_send_message({
  channel_id: "C0A4XMHANP3",
  thread_ts: "<parent-ts>",
  text: "@Linear search <keyword-1> <keyword-2>\nTeam: Frontend Engineering\nStatus: open"
})
```

Poll `slack_read_thread` for up to 10s; parse the Linear app's card reply for `FE-NNNN` identifiers and URLs. Run the search twice with different keyword subsets if the first returns zero hits — reworded titles are the top false-negative class.

If `@Linear search` is not supported by the workspace's Linear app version, fall back to a Slack search for prior `@Linear` card replies in the channel:

```text
mcp__plugin_slack_slack__slack_search_public({
  query: "in:<#C0A4XMHANP3> from:@Linear <keyword-1> <keyword-2>"
})
```

This scans past Linear bot replies in the channel — any reply containing a matching `FE-NNNN` URL is a candidate duplicate. Record which dedupe path was used in the session log.

Treat a hit as a duplicate if any of:

- Title overlap ≥ 80% (after lowercasing + stopword removal — see the sketch below)
- Same reporter + same component reference in description
- Same stack trace or error code

**Verdict:** set `Dedup: FE-NNNN` and default the recommendation to `L` (link, don't create). The human may still override to `Y` to file a separate ticket.

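The ≥ 80% overlap test isn't tied to a specific formula; a sketch using token overlap against the shorter title, after lowercasing and stopword removal — the metric choice is an assumption:

```typescript
// Token-overlap check for the ≥ 80% duplicate criterion.
const STOPWORDS = new Set(['a', 'an', 'the', 'on', 'in', 'of', 'to', 'is', 'for', 'with', 'from'])

function tokens(title: string): Set<string> {
  return new Set(
    title
      .toLowerCase()
      .split(/\W+/)
      .filter((w) => w.length > 0 && !STOPWORDS.has(w))
  )
}

function titleOverlap(a: string, b: string): number {
  const ta = tokens(a)
  const tb = tokens(b)
  if (ta.size === 0 || tb.size === 0) return 0
  const shared = [...ta].filter((w) => tb.has(w)).length
  return shared / Math.min(ta.size, tb.size) // ratio against the shorter title
}

// titleOverlap('Unet dropdown missing selected model',
//              'Selected model missing from unet dropdown') → 1.0 → duplicate
```
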
### Check 2 — Open or merged fix PRs on GitHub

```bash
# Open PRs matching title keywords
gh pr list --repo Comfy-Org/ComfyUI_frontend --state open \
  --search "<keyword-1> <keyword-2>" --limit 5 \
  --json number,title,url,createdAt

# Recent merged fixes (last 30d) — catches "already fixed, waiting to ship"
gh pr list --repo Comfy-Org/ComfyUI_frontend --state merged \
  --search "<keyword-1> <keyword-2> merged:>=<YYYY-MM-DD>" --limit 5 \
  --json number,title,url,mergedAt
```

Treat a hit as a match if the PR title/body mentions the same component or bug phrase and the PR is either open or was merged within the window covering the reporter's observation.

**Verdict:**

- Open PR match → set `PR: #NNNN (open)`, recommendation `pr-open` (file Linear with a `pr-open` label linking the PR, skip auto-fix).
- Merged PR match → set `PR: #NNNN (merged)`, recommendation `fixed` (demote in verify, usually skip; the human can override if the reporter claims the fix didn't land).

### Failure handling

If either check errors (Linear Slack app silent or not in channel, `gh` auth expired), DO NOT proceed to proposal — stop the sweep, report the failure to the user, and let them decide whether to re-run or manually dedupe. A silent skip of dedupe is never acceptable; it's the single biggest source of duplicate tickets.

Log each dedupe query + top hits in `~/temp/bug-dump-ingest/session-YYYY-MM-DD.md` under a per-candidate `Dedup trace:` block so the human can audit.

## Interactive Approval

Present candidates in batches of 5-10. Table format (10 columns):

```text
 # | Slack (author, time)   | Proposed title                          | Env        | Sev  | Area       | Dedup      | PR            | Verify      | Rec
---+------------------------+-----------------------------------------+------------+------+------------+------------+---------------+-------------+-----
 1 | wavey, 04-20 08:06     | Unet dropdown missing selected model    | cloud prod | low  | ui         | -          | -             | resolved    | N
 2 | Denys, 04-18 05:45     | Pro plan jobs end at 30 minutes         | cloud prod | high | cloud      | -          | -             | clean       | Y
 3 | Terry Jia, 04-18 12:52 | Nodes 2.0 canvas lag on large workflows | -          | high | node-system| FE-4521    | -             | clean       | L
 4 | Pablo, 04-17 08:52     | Multi-asset delete popup shows hashes   | cloud prod | low  | ui         | -          | #11402 (open) | clean       | pr-open
```

Each row MUST show: Slack author + date, proposed title, env tags, severity, area, **dedupe status from the Pre-flight Dedupe Gate**, **open/merged PR hit from the Pre-flight Dedupe Gate**, verify tag (from False-Defect Verification), and the agent recommendation.

### Default recommendation logic

The skill computes `Rec` deterministically from the gate results (see the sketch after this list):

- `L` — Dedupe hit on an open Linear issue.
- `pr-open` — Open GitHub PR hit.
- `fixed` — Merged PR hit within the reporter's observation window.
- `N` — Verify tag is `resolved`, `expected`, `stale`, or `env` only.
- `?` — Repro incomplete or classification ambiguous.
- `Y` — Everything clean AND the candidate is not on the § Handoff-Exclusion list. This is the "file + auto-fix" path.
- `Y (file-only)` — Clean but on the handoff-exclusion list (e.g. touches LGraphNode, needs backend). File Linear, skip auto-fix.

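A sketch of that deterministic computation — the input field names are illustrative:

```typescript
// Compute the default Rec from the gate results, in priority order.
type Rec = 'L' | 'pr-open' | 'fixed' | 'N' | '?' | 'Y' | 'Y (file-only)'

function computeRec(c: {
  dedupeHit: boolean
  openPrHit: boolean
  mergedPrHit: boolean
  verify: 'clean' | 'resolved' | 'expected' | 'stale' | 'env'
  reproComplete: boolean
  onExclusionList: boolean
}): Rec {
  if (c.dedupeHit) return 'L' // link, don't create
  if (c.openPrHit) return 'pr-open' // track the PR instead
  if (c.mergedPrHit) return 'fixed' // usually skip
  if (c.verify !== 'clean') return 'N' // demoted by verification
  if (!c.reproComplete) return '?' // needs-context
  return c.onExclusionList ? 'Y (file-only)' : 'Y'
}
```
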
### Response format

- `Y` — default path: create Linear ticket, post `:white_check_mark:` thread reply, AND if the candidate is eligible (dedupe clean, verify clean, not on handoff-exclusion list), immediately invoke `red-green-fix` via the `Skill` tool. See § Fix Workflow.
- `S` — **skip auto-fix** for this row: create Linear ticket + thread reply only, do NOT run red-green-fix. Use when the human knows a specific person is already investigating or wants to batch fixes.
- `N` — skip entirely (log reason in session file).
- `?` — mark as needs-context; skill posts a thread reply asking for repro details and prompts the human to add `:question:` to the parent.
- `L` — link to existing Linear issue instead of creating (skill asks which one if the Pre-flight Dedupe Gate didn't return an exact match).
- `R` — duplicate of another bug-dump message; skill links the two and prompts the human for `:repeat:` on the parent.
- `E` — edit proposed title/description before creating (skill shows draft for inline tweaks).
- Bulk responses accepted: `1 N, 2 Y, 3 L FE-4521, 4 pr-open #11402, 5 ?` — any row omitted from the response is treated as its computed `Rec` default.

Do not post any `@Linear create` messages until all candidates in the batch have a terminal decision. Auto-fix invocations run sequentially AFTER every `@Linear create` has produced a parsed `FE-NNNN`, so every `red-green-fix` call has a `Fixes FE-NNNN` to put in the PR body.

## Linear Slack Bot Integration (@Linear)

Every Linear action — create, search, link, label, status change — is performed by posting a message to the candidate's thread in `#bug-dump` that mentions `@Linear`. The Linear Slack app parses the mention and responds with a card in the same thread. There is no Linear MCP path and no `LINEAR_API_KEY` path; see `reference/linear-api.md` § "Why no direct API path" for the rationale.

### Prerequisites

- The Comfy Slack workspace already has the Linear Slack app installed (this is how humans add `@Linear` mentions today).
- Channel `C0A4XMHANP3` is connected to the `Frontend Engineering` Linear team.
- No per-machine setup. If a `@Linear` invocation produces no bot reply, the app is not in the channel — surface to the human, do NOT retry silently.

### Create an issue

For each approved `Y` candidate, call:

```text
mcp__plugin_slack_slack__slack_send_message({
  channel_id: "C0A4XMHANP3",
  thread_ts: "<parent-ts>",
  text: "@Linear create\nTeam: Frontend Engineering\nTitle: <title>\nStatus: Triage\nLabels: source:bug-dump, area:<area>, env:<env>, sev:<severity>, reporter:<handle>\n\n<description>\n\nSource: <slack-permalink>"
})
```

Rules:

- First line MUST be `@Linear create` — this is the command token.
- `Team: Frontend Engineering` is required on every create — without it the bot falls back to the workspace default, which may route to a different team.
- `Status: Triage` pins the initial state (per § System Context).
- `Labels:` — comma-separated, full `source:bug-dump, area:*, env:*, sev:*, reporter:*` set per § Label Taxonomy. Missing labels are auto-created by the Linear Slack app on first use.
- Description body is markdown — see `reference/linear-api.md` § "Description body template" and `reference/schema.md` for per-field extraction.
- Use real newlines (not literal `\n`) when constructing the text.

After the tool call returns, poll `slack_read_thread` for the Linear app's reply card (up to 3× with ~3s spacing). Parse the card for:

- An `FE-NNNN` identifier
- A `https://linear.app/<org>/issue/FE-NNNN` URL

The URL is the ingested receipt. The skill then posts the `:white_check_mark:` confirmation reply (§ Slack Thread Reply).

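A sketch of the poll-and-parse step, assuming a `readThread` wrapper around `slack_read_thread` — the wrapper signature and reply shape are illustrative:

```typescript
// Poll the thread up to 3× (~3s apart) and parse the Linear bot card.
interface LinearReceipt {
  identifier: string // FE-NNNN
  url: string
  cardTs: string // ts of the bot's card reply, for the session log
}

async function captureLinearCard(
  readThread: (ts: string) => Promise<{ ts: string; text: string }[]>,
  parentTs: string
): Promise<LinearReceipt | null> {
  for (let attempt = 0; attempt < 3; attempt++) {
    await new Promise((resolve) => setTimeout(resolve, 3000)) // ~3s spacing
    const replies = await readThread(parentTs)
    for (const reply of replies) {
      const m = reply.text.match(/https:\/\/linear\.app\/\S+\/issue\/(FE-\d+)\S*/)
      if (m) return { identifier: m[1], url: m[0], cardTs: reply.ts }
    }
  }
  return null // no card = creation failure; never fabricate a URL
}
```
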
### Search (dedupe)

See § Pre-flight Dedupe Gate, Check 1 for the search command shape and the handling of the bot's reply. The search is a tool call in the candidate's thread — not a chat aside.

### Link an existing issue (`L` response)

When the human picks `L FE-4521` for a row, do NOT post `@Linear create`. Instead:

```text
mcp__plugin_slack_slack__slack_send_message({
  channel_id: "C0A4XMHANP3",
  thread_ts: "<parent-ts>",
  text: "@Linear link FE-4521"
})
```

The bot replies with the linked issue card. Then post the `:white_check_mark:` confirmation reply (adjusted to say `Linked to Linear:` rather than `Filed to Linear:`) so Processed Detection still matches.

### Label / status updates

When a later sweep needs to flip a ticket (e.g. a PR opened after initial ingest, so add `pr-open` and link):

```text
mcp__plugin_slack_slack__slack_send_message({
  channel_id: "C0A4XMHANP3",
  thread_ts: "<parent-ts>",
  text: "@Linear FE-4521 add-labels pr-open"
})
```

Status changes are rarely driven by this skill directly — Linear auto-moves issues to `In Review` when a PR with `Fixes FE-NNNN` is opened, and the `red-green-fix` skill handles that PR body.

### Captured fields per create

Every successful create must produce, via the Linear bot's reply card:

- `identifier` — e.g. `FE-4710`, used in `Fixes <LIN-ID>` references and the session log
- `url` — `https://linear.app/.../issue/FE-4710`, included verbatim in the `:white_check_mark:` reply
- `ts` of the Linear bot's card reply — recorded in the session log for audit

If the card is missing the URL or identifier, fall through to the failure path below — do NOT fabricate either value.

### Failure path

If the Linear bot does not reply within the poll window, OR replies with a parse error (`couldn't parse`, `no team matched`, `failed`):

1. Write a draft markdown file to `~/temp/bug-dump-ingest/drafts/NN-short-slug.md` containing the full `@Linear create` text that was sent plus any partial bot reply.
2. Post a thread reply that is explicit about the failure — do NOT include `:white_check_mark:` or a fake Linear URL:

   ```text
   :warning: bug-dump-ingest: @Linear did not respond. Drafted at ~/temp/bug-dump-ingest/drafts/<slug>.md — please file manually and reply with the FE-NNNN.
   ```

3. Skip auto-fix for this candidate (no Linear ID = no `Fixes` reference).
4. Log the failure in the session log.

Never invent a Linear URL. Never post `:white_check_mark: Filed to Linear: ...` without a real URL parsed from a real Linear bot card.

## Slack Thread Reply (Ingested Marker) — MANDATORY TOOL CALL

Every approved candidate produces **two** mandatory `slack_send_message` calls in the parent thread:

1. The `@Linear create` (or `@Linear link`) command — see § Linear Slack Bot Integration.
2. The `:white_check_mark:` confirmation reply described below, posted after a real `FE-NNNN` + URL have been parsed from the Linear bot's card.

The second reply is what future sweeps grep for via `has::white_check_mark: from:me`. Even though the Linear bot's own card already contains the URL, the `:white_check_mark:` prefix is the canonical Processed Detection marker — without it, a future sweep may re-ingest the same bug.

The skill is not done with a candidate until BOTH calls have succeeded. If either fails, do not claim the candidate is ingested.

### Required call shape

```text
mcp__plugin_slack_slack__slack_send_message({
  channel_id: "C0A4XMHANP3",
  thread_ts: "<parent-message-ts>", // dotted form, e.g. "1776714531.990509"
  text: ":white_check_mark: Filed to Linear: <LINEAR_URL>\nReporter: <@USER_ID>\nSev: <severity> • Area: <area>"
})
```

Rules:

- `thread_ts` MUST be the parent message ts — never the channel ts, never omitted. An omitted `thread_ts` posts at channel level, which pollutes `#bug-dump` and breaks Processed Detection.
- The text MUST start with `:white_check_mark:` followed by a space and `Filed to Linear:`. This exact prefix is what future sweeps grep for via `has::white_check_mark: from:me`.
- The Linear URL MUST be present. No URL = not ingested; future sweeps will re-file the same bug.
- Plain text only — no markdown tables, no bold, no code fences. Slack renders the emoji shortcode into a real `:white_check_mark:` only when the message is plain text.
- Capture the returned `ts` and record it in the session log for audit.

### NEVER-do list (common failure modes)

- **Do NOT** print `@Linear create ...` or `:white_check_mark: Filed to Linear: <URL>` into the Claude CLI chat response as a substitute for calling `slack_send_message`. The CLI output is not seen by Slack. If you find yourself typing either into a plain assistant message, stop and issue the tool call instead.
- **Do NOT** claim the thread reply was posted until the `slack_send_message` tool call has returned a success with a `ts`. If the tool call errors, surface the error and halt the batch — do not fabricate a reply.
- **Do NOT** use any other tool (e.g. `slack_schedule_message`, `slack_send_message_draft`) as a substitute. Only an immediate `slack_send_message` with `thread_ts` set counts — the Linear Slack app does not trigger on scheduled/draft messages.
- **Do NOT** substitute any direct Linear API call (MCP, GraphQL, curl) for the `@Linear` mention. The Slack thread is intentionally the single audit trail.

### Fix-path reply (after red-green-fix opens a PR)

When `red-green-fix` returns a PR URL for an auto-fixed candidate, the skill MUST post a second thread reply on the same parent — again via `slack_send_message`:

```text
mcp__plugin_slack_slack__slack_send_message({
  channel_id: "C0A4XMHANP3",
  thread_ts: "<same parent ts>",
  text: ":pr-open: Fix PR: <PR_URL>\nRed-green verified: <unit|e2e> test proves the regression.\nFixes <LIN-ID>"
})
```

The same "tool call, not chat output" rule applies.

### Parent reaction — optional visibility nudge (not on the critical path)

The Slack MCP does not expose `reactions.add`, so the skill cannot set a `:white_check_mark:` reaction on the parent. The thread reply above is sufficient for Processed Detection; the parent reaction is a human-only "visible in channel" nudge. At the end of the run, the skill MAY print a compact list for the human:

```text
Optional: add :white_check_mark: to parent messages for in-channel visibility.
FE-4710 → <permalink>
FE-4711 → <permalink>
```

This is a convenience, not a deliverable — a missing parent reaction does not cause re-ingestion.

## Fix Workflow (auto-invoke red-green-fix)

For every `Y` row whose `Rec` resolved to auto-fix (dedupe clean, verify clean, not on the handoff-exclusion list), the skill MUST — after Linear creation and the `:white_check_mark:` thread reply — invoke the `red-green-fix` skill via the `Skill` tool. This is a real tool call, not a narrative handoff.

### Required Skill tool call

```text
Skill({
  skill: "red-green-fix",
  args: "<composed prompt — see below>"
})
```

Compose `args` as a single self-contained prompt so the sub-invocation has everything it needs without re-reading the Linear issue:

```text
Bug: <title>
Linear: <LIN-ID> (<LINEAR_URL>)
Source: Slack <permalink>
Reporter: <display-name>
Env: <env tags>
Area: <area>
Branch: fix/<lin-id-lowercase>-<short-slug>

Repro:
1. <step>
2. <step>

Expected: <expected behavior>
Actual: <actual behavior>

Test layer (inferred from area):
- ui → Vitest colocated + Playwright e2e tagged @regression
- node-system → Playwright e2e primarily
- workflow / templates → Playwright e2e
- cloud → Vitest if client-side; otherwise STOP and label the Linear issue "needs-backend"

Test naming:
- describe('<LIN-ID>: <one-line bug summary>', ...)
- Playwright test title must include the LIN-ID.

PR body must include:
- "Fixes <LIN-ID>"
- "Source: Slack <permalink>"

Follow the red-green-fix two-commit sequence exactly. Do NOT skip the red commit.
```

The skill MUST wait for `red-green-fix` to return before moving to the next candidate. Process one auto-fix at a time so branch state is deterministic.

### Verifying the invocation ran

After the `Skill` call returns, the skill MUST confirm at least one of:

1. A new git branch named `fix/<lin-id>-*` exists (`git branch --list "fix/<lin-id>-*"`).
2. A PR URL is present in `red-green-fix`'s return payload.

If neither is true, the invocation silently no-op'd. Log the failure to the session log as `auto-fix skipped: invocation returned without branch or PR` and continue — do NOT post the `:pr-open:` thread reply.

### Inputs summary

- **Bug description** — the Linear description (includes repro, env, source permalink).
- **Linear ID** — inserted into the PR body as `Fixes <LIN-ID>`.
- **Branch name** — `fix/<lin-id>-<short-slug>` (e.g. `fix/fe-4711-pro-plan-30min-timeout`; see the sketch after this list).
- **Test layer** — inferred from `area`:
  - `ui` → unit (Vitest) + e2e (Playwright)
  - `node-system` → e2e primarily; unit if isolable
  - `workflow` / `templates` → e2e
  - `cloud` → unit if client-side logic, otherwise flag "backend — out of scope for this repo"

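A sketch of the branch-name construction — the slug rules (kebab-case, trimmed to five words) are an assumption:

```typescript
// Build the fix branch name from the Linear ID and proposed title.
function fixBranchName(linearId: string, title: string): string {
  const slug = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // kebab-case
    .replace(/^-+|-+$/g, '') // trim stray dashes
    .split('-')
    .slice(0, 5) // keep the slug short
    .join('-')
  return `fix/${linearId.toLowerCase()}-${slug}`
}

// fixBranchName('FE-4711', 'Pro plan jobs end at 30 minutes')
// → "fix/fe-4711-pro-plan-jobs-end-at"
```
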
### Handoff-Exclusion list (do NOT auto-invoke red-green-fix)

These rows still get a Linear ticket + `:white_check_mark:` thread reply, but the skill MUST skip the `Skill(skill="red-green-fix")` call and instead post a thread nudge explaining why:

- Repro steps are incomplete (no clear numbered steps, no env) — reply in thread: "Need clearer repro before I can write a failing test. What's the shortest path to reproduce?"
- Fix requires backend / ComfyUI repo changes (not frontend) — label Linear `needs-backend`.
- The Linear ticket was dedupe-linked rather than newly created — an existing owner may already be fixing it.
- Severity is cosmetic AND the reporter hasn't asked for a fix — file the ticket only.
- The fix would touch the `LGraphNode`, `LGraphCanvas`, `LGraph`, or `Subgraph` god-objects (ADR-0003/0008 — always a human decision).
- The Pre-flight Dedupe Gate found an open PR (`pr-open`) or a matching merged PR (`fixed`).

When a row is excluded, record the reason in the session log under `auto-fix excluded: <reason>`.

### Test authoring rules

Both tests MUST be written in the "red" commit BEFORE any fix code (per red-green-fix). Rules specific to bug-dump ingestion:

- **Unit test (Vitest)** — colocated next to the implementation, `<file>.test.ts`. Exercise the specific logic path reproduced by the reporter. One `describe` block named after the Linear ID:

  ```typescript
  // src/components/node/UnetDropdown.test.ts
  describe('FE-4710: unet dropdown missing selected model', () => {
    it('includes the currently-selected model in the list even when not in available models', () => {
      // ...
    })
  })
  ```

- **E2E test (Playwright)** — under `browser_tests/tests/`, follow the `writing-playwright-tests` skill. Tag with `@regression` and include the Linear ID in the test title:

  ```typescript
  test.describe(
    'FE-4710 unet dropdown regression',
    { tag: ['@regression'] },
    () => {
      test('keeps selected model visible in the dropdown', async ({
        comfyPage
      }) => {
        // ...
      })
    }
  )
  ```

- **Mock data types** — follow `docs/guidance/playwright.md`: mock responses typed from `packages/ingest-types`, `packages/registry-types`, `src/schemas/` — never `as any`.

(The Handoff-Exclusion list above governs when `red-green-fix` is NOT invoked.)

### PR body template

The red-green-fix skill's PR template is extended with a `Source` line:

```markdown
## Summary

<Root cause>

- Fixes FE-NNNN
- Source: Slack <permalink>

## Red-Green Verification

| Commit | CI Status | Purpose |
| --- | --- | --- |
| `test: FE-NNNN add failing test for <bug>` | :red_circle: Red | Proves the test catches the bug |
| `fix: <bug summary>` | :green_circle: Green | Proves the fix resolves the bug |

## Test Plan

- [ ] Unit regression test passes locally
- [ ] E2E regression test passes locally (if UI)
- [ ] Manual repro no longer reproduces
- [ ] Linear ticket linked
```

After the PR merges, post the second thread reply on Slack (see Slack Thread Reply § Fix-path reply).

## Emoji Reaction Hints (read-only)

The agent cannot add reactions, but respects human-set reactions when filtering. The canonical team scheme (primary):

| Reaction | Meaning | Action |
| --- | --- | --- |
| `:white_check_mark:` | Ticket created | Skip — already ingested |
| `:pr-open:` | PR open | Skip creation; record PR link in session log |
| `:question:` | Needs more context | Skip creation; agent may post a thread reply asking |
| `:repeat:` | Duplicate | Skip creation; link existing Linear issue in session log |

Incidental reactions observed in the channel — treat as soft hints only, do NOT skip solely on these:

| Reaction | Meaning | Action |
| --- | --- | --- |
| `:eyes:` | Someone is triaging | Still ingestable |
| `:done:` | Reporter resolved | Demote to `resolved` in verify, but still show the row |
| `:+1:` | Acknowledged | Ignore |

Approval-table response code `R` corresponds to `:repeat:` — if you pick `R`, the skill treats the row as a duplicate and asks for the target Linear ID.

## Session Log

Append to `~/temp/bug-dump-ingest/session-YYYY-MM-DD.md`:

```text
Bug Dump Ingest Session -- 2026-04-20 11:40 KST

Window: 2026-04-18 00:00 — 2026-04-20 12:00 KST
Scanned: 28 top-level messages
Skipped (meta/discussion/processed): 14
Proposed: 14
Approved: 11
Created in Linear: 10
Draft-only (creation failed): 1
Linked-only (dedupe): 1
Thread replies posted: 11

Created:
- FE-4710 Unet model dropdown missing selected model -- wavey -- low/ui
- FE-4711 Pro plan jobs end at 30 minutes -- Denys -- high/cloud
- ...

Skipped with reason:
- 1776592837.616399 -- design discussion in thread, not a bug
- ...
```

## Gotchas

### Thread summaries, not raw dumps

Pulling the full thread often adds noise. Summarize replies to: (a) confirmed reproductions by other users, (b) env/version details added in replies, (c) links to related PRs/commits. Drop emoji-only replies, joined-channel notifications, and off-topic chatter.

### Cross-posts are not bugs

When the top-level message is just a link to a Slack message in another channel (e.g. "X posting" with a URL and nothing else), follow the link to the original source and ingest from there — do NOT create a ticket from the cross-post itself.

### Resolved-in-thread messages

If the reporter replies `"No action needed, this is solved"` (see wavey 2026-04-20 08:06), mark the row for SKIP in the approval table rather than auto-skipping. The human may still want a regression-test ticket.

### Permalinks

Construct Slack permalinks as:

```text
https://comfy-organization.slack.com/archives/{CHANNEL_ID}/p{TS_WITH_DOT_REMOVED}
```

E.g. `1776510375.473579` → `p1776510375473579`.

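The same construction as a one-liner (the channel ID and workspace host are taken from the examples above):

```typescript
// Construct a Slack permalink from a channel ID and a dotted ts.
function slackPermalink(channelId: string, ts: string): string {
  return `https://comfy-organization.slack.com/archives/${channelId}/p${ts.replace('.', '')}`
}

// slackPermalink('C0A4XMHANP3', '1776510375.473579')
// → "https://comfy-organization.slack.com/archives/C0A4XMHANP3/p1776510375473579"
```
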
### Attachment handling

Slack file IDs (e.g. `F0AT...`) are private. Do NOT link them directly in Linear. Instead, list the filename and type in the Linear description and include the Slack permalink — anyone with Slack access can see the attachments from the thread.

### No auto-create without approval

Never create Linear issues without a human `Y`. This is a hard rule — the skill exists to reduce human toil, not to replace triage judgment.

## Reference Files

- `reference/linear-api.md` — `@Linear` Slack bot command reference (create, search, link, labels, status).
- `reference/schema.md` — full ticket schema with field-by-field extraction notes.
- `reference/examples.md` — worked examples drawn from real #bug-dump messages.
- `reference/verify-commands.md` — cookbook of false-defect verification commands per bug class.

## Related Skills

- `red-green-fix` — auto-invoked via the `Skill` tool for every eligible `Y` candidate to produce a failing test + fix + PR with the red-green CI proof.
- `writing-playwright-tests` — used by red-green-fix when an e2e test is needed.
- `hardening-flaky-e2e-tests` — if the e2e test added in the fix PR starts flaking, jump to this skill.

# Worked Examples

Real #bug-dump messages (2026-04-17 → 2026-04-20) normalized through the skill.

## Example 1 — Clean bug with repro

**Source message** (wavey, 2026-04-20 08:06):

> unet model dropdown doesnt display all available models, think this is part of a larger issue with model dropdowns..
>
> • open flux.2 klein 4b image edit template
> • open unet drop down --> notice selected model isnt present in the list, even though its selected
> • execute (to check if it flags the model as missing) --> notice it still runs
> No action needed, this is solved

**Thread resolution**: "No action needed, this is solved" — the reporter resolved it in the same message.

**Classification**: bug, but `thread_resolution = solved`. Flag for the human.

**Approval row**:

```text
1 | wavey, 04-20 08:06 | Unet dropdown missing selected model | cloud prod | low | ui | - | - | resolved | N
```

Default recommendation: `N`. If the human overrides to `Y`, file with a "Regression test" label so QA still tracks it.

---

## Example 2 — Clear high-severity cloud bug

**Source message** (Denys Puziak, 2026-04-18 05:45):

> I see two reports about jobs ending in 30 minutes while the user is on the Pro plan
> cc @Hunter
> https://discord.com/channels/.../1494078128971055145

**Classification**: bug, `env: [cloud prod]` (Pro plan = cloud), `severity: high` (paying users), `area: cloud`.

**Proposed title**: `Pro plan jobs end at 30 minutes`

**Description** (excerpt):

```markdown
**Reporter:** Denys Puziak
**Env:** cloud prod
**Severity (proposed):** high
**Area:** cloud

## Repro

1. User on Pro plan submits a job
2. Job ends at 30 minutes instead of the Pro plan limit

## Notes

- Two user reports aggregated by Denys
- cc'd @Hunter

## Source

Slack: <permalink>
Discord thread: https://discord.com/channels/.../1494078128971055145
```

---

## Example 3 — Not a bug (discussion)

**Source message** (Christian Byrne, 2026-04-19 19:00):

> @Glary-Bot okay option A is clearly superior and I feel embarrassed I didn't see that line myself...

**Classification**: discussion (design review chatter). Skip. Log the reason in the session file.

---

## Example 4 — Meta-action / PR planning

**Source message** (Christian Byrne, 2026-04-19 09:30):

> @Glary-Bot how about we make a PR to do:
>
> 1. Audit the rest of the codebase...
> 2. Create a helper in src/base...

**Classification**: discussion (PR-plan proposal). Skip.

---

## Example 5 — Performance regression

**Source message** (Terry Jia, 2026-04-18 12:52):

> With Nodes 2.0, large workflows (hundreds of nodes) make the canvas extremely laggy and unusable for actual work — switching tabs takes several seconds or more. Switching back to Litegraph, performance is significantly better.

**Classification**: bug, `area: node-system`, `severity: high`.

**Dedupe**: Post `@Linear search nodes 2.0 performance canvas lag` (Team: Frontend Engineering, Status: open) in the candidate's thread. Likely matches exist — flag `Dedup: ?` and ask the human which ticket to link to.

---

## Example 6 — Reporter says it's a question, not a report

**Source message** (Luke, 2026-04-17 08:27):

> Is NodeInfo supposed to show information or docs about the node? It just brings up the node sidebar

**Classification**: question → ambiguous. Read the thread. If replies confirm "that's unexpected, should show docs", upgrade to bug. If "yes, that's intended", skip.

Default recommendation in the approval batch: `?` (needs expansion).

---

## Example 7 — Bug with PR already in flight

**Source message** (Pablo, 2026-04-17 08:52):

> when deleting multiple assets on cloud -> the confirmation popup still has the assets hashes as names instead of the display name

**Reaction**: `pr-open (1)` — someone has opened a PR.

**Classification**: `already-filed` branch. Skip creation; in the session log, note "PR already open". If the human wants a tracking Linear ticket anyway, it can still be filed with a link to the PR.

# Linear Slack Bot (@Linear) Reference

The skill drives Linear exclusively through the Linear Slack app (`@Linear`). **There is no Linear MCP, no `LINEAR_API_KEY`, no GraphQL.** Every Linear read/write happens as a Slack message that mentions `@Linear` in the `#bug-dump` thread, and the Linear Slack app performs the action and posts a reply card containing the issue URL.

## Why Slack-only

- The `#bug-dump` thread is already the source of truth; keeping the entire lifecycle (report → ticket → PR → resolution) in one thread means Processed Detection can grep the thread instead of a separate registry.
- No API key rotation, no MCP server install, no OAuth browser flow — works on any machine that already has the Slack MCP configured.
- The Linear Slack app's reply card (with issue URL, title, status, and assignee) IS the canonical receipt; the skill records its `ts` in the session log.

## Prerequisites (one-time, per workspace)

The Comfy Slack workspace must already have the Linear Slack app installed (it is — that's how humans use `@Linear` mentions today) and `#bug-dump` (channel `C0A4XMHANP3`) must have Linear enabled for the `Frontend Engineering` team. Nothing else to configure. If a `@Linear` invocation silently does nothing, the bot isn't present in the channel — surface that to the human rather than retrying.

## Supported operations

Every operation is a `mcp__plugin_slack_slack__slack_send_message` call with `channel_id=C0A4XMHANP3` and `thread_ts=<parent-ts>`. The `text` is a natural-language instruction to the Linear bot. Keep the text concise — Linear parses the first line as the command intent.

### 1. Create an issue from the thread

```text
mcp__plugin_slack_slack__slack_send_message({
  channel_id: "C0A4XMHANP3",
  thread_ts: "<parent-ts>",
  text: "@Linear create\nTeam: Frontend Engineering\nTitle: <title>\nStatus: Triage\nLabels: source:bug-dump, area:<area>, env:<env>, sev:<severity>, reporter:<handle>\n\n<description body>\n\nSource: <slack-permalink>"
})
```

Rules:

- Start with `@Linear create` on its own line — this is the command token the bot keys on.
- Always specify `Team: Frontend Engineering`. Without it, the bot falls back to the Slack workspace's default team, which may not be FE.
- `Status: Triage` pins the initial workflow state.
- `Labels:` — comma-separated. If a label doesn't exist yet in Linear, the bot creates it on first use (verified in Linear workspace settings). Keep the taxonomy exactly as in SKILL.md § Label Taxonomy.
- `<description body>` — markdown per the `reference/schema.md` Description Template. Use real newlines, not literal `\n`.
- End with `Source: <slack-permalink>` so the Linear issue body links back even if the auto-attachment of the parent message fails.

The Linear bot replies in the same thread with a card that contains:

- The Linear URL (`https://linear.app/comfy-org/issue/FE-NNNN`)
- Status, assignee (initially unassigned), and applied labels
- A "View in Linear" button

Parse the URL out of the bot's reply text (or attachments). If no card reply appears within ~10s of polling `slack_read_thread`, treat it as a creation failure — do NOT proceed to the `:white_check_mark:` confirmation reply.

### 2. Search existing open issues (dedupe)

```text
mcp__plugin_slack_slack__slack_send_message({
  channel_id: "C0A4XMHANP3",
  thread_ts: "<parent-ts>",
  text: "@Linear search <keyword-1> <keyword-2>\nTeam: Frontend Engineering\nStatus: open"
})
```

The bot replies with a card listing up to ~5 matching open issues. Parse the identifier (`FE-NNNN`) and URL per row. Treat a hit as a duplicate per SKILL.md § Pre-flight Dedupe Gate, Check 1.

If `@Linear search` is not supported in the installed Slack app version, fall back to Slack-native search across the `#bug-dump` thread replies (previous `@Linear` cards contain title + URL — grep those for the same keywords). Record which path was used in the session log so the human can see dedupe coverage.

### 3. Link an existing issue (dedupe: `L` response)

```text
mcp__plugin_slack_slack__slack_send_message({
  channel_id: "C0A4XMHANP3",
  thread_ts: "<parent-ts>",
  text: "@Linear link FE-4521"
})
```

The bot replies with the linked issue card. The skill then posts its own `:white_check_mark: Linked to Linear: <URL>` confirmation reply (see SKILL.md § Slack Thread Reply).

### 4. Add labels to an existing issue

```text
mcp__plugin_slack_slack__slack_send_message({
  channel_id: "C0A4XMHANP3",
  thread_ts: "<parent-ts>",
  text: "@Linear FE-4521 add-labels pr-open"
})
```

Used when an open PR is discovered after ticket creation and the Linear issue should flip to `pr-open`.

### 5. Change status

```text
mcp__plugin_slack_slack__slack_send_message({
  channel_id: "C0A4XMHANP3",
  thread_ts: "<parent-ts>",
  text: "@Linear FE-4521 status In Progress"
})
```

Rarely used by the skill directly — usually status changes come from the `red-green-fix` PR lifecycle (Linear auto-moves to `In Review` when a PR references `Fixes FE-4521`).

## Description body template

The text that follows the command headers is rendered verbatim as the Linear issue description (markdown). Use this template — see `reference/schema.md` for field-by-field extraction notes:

```markdown
**Reporter:** <slack-display-name>
**Env:** cloud prod / local / electron / ...
**Severity (proposed):** high/medium/low
**Area:** ui / node-system / workflow / cloud / templates

## Repro

1. ...
2. ...

## Expected

...

## Actual

...

## Attachments (in Slack thread)

- image.png (png, 315 KB)
- Screen Recording.mov (mov, 37 MB)

## Source

Slack: <permalink>
Thread summary: <1-3 bullets if the thread adds context>
```

The Slack permalink is load-bearing — it's the canonical route to attachments, reporter, and any follow-up discussion. Do NOT embed Slack file IDs (`F0AT...`) directly; they're permissioned.

## Parsing the bot's reply

After each `slack_send_message` that mentions `@Linear`, poll `slack_read_thread` (with `channel_id=C0A4XMHANP3`, `thread_ts=<parent-ts>`) up to 3 times, ~3s apart. Scan replies authored by the Linear Slack app user for:

- Any `https://linear.app/<org>/issue/FE-\d+` URL → capture as the issue URL.
- The `FE-NNNN` identifier pattern → capture as the issue identifier.
- An error phrase (`couldn't`, `failed`, `not found`, `no team matched`) → treat as failure; surface the full bot text to the human.

Record the bot reply's `ts` alongside the captured URL and identifier in the session log.
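
The scan above is mechanical enough to pin down precisely. A minimal sketch of the poll-and-parse loop, assuming thin wrappers over the Slack MCP tools (`slackReadThread` and `LINEAR_BOT_USER_ID` are stand-ins, not real bindings):

```typescript
// Hypothetical wrapper over mcp__plugin_slack_slack__slack_read_thread.
declare function slackReadThread(args: {
  channel_id: string
  thread_ts: string
}): Promise<Array<{ user: string; text: string; ts: string }>>
// The Linear Slack app's bot user id; resolve it once per workspace.
declare const LINEAR_BOT_USER_ID: string

type BotParse =
  | { ok: true; identifier: string; url: string; botTs: string }
  | { ok: false; error?: string }

const ISSUE_URL = /https:\/\/linear\.app\/[\w-]+\/issue\/(FE-\d+)/
const ERROR_PHRASES = ["couldn't", 'failed', 'not found', 'no team matched']

async function pollForLinearCard(
  threadTs: string,
  attempts = 3,
  delayMs = 3000
): Promise<BotParse> {
  for (let i = 0; i < attempts; i++) {
    const replies = await slackReadThread({
      channel_id: 'C0A4XMHANP3',
      thread_ts: threadTs
    })
    for (const msg of replies.filter((m) => m.user === LINEAR_BOT_USER_ID)) {
      const match = msg.text.match(ISSUE_URL)
      if (match) {
        return { ok: true, identifier: match[1], url: match[0], botTs: msg.ts }
      }
      if (ERROR_PHRASES.some((p) => msg.text.toLowerCase().includes(p))) {
        return { ok: false, error: msg.text } // surface the full bot text
      }
    }
    await new Promise((resolve) => setTimeout(resolve, delayMs))
  }
  return { ok: false } // no bot reply; see "Failure modes & handling"
}
```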

## Failure modes & handling

| Symptom | Likely cause | Handling |
| --- | --- | --- |
| No bot reply within 10s | Linear app not in channel, or bot outage | Halt the batch, surface to human, do NOT fabricate a Linear URL. Remaining approved candidates stay queued for re-run. |
| Bot replies with "no team matched" | Team name typo or Linear workspace drift | Re-send with the exact team name from the Linear workspace (default: `Frontend Engineering`). If it still fails, ask the human to verify. |
| Bot replies with "couldn't parse labels" | One of the labels has syntax the bot rejects | Drop the offending label, re-send; log the partial-label failure so the human can patch after. |
| Bot creates the issue but reply lacks the URL | Rare bot format change | Re-fetch the thread after ~5s; if the URL is still absent, run `@Linear search <title>` and recover the identifier + URL from the result card. |
| Multiple `@Linear` replies match (duplicate card) | The skill retried without polling first | Keep the earliest card's URL; log the extras. Never re-issue `@Linear create` for the same candidate without confirming the first card failed. |

Never retry `@Linear create` without first running `@Linear search` for the same title keywords — a duplicate card is worse than an initial failure because the human has to close one of them manually.
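
A minimal guard that encodes this rule, reusing the hypothetical helpers from the sketch above (`postLinearMention` stands in for `slack_send_message` with an `@Linear` body followed by the polling loop):

```typescript
// Hypothetical wrapper: posts an @Linear thread reply, then polls for the
// resulting bot card via pollForLinearCard(). Name is an assumption.
declare function postLinearMention(
  threadTs: string,
  text: string
): Promise<BotParse>

async function safeCreateRetry(
  threadTs: string,
  title: string,
  createText: string
): Promise<BotParse> {
  const first = await postLinearMention(threadTs, createText)
  if (first.ok) return first

  // Before retrying, search: the "failed" create may actually have landed.
  await postLinearMention(threadTs, `@Linear search ${title}`)
  const existing = await pollForLinearCard(threadTs)
  if (existing.ok) return existing

  return postLinearMention(threadTs, createText)
}
```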

## Why no direct API path

- The Linear MCP (official or community) would require either OAuth setup or `LINEAR_API_KEY` in env — both are per-machine hurdles the skill should not depend on.
- Direct GraphQL against `api.linear.app` has the same key-management cost and bypasses the Slack thread as the audit trail.
- Routing every action through `@Linear` in the thread gives humans full visibility in the channel (the bot's card is the receipt) and Processed Detection becomes a simple Slack thread read.

If a future need requires capabilities the `@Linear` Slack app doesn't expose (bulk operations, private field edits, webhooks), stop and surface the limitation to the human rather than quietly adding an API-key path — the "Slack-only" constraint is intentional.

@@ -1,94 +0,0 @@
# Ticket Schema — Extraction Notes

Field-by-field guidance for normalizing a Slack #bug-dump message into a ticket.

## `slack_ts`

The top-level message timestamp from the `slack_read_channel` response (`Message TS:` field). Always store the dotted form (`1776510375.473579`). This is the ingestion identity used in `processed.json`.
## `slack_permalink`

Construct:

```text
https://comfy-organization.slack.com/archives/C0A4XMHANP3/p<ts-without-dot>
```

Example: `1776510375.473579` → `.../p1776510375473579`.
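
In code, the construction is just a dot strip plus a fixed prefix; a sketch:

```typescript
// Builds the canonical #bug-dump permalink from a dotted message ts.
// Workspace host and channel id are taken from the pattern above.
function bugDumpPermalink(slackTs: string): string {
  // '1776510375.473579' -> 'p1776510375473579'
  const compact = slackTs.replace('.', '')
  return `https://comfy-organization.slack.com/archives/C0A4XMHANP3/p${compact}`
}
```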

## `reporter`

The display name + parenthetical nickname if present. Examples from the channel:

- `Ali Ranjah (wavey)`
- `Denys Puziak`
- `Christian Byrne`

Do NOT use the Slack user ID (`U087MJCDHHC`) in Linear — names are more readable.
## `title`

Rules:

- Start with a verb or noun phrase describing the observed defect, not the reporter.
- ≤ 80 chars.
- Include an env qualifier ("cloud prod", "local dev", "electron") only if ambiguous.
- Strip emoji and reactions from the original message when extracting.

Transformations (a normalizer sketch follows the table):

| Slack message (excerpt) | Title |
| --- | --- |
| "unet model dropdown doesnt display all available models..." | Unet dropdown missing selected model |
| "Dates are broken on Settings -> Secrets. Cloud Prod" | Settings → Secrets dates broken on cloud prod |
| "LTX-2: Audio to VIdeo template results in the "RuntimeError..." error" | LTX-2 Audio-to-Video template RuntimeError on cloud |

## `description`

Structure — see `linear-api.md` § "Description body template". Key rules:

- Lead with the **Repro** numbered list. Extract from the message body; if no steps are given, write "Repro: [Slack message body quoted verbatim]" and flag for the human in approval.
- Preserve the reporter's own words in the Repro section when they include "step 1 / step 2" markers.
- Collapse multi-paragraph asides into "Notes" at the end.
## `env`

Detect from the message text using these terms (a detection sketch follows the table):

| Text in message | Tag |
| --- | --- |
| `cloud prod`, `prod cloud` | `cloud prod` |
| `cloud dev` | `cloud dev` |
| `cloud` | `cloud` (unqual.) |
| `local`, `localhost` | `local` |
| `electron`, `desktop` | `electron` |
| `nodes 2.0`, `LG` | (feature tag, not env) |

A message can have multiple env tags. If none are detectable, set `env: []` and flag "env unclear" in the approval row.
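
The same table as code; ordering matters so that `cloud prod` does not also emit the unqualified `cloud` tag (a sketch, not the skill's actual implementation):

```typescript
// First-match-consumes: more specific patterns are listed first, and each
// match is removed from the text so "cloud prod" never double-tags "cloud".
const ENV_RULES: Array<[RegExp, string]> = [
  [/\b(cloud prod|prod cloud)\b/i, 'cloud prod'],
  [/\bcloud dev\b/i, 'cloud dev'],
  [/\bcloud\b/i, 'cloud'],
  [/\b(local|localhost)\b/i, 'local'],
  [/\b(electron|desktop)\b/i, 'electron']
]

function detectEnvTags(message: string): string[] {
  const tags: string[] = []
  let rest = message
  for (const [pattern, tag] of ENV_RULES) {
    if (pattern.test(rest)) {
      tags.push(tag)
      rest = rest.replace(pattern, '')
    }
  }
  return tags // [] means "env unclear": flag it in the approval row
}
```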

## `severity`

Heuristics in SKILL.md. When uncertain, mark `medium` and note in the approval table: `Sev: medium (flag)`.
## `area`

Single tag. Use the one that best fits; tiebreak toward the more actionable team:

- `cloud` > `workflow` when the reported behavior is specific to cloud-hosted features (billing, queue, jobs)
- `node-system` > `ui` when the defect is canvas interaction, not just visual
- `templates` only when a named template is the subject
## `attachments`

From the `slack_read_channel` message's `Files:` field. Parse the name, ID, and type. Never include the Slack file ID in the Linear description — those are permissioned; include just the filename and type.
## `thread_resolution`

Fetch via `slack_read_thread`. Scan replies for:

- `solved`, `resolved`, `fixed`, `no action needed` → `solved`
- A `:done:` reaction from the reporter → `solved`
- A `https://github.com/Comfy-Org/ComfyUI_frontend/pull/` URL in a reply → `pr-open` (keep, but note in the description)
- Otherwise → `open`

If `solved` and no PR merged, flag in the approval table: reporter marked solved — confirm before filing. A classification sketch follows.
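
```typescript
// Sketch of the resolution classifier. The reply and reaction shapes are
// assumed simplifications of what slack_read_thread returns.
type Resolution = 'solved' | 'pr-open' | 'open'

interface ThreadReply {
  user: string
  text: string
  reactions?: Array<{ name: string; users: string[] }>
}

const PR_URL = 'https://github.com/Comfy-Org/ComfyUI_frontend/pull/'
const SOLVED_WORDS = /\b(solved|resolved|fixed|no action needed)\b/i

function classifyThread(
  replies: ThreadReply[],
  reporterId: string
): Resolution {
  const reporterDone = replies.some((r) =>
    r.reactions?.some((x) => x.name === 'done' && x.users.includes(reporterId))
  )
  if (reporterDone || replies.some((r) => SOLVED_WORDS.test(r.text))) {
    return 'solved'
  }
  if (replies.some((r) => r.text.includes(PR_URL))) return 'pr-open'
  return 'open'
}
```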

@@ -1,99 +0,0 @@
# Verify Commands Cookbook

One-shot commands for each False-Defect Verification class. Keep each under ~30s.
## 1. Check for existing fix PR

```bash
# By keyword in title (note: gh search prs takes --merged, not --state merged)
gh search prs --repo Comfy-Org/ComfyUI_frontend "<keyword>" --merged --limit 5

# By keyword in body
gh pr list --repo Comfy-Org/ComfyUI_frontend --search "<keyword>" --state all --limit 5

# Recent closing PRs near the reported date
gh pr list --repo Comfy-Org/ComfyUI_frontend --state merged \
  --search "merged:>=<YYYY-MM-DD> <keyword>" --limit 10
```

Verify tag: `fixed` if a merged PR explicitly matches; `pr-open` if an open PR matches.
## 2. Check for existing open Linear issue

```text
# Primary: @Linear search in the candidate's bug-dump thread
# mcp__plugin_slack_slack__slack_send_message({
#   channel_id: "C0A4XMHANP3",
#   thread_ts: "<parent-ts>",
#   text: "@Linear search <keyword-1> <keyword-2>\nTeam: Frontend Engineering\nStatus: open"
# })
# → poll slack_read_thread, parse the Linear app's reply card for FE-NNNN matches.
#
# Fallback: grep past @Linear bot replies in the channel for prior ingested titles
# mcp__plugin_slack_slack__slack_search_public({
#   query: "in:<#C0A4XMHANP3> from:@Linear <keyword-1> <keyword-2>"
# })
```

Verify tag: `dedupe` with the `FE-NNNN` identifier in the approval row. See `reference/linear-api.md` § "Search existing open issues (dedupe)" for full handling.
## 3. Feature actually exists in codebase

```bash
# Find the component / feature mentioned
rg -l "<ComponentOrFeatureName>" src/ apps/ --type vue --type ts

# Find a setting key
rg "<setting-key>" src/locales/en/ src/stores/settingStore.ts

# Find a store action
rg "<actionName>" src/stores/ --type ts
```

Verify tag: `stale` if 0 hits AND the feature name is specific (not a generic word).
## 4. Intended behavior check

```bash
# Check docs and release notes
rg -l "<feature keyword>" docs/ CHANGELOG.md

# Check if the behavior is asserted in an existing test (green today).
# Use -g globs so this works without shell globstar:
rg "<observed behavior>" -g '*.test.ts' -g '*.spec.ts' src/ browser_tests/
```

Verify tag: `expected` if docs describe this as the intended behavior, or a test asserts it.
## 5. Reporter self-resolution

Already gathered via `slack_read_thread`. Look for the reporter's own replies containing:

- "solved", "resolved", "fixed", "no action needed", "nvm", "my bad"
- A `:done:` reaction from the reporter
- A `:white_check_mark:` reaction

Verify tag: `resolved`.
## 6. Env-specific / local setup

If the message mentions "my machine", "my proxy", "my docker", "my cache" AND no other reporter has confirmed in-thread:

```bash
# Check the thread for cross-user confirmations
# slack_read_thread → count distinct users replying with "same", "repro'd", "+1"
```

Verify tag: `env` if only the reporter is affected.
## 7. Cross-post (X posting)

If the top-level message is just a link + "X posting":

```bash
# Follow the link — use slack_search_public to find the original thread
# slack_search_public({ query: "<in:channel from:@reporter> <before:date>" })
```

If the original is already ingestable, ingest from the original's permalink. If it's a GitHub issue, prefer linking that GitHub issue to the Linear ticket instead of creating two entries.

Verify tag: `cross-post` with the resolved source permalink.

BIN .github/pr-images/fe-237-before-after.png (vendored, new binary file, 44 KiB; not shown)
@@ -76,7 +76,7 @@ const executeTask = async (task: MaintenanceTask) => {

    message = t('maintenance.error.taskFailed')
  } catch (error) {
    message = (error as Error)?.message
    message = error instanceof Error ? error.message : undefined
  }

  toast.add({

@@ -66,7 +66,7 @@ class MaintenanceTaskRunner {
      this.error = undefined
      return true
    } catch (error) {
      this.error = (error as Error)?.message
      this.error = error instanceof Error ? error.message : String(error)
      throw error
    } finally {
      this.executing = false

@@ -69,6 +69,50 @@ test.describe('Homepage @smoke', () => {
    ).toBeVisible()
  })

  test('CaseStudySpotlight CTA sizes to its content, not the column', async ({
    page
  }) => {
    const contentColumn = page.getByTestId('case-study-content')
    const cta = contentColumn.getByRole('link', {
      name: /see all case studies/i
    })

    await cta.scrollIntoViewIfNeeded()
    await expect(cta).toBeVisible()

    const [columnBox, ctaBox] = await Promise.all([
      contentColumn.boundingBox(),
      cta.boundingBox()
    ])

    expect(columnBox).not.toBeNull()
    expect(ctaBox).not.toBeNull()
    expect(ctaBox!.width).toBeLessThan(columnBox!.width * 0.7)
  })

  test('CaseStudySpotlight CTA has breathing room above it on mobile @mobile', async ({
    page
  }) => {
    const contentColumn = page.getByTestId('case-study-content')
    const subheading = contentColumn.getByText(
      /Videos & case studies from teams/i
    )
    const cta = contentColumn.getByRole('link', {
      name: /see all case studies/i
    })

    await cta.scrollIntoViewIfNeeded()

    const [subBox, ctaBox] = await Promise.all([
      subheading.boundingBox(),
      cta.boundingBox()
    ])

    expect(subBox).not.toBeNull()
    expect(ctaBox).not.toBeNull()
    expect(ctaBox!.y - (subBox!.y + subBox!.height)).toBeGreaterThanOrEqual(24)
  })

  test('BuildWhatSection is visible', async ({ page }) => {
    // "DOESN'T EXIST" is the actual badge text rendered in the Build What section
    await expect(page.getByText("DOESN'T EXIST")).toBeVisible()

apps/website/scripts/README.md (new file, 83 lines)
@@ -0,0 +1,83 @@

# Website Scripts

## `refresh-ashby-snapshot.ts`

Pulls the latest job postings from Ashby and writes `src/data/ashby-roles.snapshot.json`. Invoked by the `Release: Website` GitHub Actions workflow; also runnable locally via `pnpm --filter @comfyorg/website ashby:refresh-snapshot`.

## `process-videos.sh`

Generates multi-resolution VP9/WebM + H.264/MP4 variants and a poster frame for marketing videos using `ffmpeg`. Run **locally** before uploading the outputs to `media.comfy.org`; this is not wired into CI.

```sh
apps/website/scripts/process-videos.sh \
  ./video-sources \
  ./dist/videos \
  "640 960 1280 1920"
```

### Output

For each source video at `./video-sources/foo.mp4`, you get:

```text
foo-640.webm   foo-640.mp4
foo-960.webm   foo-960.mp4
foo-1280.webm  foo-1280.mp4
foo-1920.webm  foo-1920.mp4
foo-poster.jpg
```

The naming convention is enforced by `buildVideoSources()` in `src/utils/video.ts`, which the `<SiteVideo>` Vue component uses to emit `<source>` URLs.

### Pairing with `<SiteVideo>`

Once the assets are uploaded, render them with:

```vue
<SiteVideo
  name="foo"
  base-url="https://media.comfy.org/website/marketing"
  :width="1280"
  :formats="['webm', 'mp4']"
  poster="https://media.comfy.org/website/marketing/foo-poster.jpg"
  autoplay
  loop
/>
```

### `<SiteVideo>` vs `<VideoPlayer>`

- **`SiteVideo`** — lightweight multi-source `<video>` for decorative or autoplay marketing clips. No custom controls, no captions UI.
- **`VideoPlayer`** — full-featured player with custom scrubber, mute, fullscreen, and caption toggles. Use this for content with subtitles or user-driven playback.

If you need both responsive sources and the rich `VideoPlayer` chrome, the two are not yet combined; either pick one or extend `VideoPlayer` to accept a source list.

### Encoder choices

- **VP9/WebM** at CRF 32 — preferred by Chrome and Firefox; smaller files.
- **H.264/MP4** at CRF 23, High profile, `+faststart` — universal fallback, required for Safari iOS.
- **Poster JPG** at q4 — extracted from t=1s when the clip is long enough, otherwise t=0; scaled to 1280w. Use this as the `poster` attribute so the video shows something while loading.

### Why a single resolution per video

`<source media="...">` inside `<video>` is unreliable across browsers (Safari ignores it). The simplest correct strategy is to ship one well-sized resolution and let CSS scale it down on smaller viewports. The script generates multiple widths so you can pick a different one per page (e.g. 1280w for a hero, 640w for a thumbnail), or wire up JavaScript-based selection later if metrics demand it.

apps/website/scripts/process-videos.sh (new executable file, 110 lines)
@@ -0,0 +1,110 @@

#!/usr/bin/env bash
#
# Generate multi-resolution VP9/WebM + H.264/MP4 variants and a poster frame
# for every source video in a given directory. Intended to be run locally
# before uploading the outputs to media.comfy.org.
#
# Usage:
#   apps/website/scripts/process-videos.sh <input-dir> <output-dir> [widths]
#
# Example:
#   apps/website/scripts/process-videos.sh \
#     ./video-sources \
#     ./dist/videos \
#     "640 960 1280 1920"
#
# Defaults to widths "1280" if omitted.
#
# Output naming matches buildVideoSources() in src/utils/video.ts:
#   <name>-<width>.webm
#   <name>-<width>.mp4
#   <name>-poster.jpg   (single 1280w poster, suitable for SiteVideo)
#
# Requires ffmpeg and ffprobe on PATH. Tested with ffmpeg 6.x and 7.x.

set -euo pipefail

if [[ $# -lt 2 ]]; then
  cat <<USAGE >&2
Usage: $0 <input-dir> <output-dir> [widths]
  widths: space-separated list, e.g. "640 1280 1920" (default: "1280")
USAGE
  exit 64
fi

input_dir=$1
output_dir=$2
widths=${3:-1280}

for tool in ffmpeg ffprobe; do
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "error: $tool not found on PATH" >&2
    exit 127
  fi
done

if [[ ! -d $input_dir ]]; then
  echo "error: input dir not found: $input_dir" >&2
  exit 66
fi

mkdir -p "$output_dir"

shopt -s nullglob nocaseglob
sources=("$input_dir"/*.{mp4,mov,webm,mkv})
shopt -u nullglob nocaseglob

if [[ ${#sources[@]} -eq 0 ]]; then
  echo "error: no source videos in $input_dir (looked for .mp4 .mov .webm .mkv)" >&2
  exit 66
fi

for src in "${sources[@]}"; do
  name=$(basename "$src")
  name=${name%.*}
  echo "==> $name"

  for w in $widths; do
    webm_out="$output_dir/${name}-${w}.webm"
    mp4_out="$output_dir/${name}-${w}.mp4"

    echo "  encoding ${w}w VP9/WebM -> $webm_out"
    ffmpeg -y -hide_banner -loglevel error \
      -i "$src" \
      -vf "scale=${w}:-2:flags=lanczos" \
      -c:v libvpx-vp9 -b:v 0 -crf 32 -row-mt 1 -tile-columns 2 \
      -c:a libopus -b:a 96k \
      -f webm "$webm_out"

    echo "  encoding ${w}w H.264/MP4 -> $mp4_out"
    ffmpeg -y -hide_banner -loglevel error \
      -i "$src" \
      -vf "scale=${w}:-2:flags=lanczos" \
      -c:v libx264 -crf 23 -preset slow -profile:v high -pix_fmt yuv420p \
      -c:a aac -b:a 128k \
      -movflags +faststart \
      "$mp4_out"
  done

  poster_out="$output_dir/${name}-poster.jpg"
  duration_raw=$(ffprobe -v error -show_entries format=duration \
    -of default=noprint_wrappers=1:nokey=1 "$src" 2>/dev/null || true)
  if [[ $duration_raw =~ ^[0-9]+([.][0-9]+)?$ ]]; then
    duration="$duration_raw"
  else
    duration=0
  fi
  if awk -v d="$duration" 'BEGIN { exit !(d >= 1.0) }'; then
    poster_seek=1
  else
    poster_seek=0
  fi
  echo "  extracting poster (t=${poster_seek}s) -> $poster_out"
  ffmpeg -y -hide_banner -loglevel error \
    -ss "$poster_seek" -i "$src" \
    -vframes 1 -vf "scale=1280:-2:flags=lanczos" \
    -q:v 4 \
    "$poster_out"
done

echo "done. upload contents of $output_dir to media.comfy.org."

apps/website/src/assets/marketing/README.md (new file, 51 lines)
@@ -0,0 +1,51 @@

# Marketing Assets

Source images committed here are processed by Astro at build time and emitted as multiple formats (AVIF, WebP) at multiple widths (640w, 960w, 1280w, 1920w).

## Usage

Drop a high-resolution source image (PNG or JPG) here, then render it with Astro's built-in `<Picture>` component plus the shared defaults:

```astro
---
import { Picture } from 'astro:assets'
import {
  MARKETING_FORMATS,
  MARKETING_WIDTHS
} from '../utils/marketingImage'
import hero from '../assets/marketing/hero.png'
---
<Picture
  src={hero}
  alt="ComfyUI workflow preview"
  formats={[...MARKETING_FORMATS]}
  widths={[...MARKETING_WIDTHS]}
  sizes="(max-width: 768px) 100vw, 50vw"
/>
```

The component generates a `<picture>` element with `<source>` tags for AVIF and WebP, plus an `<img>` fallback. Output files are hashed and emitted under `dist/_website/` for long-term caching.

A custom Astro wrapper component is intentionally not provided: Astro's discriminated union `LocalImageProps | RemoteImageProps` for `<Picture>` makes a thin wrapper that mutates `widths` / `formats` impractical to type safely without `as` casts. The shared constants give us the same consistency benefit without that cost.

## When to use this vs. `media.comfy.org`

- **Use `src/assets/marketing/`** for static marketing images that are part of page content (hero shots, product imagery, illustrations). Build-time processing gives you AVIF/WebP variants automatically.
- **Use `media.comfy.org`** for video content, large/changing image libraries (gallery), and anything shared across properties.

## Source image guidelines

- Provide the largest size you'll ever need (≥1920px wide).
- PNG for screenshots/illustrations with sharp edges; JPG for photographs.
- Astro will downscale; it will not upscale. Always supply at least 1920w.

@@ -88,7 +88,7 @@ const contactColumn = {
  { label: t('footer.sales', locale), href: routes.contact },
  {
    label: t('footer.support', locale),
    href: externalLinks.discord,
    href: externalLinks.support,
    external: true
  },
  { label: t('footer.press', locale), href: 'mailto:press@comfy.org' }

apps/website/src/components/common/SiteVideo.vue (new file, 68 lines)
@@ -0,0 +1,68 @@

<script setup lang="ts">
import { cn } from '@comfyorg/tailwind-utils'
import { computed } from 'vue'

import { buildVideoSources, videoKey } from '../../utils/video'
import type { VideoFormat } from '../../utils/video'

const {
  name,
  baseUrl,
  width = 1280,
  formats = ['webm', 'mp4'],
  poster,
  alt,
  autoplay = false,
  loop = false,
  muted = autoplay,
  controls = false,
  preload = autoplay ? 'auto' : 'metadata',
  containerClass,
  videoClass
} = defineProps<{
  name: string
  baseUrl: string
  width?: number
  formats?: VideoFormat[]
  poster?: string
  alt?: string
  autoplay?: boolean
  loop?: boolean
  muted?: boolean
  controls?: boolean
  preload?: 'auto' | 'metadata' | 'none'
  containerClass?: string
  videoClass?: string
}>()

const sources = computed(() =>
  buildVideoSources({ name, baseUrl, width, formats })
)
const remountKey = computed(() => videoKey(sources.value))
const decorative = computed(() => !alt && !controls)
</script>

<template>
  <div :class="cn('relative', containerClass)">
    <video
      :key="remountKey"
      :class="cn('size-full', videoClass)"
      :poster
      :preload
      :autoplay
      :loop
      :muted
      :controls
      :aria-label="alt"
      :aria-hidden="decorative ? true : undefined"
      playsinline
    >
      <source
        v-for="source in sources"
        :key="source.src"
        :src="source.src"
        :type="source.type"
      />
    </video>
  </div>
</template>

@@ -35,7 +35,10 @@ const routes = getRoutes(locale)
    </div>

    <!-- Right: content -->
    <div class="flex flex-col justify-between p-6 lg:flex-1">
    <div
      data-testid="case-study-content"
      class="flex flex-col justify-between p-6 lg:flex-1"
    >
      <div class="flex flex-col gap-8">
        <p
          class="text-primary-comfy-yellow text-sm font-bold tracking-widest uppercase"
@@ -52,12 +55,8 @@ const routes = getRoutes(locale)
        </p>
      </div>

      <div class="flex flex-col gap-3 sm:flex-row">
        <BrandButton
          :href="routes.customers"
          variant="outline"
          class="flex-1 text-center"
        >
      <div class="mt-8 flex flex-col items-start gap-3 sm:flex-row lg:mt-0">
        <BrandButton :href="routes.customers" variant="outline">
          {{ t('caseStudy.seeAll', locale) }}
        </BrandButton>
      </div>

@@ -106,6 +106,11 @@ function onNavKeydown(event: KeyboardEvent) {
  navButtons()?.[next]?.focus({ preventScroll: true })
}

function onCategoryHover(index: number) {
  if (isEnabled.value) return
  activeCategory.value = index
}

function travelRange(el: HTMLElement) {
  if (window.matchMedia('(min-width: 1024px)').matches) return 150

@@ -116,31 +121,29 @@ function travelRange(el: HTMLElement) {
}

const pinScrubEnd = `+=${categories.length * VH_PER_ITEM}%`
const parallaxMediaQuery = '(max-width: 1023px)'
useParallax([rightImgRef], {
  trigger: sectionRef,
  fromY: (el) => -travelRange(el),
  y: (el) => travelRange(el),
  start: 'top top',
  end: pinScrubEnd
  end: pinScrubEnd,
  mediaQuery: parallaxMediaQuery
})
useParallax([leftImgRef], {
  trigger: sectionRef,
  fromY: (el) => travelRange(el),
  y: (el) => -travelRange(el),
  start: 'top top',
  end: pinScrubEnd
  end: pinScrubEnd,
  mediaQuery: parallaxMediaQuery
})
</script>

<template>
  <section
    ref="sectionRef"
    :class="
      cn(
        'bg-primary-comfy-ink relative isolate overflow-x-clip pt-20 lg:py-24',
        isEnabled && 'lg:h-[calc(100vh+60px)]'
      )
    "
    class="bg-primary-comfy-ink relative isolate overflow-x-clip pt-20 lg:h-[calc(100vh+60px)] lg:py-24"
  >
    <svg class="absolute size-0" width="0" height="0" aria-hidden="true">
      <defs>

@@ -202,6 +205,8 @@ useParallax([leftImgRef], {
        "
        :aria-current="index === activeCategory ? 'true' : undefined"
        @click="scrollToIndex(index)"
        @mouseenter="onCategoryHover(index)"
        @focus="onCategoryHover(index)"
      >
        {{ category.label }}
      </button>

@@ -101,17 +101,9 @@ const features: IncludedFeature[] = [
          class="mt-0.5 size-4 shrink-0"
          aria-hidden="true"
        />
        <div>
          <p class="text-primary-comfy-canvas text-sm font-medium">
            {{ t(feature.titleKey, locale) }}
          </p>
          <span
            v-if="feature.isComingSoon"
            class="text-primary-comfy-yellow mt-1 inline-block text-xs"
          >
            {{ t('pricing.included.comingSoon', locale) }}
          </span>
        </div>
        <p class="text-primary-comfy-canvas text-sm font-medium">
          {{ t(feature.titleKey, locale) }}
        </p>
      </div>

      <!-- Description -->

@@ -20,6 +20,9 @@ interface PinScrubOptions {
/** Viewport-height percentage each category occupies in the scroll distance. */
export const VH_PER_ITEM = 20

/** Pin/scrub is mobile-only — desktop uses hover-based category switching. */
const PIN_SCRUB_MEDIA_QUERY = '(max-width: 1023px)'

function interpolateY(
  index: number,
  buttonCenters: number[],
@@ -66,7 +69,8 @@ export function usePinScrub(refs: PinScrubRefs, options: PinScrubOptions) {
      !refs.section.value ||
      !refs.content.value ||
      !refs.nav.value ||
      prefersReducedMotion()
      prefersReducedMotion() ||
      !window.matchMedia(PIN_SCRUB_MEDIA_QUERY).matches
    )
      return
    const section: HTMLElement = refs.section.value

@@ -35,6 +35,7 @@ export const externalLinks = {
  docsApi: 'https://docs.comfy.org/api-reference/cloud',
  github: 'https://github.com/Comfy-Org/ComfyUI',
  platform: 'https://platform.comfy.org',
  support: 'https://support.comfy.org/hc/en-us',
  workflows: 'https://comfy.org/workflows',
  youtube: 'https://www.youtube.com/@ComfyOrg'
} as const

@@ -1,24 +1,10 @@
{
  "fetchedAt": "2026-04-24T18:59:03.989Z",
  "fetchedAt": "2026-05-02T20:15:18.321Z",
  "departments": [
    {
      "name": "DESIGN",
      "key": "design",
      "roles": [
        {
          "id": "4c5d6afb78652df7",
          "title": "Freelance Motion Designer",
          "department": "Design",
          "location": "San Francisco",
          "applyUrl": "https://jobs.ashbyhq.com/comfy-org/a7ccc2b4-4d9d-4e04-b39c-28a711995b5b/application"
        },
        {
          "id": "0f5256cf302e552b",
          "title": "Creative Artist",
          "department": "Design",
          "location": "San Francisco",
          "applyUrl": "https://jobs.ashbyhq.com/comfy-org/19ba10aa-4961-45e8-8473-66a8a7a8079d/application"
        },
        {
          "id": "e915f2c78b17f93b",
          "title": "Senior Product Designer",
@@ -33,13 +19,6 @@
          "location": "San Francisco",
          "applyUrl": "https://jobs.ashbyhq.com/comfy-org/abc787b9-ad85-421c-8218-debd23bea096/application"
        },
        {
          "id": "5746486d87874937",
          "title": "Graphic Designer",
          "department": "Design",
          "location": "San Francisco",
          "applyUrl": "https://jobs.ashbyhq.com/comfy-org/49fa0b07-3fa1-4a3a-b2c6-d2cc684ad63f/application"
        },
        {
          "id": "547b6ba622c800a5",
          "title": "Senior Product Designer - Craft",
@@ -115,6 +94,13 @@
          "department": "Engineering",
          "location": "San Francisco",
          "applyUrl": "https://jobs.ashbyhq.com/comfy-org/9e4b9029-c3e9-436b-82c4-a1a9f1b8c16e/application"
        },
        {
          "id": "2eb53e8943cc9396",
          "title": "Growth Engineer",
          "department": "Engineering",
          "location": "San Francisco",
          "applyUrl": "https://jobs.ashbyhq.com/comfy-org/f1fdde76-84ae-48c1-b0f9-9654dd8e7de5/application"
        }
      ]
    },
@@ -122,6 +108,27 @@
      "name": "MARKETING",
      "key": "marketing",
      "roles": [
        {
          "id": "4c5d6afb78652df7",
          "title": "Freelance Motion Designer",
          "department": "Marketing",
          "location": "San Francisco",
          "applyUrl": "https://jobs.ashbyhq.com/comfy-org/a7ccc2b4-4d9d-4e04-b39c-28a711995b5b/application"
        },
        {
          "id": "0f5256cf302e552b",
          "title": "Creative Artist",
          "department": "Marketing",
          "location": "San Francisco",
          "applyUrl": "https://jobs.ashbyhq.com/comfy-org/19ba10aa-4961-45e8-8473-66a8a7a8079d/application"
        },
        {
          "id": "5746486d87874937",
          "title": "Graphic Designer",
          "department": "Marketing",
          "location": "San Francisco",
          "applyUrl": "https://jobs.ashbyhq.com/comfy-org/49fa0b07-3fa1-4a3a-b2c6-d2cc684ad63f/application"
        },
        {
          "id": "b5803a0d4785d406",
          "title": "Lifecycle Growth Marketer",
@@ -144,7 +151,7 @@
      "roles": [
        {
          "id": "ec68ae44dd5943c9",
          "title": "Senior Technical Recruiter",
          "title": "Talent Lead",
          "department": "Operations",
          "location": "San Francisco",
          "applyUrl": "https://jobs.ashbyhq.com/comfy-org/d5008532-c45d-46e6-ba2c-20489d364362/application"

@@ -914,9 +914,9 @@ const translations = {
    'zh-CN': '我应该选择 Comfy Cloud 还是本地 ComfyUI(自托管)?'
  },
  'cloud.faq.3.a': {
    en: "Comfy Cloud (beta) has zero setup, is easy to share with your team, and is faster than most GPUs you can run on a desktop workstation. You can immediately run the best models and workflows from the community on Comfy Cloud.\nLocal ComfyUI is infinitely customizable, works offline, and you don't need to worry about queue times. However, depending on what you want to create, you might need to have a good GPU and some amount of technical knowledge to install community-created custom nodes.",
    en: "Comfy Cloud has zero setup, is easy to share with your team, and is faster than most GPUs you can run on a desktop workstation. You can immediately run the best models and workflows from the community on Comfy Cloud.\nLocal ComfyUI is infinitely customizable, works offline, and you don't need to worry about queue times. However, depending on what you want to create, you might need to have a good GPU and some amount of technical knowledge to install community-created custom nodes.",
    'zh-CN':
      'Comfy Cloud(测试版)无需任何设置,方便与团队共享,比大多数桌面工作站 GPU 更快。您可以立即在 Comfy Cloud 上运行社区中最好的模型和工作流。\n本地 ComfyUI 可以无限定制,支持离线工作,无需担心排队时间。但根据您的创作需求,可能需要一块好的 GPU 以及一定的技术知识来安装社区创建的自定义节点。'
      'Comfy Cloud 无需任何设置,方便与团队共享,比大多数桌面工作站 GPU 更快。您可以立即在 Comfy Cloud 上运行社区中最好的模型和工作流。\n本地 ComfyUI 可以无限定制,支持离线工作,无需担心排队时间。但根据您的创作需求,可能需要一块好的 GPU 以及一定的技术知识来安装社区创建的自定义节点。'
  },
  'cloud.faq.4.q': {
    en: 'Do I need a GPU or a strong computer to use Comfy Cloud?',
@@ -1280,10 +1280,6 @@ const translations = {
    en: 'Run multiple workflows in parallel to speed up your pipeline.',
    'zh-CN': '并行运行多个工作流,加速你的流程。'
  },
  'pricing.included.comingSoon': {
    en: 'coming soon',
    'zh-CN': '即将推出'
  },

  // VideoPlayer
  'player.play': { en: 'Play', 'zh-CN': '播放' },

apps/website/src/utils/marketingImage.ts (new file, 3 lines)
@@ -0,0 +1,3 @@

export const MARKETING_FORMATS = ['avif', 'webp'] as const

export const MARKETING_WIDTHS = [640, 960, 1280, 1920] as const

apps/website/src/utils/video.test.ts (new file, 111 lines)
@@ -0,0 +1,111 @@

import { describe, expect, it } from 'vitest'

import { buildVideoSources, videoKey } from './video'

describe('buildVideoSources', () => {
  it('builds a source per requested format', () => {
    const sources = buildVideoSources({
      name: 'hero',
      baseUrl: 'https://media.comfy.org/website/marketing',
      width: 1280,
      formats: ['webm', 'mp4']
    })

    expect(sources).toEqual([
      {
        src: 'https://media.comfy.org/website/marketing/hero-1280.webm',
        type: 'video/webm',
        format: 'webm'
      },
      {
        src: 'https://media.comfy.org/website/marketing/hero-1280.mp4',
        type: 'video/mp4',
        format: 'mp4'
      }
    ])
  })

  it('preserves caller-supplied format order', () => {
    const sources = buildVideoSources({
      name: 'clip',
      baseUrl: 'https://cdn.example.com/v',
      width: 960,
      formats: ['mp4', 'webm']
    })

    expect(sources.map((s) => s.format)).toEqual(['mp4', 'webm'])
  })

  it('strips a single trailing slash from baseUrl', () => {
    const sources = buildVideoSources({
      name: 'reel',
      baseUrl: 'https://media.comfy.org/website/marketing/',
      width: 1920,
      formats: ['webm']
    })

    expect(sources[0]?.src).toBe(
      'https://media.comfy.org/website/marketing/reel-1920.webm'
    )
  })

  it('returns an empty list when no formats are requested', () => {
    const sources = buildVideoSources({
      name: 'x',
      baseUrl: 'https://example.com',
      width: 640,
      formats: []
    })

    expect(sources).toEqual([])
  })
})

describe('videoKey', () => {
  it('changes when the source URL list changes', () => {
    const at1280 = buildVideoSources({
      name: 'hero',
      baseUrl: 'https://media.comfy.org/m',
      width: 1280,
      formats: ['webm', 'mp4']
    })
    const at640 = buildVideoSources({
      name: 'hero',
      baseUrl: 'https://media.comfy.org/m',
      width: 640,
      formats: ['webm', 'mp4']
    })

    expect(videoKey(at1280)).not.toBe(videoKey(at640))
  })

  it('is stable across repeated calls with the same inputs', () => {
    const args = {
      name: 'hero',
      baseUrl: 'https://media.comfy.org/m',
      width: 1280,
      formats: ['webm', 'mp4'] as const
    }

    expect(
      videoKey(buildVideoSources({ ...args, formats: [...args.formats] }))
    ).toBe(videoKey(buildVideoSources({ ...args, formats: [...args.formats] })))
  })

  it('reflects format-order changes', () => {
    const webmFirst = buildVideoSources({
      name: 'hero',
      baseUrl: 'https://media.comfy.org/m',
      width: 1280,
      formats: ['webm', 'mp4']
    })
    const mp4First = buildVideoSources({
      name: 'hero',
      baseUrl: 'https://media.comfy.org/m',
      width: 1280,
      formats: ['mp4', 'webm']
    })

    expect(videoKey(webmFirst)).not.toBe(videoKey(mp4First))
  })
})

apps/website/src/utils/video.ts (new file, 49 lines)
@@ -0,0 +1,49 @@

/** @knipIgnoreUsedByStackedPR */
export type VideoFormat = 'webm' | 'mp4'

/** @knipIgnoreUsedByStackedPR */
export type VideoSource = {
  src: string
  type: `video/${VideoFormat}`
  format: VideoFormat
}

const MIME_TYPES: Record<VideoFormat, VideoSource['type']> = {
  webm: 'video/webm',
  mp4: 'video/mp4'
}

type BuildArgs = {
  name: string
  baseUrl: string
  width: number
  formats: VideoFormat[]
}

/**
 * Expects assets named `${name}-${width}.${format}` under `${baseUrl}/`,
 * matching the output of `apps/website/scripts/process-videos.sh`.
 */
export function buildVideoSources({
  name,
  baseUrl,
  width,
  formats
}: BuildArgs): VideoSource[] {
  const base = baseUrl.endsWith('/') ? baseUrl.slice(0, -1) : baseUrl

  return formats.map((format) => ({
    src: `${base}/${name}-${width}.${format}`,
    type: MIME_TYPES[format],
    format
  }))
}

/**
 * Stable identifier for a list of video sources, suitable as a Vue `key`.
 * Browsers do not reload a `<video>` when nested `<source>` children change;
 * keying the parent forces a remount when the source set changes.
 */
export function videoKey(sources: VideoSource[]): string {
  return sources.map((s) => s.src).join('|')
}

@@ -13,45 +13,35 @@ test.describe('Keyboard shortcut actions', { tag: '@keyboard' }, () => {
    await comfyPage.setup()
  })

  test('Ctrl+Z undoes the last graph change', async ({ comfyPage }) => {
  test('Ctrl+Z undoes and Ctrl+Shift+Z redoes the last graph change', async ({
    comfyPage
  }) => {
    const initialNodeCount = await comfyPage.nodeOps.getNodeCount()

    await comfyPage.page.evaluate(() => {
      const node = window.LiteGraph!.createNode('Note')
      window.app!.graph!.add(node)
    await test.step('Ctrl+Z undoes the last graph change', async () => {
      await comfyPage.page.evaluate(() => {
        const node = window.LiteGraph!.createNode('Note')
        window.app!.graph!.add(node)
      })
      await comfyPage.nextFrame()
      await expect
        .poll(() => comfyPage.nodeOps.getNodeCount())
        .toBe(initialNodeCount + 1)

      await comfyPage.canvas.click()
      await comfyPage.page.keyboard.press('ControlOrMeta+z')

      await expect
        .poll(() => comfyPage.nodeOps.getNodeCount())
        .toBe(initialNodeCount)
    })
    await comfyPage.nextFrame()
    await expect
      .poll(() => comfyPage.nodeOps.getNodeCount())
      .toBe(initialNodeCount + 1)

    await comfyPage.canvas.click()
    await comfyPage.page.keyboard.press('ControlOrMeta+z')

    await expect
      .poll(() => comfyPage.nodeOps.getNodeCount())
      .toBe(initialNodeCount)
  })

  test('Ctrl+Shift+Z redoes after undo', async ({ comfyPage }) => {
    const initialNodeCount = await comfyPage.nodeOps.getNodeCount()

    await comfyPage.page.evaluate(() => {
      const node = window.LiteGraph!.createNode('Note')
      window.app!.graph!.add(node)
    await test.step('Ctrl+Shift+Z redoes after undo', async () => {
      await comfyPage.page.keyboard.press('ControlOrMeta+Shift+z')
      await expect
        .poll(() => comfyPage.nodeOps.getNodeCount())
        .toBe(initialNodeCount + 1)
    })
    await comfyPage.nextFrame()

    await comfyPage.canvas.click()
    await comfyPage.page.keyboard.press('ControlOrMeta+z')
    await expect
      .poll(() => comfyPage.nodeOps.getNodeCount())
      .toBe(initialNodeCount)

    await comfyPage.page.keyboard.press('ControlOrMeta+Shift+z')
    await expect
      .poll(() => comfyPage.nodeOps.getNodeCount())
      .toBe(initialNodeCount + 1)
  })

  test('Ctrl+S opens save dialog', async ({ comfyPage }) => {
@@ -62,25 +52,23 @@ test.describe('Keyboard shortcut actions', { tag: '@keyboard' }, () => {
    await expect(saveDialog).toBeVisible()
  })

  test('Ctrl+, opens settings dialog', async ({ comfyPage }) => {
    await comfyPage.page.keyboard.down('ControlOrMeta')
    await comfyPage.page.keyboard.press(',')
    await comfyPage.page.keyboard.up('ControlOrMeta')

  test('Ctrl+, opens and Escape closes settings dialog', async ({
    comfyPage
  }) => {
    const settingsDialog = comfyPage.page.getByTestId('settings-dialog')
    await expect(settingsDialog).toBeVisible()
  })

  test('Escape closes settings dialog', async ({ comfyPage }) => {
    await comfyPage.page.keyboard.down('ControlOrMeta')
    await comfyPage.page.keyboard.press(',')
    await comfyPage.page.keyboard.up('ControlOrMeta')
    await test.step('Ctrl+, opens settings dialog', async () => {
      await comfyPage.page.keyboard.down('ControlOrMeta')
      await comfyPage.page.keyboard.press(',')
      await comfyPage.page.keyboard.up('ControlOrMeta')

      const settingsDialog = comfyPage.page.getByTestId('settings-dialog')
      await expect(settingsDialog).toBeVisible()
      await expect(settingsDialog).toBeVisible()
    })

    await comfyPage.page.keyboard.press('Escape')
    await expect(settingsDialog).toBeHidden()
    await test.step('Escape closes settings dialog', async () => {
      await comfyPage.page.keyboard.press('Escape')
      await expect(settingsDialog).toBeHidden()
    })
  })

  test('Delete key removes selected nodes', async ({ comfyPage }) => {

@@ -22,44 +22,35 @@ test.describe('Topbar menu commands', { tag: '@ui' }, () => {
    await expect.poll(() => topbar.getTabNames()).toHaveLength(2)
  })

  test('Edit > Undo undoes the last action', async ({ comfyPage }) => {
  test('Edit > Undo undoes and Edit > Redo restores the last action', async ({
    comfyPage
  }) => {
    const initialNodeCount = await comfyPage.nodeOps.getNodeCount()

    await comfyPage.page.evaluate(() => {
      const node = window.LiteGraph!.createNode('Note')
      window.app!.graph!.add(node)
    await test.step('Edit > Undo undoes the last action', async () => {
      await comfyPage.page.evaluate(() => {
        const node = window.LiteGraph!.createNode('Note')
        window.app!.graph!.add(node)
      })
      await comfyPage.nextFrame()

      await expect
        .poll(() => comfyPage.nodeOps.getNodeCount())
        .toBe(initialNodeCount + 1)

      await comfyPage.menu.topbar.triggerTopbarCommand(['Edit', 'Undo'])

      await expect
        .poll(() => comfyPage.nodeOps.getNodeCount())
        .toBe(initialNodeCount)
    })
    await comfyPage.nextFrame()

    await expect
      .poll(() => comfyPage.nodeOps.getNodeCount())
      .toBe(initialNodeCount + 1)

    await comfyPage.menu.topbar.triggerTopbarCommand(['Edit', 'Undo'])

    await expect
      .poll(() => comfyPage.nodeOps.getNodeCount())
      .toBe(initialNodeCount)
  })

  test('Edit > Redo restores an undone action', async ({ comfyPage }) => {
    const initialNodeCount = await comfyPage.nodeOps.getNodeCount()

    await comfyPage.page.evaluate(() => {
      const node = window.LiteGraph!.createNode('Note')
      window.app!.graph!.add(node)
    await test.step('Edit > Redo restores an undone action', async () => {
      await comfyPage.menu.topbar.triggerTopbarCommand(['Edit', 'Redo'])
      await expect
        .poll(() => comfyPage.nodeOps.getNodeCount())
        .toBe(initialNodeCount + 1)
    })
    await comfyPage.nextFrame()

    await comfyPage.menu.topbar.triggerTopbarCommand(['Edit', 'Undo'])
    await expect
      .poll(() => comfyPage.nodeOps.getNodeCount())
      .toBe(initialNodeCount)

    await comfyPage.menu.topbar.triggerTopbarCommand(['Edit', 'Redo'])
    await expect
      .poll(() => comfyPage.nodeOps.getNodeCount())
      .toBe(initialNodeCount + 1)
  })

  test('File > Save opens save dialog', async ({ comfyPage }) => {

@@ -230,6 +230,31 @@ export default defineConfig([
      ]
    }
  },
  {
    name: 'comfy/no-unsafe-error-assertion',
    files: [
      'src/**/*.ts',
      'src/**/*.tsx',
      'src/**/*.vue',
      'apps/*/src/**/*.ts',
      'apps/*/src/**/*.tsx',
      'apps/*/src/**/*.vue'
    ],
    ignores: ['**/*.test.ts', '**/*.spec.ts'],
    rules: {
      'no-restricted-syntax': [
        'error',
        {
          // Bans `value as Error` and `value as Error & { ... }`.
          // Use `error instanceof Error` narrowing or `toError()` from
          // @/utils/errorUtil instead — see issue #11429.
          selector: "TSAsExpression TSTypeReference[typeName.name='Error']",
          message:
            'Do not use `as Error` assertions. Use `instanceof Error` narrowing or `toError()` from @/utils/errorUtil instead. See issue #11429.'
        }
      ]
    }
  },
  {
    files: ['**/*.spec.ts'],
    ignores: ['browser_tests/tests/**/*.spec.ts', 'apps/*/e2e/**/*.spec.ts'],
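
For context, the `toError()` helper the rule message points to is not shown in this diff; a conventional shape for such a normalizer (a sketch, the actual `@/utils/errorUtil` implementation may differ):

```typescript
// Hedged sketch of toError(): gives catch blocks a guaranteed Error
// without the banned `as Error` assertion.
export function toError(value: unknown): Error {
  if (value instanceof Error) return value
  if (typeof value === 'string') return new Error(value)
  try {
    return new Error(JSON.stringify(value))
  } catch {
    return new Error(String(value)) // circular structures, symbols, etc.
  }
}
```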

@@ -54,6 +54,9 @@ const config: KnipConfig = {
    '.github/workflows/ci-oss-assets-validation.yaml',
    // Pending integration in stacked PR
    'src/components/sidebar/tabs/nodeLibrary/CustomNodesPanel.vue',
    // Marketing media tooling — adopted by pages in a follow-up PR
    'apps/website/src/components/common/SiteVideo.vue',
    'apps/website/src/utils/marketingImage.ts',
    // Agent review check config, not part of the build
    '.agents/checks/eslint.strict.config.js',
    // Devtools extensions, included dynamically

packages/registry-types/src/comfyRegistryTypes.ts (generated, 134 lines)
@@ -4014,6 +4014,26 @@ export interface paths {
        patch?: never;
        trace?: never;
    };
    "/proxy/seedance/visual-validate/groups": {
        parameters: {
            query?: never;
            header?: never;
            path?: never;
            cookie?: never;
        };
        /**
         * List the caller's completed visual-validation groups
         * @description Returns the caller's completed visual-validation groups (real-person H5 verification). Used to power the group selector in client UIs. Excludes virtual-library (AIGC) groups, which are not part of the public API surface.
         */
        get: operations["seedanceListVisualValidationGroups"];
        put?: never;
        post?: never;
        delete?: never;
        options?: never;
        head?: never;
        patch?: never;
        trace?: never;
    };
    "/proxy/seedance/visual-validate/sessions/{session_id}": {
        parameters: {
            query?: never;
@@ -4037,7 +4057,11 @@
            path?: never;
            cookie?: never;
        };
        get?: never;
        /**
         * List the caller's assets across all owned groups
         * @description Fans out to BytePlus ListAssets across the caller's completed verification groups, denormalizes the group label into each row, and returns a single flat list. Result is post-filtered by asset_type. Optional group_id narrows to one group. Hard caps: 5 pages × 100 assets per group, 1000 total assets.
         */
        get: operations["seedanceListUserAssets"];
        put?: never;
        post: operations["seedanceCreateAsset"];
        delete?: never;
@@ -13569,7 +13593,7 @@ export interface components {
            stream: boolean | null;
        };
        /** @enum {string} */
        OpenAIModels: "gpt-4" | "gpt-4-0314" | "gpt-4-0613" | "gpt-4-32k" | "gpt-4-32k-0314" | "gpt-4-32k-0613" | "gpt-4-0125-preview" | "gpt-4-turbo" | "gpt-4-turbo-2024-04-09" | "gpt-4-turbo-preview" | "gpt-4-1106-preview" | "gpt-4-vision-preview" | "gpt-3.5-turbo" | "gpt-3.5-turbo-16k" | "gpt-3.5-turbo-0301" | "gpt-3.5-turbo-0613" | "gpt-3.5-turbo-1106" | "gpt-3.5-turbo-0125" | "gpt-3.5-turbo-16k-0613" | "gpt-4.1" | "gpt-4.1-mini" | "gpt-4.1-nano" | "gpt-4.1-2025-04-14" | "gpt-4.1-mini-2025-04-14" | "gpt-4.1-nano-2025-04-14" | "o1" | "o1-mini" | "o1-preview" | "o1-pro" | "o1-2024-12-17" | "o1-preview-2024-09-12" | "o1-mini-2024-09-12" | "o1-pro-2025-03-19" | "o3" | "o3-mini" | "o3-2025-04-16" | "o3-mini-2025-01-31" | "o4-mini" | "o4-mini-2025-04-16" | "gpt-4o" | "gpt-4o-mini" | "gpt-4o-2024-11-20" | "gpt-4o-2024-08-06" | "gpt-4o-2024-05-13" | "gpt-4o-mini-2024-07-18" | "gpt-4o-audio-preview" | "gpt-4o-audio-preview-2024-10-01" | "gpt-4o-audio-preview-2024-12-17" | "gpt-4o-mini-audio-preview" | "gpt-4o-mini-audio-preview-2024-12-17" | "gpt-4o-search-preview" | "gpt-4o-mini-search-preview" | "gpt-4o-search-preview-2025-03-11" | "gpt-4o-mini-search-preview-2025-03-11" | "computer-use-preview" | "computer-use-preview-2025-03-11" | "gpt-5" | "gpt-5-mini" | "gpt-5-nano" | "chatgpt-4o-latest";
        OpenAIModels: "gpt-4" | "gpt-4-0314" | "gpt-4-0613" | "gpt-4-32k" | "gpt-4-32k-0314" | "gpt-4-32k-0613" | "gpt-4-0125-preview" | "gpt-4-turbo" | "gpt-4-turbo-2024-04-09" | "gpt-4-turbo-preview" | "gpt-4-1106-preview" | "gpt-4-vision-preview" | "gpt-3.5-turbo" | "gpt-3.5-turbo-16k" | "gpt-3.5-turbo-0301" | "gpt-3.5-turbo-0613" | "gpt-3.5-turbo-1106" | "gpt-3.5-turbo-0125" | "gpt-3.5-turbo-16k-0613" | "gpt-4.1" | "gpt-4.1-mini" | "gpt-4.1-nano" | "gpt-4.1-2025-04-14" | "gpt-4.1-mini-2025-04-14" | "gpt-4.1-nano-2025-04-14" | "o1" | "o1-mini" | "o1-preview" | "o1-pro" | "o1-2024-12-17" | "o1-preview-2024-09-12" | "o1-mini-2024-09-12" | "o1-pro-2025-03-19" | "o3" | "o3-mini" | "o3-2025-04-16" | "o3-mini-2025-01-31" | "o4-mini" | "o4-mini-2025-04-16" | "gpt-4o" | "gpt-4o-mini" | "gpt-4o-2024-11-20" | "gpt-4o-2024-08-06" | "gpt-4o-2024-05-13" | "gpt-4o-mini-2024-07-18" | "gpt-4o-audio-preview" | "gpt-4o-audio-preview-2024-10-01" | "gpt-4o-audio-preview-2024-12-17" | "gpt-4o-mini-audio-preview" | "gpt-4o-mini-audio-preview-2024-12-17" | "gpt-4o-search-preview" | "gpt-4o-mini-search-preview" | "gpt-4o-search-preview-2025-03-11" | "gpt-4o-mini-search-preview-2025-03-11" | "computer-use-preview" | "computer-use-preview-2025-03-11" | "gpt-5" | "gpt-5-mini" | "gpt-5-nano" | "gpt-5.5" | "gpt-5.5-pro" | "chatgpt-4o-latest";
        MoonvalleyTextToVideoInferenceParams: {
            /**
             * @description Height of the generated video in pixels
@@ -14442,6 +14466,10 @@ export interface components {
                total_tokens?: number;
            };
        };
        SeedanceCreateVisualValidateSessionRequest: {
            /** @description Optional human-readable label for the asset group that will be created by this verification. Stored locally and returned by seedanceListVisualValidationGroups so users can identify their groups in selectors. */
            name?: string;
        };
        SeedanceCreateVisualValidateSessionResponse: {
            /**
             * Format: uuid
@@ -14451,6 +14479,37 @@
            /** @description BytePlus-issued H5 liveness link. Open in a browser with camera access. Valid for ~120 seconds. */
            h5_link: string;
        };
        SeedanceListVisualValidationGroupsResponse: {
            groups: components["schemas"]["SeedanceVisualValidationGroup"][];
        };
        SeedanceListUserAssetsResponse: {
            assets: components["schemas"]["SeedanceUserAsset"][];
            /** @description True if the global per-request asset cap was hit and older results were dropped. */
            truncated: boolean;
        };
        SeedanceUserAsset: {
            asset_id: string;
            name?: string | null;
            /** @description BytePlus access URL (~12h validity). Refreshed on each list call. */
            url?: string | null;
            group_id: string;
            /** @description Display label of the source group, denormalized for client-side search. */
            group_name: string;
            /** @enum {string} */
            asset_type: "Image" | "Video" | "Audio";
            /** @enum {string} */
            status: "Active" | "Processing" | "Failed";
            /** Format: date-time */
            create_time: string;
        };
        SeedanceVisualValidationGroup: {
            /** @description BytePlus-issued asset group id. */
            group_id: string;
            /** @description Display label. Caller-supplied at creation time when available; otherwise a server-generated fallback derived from the creation date. */
            name: string;
            /** Format: date-time */
            created_at: string;
        };
        SeedanceGetVisualValidateSessionResponse: {
            /** Format: uuid */
            session_id: string;
@@ -14458,6 +14517,8 @@
            status: "pending" | "completed" | "failed";
            /** @description Populated only when status == completed. This is the BytePlus Asset Group ID the user will upload assets into. */
            group_id?: string | null;
            /** @description Optional human-readable label provided when the session was created. */
            name?: string | null;
            error_code?: string | null;
            error_message?: string | null;
        };
@@ -30275,7 +30336,11 @@
            path?: never;
            cookie?: never;
        };
        requestBody?: never;
        requestBody?: {
            content: {
                "application/json": components["schemas"]["SeedanceCreateVisualValidateSessionRequest"];
            };
        };
        responses: {
            /** @description Verification session created */
            201: {
@@ -30297,6 +30362,35 @@
            };
        };
    };
    seedanceListVisualValidationGroups: {
        parameters: {
            query?: never;
            header?: never;
            path?: never;
            cookie?: never;
        };
        requestBody?: never;
        responses: {
            /** @description Visual-validation groups owned by the caller */
            200: {
                headers: {
                    [name: string]: unknown;
                };
                content: {
                    "application/json": components["schemas"]["SeedanceListVisualValidationGroupsResponse"];
                };
            };
            /** @description Error 4xx/5xx */
            default: {
                headers: {
                    [name: string]: unknown;
                };
                content: {
                    "application/json": components["schemas"]["ErrorResponse"];
                };
            };
        };
    };
    seedanceGetVisualValidateSession: {
        parameters: {
            query?: never;
@@ -30329,6 +30423,40 @@
            };
        };
    };
    seedanceListUserAssets: {
        parameters: {
            query: {
                /** @description Asset type to return. */
                asset_type: "Image" | "Video";
                /** @description Narrow the listing to one group. Caller must own it. */
                group_id?: string;
            };
            header?: never;
            path?: never;
            cookie?: never;
        };
        requestBody?: never;
        responses: {
            /** @description Assets owned by the caller */
            200: {
                headers: {
                    [name: string]: unknown;
                };
                content: {
                    "application/json": components["schemas"]["SeedanceListUserAssetsResponse"];
                };
            };
            /** @description Error 4xx/5xx */
            default: {
                headers: {
                    [name: string]: unknown;
                };
                content: {
                    "application/json": components["schemas"]["ErrorResponse"];
                };
            };
        };
    };
    seedanceCreateAsset: {
        parameters: {
            query?: never;
|
||||
|
||||
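Downstream code can pull these shapes straight out of the generated `components` map rather than re-declaring them. A minimal sketch of that pattern; the import path below is hypothetical, only the schema names come from the diff above:

```ts
// Sketch only: the import path is an assumption, not from the diff.
import type { components } from './generated/openapi'

type UserAsset = components['schemas']['SeedanceUserAsset']
type ListAssetsResponse = components['schemas']['SeedanceListUserAssetsResponse']

// Narrowing on the generated enums keeps call sites honest:
function isPlayable(asset: UserAsset): boolean {
  return asset.status === 'Active' && asset.asset_type !== 'Audio'
}

function activeAssetNames(res: ListAssetsResponse): string[] {
  return res.assets.filter(isPlayable).map((a) => a.name ?? a.asset_id)
}
```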
@@ -98,6 +98,7 @@ import { useQueueFeatureFlags } from '@/composables/queue/useQueueFeatureFlags'
import { buildTooltipConfig } from '@/composables/useTooltipConfig'
import { isCloud } from '@/platform/distribution/types'
import { useSettingStore } from '@/platform/settings/settingStore'
import { useSurveyFeatureTracking } from '@/platform/surveys/useSurveyFeatureTracking'
import { useSidebarTabStore } from '@/stores/workspace/sidebarTabStore'

const emit = defineEmits<{
@@ -107,6 +108,7 @@ const emit = defineEmits<{
const { t } = useI18n()
const settingStore = useSettingStore()
const sidebarTabStore = useSidebarTabStore()
const { trackFeatureUsed } = useSurveyFeatureTracking('queue-progress-overlay')

const moreTooltipConfig = computed(() => buildTooltipConfig(t('g.more')))
const { isQueuePanelV2Enabled, isRunProgressBarEnabled } =
@@ -119,6 +121,7 @@ const onClearHistoryFromMenu = (close: () => void) => {
}

const onToggleDockedJobHistory = async (close: () => void) => {
trackFeatureUsed()
close()

try {
@@ -138,6 +141,7 @@ const onToggleDockedJobHistory = async (close: () => void) => {
}

const onToggleRunProgressBar = async () => {
trackFeatureUsed()
await settingStore.set(
'Comfy.Queue.ShowRunProgressBar',
!isRunProgressBarEnabled.value
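The queue components in this and the following diffs all follow the same pattern: take `trackFeatureUsed` from the composable and call it at the top of each user-initiated handler. A minimal sketch of the composable's shape as used here; only the signature is confirmed by these call sites, the body is an assumption for illustration:

```ts
// Hypothetical sketch of '@/platform/surveys/useSurveyFeatureTracking'.
export function useSurveyFeatureTracking(featureId: string) {
  const trackFeatureUsed = () => {
    // Assumed behavior: record that the user exercised `featureId`
    // so a survey can later target real users of the feature.
    console.debug(`[survey] feature used: ${featureId}`)
  }
  return { trackFeatureUsed }
}
```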
@@ -13,7 +13,7 @@
:selected-sort-mode="selectedSortMode"
:has-failed-jobs="hasFailedJobs"
@show-assets="$emit('showAssets')"
@update:selected-job-tab="$emit('update:selectedJobTab', $event)"
@update:selected-job-tab="onUpdateSelectedJobTab"
@update:selected-workflow-filter="
$emit('update:selectedWorkflowFilter', $event)
"
@@ -50,6 +50,7 @@ import type {
import type { MenuEntry } from '@/composables/queue/useJobMenu'
import { useJobMenu } from '@/composables/queue/useJobMenu'
import { useErrorHandling } from '@/composables/useErrorHandling'
import { useSurveyFeatureTracking } from '@/platform/surveys/useSurveyFeatureTracking'

import QueueOverlayHeader from './QueueOverlayHeader.vue'
import JobContextMenu from './job/JobContextMenu.vue'
@@ -81,6 +82,7 @@ const emit = defineEmits<{
const currentMenuItem = ref<JobListItem | null>(null)
const jobContextMenuRef = ref<InstanceType<typeof JobContextMenu> | null>(null)
const { wrapWithErrorHandlingAsync } = useErrorHandling()
const { trackFeatureUsed } = useSurveyFeatureTracking('queue-progress-overlay')

const { jobMenuEntries } = useJobMenu(
() => currentMenuItem.value,
@@ -95,6 +97,11 @@ const onDeleteItemEvent = (item: JobListItem) => {
emit('deleteItem', item)
}

const onUpdateSelectedJobTab = (value: JobTab) => {
trackFeatureUsed()
emit('update:selectedJobTab', value)
}

const onMenuItem = (item: JobListItem, event: Event) => {
currentMenuItem.value = item
jobContextMenuRef.value?.open(event)
@@ -66,6 +66,7 @@ import { useResultGallery } from '@/composables/queue/useResultGallery'
import { useErrorHandling } from '@/composables/useErrorHandling'
import { useAssetSelectionStore } from '@/platform/assets/composables/useAssetSelectionStore'
import { isCloud } from '@/platform/distribution/types'
import { useSurveyFeatureTracking } from '@/platform/surveys/useSurveyFeatureTracking'
import { api } from '@/scripts/api'
import { useAssetsStore } from '@/stores/assetsStore'
import { useCommandStore } from '@/stores/commandStore'
@@ -93,6 +94,7 @@ const assetsStore = useAssetsStore()
const assetSelectionStore = useAssetSelectionStore()
const { showQueueClearHistoryDialog } = useQueueClearHistoryDialog()
const { wrapWithErrorHandlingAsync } = useErrorHandling()
const { trackFeatureUsed } = useSurveyFeatureTracking('queue-progress-overlay')

const {
totalPercentFormatted,
@@ -188,6 +190,7 @@ const {
const displayedJobGroups = computed(() => groupedJobItems.value)

const onCancelItem = wrapWithErrorHandlingAsync(async (item: JobListItem) => {
trackFeatureUsed()
const jobId = item.taskRef?.jobId
if (!jobId) return

@@ -209,6 +212,7 @@ const onCancelItem = wrapWithErrorHandlingAsync(async (item: JobListItem) => {
})

const onDeleteItem = wrapWithErrorHandlingAsync(async (item: JobListItem) => {
trackFeatureUsed()
if (!item.taskRef) return
await queueStore.delete(item.taskRef)
})
@@ -224,10 +228,12 @@ const setExpanded = (expanded: boolean) => {
}

const viewAllJobs = () => {
trackFeatureUsed()
setExpanded(true)
}

const toggleAssetsSidebar = () => {
trackFeatureUsed()
sidebarTabStore.toggleSidebarTab('assets')
}

@@ -257,12 +263,14 @@ const focusAssetInSidebar = async (item: JobListItem) => {

const inspectJobAsset = wrapWithErrorHandlingAsync(
async (item: JobListItem) => {
trackFeatureUsed()
await openResultGallery(item)
await focusAssetInSidebar(item)
}
)

const cancelQueuedWorkflows = wrapWithErrorHandlingAsync(async () => {
trackFeatureUsed()
// Capture pending jobIds before clearing
const pendingJobIds = queueStore.pendingTasks
.map((task) => task.jobId)
@@ -275,6 +283,7 @@ const cancelQueuedWorkflows = wrapWithErrorHandlingAsync(async () => {
})

const interruptAll = wrapWithErrorHandlingAsync(async () => {
trackFeatureUsed()
const tasks = queueStore.runningTasks
const jobIds = tasks
.map((task) => task.jobId)
@@ -298,6 +307,7 @@ const interruptAll = wrapWithErrorHandlingAsync(async () => {
})

const onClearHistoryFromMenu = () => {
trackFeatureUsed()
showQueueClearHistoryDialog()
}
</script>
@@ -122,6 +122,7 @@ import Button from '@/components/ui/button/Button.vue'
import { jobSortModes } from '@/composables/queue/useJobList'
import type { JobSortMode } from '@/composables/queue/useJobList'
import { buildTooltipConfig } from '@/composables/useTooltipConfig'
import { useSurveyFeatureTracking } from '@/platform/surveys/useSurveyFeatureTracking'

const {
hideShowAssetsAction = false,
@@ -147,6 +148,7 @@ const emit = defineEmits<{
}>()

const { t } = useI18n()
const { trackFeatureUsed } = useSurveyFeatureTracking('queue-progress-overlay')

const filterTooltipConfig = computed(() =>
buildTooltipConfig(t('sideToolbar.queueProgressOverlay.filterBy'))
@@ -170,6 +172,7 @@ const onSelectWorkflowFilter = (
value: 'all' | 'current',
close: () => void
) => {
trackFeatureUsed()
selectWorkflowFilter(value)
close()
}
@@ -179,6 +182,7 @@ const selectSortMode = (value: JobSortMode) => {
}

const onSelectSortMode = (value: JobSortMode, close: () => void) => {
trackFeatureUsed()
selectSortMode(value)
close()
}
@@ -2,15 +2,16 @@
<SidebarTabTemplate :title="$t('queue.jobHistory')">
<template #alt-title>
<div class="ml-auto flex shrink-0 items-center">
<JobHistoryActionsMenu @clear-history="showQueueClearHistoryDialog" />
<JobHistoryActionsMenu @clear-history="onClearHistory" />
</div>
</template>
<template #header>
<div class="flex flex-col gap-2 pb-1">
<div class="px-3 py-2">
<JobFilterTabs
v-model:selected-job-tab="selectedJobTab"
:selected-job-tab="selectedJobTab"
:has-failed-jobs="hasFailedJobs"
@update:selected-job-tab="onUpdateSelectedJobTab"
/>
</div>
<JobFilterActions
@@ -81,13 +82,14 @@ import JobHistoryActionsMenu from '@/components/queue/JobHistoryActionsMenu.vue'
import type { MenuEntry } from '@/composables/queue/useJobMenu'
import { useJobMenu } from '@/composables/queue/useJobMenu'
import { useJobList } from '@/composables/queue/useJobList'
import type { JobListItem } from '@/composables/queue/useJobList'
import type { JobListItem, JobTab } from '@/composables/queue/useJobList'
import { useQueueClearHistoryDialog } from '@/composables/queue/useQueueClearHistoryDialog'
import { useResultGallery } from '@/composables/queue/useResultGallery'
import { useErrorHandling } from '@/composables/useErrorHandling'
import SidebarTabTemplate from '@/components/sidebar/tabs/SidebarTabTemplate.vue'
import MediaLightbox from '@/components/sidebar/tabs/queue/MediaLightbox.vue'
import Button from '@/components/ui/button/Button.vue'
import { useSurveyFeatureTracking } from '@/platform/surveys/useSurveyFeatureTracking'
import { useCommandStore } from '@/stores/commandStore'
import { useDialogStore } from '@/stores/dialogStore'
import { useExecutionStore } from '@/stores/executionStore'
@@ -104,6 +106,17 @@ const executionStore = useExecutionStore()
const queueStore = useQueueStore()
const { showQueueClearHistoryDialog } = useQueueClearHistoryDialog()
const { wrapWithErrorHandlingAsync } = useErrorHandling()
const { trackFeatureUsed } = useSurveyFeatureTracking('queue-progress-overlay')

const onClearHistory = () => {
trackFeatureUsed()
showQueueClearHistoryDialog()
}

const onUpdateSelectedJobTab = (value: JobTab) => {
trackFeatureUsed()
selectedJobTab.value = value
}
const {
selectedJobTab,
selectedWorkflowFilter,
@@ -145,6 +158,7 @@ const activeQueueSummary = computed(() => {
})

const clearQueuedWorkflows = wrapWithErrorHandlingAsync(async () => {
trackFeatureUsed()
const pendingJobIds = queueStore.pendingTasks
.map((task) => task.jobId)
.filter((id): id is string => typeof id === 'string' && id.length > 0)
@@ -160,6 +174,7 @@ const {
} = useResultGallery(() => filteredTasks.value)

const onViewItem = wrapWithErrorHandlingAsync(async (item: JobListItem) => {
trackFeatureUsed()
const previewOutput = item.taskRef?.previewOutput

if (previewOutput?.is3D) {
@@ -194,10 +209,12 @@ const { jobMenuEntries, cancelJob } = useJobMenu(
)

const onCancelItem = wrapWithErrorHandlingAsync(async (item: JobListItem) => {
trackFeatureUsed()
await cancelJob(item)
})

const onDeleteItem = wrapWithErrorHandlingAsync(async (item: JobListItem) => {
trackFeatureUsed()
if (!item.taskRef) return
await queueStore.delete(item.taskRef)
})
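Note the recurring template change in these components: `v-model:selected-job-tab` is replaced with an explicit `:selected-job-tab` prop plus an `@update:selected-job-tab` handler. That is just the de-sugared form of `v-model`, and writing it out is what makes it possible to interpose tracking before the state update. A minimal sketch using the handler from the diff above:

```ts
// v-model:selected-job-tab="selectedJobTab" is sugar for:
//   :selected-job-tab="selectedJobTab"
//   @update:selected-job-tab="selectedJobTab = $event"
// The expanded form lets the handler do extra work first:
const onUpdateSelectedJobTab = (value: JobTab) => {
  trackFeatureUsed() // side effect the v-model sugar cannot express
  selectedJobTab.value = value
}
```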
src/components/toast/ProgressToastItem.test.ts (new file, 51 lines)
@@ -0,0 +1,51 @@
import { render, screen } from '@testing-library/vue'
import { describe, expect, it } from 'vitest'
import { createI18n } from 'vue-i18n'

import type { AssetDownload } from '@/stores/assetDownloadStore'

import ProgressToastItem from './ProgressToastItem.vue'

const i18n = createI18n({
legacy: false,
locale: 'en',
messages: {
en: {
progressToast: {
finished: 'Finished',
failed: 'Failed',
pending: 'Pending'
}
}
}
})

function completedJob(): AssetDownload {
return {
taskId: 'task-1',
assetId: 'asset-1',
assetName: 'controlnet-canny.safetensors',
bytesTotal: 100,
bytesDownloaded: 100,
progress: 1,
status: 'completed',
lastUpdate: Date.now()
}
}

describe('ProgressToastItem — completed state', () => {
it('keeps the finished badge outside the dimmed (opacity-50) subtree', () => {
render(ProgressToastItem, {
props: { job: completedJob() },
global: { plugins: [i18n] }
})

const badge = screen.getByText('Finished')
// eslint-disable-next-line testing-library/no-node-access -- verifying structural placement of opacity-50 boundary, which is the subject of this fix
expect(badge.closest('.opacity-50')).toBeNull()

const assetName = screen.getByText('controlnet-canny.safetensors')
// eslint-disable-next-line testing-library/no-node-access -- verifying structural placement of opacity-50 boundary, which is the subject of this fix
expect(assetName.closest('.opacity-50')).not.toBeNull()
})
})
@@ -22,14 +22,9 @@ const isPending = computed(() => job.status === 'created')

<template>
<div
:class="
cn(
'flex items-center justify-between rounded-lg bg-modal-card-background px-4 py-3',
isCompleted && 'opacity-50'
)
"
class="flex items-center justify-between rounded-lg bg-modal-card-background px-4 py-3"
>
<div class="min-w-0 flex-1">
<div :class="cn('min-w-0 flex-1', isCompleted && 'opacity-50')">
<span class="block truncate text-sm text-base-foreground">{{
job.assetName
}}</span>
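The template change above moves the dimming from the toast's root element onto the name/progress column only, so the status badge rendered as a sibling keeps full opacity; the new test asserts exactly that boundary via `closest('.opacity-50')`. For reference, the `cn` call is plain conditional class composition. A sketch of the behavior, assuming a clsx-style `cn`:

```ts
// Sketch assuming a clsx-style cn(): falsy entries are dropped.
const cn = (...parts: Array<string | false>) => parts.filter(Boolean).join(' ')

const isCompleted = true
console.log(cn('min-w-0 flex-1', isCompleted && 'opacity-50'))
// -> "min-w-0 flex-1 opacity-50"  (only the inner column is dimmed)
console.log(cn('min-w-0 flex-1', !isCompleted && 'opacity-50'))
// -> "min-w-0 flex-1"
```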
@@ -23,8 +23,15 @@ import type {
*/
function isNotFoundError(error: unknown): boolean {
if (!(error instanceof Error)) return false
const withResponse = error as Error & { response?: { status?: number } }
if (withResponse.response?.status === 404) return true
if (
'response' in error &&
typeof error.response === 'object' &&
error.response !== null &&
'status' in error.response &&
error.response.status === 404
) {
return true
}
return /\b404\b/.test(error.message)
}
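The rewrite drops the `as`-cast in favor of `in`-operator narrowing, which the compiler can actually verify, while keeping the `/\b404\b/` message fallback for errors that carry no response object. A usage sketch; the error shapes below are illustrative assumptions, the function only cares about `response.status`:

```ts
// Hedged sketch: these error values are illustrative, not from the codebase.
const axiosLike = Object.assign(new Error('Request failed'), {
  response: { status: 404 }
})
const plain = new Error('GET /assets/x -> 404 Not Found')
const unrelated = new Error('network timeout')

isNotFoundError(axiosLike) // true, via the narrowed response.status check
isNotFoundError(plain)     // true, via the message fallback
isNotFoundError(unrelated) // false
```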
@@ -1,7 +1,12 @@
import { beforeEach, describe, expect, it, vi } from 'vitest'

import type { AssetItem } from '@/platform/assets/schemas/assetSchema'
import { assetService } from '@/platform/assets/services/assetService'
import {
MISSING_TAG,
assetService,
isBlake3AssetHash,
toBlake3AssetHash
} from '@/platform/assets/services/assetService'
import { api } from '@/scripts/api'

const mockDistributionState = vi.hoisted(() => ({ isCloud: false }))
@@ -44,6 +49,10 @@ vi.mock('@/i18n', () => ({

const fetchApiMock = vi.mocked(api.fetchApi)

const validBlake3Hash =
'1111111111111111111111111111111111111111111111111111111111111111'
const validBlake3AssetHash = `blake3:${validBlake3Hash}`

function buildResponse(
body: unknown,
init: { ok?: boolean; status?: number } = {}
@@ -180,9 +189,98 @@ describe(assetService.getAssetMetadata, () => {
})
})

describe(isBlake3AssetHash, () => {
it('accepts only prefixed 64-character blake3 hashes', () => {
expect(isBlake3AssetHash(validBlake3AssetHash)).toBe(true)
expect(isBlake3AssetHash('BLAKE3:' + validBlake3Hash.toUpperCase())).toBe(
true
)
expect(isBlake3AssetHash('blake3:abc')).toBe(false)
expect(isBlake3AssetHash(validBlake3Hash)).toBe(false)
})
})

describe(toBlake3AssetHash, () => {
it('normalizes 64-character blake3 hex values to asset hashes', () => {
expect(toBlake3AssetHash(validBlake3Hash)).toBe(validBlake3AssetHash)
expect(toBlake3AssetHash('abc')).toBeNull()
expect(toBlake3AssetHash(undefined)).toBeNull()
})
})

describe(assetService.uploadAssetFromUrl, () => {
beforeEach(() => {
vi.clearAllMocks()
assetService.invalidateInputAssetsIncludingPublic()
})

it('does not invalidate cached input assets when the upload response is invalid', async () => {
const staleAssets = [validAsset({ id: 'stale-input', tags: ['input'] })]
const consoleSpy = vi.spyOn(console, 'error').mockImplementation(() => {})
fetchApiMock
.mockResolvedValueOnce(buildResponse({ assets: staleAssets }))
.mockResolvedValueOnce(buildResponse({ id: 'missing-name' }))

await assetService.getInputAssetsIncludingPublic()
await expect(
assetService.uploadAssetFromUrl({
url: 'https://example.com/input.png',
name: 'input.png',
tags: ['input']
})
).rejects.toThrow('Failed to upload asset')
const cached = await assetService.getInputAssetsIncludingPublic()

expect(cached).toEqual(staleAssets)
expect(fetchApiMock).toHaveBeenCalledTimes(2)
consoleSpy.mockRestore()
})

it('requires upload responses to include created_new', async () => {
const staleAssets = [validAsset({ id: 'stale-input', tags: ['input'] })]
const consoleSpy = vi.spyOn(console, 'error').mockImplementation(() => {})
fetchApiMock
.mockResolvedValueOnce(buildResponse({ assets: staleAssets }))
.mockResolvedValueOnce(
buildResponse(validAsset({ id: 'uploaded-input', tags: ['input'] }))
)

await assetService.getInputAssetsIncludingPublic()
await expect(
assetService.uploadAssetFromUrl({
url: 'https://example.com/input.png',
name: 'input.png',
tags: ['input']
})
).rejects.toThrow('Failed to upload asset')
const cached = await assetService.getInputAssetsIncludingPublic()

expect(cached).toEqual(staleAssets)
expect(fetchApiMock).toHaveBeenCalledTimes(2)
consoleSpy.mockRestore()
})

it('returns validated upload responses with created_new', async () => {
const uploadedAsset = {
...validAsset({ id: 'uploaded-input', tags: ['input'] }),
created_new: true
}
fetchApiMock.mockResolvedValueOnce(buildResponse(uploadedAsset))

await expect(
assetService.uploadAssetFromUrl({
url: 'https://example.com/input.png',
name: 'input.png',
tags: ['input']
})
).resolves.toEqual(uploadedAsset)
})
})

describe(assetService.uploadAssetFromBase64, () => {
beforeEach(() => {
vi.clearAllMocks()
assetService.invalidateInputAssetsIncludingPublic()
})

it('throws before calling the network when data is not a data URL', async () => {
@@ -195,6 +293,63 @@ describe(assetService.uploadAssetFromBase64, () => {

expect(fetchApiMock).not.toHaveBeenCalled()
})

it('does not invalidate cached input assets when the upload response is invalid', async () => {
const staleAssets = [validAsset({ id: 'stale-input', tags: ['input'] })]
const consoleSpy = vi.spyOn(console, 'error').mockImplementation(() => {})
const fetchSpy = vi
.spyOn(globalThis, 'fetch')
.mockResolvedValueOnce(new Response('hello'))
fetchApiMock
.mockResolvedValueOnce(buildResponse({ assets: staleAssets }))
.mockResolvedValueOnce(buildResponse({ id: 'missing-name' }))

await assetService.getInputAssetsIncludingPublic()
await expect(
assetService.uploadAssetFromBase64({
data: 'data:text/plain;base64,aGVsbG8=',
name: 'input.txt',
tags: ['input']
})
).rejects.toThrow('Failed to upload asset')
const cached = await assetService.getInputAssetsIncludingPublic()

expect(cached).toEqual(staleAssets)
expect(fetchApiMock).toHaveBeenCalledTimes(2)
fetchSpy.mockRestore()
consoleSpy.mockRestore()
})

it('rejects upload responses with a non-boolean created_new', async () => {
const staleAssets = [validAsset({ id: 'stale-input', tags: ['input'] })]
const consoleSpy = vi.spyOn(console, 'error').mockImplementation(() => {})
const fetchSpy = vi
.spyOn(globalThis, 'fetch')
.mockResolvedValueOnce(new Response('hello'))
fetchApiMock
.mockResolvedValueOnce(buildResponse({ assets: staleAssets }))
.mockResolvedValueOnce(
buildResponse({
...validAsset({ id: 'uploaded-input', tags: ['input'] }),
created_new: 'true'
})
)

await assetService.getInputAssetsIncludingPublic()
await expect(
assetService.uploadAssetFromBase64({
data: 'data:text/plain;base64,aGVsbG8=',
name: 'input.txt',
tags: ['input']
})
).rejects.toThrow('Failed to upload asset')
const cached = await assetService.getInputAssetsIncludingPublic()

expect(cached).toEqual(staleAssets)
expect(fetchApiMock).toHaveBeenCalledTimes(2)
fetchSpy.mockRestore()
consoleSpy.mockRestore()
})
})

describe(assetService.uploadAssetAsync, () => {
@@ -354,3 +509,391 @@ describe(assetService.getAssetsByTag, () => {
expect(params.get('include_public')).toBe('true')
})
})

describe(assetService.getAllAssetsByTag, () => {
beforeEach(() => {
vi.clearAllMocks()
})

it('paginates tagged asset requests with include_public=true', async () => {
fetchApiMock
.mockResolvedValueOnce(
buildResponse({
assets: [
validAsset({ id: 'a', tags: ['input'] }),
validAsset({ id: 'b', tags: ['input'] })
]
})
)
.mockResolvedValueOnce(
buildResponse({
assets: [validAsset({ id: 'c', tags: ['input'] })]
})
)

const assets = await assetService.getAllAssetsByTag('input', true, {
limit: 2
})

expect(assets.map((a) => a.id)).toEqual(['a', 'b', 'c'])

const firstUrl = fetchApiMock.mock.calls[0]?.[0] as string
const firstParams = new URL(firstUrl, 'http://localhost').searchParams
expect(firstParams.get('include_public')).toBe('true')
expect(firstParams.get('limit')).toBe('2')
expect(firstParams.has('offset')).toBe(false)

const secondUrl = fetchApiMock.mock.calls[1]?.[0] as string
const secondParams = new URL(secondUrl, 'http://localhost').searchParams
expect(secondParams.get('include_public')).toBe('true')
expect(secondParams.get('limit')).toBe('2')
expect(secondParams.get('offset')).toBe('2')
})

it('paginates from raw response size before filtering missing-tagged assets', async () => {
fetchApiMock
.mockResolvedValueOnce(
buildResponse({
assets: [
validAsset({ id: 'visible', tags: ['input'] }),
validAsset({ id: 'hidden', tags: ['input', MISSING_TAG] })
]
})
)
.mockResolvedValueOnce(
buildResponse({
assets: [validAsset({ id: 'later-public', tags: ['input'] })]
})
)

const assets = await assetService.getAllAssetsByTag('input', true, {
limit: 2
})

expect(assets.map((a) => a.id)).toEqual(['visible', 'later-public'])
expect(fetchApiMock).toHaveBeenCalledTimes(2)

const secondUrl = fetchApiMock.mock.calls[1]?.[0]
if (typeof secondUrl !== 'string') {
throw new Error('Expected a second asset request URL')
}
const secondParams = new URL(secondUrl, 'http://localhost').searchParams
expect(secondParams.get('offset')).toBe('2')
})

it('honors has_more when walking tagged asset pages', async () => {
fetchApiMock
.mockResolvedValueOnce(
buildResponse({
assets: [
validAsset({ id: 'first', tags: ['input'] }),
validAsset({ id: 'second', tags: ['input'] })
],
has_more: true
})
)
.mockResolvedValueOnce(
buildResponse({
assets: [validAsset({ id: 'later-public', tags: ['input'] })],
has_more: false
})
)

const assets = await assetService.getAllAssetsByTag('input', true, {
limit: 3
})

expect(assets.map((a) => a.id)).toEqual(['first', 'second', 'later-public'])
expect(fetchApiMock).toHaveBeenCalledTimes(2)

const secondUrl = fetchApiMock.mock.calls[1]?.[0]
if (typeof secondUrl !== 'string') {
throw new Error('Expected a second asset request URL')
}
const secondParams = new URL(secondUrl, 'http://localhost').searchParams
expect(secondParams.get('offset')).toBe('2')
})

it('passes abort signals through paginated requests', async () => {
const controller = new AbortController()
fetchApiMock.mockResolvedValueOnce(
buildResponse({
assets: [validAsset({ id: 'a', tags: ['input'] })]
})
)

await assetService.getAllAssetsByTag('input', true, {
limit: 2,
signal: controller.signal
})

expect(fetchApiMock).toHaveBeenCalledWith(expect.any(String), {
signal: controller.signal
})
})

it('stops pagination when aborted between pages', async () => {
const controller = new AbortController()
fetchApiMock.mockImplementationOnce(async () => {
controller.abort()
return buildResponse({
assets: [
validAsset({ id: 'a', tags: ['input'] }),
validAsset({ id: 'b', tags: ['input'] })
]
})
})

await expect(
assetService.getAllAssetsByTag('input', true, {
limit: 2,
signal: controller.signal
})
).rejects.toMatchObject({ name: 'AbortError' })

expect(fetchApiMock).toHaveBeenCalledOnce()
})
})

describe(assetService.getInputAssetsIncludingPublic, () => {
beforeEach(() => {
vi.clearAllMocks()
assetService.invalidateInputAssetsIncludingPublic()
})

it('loads input assets with public assets included and reuses the cache', async () => {
const assets = [
validAsset({ id: 'user-input', tags: ['input'] }),
validAsset({ id: 'public-input', tags: ['input'], is_immutable: true })
]
fetchApiMock.mockResolvedValueOnce(buildResponse({ assets }))

const first = await assetService.getInputAssetsIncludingPublic()
const second = await assetService.getInputAssetsIncludingPublic()

expect(first).toEqual(assets)
expect(second).toBe(first)
expect(fetchApiMock).toHaveBeenCalledOnce()

const requestedUrl = fetchApiMock.mock.calls[0]?.[0] as string
const params = new URL(requestedUrl, 'http://localhost').searchParams
expect(params.get('include_public')).toBe('true')
expect(params.get('limit')).toBe('500')
})

it('fetches fresh input assets after explicit invalidation', async () => {
const staleAssets = [validAsset({ id: 'stale-input', tags: ['input'] })]
const freshAssets = [validAsset({ id: 'fresh-input', tags: ['input'] })]
fetchApiMock
.mockResolvedValueOnce(buildResponse({ assets: staleAssets }))
.mockResolvedValueOnce(buildResponse({ assets: freshAssets }))

await assetService.getInputAssetsIncludingPublic()
assetService.invalidateInputAssetsIncludingPublic()
const refreshed = await assetService.getInputAssetsIncludingPublic()

expect(refreshed).toEqual(freshAssets)
expect(fetchApiMock).toHaveBeenCalledTimes(2)
})

it('does not let one caller abort the shared input asset load for other callers', async () => {
const firstController = new AbortController()
const secondController = new AbortController()
const assets = [validAsset({ id: 'public-input', tags: ['input'] })]
let resolveResponse!: (response: Response) => void
let serviceSignal: AbortSignal | undefined
fetchApiMock.mockImplementationOnce(async (_url, options) => {
serviceSignal = options?.signal ?? undefined
return await new Promise<Response>((resolve) => {
resolveResponse = resolve
})
})

const first = assetService.getInputAssetsIncludingPublic(
firstController.signal
)
const second = assetService.getInputAssetsIncludingPublic(
secondController.signal
)
firstController.abort()

await expect(first).rejects.toMatchObject({ name: 'AbortError' })
expect(serviceSignal).toBeUndefined()

resolveResponse(buildResponse({ assets }))

await expect(second).resolves.toEqual(assets)
expect(fetchApiMock).toHaveBeenCalledOnce()
})

it('keeps the shared input asset load alive after all callers abort', async () => {
const firstController = new AbortController()
const secondController = new AbortController()
const assets = [validAsset({ id: 'public-input', tags: ['input'] })]
let resolveResponse!: (response: Response) => void
fetchApiMock.mockImplementationOnce(
async () =>
await new Promise<Response>((resolve) => {
resolveResponse = resolve
})
)

const first = assetService.getInputAssetsIncludingPublic(
firstController.signal
)
const second = assetService.getInputAssetsIncludingPublic(
secondController.signal
)
firstController.abort()
secondController.abort()

await expect(first).rejects.toMatchObject({ name: 'AbortError' })
await expect(second).rejects.toMatchObject({ name: 'AbortError' })

resolveResponse(buildResponse({ assets }))
await Promise.resolve()

await expect(assetService.getInputAssetsIncludingPublic()).resolves.toEqual(
assets
)
expect(fetchApiMock).toHaveBeenCalledOnce()
})

it('does not abort in-flight input asset loads when invalidated', async () => {
const assets = [validAsset({ id: 'stale-input', tags: ['input'] })]
const freshAssets = [validAsset({ id: 'fresh-input', tags: ['input'] })]
let resolveResponse!: (response: Response) => void
fetchApiMock
.mockImplementationOnce(
async () =>
await new Promise<Response>((resolve) => {
resolveResponse = resolve
})
)
.mockResolvedValueOnce(buildResponse({ assets: freshAssets }))

const inFlight = assetService.getInputAssetsIncludingPublic()
assetService.invalidateInputAssetsIncludingPublic()

resolveResponse(buildResponse({ assets }))

await expect(inFlight).resolves.toEqual(assets)
await expect(assetService.getInputAssetsIncludingPublic()).resolves.toEqual(
freshAssets
)
expect(fetchApiMock).toHaveBeenCalledTimes(2)
})

it('invalidates cached input assets after deleting an asset', async () => {
const staleAssets = [validAsset({ id: 'stale-input', tags: ['input'] })]
const freshAssets = [validAsset({ id: 'fresh-input', tags: ['input'] })]
fetchApiMock
.mockResolvedValueOnce(buildResponse({ assets: staleAssets }))
.mockResolvedValueOnce(buildResponse(null))
.mockResolvedValueOnce(buildResponse({ assets: freshAssets }))

await assetService.getInputAssetsIncludingPublic()
await assetService.deleteAsset('stale-input')
const refreshed = await assetService.getInputAssetsIncludingPublic()

expect(refreshed).toEqual(freshAssets)
expect(fetchApiMock).toHaveBeenCalledTimes(3)
expect(fetchApiMock.mock.calls[1]).toEqual([
'/assets/stale-input',
expect.objectContaining({ method: 'DELETE' })
])
})

it('invalidates cached input assets after an input asset upload', async () => {
const staleAssets = [validAsset({ id: 'stale-input', tags: ['input'] })]
const uploadedAsset = validAsset({ id: 'uploaded-input', tags: ['input'] })
const freshAssets = [uploadedAsset]
fetchApiMock
.mockResolvedValueOnce(buildResponse({ assets: staleAssets }))
.mockResolvedValueOnce(buildResponse(uploadedAsset))
.mockResolvedValueOnce(buildResponse({ assets: freshAssets }))

await assetService.getInputAssetsIncludingPublic()
await assetService.uploadAssetAsync({
source_url: 'https://example.com/input.png',
tags: ['input']
})
const refreshed = await assetService.getInputAssetsIncludingPublic()

expect(refreshed).toEqual(freshAssets)
expect(fetchApiMock).toHaveBeenCalledTimes(3)
})

it('does not invalidate cached input assets for pending async input uploads', async () => {
const staleAssets = [validAsset({ id: 'stale-input', tags: ['input'] })]
fetchApiMock
.mockResolvedValueOnce(buildResponse({ assets: staleAssets }))
.mockResolvedValueOnce(
buildResponse(
{ task_id: 'task-1', status: 'running' },
{ ok: true, status: 202 }
)
)

await assetService.getInputAssetsIncludingPublic()
await assetService.uploadAssetAsync({
source_url: 'https://example.com/input.png',
tags: ['input']
})
const cached = await assetService.getInputAssetsIncludingPublic()

expect(cached).toEqual(staleAssets)
expect(fetchApiMock).toHaveBeenCalledTimes(2)
})

it('does not invalidate cached input assets for non-input uploads', async () => {
const staleAssets = [validAsset({ id: 'stale-input', tags: ['input'] })]
fetchApiMock
.mockResolvedValueOnce(buildResponse({ assets: staleAssets }))
.mockResolvedValueOnce(buildResponse(validAsset({ tags: ['models'] })))

await assetService.getInputAssetsIncludingPublic()
await assetService.uploadAssetAsync({
source_url: 'https://example.com/model.safetensors',
tags: ['models']
})
const cached = await assetService.getInputAssetsIncludingPublic()

expect(cached).toEqual(staleAssets)
expect(fetchApiMock).toHaveBeenCalledTimes(2)
})
})

describe(assetService.checkAssetHash, () => {
beforeEach(() => {
vi.clearAllMocks()
})

it.each([
[200, 'exists'],
[404, 'missing'],
[400, 'invalid']
] as const)('maps %s responses to %s', async (status, expected) => {
const hash =
'blake3:1111111111111111111111111111111111111111111111111111111111111111'
fetchApiMock.mockResolvedValueOnce(buildResponse(null, { status }))

await expect(assetService.checkAssetHash(hash)).resolves.toBe(expected)

expect(fetchApiMock).toHaveBeenCalledWith(
`/assets/hash/${encodeURIComponent(hash)}`,
{
method: 'HEAD',
signal: undefined
}
)
})

it('throws for unexpected responses', async () => {
fetchApiMock.mockResolvedValueOnce(buildResponse(null, { status: 500 }))

await expect(assetService.checkAssetHash('blake3:abc')).rejects.toThrow(
'Unexpected asset hash check status: 500'
)
})
})
@@ -1,4 +1,5 @@
import { fromZodError } from 'zod-validation-error'
import { z } from 'zod'

import { st } from '@/i18n'

@@ -29,9 +30,14 @@ export interface PaginationOptions {
offset?: number
}

interface AssetPaginationOptions extends PaginationOptions {
signal?: AbortSignal
}

interface AssetRequestOptions extends PaginationOptions {
includeTags: string[]
includePublic?: boolean
signal?: AbortSignal
}

interface AssetExportOptions {
@@ -170,10 +176,61 @@ const ASSETS_DOWNLOAD_ENDPOINT = '/assets/download'
const ASSETS_EXPORT_ENDPOINT = '/assets/export'
const EXPERIMENTAL_WARNING = `EXPERIMENTAL: If you are seeing this please make sure "Comfy.Assets.UseAssetAPI" is set to "false" in your ComfyUI Settings.\n`
const DEFAULT_LIMIT = 500
const INPUT_ASSETS_WITH_PUBLIC_LIMIT = 500

export const MODELS_TAG = 'models'
/** Asset tag used by the backend for placeholder records that are not installed. */
export const MISSING_TAG = 'missing'

/** Result of a HEAD lookup against an exact asset hash. */
export type AssetHashStatus = 'exists' | 'missing' | 'invalid'

const BLAKE3_ASSET_HASH_PATTERN = /^blake3:[0-9a-f]{64}$/i
const BLAKE3_HEX_PATTERN = /^[0-9a-f]{64}$/i
const uploadedAssetResponseSchema = assetItemSchema.extend({
created_new: z.boolean()
})

/** Returns true for a prefixed BLAKE3 asset hash: `blake3:<64 hex>`. */
export function isBlake3AssetHash(value: string): boolean {
return BLAKE3_ASSET_HASH_PATTERN.test(value)
}

/** Converts a raw 64-character BLAKE3 hex digest into an asset hash. */
export function toBlake3AssetHash(hash: string | undefined): string | null {
if (!hash || !BLAKE3_HEX_PATTERN.test(hash)) return null
return `blake3:${hash}`
}
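Two small helpers with a deliberate asymmetry: `isBlake3AssetHash` validates an already-prefixed value, while `toBlake3AssetHash` normalizes a raw hex digest and returns `null` rather than guessing on malformed input. Usage, mirroring the unit tests above:

```ts
const hex = '1'.repeat(64) // 64 hex chars

isBlake3AssetHash(`blake3:${hex}`) // true (pattern is case-insensitive)
isBlake3AssetHash(hex)             // false: the prefix is required
toBlake3AssetHash(hex)             // 'blake3:' + hex
toBlake3AssetHash('abc')           // null: not a 64-char digest
toBlake3AssetHash(undefined)       // null
```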

function createAbortError(): DOMException {
return new DOMException('Aborted', 'AbortError')
}

function throwIfAborted(signal?: AbortSignal): void {
if (signal?.aborted) throw createAbortError()
}

async function withCallerAbort<T>(
promise: Promise<T>,
signal?: AbortSignal
): Promise<T> {
throwIfAborted(signal)
if (!signal) return await promise

let removeAbortListener = () => {}
const abortPromise = new Promise<never>((_, reject) => {
const onAbort = () => reject(createAbortError())
signal.addEventListener('abort', onAbort, { once: true })
removeAbortListener = () => signal.removeEventListener('abort', onAbort)
})

try {
return await Promise.race([promise, abortPromise])
} finally {
removeAbortListener()
}
}
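`withCallerAbort` detaches a caller's AbortSignal from the underlying work: the race rejects that caller with an `AbortError` while the shared promise keeps running for everyone else, and the `finally` removes the listener to avoid leaks. A usage sketch:

```ts
// Sketch: two readers share one in-flight promise; aborting one
// rejects only that reader, the shared work is untouched.
const shared = new Promise<string>((resolve) =>
  setTimeout(() => resolve('payload'), 1000)
)

const controller = new AbortController()
const reader1 = withCallerAbort(shared, controller.signal)
const reader2 = withCallerAbort(shared) // no signal: plain passthrough

controller.abort()
await reader1.catch((e) => console.log(e.name)) // 'AbortError'
console.log(await reader2)                      // 'payload' after ~1s
```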

/**
* Validates asset response data using Zod schema
*/
@@ -187,11 +244,43 @@ function validateAssetResponse(data: unknown): AssetResponse {
)
}

function validateUploadedAssetResponse(
data: unknown
): AssetItem & { created_new: boolean } {
const result = uploadedAssetResponseSchema.safeParse(data)
if (result.success) {
return result.data
}

console.error('Invalid asset upload response:', fromZodError(result.error))
throw new Error(
st(
'assetBrowser.errorUploadFailed',
'Failed to upload asset. Please try again.'
)
)
}

/**
* Private service for asset-related network requests
* Not exposed globally - used internally by ComfyApi
*/
function createAssetService() {
let inputAssetsIncludingPublic: AssetItem[] | null = null
let inputAssetsIncludingPublicRequestId = 0
let pendingInputAssetsIncludingPublic: Promise<AssetItem[]> | null = null

/** Invalidates the cached public-inclusive input assets without aborting in-flight readers. */
function invalidateInputAssetsIncludingPublic(): void {
inputAssetsIncludingPublicRequestId++
pendingInputAssetsIncludingPublic = null
inputAssetsIncludingPublic = null
}

function invalidateInputAssetsCacheIfNeeded(tags?: string[]): void {
if (tags?.includes('input')) invalidateInputAssetsIncludingPublic()
}

/**
* Handles API response with consistent error handling and Zod validation
*/
@@ -203,7 +292,8 @@ function createAssetService() {
includeTags,
limit = DEFAULT_LIMIT,
offset,
includePublic
includePublic,
signal
} = options
const queryParams = new URLSearchParams({
include_tags: includeTags.join(','),
@@ -217,7 +307,9 @@ function createAssetService() {
}

const url = `${ASSETS_ENDPOINT}?${queryParams.toString()}`
const res = await api.fetchApi(url)
const res = signal
? await api.fetchApi(url, { signal })
: await api.fetchApi(url)
if (!res.ok) {
throw new Error(
`${EXPERIMENTAL_WARNING}Unable to load ${context}: Server returned ${res.status}. Please try again.`
@@ -403,15 +495,16 @@ function createAssetService() {
* @param options - Pagination options
* @param options.limit - Maximum number of assets to return (default: 500)
* @param options.offset - Number of assets to skip (default: 0)
* @param options.signal - Optional abort signal for cancelling the request
* @returns Promise<AssetItem[]> - Full asset objects filtered by tag, excluding missing assets
*/
async function getAssetsByTag(
tag: string,
includePublic: boolean = true,
{ limit = DEFAULT_LIMIT, offset = 0 }: PaginationOptions = {}
{ limit = DEFAULT_LIMIT, offset = 0, signal }: AssetPaginationOptions = {}
): Promise<AssetItem[]> {
const data = await handleAssetRequest(
{ includeTags: [tag], limit, offset, includePublic },
{ includeTags: [tag], limit, offset, includePublic, signal },
`assets for tag ${tag}`
)

@@ -420,6 +513,116 @@ function createAssetService() {
)
}

/**
* Gets every asset for a tag by walking paginated asset API responses.
*
* @param tag - The tag to filter by (e.g., 'models', 'input')
* @param includePublic - Whether to include public assets (default: true)
* @param options - Pagination options
* @param options.limit - Page size for each request (default: 500)
* @param options.signal - Optional abort signal for cancelling requests
* @returns Promise<AssetItem[]> - Full asset objects filtered by tag
*/
async function getAllAssetsByTag(
tag: string,
includePublic: boolean = true,
{ limit = DEFAULT_LIMIT, signal }: AssetPaginationOptions = {}
): Promise<AssetItem[]> {
const assets: AssetItem[] = []
const pageSize = limit > 0 ? limit : DEFAULT_LIMIT
let offset = 0

while (true) {
if (signal?.aborted) throw createAbortError()

const data = await handleAssetRequest(
{
includeTags: [tag],
limit: pageSize,
offset,
includePublic,
signal
},
`assets for tag ${tag}`
)
const batch = data.assets ?? []
assets.push(...batch.filter((asset) => !asset.tags.includes(MISSING_TAG)))

const noMoreFromServer = data.has_more === false
const inferredLastPage =
data.has_more === undefined && batch.length < pageSize
if (batch.length === 0 || noMoreFromServer || inferredLastPage) {
return assets
}

offset += batch.length
}
}
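The loop trusts `has_more` when the server sends it and otherwise infers the last page from a short batch, crucially measured on the raw batch size before the `MISSING_TAG` filter removes placeholder records, so a page full of placeholders cannot end pagination early. Offsets advance by raw batch length for the same reason. Example usage, matching the signature exercised by the tests above:

```ts
// Fetch every non-placeholder 'models' asset in pages of 100,
// cancellable from the UI via an AbortController.
const controller = new AbortController()
const models = await assetService.getAllAssetsByTag('models', true, {
  limit: 100,
  signal: controller.signal
})
console.log(`loaded ${models.length} model assets`)
```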

function startInputAssetsIncludingPublicRequest(): Promise<AssetItem[]> {
const requestId = ++inputAssetsIncludingPublicRequestId

pendingInputAssetsIncludingPublic = getAllAssetsByTag('input', true, {
limit: INPUT_ASSETS_WITH_PUBLIC_LIMIT
})
.then((assets) => {
if (requestId === inputAssetsIncludingPublicRequestId) {
inputAssetsIncludingPublic = assets
}
return assets
})
.finally(() => {
if (requestId === inputAssetsIncludingPublicRequestId) {
pendingInputAssetsIncludingPublic = null
}
})

void pendingInputAssetsIncludingPublic.catch(() => {})
return pendingInputAssetsIncludingPublic
}

/**
* Gets cached input assets including public assets for missing media checks.
* Caller aborts cancel only that caller; shared fetches are invalidated
* through invalidateInputAssetsIncludingPublic().
*/
async function getInputAssetsIncludingPublic(
signal?: AbortSignal
): Promise<AssetItem[]> {
throwIfAborted(signal)
if (inputAssetsIncludingPublic) return inputAssetsIncludingPublic

const request =
pendingInputAssetsIncludingPublic ??
startInputAssetsIncludingPublicRequest()
return await withCallerAbort(request, signal)
}
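The cache intentionally never forwards a per-caller signal to the shared fetch (note the test asserting `serviceSignal` stays `undefined`): any number of callers can await the same request, each can abort independently, and invalidation bumps `inputAssetsIncludingPublicRequestId` so a stale in-flight result can still resolve its awaiters without being written back into the cache. Example:

```ts
// Two concurrent readers; aborting the first leaves the shared load alive.
const a = new AbortController()
const p1 = assetService.getInputAssetsIncludingPublic(a.signal)
const p2 = assetService.getInputAssetsIncludingPublic()

a.abort()
await p1.catch((e) => console.log(e.name)) // 'AbortError'
const assets = await p2                    // resolves normally, fills the cache
```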

/**
* Checks whether an asset exists for an exact asset hash.
*
* Uses the HEAD /assets/hash/{hash} endpoint and maps status-only responses:
* 200 -> exists, 404 -> missing, and 400 -> invalid hash format.
*/
async function checkAssetHash(
assetHash: string,
signal?: AbortSignal
): Promise<AssetHashStatus> {
const response = await api.fetchApi(
`${ASSETS_ENDPOINT}/hash/${encodeURIComponent(assetHash)}`,
{
method: 'HEAD',
signal
}
)

if (response.status === 200) return 'exists'
if (response.status === 404) return 'missing'
if (response.status === 400) return 'invalid'

throw new Error(`Unexpected asset hash check status: ${response.status}`)
}
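Because the endpoint is a HEAD request there is no body to parse; the three expected statuses map onto a closed union and anything else throws, so silent API drift surfaces as an error instead of a wrong boolean. Example, combining it with the normalizer from earlier:

```ts
const rawDigest = 'ab'.repeat(32) // stand-in for a locally computed BLAKE3 digest
const hash = toBlake3AssetHash(rawDigest)
if (hash) {
  const status = await assetService.checkAssetHash(hash)
  if (status === 'exists') {
    // Skip the upload: the server already has this content.
  }
}
```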

/**
* Deletes an asset by ID
* Only available in cloud environment
@@ -438,6 +641,8 @@ function createAssetService() {
`Unable to delete asset ${id}: Server returned ${res.status}`
)
}

invalidateInputAssetsIncludingPublic()
}

/**
@@ -545,7 +750,9 @@ function createAssetService() {
)
}

return await res.json()
const asset = validateUploadedAssetResponse(await res.json())
invalidateInputAssetsCacheIfNeeded(params.tags)
return asset
}

/**
@@ -598,7 +805,9 @@ function createAssetService() {
)
}

return await res.json()
const asset = validateUploadedAssetResponse(await res.json())
invalidateInputAssetsCacheIfNeeded(params.tags)
return asset
}

/**
@@ -628,6 +837,7 @@ function createAssetService() {
if (!parseResult.success) {
throw fromZodError(parseResult.error)
}
invalidateInputAssetsIncludingPublic()
return parseResult.data
}

@@ -658,6 +868,7 @@ function createAssetService() {
if (!parseResult.success) {
throw fromZodError(parseResult.error)
}
invalidateInputAssetsIncludingPublic()
return parseResult.data
}

@@ -709,6 +920,13 @@ function createAssetService() {
)
)
}
if (
params.tags?.includes('input') &&
result.data.type === 'async' &&
result.data.task.status === 'completed'
) {
invalidateInputAssetsIncludingPublic()
}
return result.data
}

@@ -724,6 +942,7 @@ function createAssetService() {
)
)
}
invalidateInputAssetsCacheIfNeeded(params.tags)
return result.data
}

@@ -764,6 +983,10 @@ function createAssetService() {
getAssetsForNodeType,
getAssetDetails,
getAssetsByTag,
getAllAssetsByTag,
getInputAssetsIncludingPublic,
invalidateInputAssetsIncludingPublic,
checkAssetHash,
deleteAsset,
updateAsset,
addAssetTags,
@@ -2,6 +2,7 @@ import * as Sentry from '@sentry/vue'
import { isEmpty } from 'es-toolkit/compat'

import { api } from '@/scripts/api'
import { toError } from '@/utils/errorUtil'

interface UserCloudStatus {
status: 'active'
@@ -80,7 +81,7 @@ export async function getUserCloudStatus(): Promise<UserCloudStatus> {
} catch (error) {
// Only capture network errors (not HTTP errors we already captured)
if (!isHttpError(error, 'Failed to get user:')) {
captureApiError(error as Error, '/user', 'network_error')
captureApiError(toError(error), '/user', 'network_error')
}
throw error
}
@@ -176,7 +177,7 @@ export async function submitSurvey(
// Only capture network errors (not HTTP errors we already captured)
if (!isHttpError(error, 'Failed to submit survey:')) {
captureApiError(
error as Error,
toError(error),
'/settings',
'network_error',
undefined,
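Both hunks replace the `error as Error` cast with `toError(error)` before handing the value to Sentry, since a `catch` binding is `unknown` and may hold a string or other non-Error value. The helper's implementation is not shown in this diff; a plausible minimal sketch of `@/utils/errorUtil` (an assumption, not the project's actual code):

```ts
// Hypothetical sketch: normalizes any thrown value into a real Error.
export function toError(value: unknown): Error {
  return value instanceof Error ? value : new Error(String(value))
}
```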
@@ -1,9 +1,11 @@
import { fromAny } from '@total-typescript/shoehorn'
import { describe, expect, it, vi } from 'vitest'
import { beforeEach, describe, expect, it, vi } from 'vitest'

import type { LGraph } from '@/lib/litegraph/src/LGraph'
import type { LGraphNode } from '@/lib/litegraph/src/LGraphNode'
import type { IComboWidget } from '@/lib/litegraph/src/types/widgets'
import type { AssetItem } from '@/platform/assets/schemas/assetSchema'
import type * as AssetServiceModule from '@/platform/assets/services/assetService'
import {
scanAllMediaCandidates,
scanNodeMediaCandidates,
@@ -13,6 +15,13 @@ import {
} from './missingMediaScan'
import type { MissingMediaCandidate } from './types'

const { mockCheckAssetHash, mockGetInputAssetsIncludingPublic } = vi.hoisted(
() => ({
mockCheckAssetHash: vi.fn(),
mockGetInputAssetsIncludingPublic: vi.fn()
})
)

vi.mock('@/utils/graphTraversalUtil', () => ({
collectAllNodes: (graph: { _testNodes: LGraphNode[] }) => graph._testNodes,
getExecutionIdByNode: (
@@ -21,6 +30,21 @@ vi.mock('@/utils/graphTraversalUtil', () => ({
) => node._testExecutionId ?? String(node.id)
}))

vi.mock('@/platform/assets/services/assetService', async () => {
const actual = await vi.importActual<typeof AssetServiceModule>(
'@/platform/assets/services/assetService'
)

return {
...actual,
assetService: {
...actual.assetService,
checkAssetHash: mockCheckAssetHash,
getInputAssetsIncludingPublic: mockGetInputAssetsIncludingPublic
}
}
})

function makeCandidate(
nodeId: string,
name: string,
@@ -70,6 +94,16 @@ function makeGraph(nodes: LGraphNode[]): LGraph {
return fromAny<LGraph, unknown>({ _testNodes: nodes })
}

function makeAsset(name: string, assetHash: string | null = null): AssetItem {
return {
id: name,
name,
asset_hash: assetHash,
mime_type: null,
tags: ['input']
}
}

describe('scanNodeMediaCandidates', () => {
it('returns candidate for a LoadImage node with missing image', () => {
const graph = makeGraph([])
@@ -232,37 +266,43 @@ describe('groupCandidatesByMediaType', () => {
})

describe('verifyCloudMediaCandidates', () => {
it('marks candidates missing when not in input assets', async () => {
const existingHash =
'blake3:1111111111111111111111111111111111111111111111111111111111111111'
const missingHash =
'blake3:2222222222222222222222222222222222222222222222222222222222222222'

beforeEach(() => {
vi.clearAllMocks()
mockCheckAssetHash.mockResolvedValue('missing')
mockGetInputAssetsIncludingPublic.mockResolvedValue([])
})

it('marks candidates missing when the asset hash is not found', async () => {
const candidates = [
makeCandidate('1', 'abc123.png', { isMissing: undefined }),
makeCandidate('2', 'def456.png', { isMissing: undefined })
makeCandidate('1', missingHash, { isMissing: undefined }),
makeCandidate('2', existingHash, { isMissing: undefined })
]

const mockStore = {
updateInputs: async () => {},
inputAssets: [{ asset_hash: 'def456.png', name: 'my-photo.png' }]
}
const checkAssetHash = vi.fn(async (assetHash: string) =>
assetHash === existingHash ? ('exists' as const) : ('missing' as const)
)

await verifyCloudMediaCandidates(candidates, undefined, mockStore)
await verifyCloudMediaCandidates(candidates, undefined, checkAssetHash)

expect(candidates[0].isMissing).toBe(true)
expect(candidates[1].isMissing).toBe(false)
})

it('calls updateInputs before checking assets', async () => {
let updateCalled = false
const candidates = [makeCandidate('1', 'abc.png', { isMissing: undefined })]
it('uses assetService.checkAssetHash by default', async () => {
const candidates = [
makeCandidate('1', existingHash, { isMissing: undefined })
]
mockCheckAssetHash.mockResolvedValue('exists')

const mockStore = {
updateInputs: async () => {
updateCalled = true
},
inputAssets: []
}
await verifyCloudMediaCandidates(candidates)

await verifyCloudMediaCandidates(candidates, undefined, mockStore)

expect(updateCalled).toBe(true)
expect(candidates[0].isMissing).toBe(false)
expect(mockCheckAssetHash).toHaveBeenCalledWith(existingHash, undefined)
})

it('respects abort signal before execution', async () => {
@@ -270,69 +310,221 @@ describe('verifyCloudMediaCandidates', () => {
controller.abort()

const candidates = [
makeCandidate('1', 'abc123.png', { isMissing: undefined })
makeCandidate('1', missingHash, { isMissing: undefined })
]

await verifyCloudMediaCandidates(candidates, controller.signal)

expect(candidates[0].isMissing).toBeUndefined()
expect(mockCheckAssetHash).not.toHaveBeenCalled()
})

it('respects abort signal after updateInputs', async () => {
it('respects abort signal after hash verification', async () => {
const controller = new AbortController()
const candidates = [makeCandidate('1', 'abc.png', { isMissing: undefined })]
const candidates = [
makeCandidate('1', existingHash, { isMissing: undefined })
]
const checkAssetHash = vi.fn(async () => {
controller.abort()
return 'exists' as const
})

const mockStore = {
updateInputs: async () => {
controller.abort()
},
inputAssets: [{ asset_hash: 'abc.png', name: 'photo.png' }]
}

await verifyCloudMediaCandidates(candidates, controller.signal, mockStore)
await verifyCloudMediaCandidates(
candidates,
controller.signal,
checkAssetHash
)

expect(candidates[0].isMissing).toBeUndefined()
})

it('skips candidates already resolved as true', async () => {
const candidates = [makeCandidate('1', 'abc.png', { isMissing: true })]
const candidates = [makeCandidate('1', missingHash, { isMissing: true })]

const mockStore = {
updateInputs: async () => {},
inputAssets: []
}

await verifyCloudMediaCandidates(candidates, undefined, mockStore)
await verifyCloudMediaCandidates(candidates)

expect(candidates[0].isMissing).toBe(true)
expect(mockCheckAssetHash).not.toHaveBeenCalled()
})

it('skips candidates already resolved as false', async () => {
const candidates = [makeCandidate('1', 'abc.png', { isMissing: false })]
const candidates = [makeCandidate('1', existingHash, { isMissing: false })]

const mockStore = {
updateInputs: async () => {},
inputAssets: []
}

await verifyCloudMediaCandidates(candidates, undefined, mockStore)
await verifyCloudMediaCandidates(candidates)

expect(candidates[0].isMissing).toBe(false)
expect(mockCheckAssetHash).not.toHaveBeenCalled()
})

it('skips entirely when no pending candidates', async () => {
let updateCalled = false
const candidates = [makeCandidate('1', 'abc.png', { isMissing: true })]
const candidates = [makeCandidate('1', missingHash, { isMissing: true })]

const mockStore = {
updateInputs: async () => {
updateCalled = true
},
inputAssets: []
}
await verifyCloudMediaCandidates(candidates)

await verifyCloudMediaCandidates(candidates, undefined, mockStore)
expect(mockCheckAssetHash).not.toHaveBeenCalled()
})

expect(updateCalled).toBe(false)
it('falls back to input assets for non-blake3 candidate names', async () => {
const candidates = [
makeCandidate('1', 'photo.png', { isMissing: undefined }),
makeCandidate('2', 'missing.png', { isMissing: undefined })
]
const fetchInputAssets = vi.fn(async () => [
makeAsset('stored-photo.png', 'photo.png')
])

await verifyCloudMediaCandidates(
candidates,
undefined,
undefined,
fetchInputAssets
)

expect(mockCheckAssetHash).not.toHaveBeenCalled()
expect(fetchInputAssets).toHaveBeenCalledOnce()
expect(candidates[0].isMissing).toBe(false)
expect(candidates[1].isMissing).toBe(true)
})

it('uses public input assets for default legacy fallback', async () => {
const candidates = [
makeCandidate('1', 'public-photo.png', { isMissing: undefined })
]
const inputAssets = Array.from({ length: 500 }, (_, index) =>
makeAsset(`asset-${index}.png`)
)
inputAssets[42] = makeAsset('public-asset-record', 'public-photo.png')
mockGetInputAssetsIncludingPublic.mockResolvedValue(inputAssets)

await verifyCloudMediaCandidates(candidates)

expect(mockGetInputAssetsIncludingPublic).toHaveBeenCalledWith(undefined)
expect(candidates[0].isMissing).toBe(false)
})

it('silences aborts while loading legacy fallback input assets', async () => {
const abortError = new Error('aborted')
abortError.name = 'AbortError'
|
||||
const controller = new AbortController()
|
||||
const candidates = [
|
||||
makeCandidate('1', 'photo.png', { isMissing: undefined })
|
||||
]
|
||||
const fetchInputAssets = vi.fn(async () => {
|
||||
controller.abort()
|
||||
throw abortError
|
||||
})
|
||||
|
||||
await expect(
|
||||
verifyCloudMediaCandidates(
|
||||
candidates,
|
||||
controller.signal,
|
||||
undefined,
|
||||
fetchInputAssets
|
||||
)
|
||||
).resolves.toBeUndefined()
|
||||
|
||||
expect(candidates[0].isMissing).toBeUndefined()
|
||||
})
|
||||
|
||||
it('silences aborts from the default legacy fallback input asset store path', async () => {
|
||||
const abortError = new Error('aborted')
|
||||
abortError.name = 'AbortError'
|
||||
const controller = new AbortController()
|
||||
const candidates = [
|
||||
makeCandidate('1', 'photo.png', { isMissing: undefined })
|
||||
]
|
||||
mockGetInputAssetsIncludingPublic.mockImplementationOnce(async () => {
|
||||
controller.abort()
|
||||
throw abortError
|
||||
})
|
||||
|
||||
await expect(
|
||||
verifyCloudMediaCandidates(candidates, controller.signal)
|
||||
).resolves.toBeUndefined()
|
||||
|
||||
expect(mockGetInputAssetsIncludingPublic).toHaveBeenCalledWith(
|
||||
controller.signal
|
||||
)
|
||||
expect(candidates[0].isMissing).toBeUndefined()
|
||||
})
|
||||
|
||||
it('falls back to input assets when the hash endpoint returns 400', async () => {
|
||||
const candidates = [
|
||||
makeCandidate('1', existingHash, { isMissing: undefined })
|
||||
]
|
||||
mockCheckAssetHash.mockResolvedValue('invalid')
|
||||
const fetchInputAssets = vi.fn(async () => [
|
||||
makeAsset('photo.png', existingHash)
|
||||
])
|
||||
|
||||
await verifyCloudMediaCandidates(
|
||||
candidates,
|
||||
undefined,
|
||||
undefined,
|
||||
fetchInputAssets
|
||||
)
|
||||
|
||||
expect(mockCheckAssetHash).toHaveBeenCalledWith(existingHash, undefined)
|
||||
expect(fetchInputAssets).toHaveBeenCalledOnce()
|
||||
expect(candidates[0].isMissing).toBe(false)
|
||||
})
|
||||
|
||||
it('falls back to input assets when hash verification fails', async () => {
|
||||
const warn = vi.spyOn(console, 'warn').mockImplementation(() => {})
|
||||
const candidates = [
|
||||
makeCandidate('1', existingHash, { isMissing: undefined })
|
||||
]
|
||||
const checkAssetHash = vi.fn(async () => {
|
||||
throw new Error('network failed')
|
||||
})
|
||||
const fetchInputAssets = vi.fn(async () => [
|
||||
makeAsset('photo.png', existingHash)
|
||||
])
|
||||
|
||||
await verifyCloudMediaCandidates(
|
||||
candidates,
|
||||
undefined,
|
||||
checkAssetHash,
|
||||
fetchInputAssets
|
||||
)
|
||||
|
||||
expect(fetchInputAssets).toHaveBeenCalledOnce()
|
||||
expect(candidates[0].isMissing).toBe(false)
|
||||
expect(warn).toHaveBeenCalledOnce()
|
||||
warn.mockRestore()
|
||||
})
|
||||
|
||||
it('does not call the hash endpoint for malformed blake3-looking values', async () => {
|
||||
const malformedHash = 'blake3:abc'
|
||||
const candidates = [
|
||||
makeCandidate('1', malformedHash, { isMissing: undefined })
|
||||
]
|
||||
const fetchInputAssets = vi.fn(async () => [
|
||||
makeAsset('legacy.png', malformedHash)
|
||||
])
|
||||
|
||||
await verifyCloudMediaCandidates(
|
||||
candidates,
|
||||
undefined,
|
||||
undefined,
|
||||
fetchInputAssets
|
||||
)
|
||||
|
||||
expect(mockCheckAssetHash).not.toHaveBeenCalled()
|
||||
expect(fetchInputAssets).toHaveBeenCalledOnce()
|
||||
expect(candidates[0].isMissing).toBe(false)
|
||||
})
|
||||
|
||||
it('deduplicates checks for repeated candidate names', async () => {
|
||||
const candidates = [
|
||||
makeCandidate('1', missingHash, { isMissing: undefined }),
|
||||
makeCandidate('2', missingHash, { isMissing: undefined })
|
||||
]
|
||||
|
||||
await verifyCloudMediaCandidates(candidates)
|
||||
|
||||
expect(mockCheckAssetHash).toHaveBeenCalledOnce()
|
||||
expect(candidates[0].isMissing).toBe(true)
|
||||
expect(candidates[1].isMissing).toBe(true)
|
||||
})
|
||||
})
|
||||
|
||||
@@ -18,6 +18,12 @@ import {
} from '@/utils/graphTraversalUtil'
import { LGraphEventMode } from '@/lib/litegraph/src/types/globalEnums'
import { resolveComboValues } from '@/utils/litegraphUtil'
import type { AssetItem } from '@/platform/assets/schemas/assetSchema'
import type { AssetHashStatus } from '@/platform/assets/services/assetService'
import {
  assetService,
  isBlake3AssetHash
} from '@/platform/assets/services/assetService'

/** Map of node types to their media widget name and media type. */
const MEDIA_NODE_WIDGETS: Record<
@@ -106,41 +112,130 @@ export function scanNodeMediaCandidates(
  return candidates
}

interface InputVerifier {
  updateInputs: () => Promise<unknown>
  inputAssets: Array<{ asset_hash?: string | null; name: string }>
type AssetHashVerifier = (
  assetHash: string,
  signal?: AbortSignal
) => Promise<AssetHashStatus>

type InputAssetFetcher = (signal?: AbortSignal) => Promise<AssetItem[]>

function groupCandidatesForHashLookup(candidates: MissingMediaCandidate[]): {
  candidatesByHash: Map<string, MissingMediaCandidate[]>
  legacyCandidates: MissingMediaCandidate[]
} {
  const candidatesByHash = new Map<string, MissingMediaCandidate[]>()
  const legacyCandidates: MissingMediaCandidate[] = []

  for (const candidate of candidates) {
    if (!isBlake3AssetHash(candidate.name)) {
      legacyCandidates.push(candidate)
      continue
    }

    const hashCandidates = candidatesByHash.get(candidate.name)
    if (hashCandidates) hashCandidates.push(candidate)
    else candidatesByHash.set(candidate.name, [candidate])
  }

  return { candidatesByHash, legacyCandidates }
}

async function verifyCandidatesByHash(
  candidatesByHash: Map<string, MissingMediaCandidate[]>,
  legacyCandidates: MissingMediaCandidate[],
  signal: AbortSignal | undefined,
  checkAssetHash: AssetHashVerifier
): Promise<void> {
  await Promise.all(
    Array.from(candidatesByHash, async ([assetHash, hashCandidates]) => {
      if (signal?.aborted) return

      let status: AssetHashStatus
      try {
        status = await checkAssetHash(assetHash, signal)
        if (signal?.aborted) return
      } catch (err) {
        if (signal?.aborted || isAbortError(err)) return
        console.warn(
          '[Missing Media Pipeline] Failed to verify asset hash:',
          err
        )
        legacyCandidates.push(...hashCandidates)
        return
      }

      if (status === 'invalid') {
        legacyCandidates.push(...hashCandidates)
        return
      }

      for (const candidate of hashCandidates) {
        candidate.isMissing = status === 'missing'
      }
    })
  )
}

/**
 * Verify cloud media candidates against the input assets fetched from the
 * assets store. Mutates candidates' `isMissing` in place.
 * Verify cloud media candidates by probing the asset hash endpoint first.
 * Invalid hash values fall back to the legacy input asset list check.
 */
export async function verifyCloudMediaCandidates(
  candidates: MissingMediaCandidate[],
  signal?: AbortSignal,
  assetsStore?: InputVerifier
  checkAssetHash: AssetHashVerifier = assetService.checkAssetHash,
  fetchInputAssets: InputAssetFetcher = fetchMissingInputAssets
): Promise<void> {
  if (signal?.aborted) return

  const pending = candidates.filter((c) => c.isMissing === undefined)
  if (pending.length === 0) return

  const store =
    assetsStore ?? (await import('@/stores/assetsStore')).useAssetsStore()
  const { candidatesByHash, legacyCandidates } =
    groupCandidatesForHashLookup(pending)
  await verifyCandidatesByHash(
    candidatesByHash,
    legacyCandidates,
    signal,
    checkAssetHash
  )

  await store.updateInputs()
  if (signal?.aborted || legacyCandidates.length === 0) return

  let inputAssets: AssetItem[]
  try {
    inputAssets = await fetchInputAssets(signal)
  } catch (err) {
    if (signal?.aborted || isAbortError(err)) return
    throw err
  }

  if (signal?.aborted) return

  const assetHashes = new Set(
    store.inputAssets.map((a) => a.asset_hash).filter((h): h is string => !!h)
    inputAssets.map((a) => a.asset_hash).filter((h): h is string => !!h)
  )

  for (const c of pending) {
    c.isMissing = !assetHashes.has(c.name)
  for (const candidate of legacyCandidates) {
    candidate.isMissing = !assetHashes.has(candidate.name)
  }
}

async function fetchMissingInputAssets(
  signal?: AbortSignal
): Promise<AssetItem[]> {
  return await assetService.getInputAssetsIncludingPublic(signal)
}

function isAbortError(err: unknown): boolean {
  return (
    typeof err === 'object' &&
    err !== null &&
    'name' in err &&
    err.name === 'AbortError'
  )
}

/** Group confirmed-missing candidates by file name into view models. */
export function groupCandidatesByName(
  candidates: MissingMediaCandidate[]

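The refactor above turns verification into two phases: one hash-endpoint probe per unique blake3 name, then a single legacy list check for everything else. A minimal standalone sketch of that grouping technique, using illustrative types (`Candidate`, `Status`, `verifyByHash`) rather than the production ones:

```ts
type Status = 'exists' | 'missing' | 'invalid'

interface Candidate {
  name: string
  isMissing?: boolean
}

// Check each unique hash exactly once, fan the result back out to every
// candidate sharing it, and hand non-hash or invalid-hash candidates to
// the caller for the legacy input-list comparison.
async function verifyByHash(
  candidates: Candidate[],
  check: (hash: string) => Promise<Status>
): Promise<Candidate[]> {
  const byHash = new Map<string, Candidate[]>()
  const legacy: Candidate[] = []

  for (const c of candidates) {
    if (!c.name.startsWith('blake3:')) {
      legacy.push(c)
      continue
    }
    const group = byHash.get(c.name)
    if (group) group.push(c)
    else byHash.set(c.name, [c])
  }

  await Promise.all(
    Array.from(byHash, async ([hash, group]) => {
      const status = await check(hash)
      if (status === 'invalid') {
        legacy.push(...group) // defer to the legacy list check
        return
      }
      for (const c of group) c.isMissing = status === 'missing'
    })
  )

  return legacy
}
```
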
@@ -19,6 +19,11 @@ import activeSubgraphUnmatchedModel from '@/platform/missingModel/__fixtures__/a
import bypassedSubgraphUnmatchedModel from '@/platform/missingModel/__fixtures__/bypassedSubgraphUnmatchedModel.json' with { type: 'json' }
import type { MissingModelCandidate } from '@/platform/missingModel/types'
import type { ComfyWorkflowJSON } from '@/platform/workflow/validation/schemas/workflowSchema'
import type * as AssetServiceModule from '@/platform/assets/services/assetService'

const { mockCheckAssetHash } = vi.hoisted(() => ({
  mockCheckAssetHash: vi.fn()
}))

vi.mock('@/utils/graphTraversalUtil', () => ({
  collectAllNodes: (graph: { _testNodes: LGraphNode[] }) => graph._testNodes,
@@ -28,6 +33,20 @@ vi.mock('@/utils/graphTraversalUtil', () => ({
  ) => node._testExecutionId ?? String(node.id)
}))

vi.mock('@/platform/assets/services/assetService', async () => {
  const actual = await vi.importActual<typeof AssetServiceModule>(
    '@/platform/assets/services/assetService'
  )

  return {
    ...actual,
    assetService: {
      ...actual.assetService,
      checkAssetHash: mockCheckAssetHash
    }
  }
})

/** Helper: create a combo widget mock */
function makeComboWidget(
  name: string,
@@ -43,7 +62,7 @@ function makeComboWidget(
}

/** Helper: create an asset widget mock (Cloud combo replacement) */
function makeAssetWidget(name: string, value: string): IBaseWidget {
function makeAssetWidget(name: string, value: unknown): IBaseWidget {
  return fromAny<IBaseWidget, unknown>({
    type: 'asset',
    name,
@@ -551,6 +570,16 @@ describe('scanAllModelCandidates', () => {
    expect(result).toEqual([])
  })

  it('should skip asset widgets with non-string values', () => {
    const graph = makeGraph([
      makeNode(1, 'SomeNode', [makeAssetWidget('ckpt_name', 123)])
    ])

    const result = scanAllModelCandidates(graph, noAssetSupport)

    expect(result).toEqual([])
  })

  it('should scan both combo and asset widgets on the same node', () => {
    const graph = makeGraph([
      makeNode(1, 'DualLoaderNode', [
@@ -1411,6 +1440,7 @@ function makeAssetCandidate(
describe('verifyAssetSupportedCandidates', () => {
  beforeEach(() => {
    vi.clearAllMocks()
    mockCheckAssetHash.mockResolvedValue('missing')
    mockIsModelLoading.mockReturnValue(false)
    mockHasMore.mockReturnValue(false)
    mockGetAssets.mockReturnValue([])
@@ -1428,6 +1458,125 @@ describe('verifyAssetSupportedCandidates', () => {
    )
  })

  it('should resolve isMissing=false when the blake3 hash endpoint finds the asset', async () => {
    const hash =
      '1111111111111111111111111111111111111111111111111111111111111111'
    const candidates = [
      makeAssetCandidate('model.safetensors', {
        hash,
        hashType: 'blake3'
      })
    ]
    mockCheckAssetHash.mockResolvedValue('exists')

    await verifyAssetSupportedCandidates(candidates)

    expect(candidates[0].isMissing).toBe(false)
    expect(mockCheckAssetHash).toHaveBeenCalledWith(`blake3:${hash}`, undefined)
    expect(mockUpdateModelsForNodeType).not.toHaveBeenCalled()
  })

  it('should fall back to asset store matching when the blake3 hash is not found', async () => {
    const hash =
      '2222222222222222222222222222222222222222222222222222222222222222'
    const candidates = [
      makeAssetCandidate('my_model.safetensors', {
        hash,
        hashType: 'blake3'
      })
    ]
    mockCheckAssetHash.mockResolvedValue('missing')
    mockGetAssets.mockReturnValue([
      {
        id: '1',
        name: 'my_model.safetensors',
        asset_hash: null,
        metadata: { filename: 'my_model.safetensors' }
      }
    ])

    await verifyAssetSupportedCandidates(candidates)

    expect(candidates[0].isMissing).toBe(false)
    expect(mockUpdateModelsForNodeType).toHaveBeenCalledWith(
      'CheckpointLoaderSimple'
    )
  })

  it('should fall back to asset store matching when hash verification fails', async () => {
    const warn = vi.spyOn(console, 'warn').mockImplementation(() => {})
    const hash =
      '3333333333333333333333333333333333333333333333333333333333333333'
    const candidates = [
      makeAssetCandidate('my_model.safetensors', {
        hash,
        hashType: 'blake3'
      })
    ]
    mockCheckAssetHash.mockRejectedValue(new Error('network failed'))
    mockGetAssets.mockReturnValue([
      {
        id: '1',
        name: 'my_model.safetensors',
        asset_hash: null,
        metadata: { filename: 'my_model.safetensors' }
      }
    ])

    await verifyAssetSupportedCandidates(candidates)

    expect(candidates[0].isMissing).toBe(false)
    expect(mockUpdateModelsForNodeType).toHaveBeenCalledWith(
      'CheckpointLoaderSimple'
    )
    expect(warn).toHaveBeenCalledOnce()
    warn.mockRestore()
  })

  it('should skip malformed blake3 hashes and use asset store matching', async () => {
    const candidates = [
      makeAssetCandidate('my_model.safetensors', {
        hash: 'abc123',
        hashType: 'blake3'
      })
    ]
    mockGetAssets.mockReturnValue([
      {
        id: '1',
        name: 'my_model.safetensors',
        asset_hash: null,
        metadata: { filename: 'my_model.safetensors' }
      }
    ])

    await verifyAssetSupportedCandidates(candidates)

    expect(mockCheckAssetHash).not.toHaveBeenCalled()
    expect(candidates[0].isMissing).toBe(false)
  })

  it('should not warn or fall back when hash verification is aborted', async () => {
    const warn = vi.spyOn(console, 'warn').mockImplementation(() => {})
    const abortError = new Error('aborted')
    abortError.name = 'AbortError'
    const hash =
      '4444444444444444444444444444444444444444444444444444444444444444'
    const candidates = [
      makeAssetCandidate('my_model.safetensors', {
        hash,
        hashType: 'blake3'
      })
    ]
    mockCheckAssetHash.mockRejectedValue(abortError)

    await verifyAssetSupportedCandidates(candidates)

    expect(candidates[0].isMissing).toBeUndefined()
    expect(mockUpdateModelsForNodeType).not.toHaveBeenCalled()
    expect(warn).not.toHaveBeenCalled()
    warn.mockRestore()
  })

  it('should resolve isMissing=false when asset with matching hash exists', async () => {
    const candidates = [
      makeAssetCandidate('model.safetensors', {
@@ -1442,6 +1591,7 @@ describe('verifyAssetSupportedCandidates', () => {
    await verifyAssetSupportedCandidates(candidates)

    expect(candidates[0].isMissing).toBe(false)
    expect(mockCheckAssetHash).not.toHaveBeenCalled()
  })

  it('should resolve isMissing=false when asset with matching filename exists', async () => {

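The hoisted mock plus `vi.importActual` combination used above is the standard Vitest recipe for stubbing one method of a service module while keeping the rest of the module real. A reduced sketch of the pattern; the module path and method name are placeholders, not the real ones:

```ts
import { vi } from 'vitest'

// vi.hoisted runs before the hoisted vi.mock factory, so the spy exists
// by the time the factory references it.
const { mockCheck } = vi.hoisted(() => ({ mockCheck: vi.fn() }))

vi.mock('./exampleService', async () => {
  const actual =
    await vi.importActual<typeof import('./exampleService')>('./exampleService')
  return {
    ...actual, // every other export stays real
    exampleService: { ...actual.exampleService, check: mockCheck }
  }
})
```
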
@@ -24,6 +24,11 @@ import {
} from '@/utils/graphTraversalUtil'
import { LGraphEventMode } from '@/lib/litegraph/src/types/globalEnums'
import { resolveComboValues } from '@/utils/litegraphUtil'
import type { AssetHashStatus } from '@/platform/assets/services/assetService'
import {
  assetService,
  toBlake3AssetHash
} from '@/platform/assets/services/assetService'

export type MissingModelWorkflowData = FlattenableWorkflowGraph & {
  models?: ModelFile[]
@@ -177,7 +182,7 @@ function scanAssetWidget(
  getDirectory: ((nodeType: string) => string | undefined) | undefined
): MissingModelCandidate | null {
  const value = widget.value
  if (!value.trim()) return null
  if (typeof value !== 'string' || !value.trim()) return null
  if (!isModelFileName(value)) return null

  return {
@@ -445,20 +450,68 @@ interface AssetVerifier {
  getAssets: (nodeType: string) => AssetItem[] | undefined
}

type AssetHashVerifier = (
  assetHash: string,
  signal?: AbortSignal
) => Promise<AssetHashStatus>

export async function verifyAssetSupportedCandidates(
  candidates: MissingModelCandidate[],
  signal?: AbortSignal,
  assetsStore?: AssetVerifier
  assetsStore?: AssetVerifier,
  checkAssetHash: AssetHashVerifier = assetService.checkAssetHash
): Promise<void> {
  if (signal?.aborted) return

  const pendingCandidates = candidates.filter(
    (c) => c.isAssetSupported && c.isMissing === undefined
  )
  if (pendingCandidates.length === 0) return

  const pendingNodeTypes = new Set<string>()
  for (const c of candidates) {
    if (c.isAssetSupported && c.isMissing === undefined) {
      pendingNodeTypes.add(c.nodeType)
  const candidatesByHash = new Map<string, MissingModelCandidate[]>()

  for (const candidate of pendingCandidates) {
    const assetHash = getBlake3AssetHash(candidate)
    if (!assetHash) {
      pendingNodeTypes.add(candidate.nodeType)
      continue
    }

    const hashCandidates = candidatesByHash.get(assetHash)
    if (hashCandidates) hashCandidates.push(candidate)
    else candidatesByHash.set(assetHash, [candidate])
  }

  await Promise.all(
    Array.from(candidatesByHash, async ([assetHash, hashCandidates]) => {
      if (signal?.aborted) return

      try {
        const status = await checkAssetHash(assetHash, signal)
        if (signal?.aborted) return

        if (status === 'exists') {
          for (const candidate of hashCandidates) {
            candidate.isMissing = false
          }
          return
        }
      } catch (err) {
        if (signal?.aborted || isAbortError(err)) return
        console.warn(
          '[Missing Model Pipeline] Failed to verify asset hash:',
          err
        )
      }

      for (const candidate of hashCandidates) {
        pendingNodeTypes.add(candidate.nodeType)
      }
    })
  )

  if (signal?.aborted) return
  if (pendingNodeTypes.size === 0) return

  const store =
@@ -491,6 +544,20 @@ export async function verifyAssetSupportedCandidates(
  }
}

function getBlake3AssetHash(candidate: MissingModelCandidate): string | null {
  if (candidate.hashType?.toLowerCase() !== 'blake3') return null
  return toBlake3AssetHash(candidate.hash)
}

function isAbortError(err: unknown): boolean {
  return (
    typeof err === 'object' &&
    err !== null &&
    'name' in err &&
    err.name === 'AbortError'
  )
}

function normalizePath(path: string): string {
  return path.replace(/\\/g, '/')
}

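Both pipelines deliberately treat cancellation as a silent no-op: an abort must neither warn nor trigger the fallback path. A small sketch of the same convention around a fetch-style call, assuming the callee rejects with an error named AbortError (the URL and function names are illustrative):

```ts
function isAbortError(err: unknown): boolean {
  return (
    typeof err === 'object' &&
    err !== null &&
    'name' in err &&
    err.name === 'AbortError'
  )
}

async function probe(url: string, signal?: AbortSignal): Promise<boolean> {
  try {
    const res = await fetch(url, { signal })
    return res.ok
  } catch (err) {
    // Cancellation is expected; bail out quietly instead of warning
    // or falling back to a slower path.
    if (signal?.aborted || isAbortError(err)) return false
    throw err
  }
}
```

The double check matters: a downstream helper may throw its own error after the signal fires, so `signal?.aborted` catches aborts that do not surface as AbortError.
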
@@ -11,6 +11,12 @@ export const FEATURE_SURVEYS: Record<string, FeatureSurveyConfig> = {
    triggerThreshold: 3,
    delayMs: 5000
  },
  'queue-progress-overlay': {
    featureId: 'queue-progress-overlay',
    typeformId: 'HZ5saxry',
    triggerThreshold: 16,
    delayMs: 5000
  },
  'error-panel': {
    featureId: 'error-panel',
    typeformId: 'iFp4p4mV',

@@ -2,6 +2,7 @@ import { MediaRecorder as ExtendableMediaRecorder } from 'extendable-media-recor
import { onUnmounted, ref } from 'vue'

import { useAudioService } from '@/services/audioService'
import { toError } from '@/utils/errorUtil'

interface AudioRecorderOptions {
  onRecordingComplete?: (audioBlob: Blob) => Promise<void>
@@ -62,7 +63,7 @@ export function useAudioRecorder(options: AudioRecorderOptions = {}) {
      isRecording.value = true
    } catch (err) {
      if (options.onError) {
        options.onError(err as Error)
        options.onError(toError(err))
      }
      throw err
    }

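The one-line change swaps an unchecked `err as Error` cast for a real narrowing helper, so the `onError` callback can never receive a thrown string or plain object. The call-site shape, sketched with a generic task runner (`run`, `task` are illustrative names):

```ts
import { toError } from '@/utils/errorUtil'

async function run(task: () => Promise<void>, onError?: (e: Error) => void) {
  try {
    await task()
  } catch (err) {
    // toError guarantees a usable Error even if `throw 'oops'` happened.
    onError?.(toError(err))
    throw err
  }
}
```
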
@@ -6,6 +6,7 @@ import type {
  NodePackSearchProvider,
  SearchPacksResult
} from '@/types/searchServiceTypes'
import { toError } from '@/utils/errorUtil'

type RegistryNodePack = components['schemas']['Node']

@@ -152,7 +153,7 @@ export const useRegistrySearchGateway = (): NodePackSearchProvider => {
        recordSuccess(providerState)
        return result
      } catch (error) {
        lastError = error as Error
        lastError = toError(error)
        const providerState = providers[activeProviderIndex]
        recordFailure(providerState, lastError)
        console.warn(

@@ -1,43 +1,597 @@
import { createTestingPinia } from '@pinia/testing'
import { setActivePinia } from 'pinia'
import { beforeEach, describe, expect, it, vi } from 'vitest'

vi.mock('@/scripts/app', () => ({
  app: { canvas: undefined },
  ComfyApp: class {}
import type {
  IContextMenuValue,
  LGraphNode
} from '@/lib/litegraph/src/litegraph'
import type { ContextMenuDivElement } from '@/lib/litegraph/src/interfaces'
import type { IBaseWidget } from '@/lib/litegraph/src/types/widgets'

import { getExtraOptionsForWidget } from '@/services/litegraphService'

async function invokeMenuCallback(option: IContextMenuValue): Promise<void> {
  // Production callbacks under test do not reference `this`; ContextMenuDivElement
  // is a DOM element decorated with extra fields, not realistic to construct in tests.
  await option.callback?.call({} as ContextMenuDivElement)
}

const mockPrompt = vi.fn()
const mockCanvas = vi.hoisted(() => ({
  setDirty: vi.fn(),
  graph_mouse: [100, 200],
  ds: {
    scale: 1,
    offset: [0, 0] as [number, number],
    visible_area: [0, 0, 800, 600] as
      | [number, number, number, number]
      | undefined,
    fitToBounds: vi.fn()
  },
  graph: {
    nodes: [] as unknown[],
    getNodeById: vi.fn(),
    add: vi.fn(),
    setDirtyCanvas: vi.fn(),
    isRootGraph: true
  },
  animateToBounds: vi.fn(),
  _deserializeItems: vi.fn()
}))

import { app } from '@/scripts/app'
import { useLitegraphService } from '@/services/litegraphService'
const mockApp = vi.hoisted(() => ({
  canvas: undefined as unknown,
  graph: undefined as unknown,
  dragOverNode: null,
  lastExecutionError: null,
  rootGraph: {}
}))

describe('useLitegraphService().getCanvasCenter', () => {
const mockFavoritedWidgetsStore = vi.hoisted(() => ({
  isFavorited: vi.fn().mockReturnValue(false),
  toggleFavorite: vi.fn()
}))

vi.mock('@/stores/workspace/favoritedWidgetsStore', () => ({
  useFavoritedWidgetsStore: () => mockFavoritedWidgetsStore
}))

vi.mock('@/services/dialogService', () => ({
  useDialogService: () => ({
    prompt: mockPrompt
  })
}))

vi.mock('@/renderer/core/canvas/canvasStore', () => ({
  useCanvasStore: () => ({
    canvas: mockCanvas,
    getCanvas: () => mockCanvas
  })
}))

vi.mock('@/core/graph/subgraph/promotionUtils', () => ({
  addWidgetPromotionOptions: vi.fn(),
  isPreviewPseudoWidget: vi.fn()
}))

vi.mock('@/i18n', () => ({
  t: (key: string) => key,
  st: (_key: string, fallback: string) => fallback
}))

vi.mock('@/utils/formatUtil', () => ({
  normalizeI18nKey: (key: string) => key
}))

vi.mock('@/scripts/app', () => ({
  app: mockApp,
  ComfyApp: {
    clipspace: null,
    clipspace_return_node: null,
    copyToClipspace: vi.fn(),
    pasteFromClipspace: vi.fn()
  }
}))

vi.mock('@/platform/updates/common/toastStore', () => ({
  useToastStore: () => ({ addAlert: vi.fn() })
}))

vi.mock('@/stores/widgetStore', () => ({
  useWidgetStore: () => ({ widgets: new Map() })
}))

vi.mock('@/stores/executionStore', () => ({
  useExecutionStore: () => ({
    nodeLocationProgressStates: {}
  })
}))

vi.mock('@/platform/workflow/management/stores/workflowStore', () => ({
  useWorkflowStore: () => ({
    activeSubgraph: null,
    nodeIdToNodeLocatorId: (id: string) => id
  })
}))

vi.mock('@/platform/settings/settingStore', () => ({
  useSettingStore: () => ({
    get: vi.fn().mockReturnValue(false)
  })
}))

vi.mock('@/composables/canvas/useSelectedLiteGraphItems', () => ({
  useSelectedLiteGraphItems: () => ({
    toggleSelectedNodesMode: vi.fn()
  })
}))

vi.mock('@/services/extensionService', () => ({
  useExtensionService: () => ({
    invokeExtensionsAsync: vi.fn()
  })
}))

vi.mock('@/stores/subgraphStore', () => ({
  useSubgraphStore: () => ({
    typePrefix: 'Subgraph::',
    getBlueprint: vi.fn()
  })
}))

const mockNodeOutputStore = vi.hoisted(() => ({
  getNodeOutputs: vi.fn(),
  getNodePreviews: vi.fn()
}))

vi.mock('@/stores/nodeOutputStore', () => ({
  useNodeOutputStore: () => mockNodeOutputStore
}))

vi.mock('@/composables/node/useNodeAnimatedImage', () => ({
  useNodeAnimatedImage: () => ({
    showAnimatedPreview: vi.fn(),
    removeAnimatedPreview: vi.fn()
  })
}))

vi.mock('@/composables/node/useNodeCanvasImagePreview', () => ({
  useNodeCanvasImagePreview: () => ({
    showCanvasImagePreview: vi.fn(),
    removeCanvasImagePreview: vi.fn()
  })
}))

vi.mock('@/composables/node/useNodeImage', () => ({
  useNodeImage: () => ({ showPreview: vi.fn() }),
  useNodeVideo: () => ({ showPreview: vi.fn() })
}))

vi.mock('@/composables/graph/useSubgraphOperations', () => ({
  useSubgraphOperations: () => ({ unpackSubgraph: vi.fn() })
}))

vi.mock('@/composables/maskeditor/useMaskEditor', () => ({
  useMaskEditor: () => ({ openMaskEditor: vi.fn() })
}))

vi.mock('@/stores/domWidgetStore', () => ({
  useDomWidgetStore: () => ({
    widgetStates: new Map(),
    registerWidget: vi.fn(),
    unregisterWidget: vi.fn()
  })
}))

vi.mock('@/stores/promotionStore', () => ({
  usePromotionStore: () => ({
    getPromotionsRef: vi.fn().mockReturnValue([])
  })
}))

vi.mock('@/services/subgraphPseudoWidgetCache', () => ({
  resolveSubgraphPseudoWidgetCache: vi.fn().mockReturnValue({
    cache: { promotions: [], entries: [], nodes: [] },
    nodes: []
  })
}))

vi.mock('@/stores/workspace/rightSidePanelStore', () => ({
  useRightSidePanelStore: () => ({ openPanel: vi.fn() })
}))

vi.mock('@/base/common/downloadUtil', () => ({
  downloadFile: vi.fn(),
  openFileInNewTab: vi.fn()
}))

vi.mock('@/scripts/domWidget', () => ({
  isComponentWidget: vi.fn().mockReturnValue(false),
  isDOMWidget: vi.fn().mockReturnValue(false)
}))

const mockCreateBounds = vi.hoisted(() => vi.fn())
vi.mock('@/lib/litegraph/src/litegraph', async (importOriginal) => {
  const actual = await importOriginal()
  return {
    ...(actual as object),
    createBounds: mockCreateBounds
  }
})

vi.mock('@/scripts/ui', () => ({
  $el: vi.fn()
}))

vi.mock('@/utils/litegraphUtil', () => ({
  isAnimatedOutput: vi.fn().mockReturnValue(false),
  isImageNode: vi.fn().mockReturnValue(false),
  isVideoNode: vi.fn().mockReturnValue(false),
  isVideoOutput: vi.fn().mockReturnValue(false),
  migrateWidgetsValues: vi.fn().mockReturnValue([])
}))

vi.mock('@/core/graph/widgets/dynamicWidgets', () => ({
  applyDynamicInputs: vi.fn().mockReturnValue(false)
}))

vi.mock('@/schemas/nodeDef/migration', () => ({
  transformInputSpecV2ToV1: vi.fn().mockReturnValue([])
}))

vi.mock('@/workbench/utils/nodeDefOrderingUtil', () => ({
  getOrderedInputSpecs: vi.fn().mockReturnValue([])
}))

vi.mock('@/stores/nodeDefStore', () => ({
  ComfyNodeDefImpl: vi.fn().mockImplementation((def: unknown) => def)
}))

function createMockNode(overrides: Record<string, unknown> = {}): LGraphNode {
  return {
    id: 1,
    inputs: [],
    graph: null,
    constructor: { nodeData: { name: 'TestNode' } },
    getWidgetOnPos: vi.fn(),
    ...overrides
  } as unknown as LGraphNode
}

function createMockWidget(
  overrides: Record<string, unknown> = {}
): IBaseWidget {
  return {
    name: 'test_widget',
    label: undefined,
    value: 42,
    callback: vi.fn(),
    options: {},
    ...overrides
  } as unknown as IBaseWidget
}

describe('litegraphService', () => {
  beforeEach(() => {
    setActivePinia(createTestingPinia({ stubActions: false }))
    vi.clearAllMocks()
    mockFavoritedWidgetsStore.isFavorited.mockReturnValue(false)
    mockPrompt.mockReset()
    mockCreateBounds.mockReset()
    mockCanvas.graph.getNodeById.mockReset()
    mockCanvas.ds.scale = 1
    mockCanvas.ds.offset = [0, 0]
    mockCanvas.ds.visible_area = [0, 0, 800, 600]
    mockCanvas.graph.nodes = []
    mockApp.canvas = mockCanvas
    mockApp.graph = mockCanvas.graph
  })

  it('returns origin when canvas is not yet initialised', () => {
    Reflect.set(app, 'canvas', undefined)
  describe('getExtraOptionsForWidget', () => {
    it('adds favorite option when widget is not favorited', () => {
      const node = createMockNode()
      const widget = createMockWidget()
      mockFavoritedWidgetsStore.isFavorited.mockReturnValue(false)

    const center = useLitegraphService().getCanvasCenter()
      const options = getExtraOptionsForWidget(node, widget)

    expect(center).toEqual([0, 0])
  })

  it('returns origin when canvas exists but ds.visible_area is missing', () => {
    Reflect.set(app, 'canvas', { ds: {} })

    const center = useLitegraphService().getCanvasCenter()

    expect(center).toEqual([0, 0])
  })

  it('returns the visible-area centre once the canvas is ready', () => {
    Reflect.set(app, 'canvas', {
      ds: { visible_area: [10, 20, 200, 100] }
      expect(options).toHaveLength(1)
      expect(options[0].content).toContain('contextMenu.FavoriteWidget')
      expect(options[0].content).toContain('test_widget')
    })

    const center = useLitegraphService().getCanvasCenter()
    it('adds unfavorite option when widget is already favorited', () => {
      const node = createMockNode()
      const widget = createMockWidget()
      mockFavoritedWidgetsStore.isFavorited.mockReturnValue(true)

    expect(center).toEqual([110, 70])
      const options = getExtraOptionsForWidget(node, widget)

      expect(options[0].content).toContain('contextMenu.UnfavoriteWidget')
    })

    it('uses widget label when available', () => {
      const node = createMockNode()
      const widget = createMockWidget({ label: 'My Label' })
      mockFavoritedWidgetsStore.isFavorited.mockReturnValue(false)

      const options = getExtraOptionsForWidget(node, widget)

      expect(options[0].content).toContain('My Label')
    })

    it('calls toggleFavorite when favorite option callback is invoked', () => {
      const node = createMockNode()
      const widget = createMockWidget()

      const options = getExtraOptionsForWidget(node, widget)

      void invokeMenuCallback(options[0])
      expect(mockFavoritedWidgetsStore.toggleFavorite).toHaveBeenCalledWith(
        node,
        'test_widget'
      )
    })

    it('adds rename option when input matches widget', () => {
      const widget = createMockWidget({ name: 'seed' })
      const node = createMockNode({
        inputs: [{ widget: { name: 'seed' } }]
      })

      const options = getExtraOptionsForWidget(node, widget)

      // rename is unshifted first, then favorite is unshifted (ends up first)
      expect(options).toHaveLength(2)
      const renameOption = options.find((o: IContextMenuValue) =>
        o.content?.includes('contextMenu.RenameWidget')
      )
      expect(renameOption).toBeDefined()
      expect(renameOption!.content).toContain('seed')
    })

    it('rename callback updates widget and input labels', async () => {
      const widget = createMockWidget({ name: 'seed' })
      const input = { widget: { name: 'seed' }, label: undefined as unknown }
      const node = createMockNode({ inputs: [input] })
      mockPrompt.mockResolvedValue('New Name')

      const options = getExtraOptionsForWidget(node, widget)

      const renameOption = options.find((o: IContextMenuValue) =>
        o.content?.includes('contextMenu.RenameWidget')
      )
      await invokeMenuCallback(renameOption!)

      expect(widget.label).toBe('New Name')
      expect(input.label).toBe('New Name')
      expect(widget.callback).toHaveBeenCalledWith(42)
      expect(mockCanvas.setDirty).toHaveBeenCalledWith(true)
    })

    it('rename callback clears label when empty string is returned', async () => {
      const widget = createMockWidget({ name: 'seed', label: 'Old' })
      const input = {
        widget: { name: 'seed' },
        label: 'Old' as string | undefined
      }
      const node = createMockNode({ inputs: [input] })
      mockPrompt.mockResolvedValue('')

      const options = getExtraOptionsForWidget(node, widget)

      const renameOption = options.find((o: IContextMenuValue) =>
        o.content?.includes('contextMenu.RenameWidget')
      )
      await invokeMenuCallback(renameOption!)

      expect(widget.label).toBeUndefined()
      expect(input.label).toBeUndefined()
    })

    it('rename callback does nothing when prompt is cancelled', async () => {
      const widget = createMockWidget({ name: 'seed', label: 'Original' })
      const input = { widget: { name: 'seed' }, label: 'Original' }
      const node = createMockNode({ inputs: [input] })
      mockPrompt.mockResolvedValue(null)

      const options = getExtraOptionsForWidget(node, widget)

      const renameOption = options.find((o: IContextMenuValue) =>
        o.content?.includes('contextMenu.RenameWidget')
      )
      await invokeMenuCallback(renameOption!)

      expect(widget.label).toBe('Original')
      expect(input.label).toBe('Original')
    })

    it('adds promotion options when node is in a subgraph', async () => {
      const { addWidgetPromotionOptions } = vi.mocked(
        await import('@/core/graph/subgraph/promotionUtils')
      )
      const node = createMockNode({
        graph: { isRootGraph: false }
      })
      const widget = createMockWidget()

      getExtraOptionsForWidget(node, widget)

      expect(addWidgetPromotionOptions).toHaveBeenCalled()
    })

    it('does not add promotion options on root graph', async () => {
      const { addWidgetPromotionOptions } = vi.mocked(
        await import('@/core/graph/subgraph/promotionUtils')
      )
      const node = createMockNode({ graph: null })
      const widget = createMockWidget()

      getExtraOptionsForWidget(node, widget)

      expect(addWidgetPromotionOptions).not.toHaveBeenCalled()
    })
  })

  describe('useLitegraphService', () => {
    // Lazily import to ensure mocks are in place
    async function getService() {
      const { useLitegraphService } =
        await import('@/services/litegraphService')
      return useLitegraphService()
    }

    describe('getCanvasCenter', () => {
      it('returns center of visible area', async () => {
        const service = await getService()
        // visible_area = [0, 0, 800, 600], dpi = 1
        const center = service.getCanvasCenter()
        expect(center).toEqual([400, 300])
      })

      it('accounts for visible area offset', async () => {
        const saved = mockCanvas.ds.visible_area
        mockCanvas.ds.visible_area = [10, 20, 200, 100]

        const service = await getService()
        const center = service.getCanvasCenter()
        expect(center).toEqual([110, 70])

        mockCanvas.ds.visible_area = saved
      })

      it('returns [0, 0] when no visible area', async () => {
        const savedVisibleArea = mockCanvas.ds.visible_area
        mockCanvas.ds.visible_area = undefined

        const service = await getService()
        const center = service.getCanvasCenter()
        expect(center).toEqual([0, 0])

        mockCanvas.ds.visible_area = savedVisibleArea
      })

      it('returns [0, 0] without throwing when app.canvas is undefined', async () => {
        mockApp.canvas = undefined

        const service = await getService()
        expect(() => service.getCanvasCenter()).not.toThrow()
        expect(service.getCanvasCenter()).toEqual([0, 0])
      })
    })

    describe('resetView', () => {
      it('resets canvas scale and offset', async () => {
        mockCanvas.ds.scale = 2.5
        mockCanvas.ds.offset = [100, 200]
        const service = await getService()

        service.resetView()

        expect(mockCanvas.ds.scale).toBe(1)
        expect(mockCanvas.ds.offset).toEqual([0, 0])
        expect(mockCanvas.setDirty).toHaveBeenCalledWith(true, true)
      })
    })

    describe('goToNode', () => {
      it('animates to node bounds when node exists', async () => {
        const bounds = [10, 20, 100, 50]
        const graphNode = { boundingRect: bounds }
        mockCanvas.graph.getNodeById.mockReturnValue(graphNode)

        const service = await getService()
        service.goToNode(42)

        expect(mockCanvas.animateToBounds).toHaveBeenCalledWith(bounds)
      })

      it('does nothing when node does not exist', async () => {
        mockCanvas.graph.getNodeById.mockReturnValue(null)

        const service = await getService()
        service.goToNode(999)

        expect(mockCanvas.animateToBounds).not.toHaveBeenCalled()
      })
    })

    describe('fitView', () => {
      it('calls fitToBounds and setDirty', async () => {
        const mockBounds = [0, 0, 500, 400]
        mockCreateBounds.mockReturnValue(mockBounds)

        const nodeObj = {
          boundingRect: [0, 0, 100, 50],
          updateArea: vi.fn()
        }
        mockCanvas.graph.nodes = [nodeObj]

        const service = await getService()
        service.fitView()

        expect(mockCanvas.ds.fitToBounds).toHaveBeenCalledWith(mockBounds)
        expect(mockCanvas.setDirty).toHaveBeenCalledWith(true, true)
      })

      it('calls updateArea for nodes with zero bounds', async () => {
        mockCreateBounds.mockReturnValue([0, 0, 100, 100])

        const nodeObj = {
          boundingRect: [0, 0, 0, 0],
          updateArea: vi.fn()
        }
        mockCanvas.graph.nodes = [nodeObj]

        const service = await getService()
        service.fitView()

        expect(nodeObj.updateArea).toHaveBeenCalled()
      })

      it('does nothing when createBounds returns null', async () => {
        mockCreateBounds.mockReturnValue(null)
        mockCanvas.graph.nodes = []

        const service = await getService()
        service.fitView()

        expect(mockCanvas.ds.fitToBounds).not.toHaveBeenCalled()
      })
    })

    describe('updatePreviews', () => {
      it('catches errors and logs them', async () => {
        const consoleSpy = vi
          .spyOn(console, 'error')
          .mockImplementation(() => {})

        mockNodeOutputStore.getNodeOutputs.mockImplementation(() => {
          throw new Error('test error')
        })

        const service = await getService()
        const badNode = createMockNode({ flags: { collapsed: false } })
        expect(() => service.updatePreviews(badNode)).not.toThrow()
        expect(consoleSpy).toHaveBeenCalledWith(
          'Error drawing node background',
          expect.any(Error)
        )

        consoleSpy.mockRestore()
      })

      it('skips collapsed nodes', async () => {
        const service = await getService()
        const node = createMockNode({
          flags: { collapsed: true },
          imgs: undefined,
          images: undefined,
          preview: undefined
        })

        service.updatePreviews(node)

        expect(mockNodeOutputStore.getNodeOutputs).not.toHaveBeenCalled()
      })
    })
  })
})

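The `getService` helper in the rewritten test imports the service lazily so that every `vi.mock` factory declared in the file is registered before the module under test is evaluated. The same pattern in isolation; the expected value here assumes the mocked 800x600 visible area from the file above:

```ts
import { expect, it } from 'vitest'

// A dynamic import inside the test body resolves only after Vitest has
// hoisted and applied all vi.mock factories declared in this file.
async function getService() {
  const { useLitegraphService } = await import('@/services/litegraphService')
  return useLitegraphService()
}

it('resolves the service against the mocked canvas', async () => {
  const service = await getService()
  expect(service.getCanvasCenter()).toEqual([400, 300])
})
```
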
@@ -24,7 +24,9 @@ vi.mock('@/scripts/api', () => ({
vi.mock('@/platform/assets/services/assetService', () => ({
  assetService: {
    getAssetsByTag: vi.fn(),
    getAllAssetsByTag: vi.fn(),
    getAssetsForNodeType: vi.fn(),
    invalidateInputAssetsIncludingPublic: vi.fn(),
    updateAsset: vi.fn(),
    addAssetTags: vi.fn(),
    removeAssetTags: vi.fn()
@@ -1259,6 +1261,9 @@ describe('assetsStore - Deletion State and Input Mapping', () => {
        false,
        { limit: 100 }
      )
      expect(
        assetService.invalidateInputAssetsIncludingPublic
      ).toHaveBeenCalledOnce()
    } finally {
      mockIsCloud.value = false
    }

@@ -123,7 +123,7 @@ export const useAssetsStore = defineStore('assets', () => {
    state: inputAssets,
    isLoading: inputLoading,
    error: inputError,
    execute: updateInputs
    execute: executeUpdateInputs
  } = useAsyncState(fetchInputFiles, [], {
    immediate: false,
    resetOnExecute: false,
@@ -132,6 +132,12 @@ export const useAssetsStore = defineStore('assets', () => {
    }
  })

  const updateInputs = async () => {
    const result = await executeUpdateInputs()
    assetService.invalidateInputAssetsIncludingPublic()
    return result
  }

  /**
   * Fetch history assets with pagination support
   * @param loadMore - true for pagination (append), false for initial load (replace)

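Wrapping the `useAsyncState` executor keeps the refresh-then-invalidate ordering in one place rather than at every call site. A reduced sketch of the wrapper shape; the function names here are illustrative, not the store's API:

```ts
// Refresh the input list first, then drop the derived cache so the next
// consumer re-fetches against fresh data.
function makeUpdateInputs(
  execute: () => Promise<unknown>,
  invalidate: () => void
) {
  return async () => {
    const result = await execute()
    invalidate()
    return result
  }
}
```
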
src/utils/errorUtil.test.ts (new file, +74)
@@ -0,0 +1,74 @@
import { describe, expect, it } from 'vitest'

import { getErrorMessage, toError } from './errorUtil'

describe('toError', () => {
  it('returns the same Error instance when given an Error', () => {
    const err = new Error('boom')
    expect(toError(err)).toBe(err)
  })

  it('preserves Error subclasses', () => {
    class CustomError extends Error {}
    const err = new CustomError('subclass')
    expect(toError(err)).toBe(err)
    expect(toError(err)).toBeInstanceOf(CustomError)
  })

  it('wraps a string as an Error message', () => {
    const result = toError('plain string')
    expect(result).toBeInstanceOf(Error)
    expect(result.message).toBe('plain string')
  })

  it('wraps a number by stringifying it', () => {
    const result = toError(42)
    expect(result).toBeInstanceOf(Error)
    expect(result.message).toBe('42')
  })

  it('wraps an object via JSON.stringify', () => {
    const result = toError({ code: 'EBOOM', detail: 'nope' })
    expect(result).toBeInstanceOf(Error)
    expect(result.message).toBe('{"code":"EBOOM","detail":"nope"}')
  })

  it('falls back to String() when JSON.stringify throws (circular)', () => {
    const obj: Record<string, unknown> = {}
    obj.self = obj
    const result = toError(obj)
    expect(result).toBeInstanceOf(Error)
    expect(result.message).toBe('[object Object]')
  })

  it('handles null and undefined', () => {
    expect(toError(null).message).toBe('null')
    expect(toError(undefined).message).toBe('undefined')
  })
})

describe('getErrorMessage', () => {
  it('returns the message of an Error', () => {
    expect(getErrorMessage(new Error('boom'))).toBe('boom')
  })

  it('returns the value when given a string', () => {
    expect(getErrorMessage('text')).toBe('text')
  })

  it('returns the message field of a plain object', () => {
    expect(getErrorMessage({ message: 'from object' })).toBe('from object')
  })

  it('returns undefined for objects without a string message', () => {
    expect(getErrorMessage({ code: 1 })).toBeUndefined()
    expect(getErrorMessage({ message: 42 })).toBeUndefined()
  })

  it('returns undefined for null, undefined, numbers, booleans', () => {
    expect(getErrorMessage(null)).toBeUndefined()
    expect(getErrorMessage(undefined)).toBeUndefined()
    expect(getErrorMessage(42)).toBeUndefined()
    expect(getErrorMessage(true)).toBeUndefined()
  })
})

src/utils/errorUtil.ts (new file, +37)
@@ -0,0 +1,37 @@
/**
 * Narrow an unknown caught value to an Error.
 *
 * Replaces unsafe `value as Error` assertions. When `value` is not already
 * an Error instance, wraps it in a new Error whose message is the stringified
 * input so downstream consumers (loggers, Sentry, toasts) always receive a
 * usable Error object instead of `undefined.message`.
 */
export function toError(value: unknown): Error {
  if (value instanceof Error) return value
  if (typeof value === 'string') return new Error(value)
  if (value === undefined) return new Error('undefined')
  try {
    const serialised = JSON.stringify(value)
    return new Error(serialised ?? String(value))
  } catch {
    return new Error(String(value))
  }
}

/**
 * Extract a message from an unknown caught value without asserting its type.
 * Returns `undefined` when the value carries no usable message.
 */
export function getErrorMessage(value: unknown): string | undefined {
  if (value instanceof Error) return value.message
  if (typeof value === 'string') return value
  if (
    typeof value === 'object' &&
    value !== null &&
    'message' in value &&
    typeof (value as { message: unknown }).message === 'string'
  ) {
    return (value as { message: string }).message
  }
  return undefined
}
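
Taken together, the two helpers cover the usual catch-block needs: a best-effort message for logging and a guaranteed Error instance for rethrowing or reporting. A usage sketch (the `normalise` wrapper is illustrative):

```ts
import { getErrorMessage, toError } from '@/utils/errorUtil'

function normalise(err: unknown): Error {
  // Best-effort message for the log line, guaranteed Error for the caller.
  console.warn(getErrorMessage(err) ?? 'unknown failure')
  return toError(err)
}

normalise('plain string') // Error with message 'plain string'
normalise(new Error('boom')) // the same Error instance, untouched
```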