Mirror of https://github.com/Comfy-Org/ComfyUI_frontend.git — synced 2026-01-26 19:09:52 +00:00
feat: add test count display to Playwright PR comments (#5458)
* feat: add test count display to Playwright PR comments
- Add extract-playwright-counts.mjs script to parse test results from Playwright reports
- Update pr-playwright-deploy-and-comment.sh to extract and display test counts
- Show overall summary with passed/failed/flaky/skipped counts
- Display per-browser test counts inline with report links
- Use dynamic status icons based on test results (✅/❌/⚠️)
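The icon-selection precedence described above (failed beats flaky beats passed) can be sketched as a small POSIX-shell helper. This is an illustrative sketch, not the script's actual code: the helper name `pick_status_icon` is hypothetical, and the real script inlines this logic as an if/elif chain.

```shell
#!/bin/sh
# Hypothetical helper mirroring the failed > flaky > passed precedence.
pick_status_icon() {
  failed=$1
  flaky=$2
  if [ "$failed" -gt 0 ]; then
    echo "❌"
  elif [ "$flaky" -gt 0 ]; then
    echo "⚠️"
  else
    echo "✅"
  fi
}

pick_status_icon 1 0   # ❌
pick_status_icon 0 2   # ⚠️
pick_status_icon 0 0   # ✅
```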
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* feat: include skipped test count in per-browser display
- Add skipped test extraction for individual browser reports
- Update per-browser display format to show all four counts:
(✅ passed / ❌ failed / ⚠️ flaky / ⏭️ skipped)
- Provides complete test result visibility at a glance
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: improve test count extraction reliability in CI
- Use absolute paths for script and report directories
- Add debug logging to help diagnose extraction issues
- Move counts display after View Report link as requested
- Format: [View Report](url) • ✅ passed / ❌ failed / ⚠️ flaky / ⏭️ skipped
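The line format above can be sketched as a tiny shell formatter. Assumptions: `format_counts_line` is a hypothetical helper for illustration (the script concatenates the string inline), and the example URL is made up.

```shell
#!/bin/sh
# Hypothetical helper composing the per-browser Markdown line.
format_counts_line() {
  url=$1; passed=$2; failed=$3; flaky=$4; skipped=$5
  printf '[View Report](%s) • ✅ %s / ❌ %s / ⚠️ %s / ⏭️ %s\n' \
    "$url" "$passed" "$failed" "$flaky" "$skipped"
}

format_counts_line "https://example.com/report" 12 1 0 2
```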
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: generate JSON reports alongside HTML for test count extraction
- Add JSON reporter to Playwright test runs
- Generate report.json alongside HTML reports
- Store JSON report in playwright-report directory
- This enables accurate test count extraction from CI artifacts
The HTML reports alone don't contain easily extractable test statistics
as they use a React app with dynamically loaded data. JSON reports
provide direct access to test counts.
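The JSON report's `stats` object carries `expected`/`unexpected`/`flaky`/`skipped` counts, from which the passed count is derived by subtraction (as the extraction script in this commit does). A minimal sketch of that arithmetic, with assumed example values:

```shell
#!/bin/sh
# Example stats values are assumed; "passed" is derived the same way
# the extraction script derives it from the JSON reporter's stats.
expected=20
unexpected=1
flaky=2
skipped=3
passed=$((expected - unexpected - flaky - skipped))
echo "$passed"   # 14
```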
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: correct JSON reporter syntax for Playwright tests
- Use proper syntax for JSON reporter with outputFile option
- Run separate commands for HTML and JSON report merging
- Specify output path directly in reporter configuration
- Ensures report.json is created in playwright-report directory
This fixes the "No such file or directory" error when trying to move
report.json file, as it wasn't being created in the first place.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Revert "fix: correct JSON reporter syntax for Playwright tests"
This reverts commit 605d7cc1e2.
* fix: use correct Playwright reporter syntax with comma-separated list
- Use --reporter=html,json syntax (comma-separated, not space)
- Move test-results.json to playwright-report/report.json after generation
- Remove incorrect PLAYWRIGHT_JSON_OUTPUT_NAME env variable
- Add || true to prevent failure if JSON file doesn't exist
The JSON reporter outputs to test-results.json by default when using
the comma-separated reporter list syntax.
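The `|| true` guard mentioned above is the standard way to keep a `set -e` script alive when a step may legitimately fail, e.g. moving a report file that was never generated. A minimal sketch (file paths hypothetical):

```shell
#!/bin/sh
# "|| true" swallows the failure so the script continues under set -e.
set -e
mv test-results.json playwright-report/report.json 2>/dev/null || true
echo "still running"
```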
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: improve test count extraction reliability in CI
- Use separate --reporter flags for list, html, and json
- Set PLAYWRIGHT_JSON_OUTPUT_NAME env var to specify JSON output path
- Run HTML and JSON report generation separately for merged reports
- Ensures report.json is created in playwright-report directory
The combined reporter syntax wasn't creating the JSON file properly.
Using separate reporter flags with env var ensures JSON is generated.
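The env-var mechanism relied on here is POSIX one-shot assignment: a `VAR=value command` prefix applies only to that single command. A sketch of the scoping behavior (the variable value is illustrative):

```shell
#!/bin/sh
# One-shot assignment: visible inside the command, gone afterwards.
PLAYWRIGHT_JSON_OUTPUT_NAME=playwright-report/report.json \
  sh -c 'echo "$PLAYWRIGHT_JSON_OUTPUT_NAME"'
echo "${PLAYWRIGHT_JSON_OUTPUT_NAME:-unset}"
```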
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Update scripts/cicd/pr-playwright-deploy-and-comment.sh
Co-authored-by: Alexander Brown <drjkl@comfy.org>
* refactor: convert extraction script to TypeScript and use tsx
- Convert extract-playwright-counts.mjs to TypeScript (.ts)
- Add proper TypeScript types for better type safety
- Use tsx for execution instead of node
- Auto-install tsx in CI if not available
- Better alignment with the TypeScript codebase
This provides better type safety and consistency with the rest of
the codebase while maintaining the same functionality.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* chore(pr-playwright-deploy-and-comment.sh): move tsx installation check to the beginning of the script for better organization and efficiency
* [auto-fix] Apply ESLint and Prettier fixes
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Alexander Brown <drjkl@comfy.org>
Co-authored-by: GitHub Action <action@github.com>
This commit is contained in:

15  .github/workflows/test-ui.yaml (vendored)
@@ -229,7 +229,13 @@ jobs:
       - name: Run Playwright tests (${{ matrix.browser }})
         id: playwright
-        run: npx playwright test --project=${{ matrix.browser }} --reporter=html
+        run: |
+          # Run tests with both HTML and JSON reporters
+          PLAYWRIGHT_JSON_OUTPUT_NAME=playwright-report/report.json \
+          npx playwright test --project=${{ matrix.browser }} \
+            --reporter=list \
+            --reporter=html \
+            --reporter=json
         working-directory: ComfyUI_frontend

       - uses: actions/upload-artifact@v4
@@ -275,7 +281,12 @@ jobs:
           merge-multiple: true

       - name: Merge into HTML Report
-        run: npx playwright merge-reports --reporter html ./all-blob-reports
+        run: |
+          # Generate HTML report
+          npx playwright merge-reports --reporter=html ./all-blob-reports
+          # Generate JSON report separately with explicit output path
+          PLAYWRIGHT_JSON_OUTPUT_NAME=playwright-report/report.json \
+          npx playwright merge-reports --reporter=json ./all-blob-reports
         working-directory: ComfyUI_frontend

       - name: Upload HTML report
183  scripts/cicd/extract-playwright-counts.ts (new executable file)
@@ -0,0 +1,183 @@
+#!/usr/bin/env tsx
+import fs from 'fs'
+import path from 'path'
+
+interface TestStats {
+  expected?: number
+  unexpected?: number
+  flaky?: number
+  skipped?: number
+  finished?: number
+}
+
+interface ReportData {
+  stats?: TestStats
+}
+
+interface TestCounts {
+  passed: number
+  failed: number
+  flaky: number
+  skipped: number
+  total: number
+}
+
+/**
+ * Extract test counts from Playwright HTML report
+ * @param reportDir - Path to the playwright-report directory
+ * @returns Test counts { passed, failed, flaky, skipped, total }
+ */
+function extractTestCounts(reportDir: string): TestCounts {
+  const counts: TestCounts = {
+    passed: 0,
+    failed: 0,
+    flaky: 0,
+    skipped: 0,
+    total: 0
+  }
+
+  try {
+    // First, try to find report.json which Playwright generates with JSON reporter
+    const jsonReportFile = path.join(reportDir, 'report.json')
+    if (fs.existsSync(jsonReportFile)) {
+      const reportJson: ReportData = JSON.parse(
+        fs.readFileSync(jsonReportFile, 'utf-8')
+      )
+      if (reportJson.stats) {
+        const stats = reportJson.stats
+        counts.total = stats.expected || 0
+        counts.passed =
+          (stats.expected || 0) -
+          (stats.unexpected || 0) -
+          (stats.flaky || 0) -
+          (stats.skipped || 0)
+        counts.failed = stats.unexpected || 0
+        counts.flaky = stats.flaky || 0
+        counts.skipped = stats.skipped || 0
+        return counts
+      }
+    }
+
+    // Try index.html - Playwright HTML report embeds data in a script tag
+    const indexFile = path.join(reportDir, 'index.html')
+    if (fs.existsSync(indexFile)) {
+      const content = fs.readFileSync(indexFile, 'utf-8')
+
+      // Look for the embedded report data in various formats
+      // Format 1: window.playwrightReportBase64
+      let dataMatch = content.match(
+        /window\.playwrightReportBase64\s*=\s*["']([^"']+)["']/
+      )
+      if (dataMatch) {
+        try {
+          const decodedData = Buffer.from(dataMatch[1], 'base64').toString(
+            'utf-8'
+          )
+          const reportData: ReportData = JSON.parse(decodedData)
+
+          if (reportData.stats) {
+            const stats = reportData.stats
+            counts.total = stats.expected || 0
+            counts.passed =
+              (stats.expected || 0) -
+              (stats.unexpected || 0) -
+              (stats.flaky || 0) -
+              (stats.skipped || 0)
+            counts.failed = stats.unexpected || 0
+            counts.flaky = stats.flaky || 0
+            counts.skipped = stats.skipped || 0
+            return counts
+          }
+        } catch (e) {
+          // Continue to try other formats
+        }
+      }
+
+      // Format 2: window.playwrightReport
+      dataMatch = content.match(/window\.playwrightReport\s*=\s*({[\s\S]*?});/)
+      if (dataMatch) {
+        try {
+          // Use Function constructor instead of eval for safety
+          const reportData = new Function(
+            'return ' + dataMatch[1]
+          )() as ReportData
+
+          if (reportData.stats) {
+            const stats = reportData.stats
+            counts.total = stats.expected || 0
+            counts.passed =
+              (stats.expected || 0) -
+              (stats.unexpected || 0) -
+              (stats.flaky || 0) -
+              (stats.skipped || 0)
+            counts.failed = stats.unexpected || 0
+            counts.flaky = stats.flaky || 0
+            counts.skipped = stats.skipped || 0
+            return counts
+          }
+        } catch (e) {
+          // Continue to try other formats
+        }
+      }
+
+      // Format 3: Look for stats in the HTML content directly
+      // Playwright sometimes renders stats in the UI
+      const statsMatch = content.match(
+        /(\d+)\s+passed[^0-9]*(\d+)\s+failed[^0-9]*(\d+)\s+flaky[^0-9]*(\d+)\s+skipped/i
+      )
+      if (statsMatch) {
+        counts.passed = parseInt(statsMatch[1]) || 0
+        counts.failed = parseInt(statsMatch[2]) || 0
+        counts.flaky = parseInt(statsMatch[3]) || 0
+        counts.skipped = parseInt(statsMatch[4]) || 0
+        counts.total =
+          counts.passed + counts.failed + counts.flaky + counts.skipped
+        return counts
+      }
+
+      // Format 4: Try to extract from summary text patterns
+      const passedMatch = content.match(/(\d+)\s+(?:tests?|specs?)\s+passed/i)
+      const failedMatch = content.match(/(\d+)\s+(?:tests?|specs?)\s+failed/i)
+      const flakyMatch = content.match(/(\d+)\s+(?:tests?|specs?)\s+flaky/i)
+      const skippedMatch = content.match(/(\d+)\s+(?:tests?|specs?)\s+skipped/i)
+      const totalMatch = content.match(
+        /(\d+)\s+(?:tests?|specs?)\s+(?:total|ran)/i
+      )
+
+      if (passedMatch) counts.passed = parseInt(passedMatch[1]) || 0
+      if (failedMatch) counts.failed = parseInt(failedMatch[1]) || 0
+      if (flakyMatch) counts.flaky = parseInt(flakyMatch[1]) || 0
+      if (skippedMatch) counts.skipped = parseInt(skippedMatch[1]) || 0
+      if (totalMatch) {
+        counts.total = parseInt(totalMatch[1]) || 0
+      } else if (
+        counts.passed ||
+        counts.failed ||
+        counts.flaky ||
+        counts.skipped
+      ) {
+        counts.total =
+          counts.passed + counts.failed + counts.flaky + counts.skipped
+      }
+    }
+  } catch (error) {
+    console.error(`Error reading report from ${reportDir}:`, error)
+  }
+
+  return counts
+}
+
+// Main execution
+const reportDir = process.argv[2]
+
+if (!reportDir) {
+  console.error('Usage: extract-playwright-counts.ts <report-directory>')
+  process.exit(1)
+}
+
+const counts = extractTestCounts(reportDir)
+
+// Output as JSON for easy parsing in shell script
+console.log(JSON.stringify(counts))
+
+export { extractTestCounts }
scripts/cicd/pr-playwright-deploy-and-comment.sh
@@ -58,6 +58,12 @@ if ! command -v wrangler > /dev/null 2>&1; then
 }
 fi

+# Check if tsx is available, install if not
+if ! command -v tsx > /dev/null 2>&1; then
+  echo "Installing tsx..." >&2
+  npm install -g tsx >&2 || echo "Failed to install tsx" >&2
+fi
+
 # Deploy a single browser report, WARN: ensure inputs are sanitized before calling this function
 deploy_report() {
   dir="$1"
@@ -159,12 +165,16 @@ else
   echo "Available reports:"
   ls -la reports/ 2>/dev/null || echo "Reports directory not found"

-  # Deploy all reports in parallel and collect URLs
+  # Deploy all reports in parallel and collect URLs + test counts
   temp_dir=$(mktemp -d)
   pids=""
   i=0

-  # Start parallel deployments
+  # Store current working directory for absolute paths
+  SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
+  BASE_DIR="$(pwd)"
+
+  # Start parallel deployments and count extractions
   for browser in $BROWSERS; do
     if [ -d "reports/playwright-report-$browser" ]; then
       echo "Found report for $browser, deploying in parallel..."
@@ -172,11 +182,26 @@ else
       url=$(deploy_report "reports/playwright-report-$browser" "$browser" "$cloudflare_branch")
       echo "$url" > "$temp_dir/$i.url"
       echo "Deployment result for $browser: $url"
+
+      # Extract test counts using tsx (TypeScript executor)
+      EXTRACT_SCRIPT="$SCRIPT_DIR/extract-playwright-counts.ts"
+      REPORT_DIR="$BASE_DIR/reports/playwright-report-$browser"
+
+      if command -v tsx > /dev/null 2>&1 && [ -f "$EXTRACT_SCRIPT" ]; then
+        echo "Extracting counts from $REPORT_DIR using $EXTRACT_SCRIPT" >&2
+        counts=$(tsx "$EXTRACT_SCRIPT" "$REPORT_DIR" 2>&1 || echo '{}')
+        echo "Extracted counts for $browser: $counts" >&2
+        echo "$counts" > "$temp_dir/$i.counts"
+      else
+        echo "Script not found or tsx not available: $EXTRACT_SCRIPT" >&2
+        echo '{}' > "$temp_dir/$i.counts"
+      fi
       ) &
       pids="$pids $!"
     else
       echo "Report not found for $browser at reports/playwright-report-$browser"
       echo "failed" > "$temp_dir/$i.url"
+      echo '{}' > "$temp_dir/$i.counts"
     fi
     i=$((i + 1))
   done
@@ -186,8 +211,9 @@ else
     wait $pid
   done

-  # Collect URLs in order
+  # Collect URLs and counts in order
   urls=""
+  all_counts=""
   i=0
   for browser in $BROWSERS; do
     if [ -f "$temp_dir/$i.url" ]; then
@@ -200,37 +226,147 @@ else
     else
       urls="$urls $url"
     fi
+
+    if [ -f "$temp_dir/$i.counts" ]; then
+      counts=$(cat "$temp_dir/$i.counts")
+      echo "Read counts for $browser from $temp_dir/$i.counts: $counts" >&2
+    else
+      counts="{}"
+      echo "No counts file found for $browser at $temp_dir/$i.counts" >&2
+    fi
+    if [ -z "$all_counts" ]; then
+      all_counts="$counts"
+    else
+      all_counts="$all_counts|$counts"
+    fi
+
     i=$((i + 1))
   done

   # Clean up temp directory
   rm -rf "$temp_dir"
+
+  # Calculate total test counts across all browsers
+  total_passed=0
+  total_failed=0
+  total_flaky=0
+  total_skipped=0
+  total_tests=0
+
+  # Parse counts and calculate totals
+  IFS='|'
+  set -- $all_counts
+  for counts_json; do
+    if [ "$counts_json" != "{}" ] && [ -n "$counts_json" ]; then
+      # Parse JSON counts using simple grep/sed if jq is not available
+      if command -v jq > /dev/null 2>&1; then
+        passed=$(echo "$counts_json" | jq -r '.passed // 0')
+        failed=$(echo "$counts_json" | jq -r '.failed // 0')
+        flaky=$(echo "$counts_json" | jq -r '.flaky // 0')
+        skipped=$(echo "$counts_json" | jq -r '.skipped // 0')
+        total=$(echo "$counts_json" | jq -r '.total // 0')
+      else
+        # Fallback parsing without jq
+        passed=$(echo "$counts_json" | sed -n 's/.*"passed":\([0-9]*\).*/\1/p')
+        failed=$(echo "$counts_json" | sed -n 's/.*"failed":\([0-9]*\).*/\1/p')
+        flaky=$(echo "$counts_json" | sed -n 's/.*"flaky":\([0-9]*\).*/\1/p')
+        skipped=$(echo "$counts_json" | sed -n 's/.*"skipped":\([0-9]*\).*/\1/p')
+        total=$(echo "$counts_json" | sed -n 's/.*"total":\([0-9]*\).*/\1/p')
+      fi
+
+      total_passed=$((total_passed + ${passed:-0}))
+      total_failed=$((total_failed + ${failed:-0}))
+      total_flaky=$((total_flaky + ${flaky:-0}))
+      total_skipped=$((total_skipped + ${skipped:-0}))
+      total_tests=$((total_tests + ${total:-0}))
+    fi
+  done
+  unset IFS
+
+  # Determine overall status
+  if [ $total_failed -gt 0 ]; then
+    status_icon="❌"
+    status_text="Some tests failed"
+  elif [ $total_flaky -gt 0 ]; then
+    status_icon="⚠️"
+    status_text="Tests passed with flaky tests"
+  elif [ $total_tests -gt 0 ]; then
+    status_icon="✅"
+    status_text="All tests passed!"
+  else
+    status_icon="🕵🏻"
+    status_text="No test results found"
+  fi
+
   # Generate completion comment
   comment="$COMMENT_MARKER
 ## 🎭 Playwright Test Results

-✅ **Tests completed successfully!**
+$status_icon **$status_text**

-⏰ Completed at: $(date -u '+%m/%d/%Y, %I:%M:%S %p') UTC
+⏰ Completed at: $(date -u '+%m/%d/%Y, %I:%M:%S %p') UTC"
+
+  # Add summary counts if we have test data
+  if [ $total_tests -gt 0 ]; then
+    comment="$comment
+
+### 📈 Summary
+- **Total Tests:** $total_tests
+- **Passed:** $total_passed ✅
+- **Failed:** $total_failed $([ $total_failed -gt 0 ] && echo '❌' || echo '')
+- **Flaky:** $total_flaky $([ $total_flaky -gt 0 ] && echo '⚠️' || echo '')
+- **Skipped:** $total_skipped $([ $total_skipped -gt 0 ] && echo '⏭️' || echo '')"
+  fi
+
+  comment="$comment

 ### 📊 Test Reports by Browser"

-  # Add browser results
+  # Add browser results with individual counts
   i=0
-  for browser in $BROWSERS; do
+  IFS='|'
+  set -- $all_counts
+  for counts_json; do
+    # Get browser name
+    browser=$(echo "$BROWSERS" | cut -d' ' -f$((i + 1)))
     # Get URL at position i
     url=$(echo "$urls" | cut -d' ' -f$((i + 1)))
+
     if [ "$url" != "failed" ] && [ -n "$url" ]; then
+      # Parse individual browser counts
+      if [ "$counts_json" != "{}" ] && [ -n "$counts_json" ]; then
+        if command -v jq > /dev/null 2>&1; then
+          b_passed=$(echo "$counts_json" | jq -r '.passed // 0')
+          b_failed=$(echo "$counts_json" | jq -r '.failed // 0')
+          b_flaky=$(echo "$counts_json" | jq -r '.flaky // 0')
+          b_skipped=$(echo "$counts_json" | jq -r '.skipped // 0')
+          b_total=$(echo "$counts_json" | jq -r '.total // 0')
+        else
+          b_passed=$(echo "$counts_json" | sed -n 's/.*"passed":\([0-9]*\).*/\1/p')
+          b_failed=$(echo "$counts_json" | sed -n 's/.*"failed":\([0-9]*\).*/\1/p')
+          b_flaky=$(echo "$counts_json" | sed -n 's/.*"flaky":\([0-9]*\).*/\1/p')
+          b_skipped=$(echo "$counts_json" | sed -n 's/.*"skipped":\([0-9]*\).*/\1/p')
+          b_total=$(echo "$counts_json" | sed -n 's/.*"total":\([0-9]*\).*/\1/p')
+        fi
+
+        if [ -n "$b_total" ] && [ "$b_total" != "0" ]; then
+          counts_str=" • ✅ $b_passed / ❌ $b_failed / ⚠️ $b_flaky / ⏭️ $b_skipped"
+        else
+          counts_str=""
+        fi
+      else
+        counts_str=""
+      fi
+
       comment="$comment
-- ✅ **${browser}**: [View Report](${url})"
+- ✅ **${browser}**: [View Report](${url})${counts_str}"
     else
       comment="$comment
 - ❌ **${browser}**: Deployment failed"
     fi
     i=$((i + 1))
   done
+  unset IFS

   comment="$comment