Web Performance & UX Optimization

Safety Notice

This listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and repository scripts before running anything.

Install the skill with:

`npx skills add october-academy/agent-plugins/october-academy-agent-plugins-web-perf-ux`

End-to-end workflow: measure → analyze → explain → plan → execute

Preflight Checklist

Before starting, verify:

Playwright MCP - Browser automation available

  • Test: browser_navigate to target URL works

  • If unavailable: User must configure Playwright MCP server

Lighthouse CLI - Performance measurement

  • Test: lighthouse --version

  • Install if missing: npm install -g lighthouse

Target URL accessible

  • Production/staging preferred for reliable metrics

  • Local dev works but label results as "non-prod environment"

Workflow

  1. Measure

Primary: Lighthouse CLI

Run via `scripts/lighthouse-runner.sh`:

```
./lighthouse-runner.sh https://example.com                    # Mobile (default)
./lighthouse-runner.sh https://example.com --desktop          # Desktop
./lighthouse-runner.sh http://localhost:3000 --no-throttling  # Local dev
```

Secondary: Real-time Web Vitals via Playwright

Inject `scripts/collect-vitals.js` for interaction-based metrics:

```js
// Navigate and inject
await page.evaluate(collectVitalsScript);

// Interact with page...

// Retrieve metrics
const vitals = await page.evaluate(() => window.GET_VITALS_SUMMARY());
```

Use Playwright collection when:

  • Testing specific user flows (not just page load)

  • Measuring INP on real interactions

  • Debugging CLS sources during navigation
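The contents of `scripts/collect-vitals.js` are not shown on this page. As a hypothetical sketch of the summary step such a script might expose through `GET_VITALS_SUMMARY`, assuming entries shaped like `{ name, value }`:

```javascript
// Hypothetical summary logic; the real scripts/collect-vitals.js may differ.
// Entries are assumed to look like { name: 'LCP' | 'CLS' | 'INP', value: number }.
function summarizeVitals(entries) {
  const summary = {};
  for (const { name, value } of entries) {
    if (name === 'CLS') {
      // CLS accumulates layout-shift scores over the session
      summary.CLS = (summary.CLS ?? 0) + value;
    } else {
      // LCP and INP report the largest observed value
      summary[name] = Math.max(summary[name] ?? 0, value);
    }
  }
  return summary;
}

const vitals = summarizeVitals([
  { name: 'LCP', value: 1800 },
  { name: 'INP', value: 120 },
  { name: 'INP', value: 340 },
  { name: 'CLS', value: 0.05 },
  { name: 'CLS', value: 0.07 },
]);
```

Note that INP takes the worst interaction observed, which is why interaction-based collection (not just page load) matters.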

  2. Analyze

Parse Lighthouse JSON for:

  • Performance score and individual metric values

  • Failed audits with estimated savings

  • Diagnostic details (render-blocking resources, unused JS, etc.)
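The extraction step might look like the following sketch. The field names (`categories.performance.score`, `audits`, `details.overallSavingsMs`) follow the Lighthouse JSON report schema; the 0.9 cutoff for "failed" is an illustrative choice, not part of the schema:

```javascript
// Sketch of extracting findings from a parsed Lighthouse JSON report (lhr).
function extractFindings(lhr) {
  // Category score is 0..1 in the JSON; the report shows 0..100
  const score = Math.round(lhr.categories.performance.score * 100);
  const failedAudits = Object.values(lhr.audits)
    .filter((a) => a.score !== null && a.score < 0.9)
    .map((a) => ({
      id: a.id,
      title: a.title,
      // Opportunity audits estimate time savings in ms
      savingsMs: a.details?.overallSavingsMs ?? 0,
    }))
    .sort((x, y) => y.savingsMs - x.savingsMs);
  return { score, failedAudits };
}

// Minimal synthetic report for illustration
const report = extractFindings({
  categories: { performance: { score: 0.62 } },
  audits: {
    'render-blocking-resources': {
      id: 'render-blocking-resources',
      title: 'Eliminate render-blocking resources',
      score: 0.3,
      details: { overallSavingsMs: 450 },
    },
    'uses-text-compression': {
      id: 'uses-text-compression',
      title: 'Enable text compression',
      score: 1,
    },
  },
});
```

Sorting by estimated savings gives a first-pass ordering for the findings section.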

Cross-reference with:

  • CWV thresholds → See references/core-web-vitals.md

  • Known issue patterns → See references/common-issues.md

Framework-specific analysis:

  • Next.js apps → See references/nextjs.md

  • React apps → See references/react.md

  3. Explain (Generate Report)

Output markdown report with this structure:

Performance & UX Report

Target: [URL]
Environment: [Production | Staging | Local Dev*]
Device: [Mobile | Desktop]
Date: [timestamp]

*Local dev results may not reflect production performance

Executive Summary

Top 3 issues with expected impact:

  1. [Issue] → [Expected improvement]
  2. [Issue] → [Expected improvement]
  3. [Issue] → [Expected improvement]

Scorecard

| Metric | Value | Rating | Target |
|---|---|---|---|
| Performance | X/100 | 🟡 | ≥90 |
| LCP | Xs | 🔴 | ≤2.5s |
| CLS | X.XX | 🟢 | ≤0.1 |
| INP | Xms | 🟡 | ≤200ms |
| FCP | Xs | | ≤1.8s |
| TTFB | Xms | | ≤800ms |

Rating: 🟢 Good | 🟡 Needs Improvement | 🔴 Poor
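The rating column can be derived mechanically from the published Core Web Vitals thresholds (good / needs-improvement cutoffs), as in this small sketch:

```javascript
// Good / poor cutoffs per the Core Web Vitals guidance.
const THRESHOLDS = {
  LCP: [2500, 4000],  // ms
  CLS: [0.1, 0.25],   // unitless
  INP: [200, 500],    // ms
  FCP: [1800, 3000],  // ms
  TTFB: [800, 1800],  // ms
};

function rate(metric, value) {
  const [good, poor] = THRESHOLDS[metric];
  if (value <= good) return '🟢';
  if (value <= poor) return '🟡';
  return '🔴';
}
```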

Findings

Finding 1: [Title]

Evidence:

  • [Lighthouse audit / metric value / observation]

Root Cause:

  • [Technical explanation of why this happens]

Fix Proposal:

  • [Specific code/config change]
  • [File(s) likely affected]

Verification:

  • [How to confirm the fix worked]

Finding 2: ...

Quick Wins vs Strategic Fixes

Quick Wins (< 1 hour, high confidence)

  • [Action item]
  • [Action item]

Strategic Fixes (requires planning)

  • [Action item with complexity note]

Re-test Plan

After implementing fixes:

  1. Run Lighthouse again with same configuration

  2. Compare: [specific metrics to watch]

  3. Expected improvements: [quantified targets]
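The comparison in step 2 can be sketched as a simple before/after diff; the metric names and the shape of the run objects here are assumptions for illustration:

```javascript
// Hypothetical before/after comparison for the re-test step.
function compareRuns(before, after, metrics = ['LCP', 'CLS', 'INP', 'TTFB']) {
  return metrics.map((m) => ({
    metric: m,
    before: before[m],
    after: after[m],
    delta: +(after[m] - before[m]).toFixed(3),
    improved: after[m] < before[m],
  }));
}

const diff = compareRuns(
  { LCP: 3200, CLS: 0.15, INP: 250, TTFB: 900 },
  { LCP: 2300, CLS: 0.08, INP: 180, TTFB: 850 },
);
```

Comparing only runs with identical configuration (device, throttling, environment) keeps the deltas meaningful.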

  4. Plan (Create Tasks)

Convert findings to tasks using Plan/Task tools:

Task structure:

Subject: [Action verb] [specific target]
Description:

  • Rationale: [why this matters, linked to finding]
  • Files/Areas: [likely locations to modify]
  • Acceptance Criteria: [measurable outcome]
  • Re-measure: [how to verify improvement]
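Mechanically, a finding maps onto the task structure above roughly like this; the builder is a hypothetical helper, though the field names mirror the template:

```javascript
// Hypothetical helper turning a finding into the task structure above.
function findingToTask(finding) {
  return {
    subject: `${finding.action} ${finding.target}`,
    description: [
      `Rationale: ${finding.rationale}`,
      `Files/Areas: ${finding.files.join(', ')}`,
      `Acceptance Criteria: ${finding.acceptance}`,
      `Re-measure: ${finding.remeasure}`,
    ].join('\n'),
  };
}

const task = findingToTask({
  action: 'Add',
  target: 'width/height to hero image',
  rationale: 'Unsized image causes layout shift (CLS 0.15)',
  files: ['components/Hero.tsx'],
  acceptance: 'CLS ≤ 0.1 on mobile',
  remeasure: 'Re-run Lighthouse mobile with same config',
});
```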

Prioritization:

  • Impact (CWV improvement potential)

  • Confidence (certainty the fix will help)

  • Category labels: Hydration/SSR, Images, JS Bundle, Fonts, CSS, Caching, Third-Party
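One way to combine impact and confidence into an ordering is a simple product score; the 1–3 scale here is an assumption for illustration, not part of the skill:

```javascript
// Hypothetical impact × confidence scoring (each on a 1–3 scale).
function prioritize(tasks) {
  return [...tasks].sort(
    (a, b) => b.impact * b.confidence - a.impact * a.confidence,
  );
}

const ordered = prioritize([
  { name: 'Preload critical font', impact: 1, confidence: 3 },
  { name: 'Add image dimensions', impact: 3, confidence: 3 },
  { name: 'Lazy load chart', impact: 3, confidence: 2 },
]);
```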

Example tasks:

High Priority:

  1. [Images] Add width/height to hero image

    • Impact: CLS 0.15 → <0.1
    • Files: components/Hero.tsx
    • Verify: CLS audit passes
  2. [JS Bundle] Lazy load chart component

    • Impact: TBT -200ms, LCP -0.5s
    • Files: pages/dashboard.tsx
    • Verify: Main bundle <200KB

Medium Priority:

  3. [Fonts] Preload critical font

  • Impact: FCP -100ms
  • Files: app/layout.tsx

Environment Notes

Production/Staging (Recommended)

  • Metrics reflect real CDN, caching, compression

  • Use throttled Lighthouse (default) for user-representative results

Local Development

  • Always label results as "Local Dev Environment"

  • Use the `--no-throttling` flag

  • Useful for: regression testing, debugging specific issues

  • Not reliable for: absolute performance numbers, TTFB, caching

When to Use Each Reference

| Situation | Reference File |
|---|---|
| Understanding metric thresholds | core-web-vitals.md |
| Identifying root cause of issue | common-issues.md |
| Next.js-specific optimizations | nextjs.md |
| React patterns (any framework) | react.md |

Optional: Implementation Mode

By default, stop at plan + task list. If user requests implementation:

  • Confirm scope of changes

  • For high-confidence fixes, provide specific code patches

  • Still require user approval before applying

  • Re-run measurement after changes to verify

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
