qa-workflow

Comprehensive quality assurance workflow that validates implementation completeness and correctness, then iterates through fix cycles until approval. You are the last line of defense before shipping.


Install skill "qa-workflow" with this command: npx skills add oimiragieo/agent-studio/oimiragieo-agent-studio-qa-workflow

QA Workflow Skill

Overview


Core principle: You are the last line of defense. If you approve, the feature ships. Be thorough.

When to Use

Always:

  • After implementation is marked complete

  • Before merging or deploying changes

  • When validating acceptance criteria

Exceptions:

  • Documentation-only changes (may use minimal validation)

  • Trivial fixes with skip_validation flag

Iron Laws

  • NEVER approve without verifying every acceptance criterion — you are the last line of defense; partial verification equals no verification.

  • ALWAYS document issues with exact file paths and line numbers — vague reports like "the tests fail" are unfixable and waste the fix loop cycle.

  • NEVER sign off on implementation with failing tests — all test categories (unit, integration, E2E) must pass before approval, no exceptions.

  • ALWAYS run the full test suite for regression check — a feature that breaks existing functionality is not production-ready regardless of new test results.

  • NEVER exceed 5 fix loop iterations without escalating to human review — repeated loops signal a root cause that automated fixes cannot resolve.

Anti-Patterns

Anti-Pattern: Approving before checking all acceptance criteria
Why it fails: Ships bugs disguised as features; breaks the quality contract.
Correct approach: Verify every criterion in the spec before writing the verdict.

Anti-Pattern: Writing vague issue reports ("tests fail", "broken")
Why it fails: The developer cannot reproduce or fix what is not precisely described.
Correct approach: Include file path, line number, exact error message, and reproduction steps.

Anti-Pattern: Signing off with known failing tests
Why it fails: Failing tests are documented bugs being shipped to production.
Correct approach: All tests must pass; if tests are wrong, fix them first and document the change.

Anti-Pattern: Only running new tests, not the full regression suite
Why it fails: New code breaking old functionality is invisible without the full suite.
Correct approach: Always run the full suite with --coverage; regressions block approval.

Anti-Pattern: Fixing more than QA found to "clean things up"
Why it fails: Over-fixing introduces new bugs and scope creep into the fix loop.
Correct approach: Apply minimal changes; fix only what QA identified, nothing more.

Every acceptance criterion must be verified before approval.

Part 1: QA Review

Phase 0: Load Context

Read the spec (your source of truth for requirements)

cat .claude/context/specs/[task-name]-spec.md

Read any previous QA reports

cat .claude/context/reports/qa/qa-report.md 2>/dev/null || echo "No previous report"

See what files were changed

git diff main...HEAD --name-status

Read QA acceptance criteria from spec

grep -A 100 "## QA Acceptance" .claude/context/specs/[task-name]-spec.md

Phase 1: Verify All Work Completed

Check git log for implementation commits

git log --oneline main..HEAD

Verify expected files were modified

git diff main...HEAD --name-only

STOP if implementation is not complete. QA runs after implementation.
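The completeness gate above can be scripted as a small guard. This is a sketch; `verify_implementation_complete` is a hypothetical name and the base branch `main` is an assumption to adapt:

```shell
# Hypothetical guard: refuse to start QA when the branch has no
# implementation commits on top of the base branch.
verify_implementation_complete() {
  base="${1:-main}"
  if [ -z "$(git log --oneline "$base"..HEAD 2>/dev/null)" ]; then
    echo "No implementation commits on top of $base; QA runs after implementation." >&2
    return 1
  fi
  # List what changed so the reviewer knows where to look.
  git diff "$base"...HEAD --name-only
}
```

A non-zero return means QA should stop here rather than review an incomplete branch.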

Phase 2: Start Test Environment

Start services as needed

npm run dev # or appropriate command

Verify services are running

curl http://localhost:3000/health 2>/dev/null || echo "Service not responding"

Wait for all services to be healthy before proceeding.
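The health wait can be sketched as a retry loop. The function name, endpoint URL, and retry budget below are assumptions, not part of any particular project:

```shell
# Hypothetical readiness loop: poll a health endpoint until it responds,
# giving up after a fixed number of attempts.
wait_for_service() {
  url="$1"
  retries="${2:-30}"
  attempt=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$retries" ]; then
      echo "Service at $url not healthy after $retries attempts" >&2
      return 1
    fi
    sleep 1
  done
  echo "Service healthy: $url"
}
```

Example: `wait_for_service http://localhost:3000/health 30` blocks Phase 3 until the service answers or the budget runs out.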

Phase 3: Run Automated Tests

Unit Tests

Run all unit tests for affected areas:

Run test suite

npm test

or

pytest

or

go test ./...

Document results:

UNIT TESTS: PASS/FAIL (X/Y passing)

Integration Tests

Run integration tests if applicable:

Run integration test suite

npm run test:integration

Document results:

INTEGRATION TESTS: PASS/FAIL (X/Y passing)

End-to-End Tests

If E2E tests exist:

Run E2E test suite

npm run test:e2e

Document results:

E2E TESTS: PASS/FAIL (X/Y passing)
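One way to keep the three "Document results" entries consistent is a small helper that prints a PASS/FAIL line per suite. `run_suite` is a hypothetical name and the npm script names in the usage example are assumptions:

```shell
# Hypothetical helper: run one test command and print a PASS/FAIL line
# in the report format used above.
run_suite() {
  label="$1"
  shift
  if "$@" >/dev/null 2>&1; then
    echo "$label: PASS"
  else
    echo "$label: FAIL"
  fi
}
```

Example usage: `run_suite "UNIT TESTS" npm test`, `run_suite "INTEGRATION TESTS" npm run test:integration`, `run_suite "E2E TESTS" npm run test:e2e`.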

Phase 4: Manual Verification

For each acceptance criterion in the spec:

  • Navigate to the relevant area

  • Verify the criterion is met

  • Check for console errors

  • Test edge cases

  • Document findings

MANUAL VERIFICATION:

  • Criterion 1: [criterion from spec] - PASS/FAIL
    • Evidence: [what you observed]
  • Criterion 2: [criterion from spec] - PASS/FAIL
    • Evidence: [what you observed]

Phase 5: Code Review

Security Review

Check for common vulnerabilities:

Look for security issues

grep -r "eval(" --include="*.js" --include="*.ts" . 2>/dev/null
grep -r "innerHTML" --include="*.js" --include="*.ts" . 2>/dev/null
grep -r "dangerouslySetInnerHTML" --include="*.tsx" --include="*.jsx" . 2>/dev/null

Check for hardcoded secrets

grep -rE "(password|secret|api_key|token)\s*=\s*['\"][^'\"]+['\"]" . 2>/dev/null
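The individual greps can be wrapped so the security review fails loudly when any pattern matches. This is a sketch; `scan_forbidden` is a hypothetical name and the include globs are assumptions for a JS/TS codebase:

```shell
# Hypothetical scanner: return non-zero if any forbidden pattern
# appears in JS/TS sources under the given directory.
scan_forbidden() {
  dir="$1"
  shift
  found=0
  for pattern in "$@"; do
    # Print every hit with file and line number so the report is precise.
    if grep -rn "$pattern" --include="*.js" --include="*.ts" "$dir" 2>/dev/null; then
      found=1
    fi
  done
  return "$found"
}
```

Example: `scan_forbidden . 'eval(' 'innerHTML' && echo "Security scan clean"`.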

Pattern Compliance

Verify code follows established patterns:

Compare new code to existing patterns

Read pattern files, compare structure

Document findings:

CODE REVIEW:

  • Security issues: [list or "None"]
  • Pattern violations: [list or "None"]
  • Code quality: PASS/FAIL

Phase 6: Regression Check

Run full test suite to catch regressions:

Run ALL tests, not just new ones

npm test -- --coverage

Verify key existing functionality still works.

REGRESSION CHECK:

  • Full test suite: PASS/FAIL (X/Y tests)
  • Existing features verified: [list]
  • Regressions found: [list or "None"]

Phase 7: Generate QA Report

QA Validation Report

Task: [task-name] Date: [timestamp]

Summary

Category | Status | Details
Unit Tests | PASS/FAIL | X/Y passing
Integration Tests | PASS/FAIL | X/Y passing
E2E Tests | PASS/FAIL | X/Y passing
Manual Verification | PASS/FAIL | [summary]
Security Review | PASS/FAIL | [summary]
Pattern Compliance | PASS/FAIL | [summary]
Regression Check | PASS/FAIL | [summary]

Issues Found

Critical (Blocks Sign-off)

  1. [Issue description] - [File/Location]

Major (Should Fix)

  1. [Issue description] - [File/Location]

Minor (Nice to Fix)

  1. [Issue description] - [File/Location]

Verdict

SIGN-OFF: [APPROVED / REJECTED]

Reason: [Explanation]

Next Steps:

  • [If approved: Ready for merge]
  • [If rejected: List of fixes needed]

Save report to .claude/context/reports/qa/qa-report.md
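Saving the report can be sketched as follows; the path matches the convention above, and `save_qa_report` is a hypothetical helper:

```shell
# Hypothetical helper: write the report body (from stdin) to the
# conventional QA report path, creating directories as needed.
save_qa_report() {
  report_path=".claude/context/reports/qa/qa-report.md"
  mkdir -p "$(dirname "$report_path")"
  cat > "$report_path"
  echo "Report saved to $report_path"
}
```

Example: `save_qa_report < qa-report-draft.md` (a hypothetical draft file).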

Phase 8: Decision

If APPROVED

=== QA VALIDATION COMPLETE ===

Status: APPROVED

All acceptance criteria verified:

  • Unit tests: PASS
  • Integration tests: PASS
  • Manual verification: PASS
  • Security review: PASS
  • Regression check: PASS

The implementation is production-ready. Ready for merge.

If REJECTED

Create fix request and proceed to Part 2.

Part 2: QA Fix Loop

Phase 0: Load Fix Request

Read the QA report with issues

cat .claude/context/reports/qa/qa-report.md

Identify issues to fix

grep -A 50 "## Issues Found" .claude/context/reports/qa/qa-report.md

Extract from report:

  • Exact issues to fix

  • File locations

  • Required fixes

  • Verification criteria

Phase 1: Parse Fix Requirements

Create a checklist from the QA report:

FIXES REQUIRED:

  1. [Issue Title]

    • Location: [file:line]
    • Problem: [description]
    • Fix: [what to do]
    • Verify: [how to check]
  2. [Issue Title] ...

You must address EVERY issue.

Phase 2: Fix Issues One by One

For each issue:

  • Read the problem area

  • Understand what's wrong

  • Implement the fix

  • Verify the fix locally

Follow these rules:

  • Make the MINIMAL change needed

  • Don't refactor surrounding code

  • Don't add features

  • Match existing patterns

  • Test after each fix

Phase 3: Run Tests

After all fixes are applied:

Run the full test suite

npm test

Run specific tests that were failing

[failed test commands from QA report]

All tests must pass before proceeding.

Phase 4: Self-Verification

Before requesting re-review, verify each fix:

SELF-VERIFICATION:

  [ ] Issue 1: [title] - FIXED - Verified by: [how you verified]
  [ ] Issue 2: [title] - FIXED - Verified by: [how you verified]
  ...

ALL ISSUES ADDRESSED: YES/NO

If any issue is not fixed, go back to Phase 2.

Phase 5: Commit Fixes

Add fixed files

git add [fixed-files]

Commit with descriptive message

git commit -m "fix: Address QA issues

Fixes:

  • [Issue 1 title]
  • [Issue 2 title]

Verified:

  • All tests pass
  • Issues verified locally"

Phase 6: Signal for Re-Review

=== QA FIXES COMPLETE ===

Issues fixed: [N]

  1. [Issue 1] - FIXED - Commit: [hash]

  2. [Issue 2] - FIXED - Commit: [hash]

All tests passing. Ready for QA re-validation.

QA Loop Behavior

The QA → Fix → QA loop continues until:

  • All critical issues resolved

  • All tests pass

  • No regressions

  • QA approves

Maximum iterations: 5

If max iterations reached without approval:

  • Escalate to human review

  • Document all remaining issues

  • Save detailed report

Severity Guidelines

CRITICAL - Blocks sign-off:

  • Failing tests

  • Security vulnerabilities

  • Missing required functionality

  • Data corruption risks

MAJOR - Should fix:

  • Missing edge case handling

  • Performance issues

  • UX problems

  • Pattern violations

MINOR - Nice to fix:

  • Style inconsistencies

  • Documentation gaps

  • Minor optimizations

Verification Checklist

Before approving:

  • All unit tests pass

  • All integration tests pass

  • All E2E tests pass (if applicable)

  • Manual verification of acceptance criteria

  • Security review complete

  • Pattern compliance verified

  • No regressions found

  • QA report generated
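The checklist can be turned into a mechanical gate: any item that is not PASS blocks sign-off. `verdict` and the `Category=PASS` item format are assumptions:

```shell
# Hypothetical gate: approve only when every checklist item reports PASS.
verdict() {
  status=APPROVED
  for item in "$@"; do
    case "$item" in
      *=PASS) ;;              # this check is satisfied
      *) status=REJECTED ;;   # any non-PASS item blocks sign-off
    esac
  done
  echo "SIGN-OFF: $status"
}
```

Example: `verdict "Unit tests=PASS" "Regression check=PASS"` prints the verdict line for the QA report.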

Common Mistakes

Approving Too Quickly

Why it's wrong: Shipping bugs to users.

Do this instead: Check EVERYTHING in the acceptance criteria.

Vague Issue Reports

Why it's wrong: Developer can't fix what they don't understand.

Do this instead: Exact file paths, line numbers, reproducible steps.

Fixing Too Much

Why it's wrong: Introducing new bugs while fixing old ones.

Do this instead: Minimal changes. Fix only what QA found.

Integration with Other Skills

This skill works well with:

  • complexity-assessment: Determines validation depth

  • tdd: Use TDD to write tests for fixes

  • debugging: Use when investigating test failures

Memory Protocol

Before starting: Read .claude/context/memory/learnings.md

After completing:

  • New pattern -> .claude/context/memory/learnings.md

  • Issue found -> .claude/context/memory/issues.md

  • Decision made -> .claude/context/memory/decisions.md

ASSUME INTERRUPTION: If it's not in memory, it didn't happen.
