quality-gate

Run parallel quality reviews (React, SOLID, Security, Simplification, Slop) on branch changes and auto-fix issues

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "quality-gate" with this command: npx skills add benjaming/ai-skills/benjaming-ai-skills-quality-gate

Quality Gate

**IMPORTANT:** Follow this process exactly. Do not skip steps.

Arguments: $0 (optional) — base branch to diff against. If omitted, auto-detect.

Step 1: Get the Diff

Detect the base branch:

git rev-parse --verify main >/dev/null 2>&1 && echo "main" || (git rev-parse --verify master >/dev/null 2>&1 && echo "master" || echo "develop")

Then get the full diff and changed file list:

git diff <base>...HEAD --name-only
git diff <base>...HEAD

Store the diff output — you will pass it to review agents.

Also detect the project stack:

# Check if React/Next.js project
jq -r '.dependencies // {} | keys[]' package.json 2>/dev/null | grep -E '^(react|next)$'
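Step 1 can be combined into one small script; a sketch, assuming `jq` is available, and using hypothetical `/tmp` paths for the stored outputs:

```shell
# Sketch of Step 1: detect base branch, store diff + changed files, detect stack.
# The /tmp output paths are illustrative, not mandated by the skill.
base=$(git rev-parse --verify main >/dev/null 2>&1 && echo main \
  || { git rev-parse --verify master >/dev/null 2>&1 && echo master || echo develop; })
git diff "$base"...HEAD --name-only > /tmp/quality-gate-files.txt
git diff "$base"...HEAD > /tmp/quality-gate-diff.patch
# React/Next.js detection (direct dependencies only, as above)
if jq -r '.dependencies // {} | keys[]' package.json 2>/dev/null \
     | grep -qE '^(react|next)$'; then
  echo "React/Next.js project: include Task 1"
else
  echo "Not a React project: skip Task 1"
fi
```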

Step 2: Parallel Review (Agent Team)

2a. Create Team

TeamCreate  team_name: "quality-gate"  description: "Parallel quality review of branch changes"

2b. Create Review Tasks

Create one TaskCreate per review dimension. Skip Task 1 if the project does not use React/Next.js.

Each task description MUST include:

  1. The full diff from Step 1 (if diff exceeds ~50KB, list changed files and instruct teammate to read files directly)
  2. The list of changed files
  3. The skill command to invoke and review instructions (see table below)
  4. The classification rules (see below)
  5. The required output format (see below)
  6. Instruction: Do NOT modify any files. Report findings only.
  7. Instruction: Write the full findings to /tmp/quality-gate-findings-<reviewer-name>.md using the Write tool (e.g. /tmp/quality-gate-findings-solid-reviewer.md)
  8. Instruction: Send a brief "done" notification to the lead via SendMessage with type: "message" and recipient: "lead" (the lead will read findings from the file, no need to include full findings in the message)
  9. Instruction: Mark task completed via TaskUpdate with status: "completed" after sending findings
| Task | subject | activeForm | Skill command & instructions |
| --- | --- | --- | --- |
| 1 | Review React/Next.js best practices | Reviewing React best practices | /vercel-react-best-practices: Review ONLY the changed code in the diff against the rules. Categorize each finding as FIX or NITPICK |
| 2 | Review SOLID principles | Reviewing SOLID principles | /applying-solid-principles: Review ONLY the changed code in the diff against SOLID principles and clean code practices. Categorize each finding as FIX or NITPICK |
| 3 | Review security | Reviewing security | /security-review: Review ONLY the changed code in the diff against the security checklist. Categorize each finding as FIX or NITPICK |
| 4 | Review simplification opportunities | Reviewing simplification | /simplify: Review ONLY the changed code in the diff for simplification opportunities (clarity, consistency, maintainability). Categorize each finding as FIX or NITPICK. Do NOT modify any files — report only. |
| 5 | Review code slop | Reviewing code slop | /code-slop, but override: Do NOT modify any files. Instead, identify all slop issues and report them in FIX/NITPICK format below. |

2c. Spawn Teammates (all in parallel)

Spawn all teammates in a single response using the Task tool with team_name: "quality-gate" and each teammate's name:

| name | Assigned task |
| --- | --- |
| react-reviewer | Task 1 (skip if not React) |
| solid-reviewer | Task 2 |
| security-reviewer | Task 3 |
| simplify-reviewer | Task 4 |
| slop-cleaner | Task 5 |

Each teammate's prompt must instruct them to:

  1. Check TaskList and claim their assigned task via TaskUpdate with status: "in_progress" and owner: "<their-name>"
  2. Invoke the designated skill via the Skill tool with the review instructions
  3. Format findings per the output format below
  4. Write the full formatted findings to /tmp/quality-gate-findings-<their-name>.md using the Write tool (e.g. /tmp/quality-gate-findings-solid-reviewer.md)
  5. Send a brief "done" notification to the lead via SendMessage with type: "message", recipient: "lead", and summary: "<reviewer-name> done — findings written to /tmp/quality-gate-findings-<their-name>.md"
  6. Mark task completed via TaskUpdate with status: "completed"

2d. Assign Tasks

After spawning, assign each task to its teammate via TaskUpdate with owner: "<teammate-name>".

Classification Rules (include in each task description)

FIX (will be auto-applied):

  • Bugs or logic errors
  • Security vulnerabilities
  • Performance issues with measurable impact
  • Clear violations of critical rules
  • Obvious simplifications that reduce complexity without trade-offs

NITPICK (user decides):

  • Style preferences or minor readability tweaks
  • Debatable architectural choices
  • Low-impact optimizations
  • "Nice to have" improvements

Required Output Format (include in each task description)

## FIX
- `file/path.ts:42` — [RULE-ID] Description of the issue. Suggested fix: <concrete suggestion>
- `file/path.ts:85` — [RULE-ID] Description. Suggested fix: <suggestion>

## NITPICK
- `file/path.ts:15` — [RULE-ID] Description. Suggestion: <suggestion>

## NO ISSUES
(use this section if nothing found in a category)

If no issues at all, return: No issues found.

Step 3: Consolidate Findings and Tear Down Team

3a. Collect Results

Monitor TaskList until all review tasks reach completed status. Once all tasks are completed, read findings from each reviewer's output file using the Read tool:

  • /tmp/quality-gate-findings-solid-reviewer.md
  • /tmp/quality-gate-findings-security-reviewer.md
  • /tmp/quality-gate-findings-simplify-reviewer.md
  • /tmp/quality-gate-findings-slop-cleaner.md
  • /tmp/quality-gate-findings-react-reviewer.md (if spawned)

Do NOT rely on SendMessage content for findings — those are "done" pings only. The files are the source of truth.

3b. Shut Down Team

Send SendMessage with type: "shutdown_request" to each teammate. After all teammates confirm shutdown, call TeamDelete. Then clean up temp files:

rm -f /tmp/quality-gate-findings-*.md

3c. Consolidate

  1. Collect all FIX items across all reviewers
  2. Deduplicate overlapping findings on the same file:line
  3. Display a summary:
### Quality Gate Results

**Fixes to auto-apply:** N items
- [React] file:line — description (x items)
- [SOLID] file:line — description (x items)
- [Security] file:line — description (x items)
- [Simplify] file:line — description (x items)
- [Slop Cleaner] file:line — description (x items)
**Nitpicks for review:** N items
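The deduplication in item 2 can be sketched as a text pass keyed on `file:line`. A minimal illustration with hypothetical sample findings (in practice the pass runs over the findings already read in Step 3a, since the files are deleted in 3b):

```shell
# Illustrative dedup: keep the first finding per file:line key.
# The sample data is hypothetical; it follows the required output format above.
cat > /tmp/qg-sample.md <<'EOF'
- `src/app.ts:42` — [SOLID-SRP] Class does too much. Suggested fix: split it
- `src/app.ts:42` — [SLOP-DUP] Duplicated logic. Suggested fix: extract helper
- `src/util.ts:7` — [SEC-01] Unsanitized input. Suggested fix: validate input
EOF
# Field 2 (between backticks) is the file:line key; print only its first hit.
awk -F'`' '/^- `/ && !seen[$2]++' /tmp/qg-sample.md > /tmp/qg-dedup.md
cat /tmp/qg-dedup.md
```

Here the two findings on `src/app.ts:42` collapse to one; the lead would then merge the descriptions before applying the fix once.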

Step 4: Auto-Fix

Apply all FIX items to the codebase:

  • Read each affected file
  • Apply the suggested fixes using the Edit tool
  • After all fixes, run the project's linter/formatter if configured (check package.json scripts for lint/format)
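The lint/format check in the last bullet can be sketched as follows (a sketch assuming an npm project with `jq` available; script names other than `lint` and `format` would need to be added):

```shell
# Sketch: run the project's lint/format scripts only if package.json defines them.
for script in lint format; do
  if jq -e --arg s "$script" '.scripts[$s]' package.json >/dev/null 2>&1; then
    npm run "$script"
  fi
done
```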

Step 5: Present Nitpicks

If there are nitpicks, display them grouped by category and use AskUserQuestion:

### Nitpicks for your review

**React/Next.js:**
- `file:line` — description — suggestion

**SOLID:**
- `file:line` — description — suggestion

**Security:**
- `file:line` — description — suggestion

**Simplification:**
- `file:line` — description — suggestion

**Slop Cleaner:**
- `file:line` — description — suggestion

Ask: "Which nitpicks should I apply?" with options:

  • All of them
  • None
  • Let me pick (then list individually)

Step 6: Apply Selected Nitpicks

Apply whichever nitpicks the user selected.

Step 7: Commit & Push (if changes made)

If any changes were applied (fixes or nitpicks):

git add .
git commit -m "refactor: apply quality gate fixes"

If a remote branch exists and the branch was already pushed:

git push
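The "already pushed" check can be made explicit; a sketch using git's upstream shorthand:

```shell
# Sketch: push only when the current branch already tracks an upstream.
if git rev-parse --abbrev-ref --symbolic-full-name '@{u}' >/dev/null 2>&1; then
  git push
else
  echo "No upstream branch configured; skipping push."
fi
```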

Execution Notes

  • Requires: CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 environment variable in settings
  • Total teammates: 4-5 (skip react-reviewer if not a React project)
  • Team lifecycle: TeamCreate at Step 2a, TeamDelete at Step 3b
  • All review teammates are read-only — they write findings to the /tmp files and only ping the lead via SendMessage; the lead applies fixes
  • Teammate idle is normal — teammates go idle after each turn; do not treat idle notifications as errors
  • Deduplication matters — multiple reviewers may flag the same issue differently; apply only once
  • Preserve behavior — fixes must not change functionality, only improve quality
  • Be surgical — only modify code that was part of the original diff, do not refactor unrelated code

