
Install skill "code-review" with this command: npx skills add bang9/claude-code-settings/bang9-claude-code-settings-code-review

Code Review

A team-based code review workflow that spawns three specialized reviewers to analyze the current branch's changes.

When This Skill Activates

  • User asks to review current branch or PR

  • User mentions "code review", "review changes", "review branch"

  • User wants feedback before merging

Workflow

Step 1: Gather Branch Context

Before spawning reviewers, collect the diff and context:

  • Run git log main..HEAD --oneline to get the list of commits on this branch

  • Run git diff main...HEAD to get the full diff against the base branch

  • Identify all changed files with git diff main...HEAD --name-only

If the diff is too large (> 5000 lines), summarize per-file changes and have reviewers focus on their specific areas.
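The steps above can be sketched as a small shell helper. The base branch name ("main") and the 5000-line threshold come from this workflow; adjust both to fit the repository under review.

```shell
# Sketch of Step 1 as a shell function, run from the feature branch.
# "main" as the default base and the 5000-line cutoff follow the
# workflow text; adjust both for your repository.
gather_context() {
  base=${1:-main}
  echo "== commits on this branch =="
  git log "$base"..HEAD --oneline
  echo "== changed files =="
  git diff "$base"...HEAD --name-only
  # Guard against oversized diffs: fall back to per-file stats
  lines=$(git diff "$base"...HEAD | wc -l | tr -d ' ')
  if [ "$lines" -gt 5000 ]; then
    echo "== diff too large ($lines lines); per-file summary =="
    git diff "$base"...HEAD --stat
  else
    git diff "$base"...HEAD
  fi
}
```

Capturing this output once and handing it to the reviewers avoids each of them re-running git independently.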

Step 2: Create Review Team

Create a team named code-review and spawn three reviewer agents in parallel:

Reviewer 1: Security Reviewer (security-reviewer)

  • Agent type: general-purpose

  • Focus: Security implications of the changes

  • Checklist:

    • Input validation and sanitization

    • Authentication/authorization changes

    • Sensitive data exposure (secrets, tokens, PII)

    • Injection vulnerabilities (SQL, XSS, command injection)

    • Dependency security (new packages, known CVEs)

    • Error handling that may leak internal details

    • CORS, CSP, or other security header changes

  • Output format: List findings with severity (Critical / High / Medium / Low / Info)

Reviewer 2: Performance Reviewer (performance-reviewer)

  • Agent type: general-purpose

  • Focus: Performance impact of the changes

  • Checklist:

    • Algorithm complexity (time/space)

    • Unnecessary re-renders or recomputations

    • Memory leaks (event listeners, subscriptions, timers not cleaned up)

    • Bundle size impact (new dependencies, large imports)

    • N+1 queries or redundant API calls

    • Caching opportunities missed

    • Lazy loading and code splitting considerations

  • Output format: List findings with impact level (High / Medium / Low / Negligible)

Reviewer 3: Test Coverage Reviewer (test-reviewer)

  • Agent type: general-purpose

  • Focus: Test quality and coverage

  • Checklist:

    • New code has corresponding tests

    • Edge cases and error paths are tested

    • Test descriptions are clear and meaningful

    • Mocks are appropriate (not over-mocked)

    • Integration/E2E coverage for critical paths

    • Existing tests updated for changed behavior

    • Missing negative test cases

  • Output format: List findings with priority (Must Have / Should Have / Nice to Have)
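One concrete starting point for this reviewer is a rough heuristic that flags changed source files with no matching test file. The naming conventions checked below (test_<name>.* and <name>.test.*) and the file extensions are assumptions; adapt them to the project's layout.

```shell
# Hedged heuristic for the test reviewer: list changed source files
# with no matching test file anywhere in the tracked tree. The naming
# conventions (test_<stem>., <stem>.test., <stem>_test.) are
# assumptions, and the *test* skip is crude (it also skips files that
# merely contain "test" in their name).
missing_tests() {
  base=${1:-main}
  git diff "$base"...HEAD --name-only | while read -r f; do
    case "$f" in
      *test*) continue ;;                 # skip test files themselves
      *.py|*.ts|*.tsx|*.js|*.jsx) ;;      # only check source files
      *) continue ;;
    esac
    name=$(basename "$f")
    stem=${name%.*}
    if ! git ls-files | grep -Eq "(test_${stem}\.|${stem}\.test\.|${stem}_test\.)"; then
      echo "no test found for: $f"
    fi
  done
}
```

The output is only a prompt for the reviewer to read the actual code, not a verdict by itself.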

Step 3: Assign Review Tasks

Create tasks for each reviewer and assign them. Each reviewer should:

  • Read the full diff (or their relevant subset for large diffs)

  • Analyze changes against their specific checklist

  • Report findings with clear file paths, line references, and severity/priority

  • Mark their task as completed when done

Step 4: Compile Review Report

After all three reviewers complete, compile their findings into a unified report:

Code Review Summary

Branch: {branch_name}
Commits: {commit_count}
Files Changed: {file_count}


Security Review

{security findings, grouped by severity}

Performance Review

{performance findings, grouped by impact}

Test Coverage Review

{test coverage findings, grouped by priority}


Action Items

{consolidated list of items that should be addressed before merge}
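A minimal way to stitch the sections together, assuming each reviewer wrote its findings to a file. The file names (security.md, performance.md, tests.md) are hypothetical; capture reviewer output however your agent runtime exposes it.

```shell
# Hedged sketch of Step 4. The three findings files are hypothetical;
# each reviewer is assumed to have written its report there before
# this runs. Empty or missing files become "No issues found", per the
# Important Notes section.
compile_report() {
  # $1 = branch name, $2 = commit count, $3 = changed-file count
  printf 'Code Review Summary\n\n'
  printf 'Branch: %s\nCommits: %s\nFiles Changed: %s\n' "$1" "$2" "$3"
  for section in 'Security Review:security.md' \
                 'Performance Review:performance.md' \
                 'Test Coverage Review:tests.md'; do
    printf '\n%s\n\n' "${section%%:*}"
    if [ -s "${section#*:}" ]; then
      cat "${section#*:}"
    else
      echo 'No issues found'
    fi
  done
}
```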

Step 5: Shutdown

After presenting the report to the user, gracefully shut down all reviewers and clean up the team.

Important Notes

  • Reviewers should read actual code, not guess based on file names

  • Each finding must reference specific file paths and line numbers

  • Do not report style/formatting issues unless they affect readability significantly

  • Focus on substance: bugs, risks, and missing coverage

  • If no issues found in a category, explicitly state "No issues found" rather than omitting the section
