# Architecture Review Skill

## Step 1: Gather Architecture Context

Understand the current architecture by:
- **Read project structure**: Use Glob to map the directory structure
- **Identify key components**: Find entry points, services, data layers
- **Review dependencies**: Check package.json, imports, module graph
- **Understand data flow**: Trace requests through the system
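The dependency-review step above can be sketched as a small script that maps each module to what it imports. This is a hypothetical stand-in for a Python project (the paths, layout, and helper name are illustrative assumptions, not part of the skill):

```python
# Sketch: build a module -> imports map for a Python source tree, as raw
# material for reviewing the module graph. Illustrative, not prescriptive.
import ast
from pathlib import Path

def import_map(root: str) -> dict[str, list[str]]:
    """Map each .py file under root to the modules it imports."""
    graph: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text())
        imports: list[str] = []
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                imports.extend(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                imports.append(node.module)
        graph[str(path)] = sorted(set(imports))
    return graph
```

The same idea applies to other ecosystems (e.g. walking `import` statements in TypeScript); the point is to make the dependency structure explicit before judging it.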
## Step 2: Evaluate Design Principles

Check adherence to fundamental principles:
**SOLID Principles:**

- **Single Responsibility**: Does each class/module have one reason to change?
- **Open/Closed**: Can behavior be extended without modification?
- **Liskov Substitution**: Are subtypes substitutable for base types?
- **Interface Segregation**: Are interfaces focused and minimal?
- **Dependency Inversion**: Do high-level modules depend on abstractions?
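As one concrete illustration of the Dependency Inversion check above, a high-level service can depend on an abstraction rather than a concrete backend. A minimal sketch (the class names are illustrative assumptions, not from this skill):

```python
# Sketch of Dependency Inversion: ReportService (high-level) depends on the
# Storage protocol (abstraction), so any backend can be injected.
from typing import Protocol

class Storage(Protocol):
    def save(self, key: str, data: str) -> None: ...

class InMemoryStorage:
    """A concrete backend; a database-backed class could replace it."""
    def __init__(self) -> None:
        self.items: dict[str, str] = {}
    def save(self, key: str, data: str) -> None:
        self.items[key] = data

class ReportService:
    def __init__(self, storage: Storage) -> None:
        self.storage = storage  # injected abstraction, swappable in tests
    def publish(self, name: str, body: str) -> None:
        self.storage.save(name, body)
```

During review, the question is whether high-level modules look like `ReportService` (depending on interfaces) or import concrete infrastructure directly.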
**Other Principles:**

- **DRY**: Is logic duplicated unnecessarily?
- **YAGNI**: Are there unused or speculative features?
- **Separation of Concerns**: Are responsibilities properly divided?
## Step 3: Check for Anti-Patterns

Identify common anti-patterns:
- **God Class/Module**: Classes doing too much
- **Spaghetti Code**: Tangled, hard-to-follow logic
- **Circular Dependencies**: Modules that reference each other
- **Feature Envy**: Classes that use other classes' data excessively
- **Shotgun Surgery**: Changes that require touching many files
- **Leaky Abstractions**: Implementation details exposed to consumers
## Step 4: Assess Non-Functional Requirements

Evaluate against NFRs:
- **Scalability**: Can the system handle increased load?
- **Maintainability**: How easy is it to modify and extend?
- **Testability**: Can components be tested in isolation?
- **Security**: Are there potential vulnerabilities?
- **Performance**: Are there obvious bottlenecks?
- **Observability**: Can the system be monitored effectively?
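The testability question above usually comes down to whether hidden dependencies (clocks, databases, networks) can be replaced in tests. A minimal sketch, with an injected clock as the example (the `RateLimiter` class is illustrative, not part of the skill):

```python
# Sketch: a component is testable in isolation when its environmental
# dependencies are injected. Here the clock is a parameter, so tests can
# control time instead of sleeping.
import time
from typing import Callable

class RateLimiter:
    def __init__(self, max_per_sec: int,
                 now: Callable[[], float] = time.monotonic) -> None:
        self.max_per_sec = max_per_sec
        self.now = now              # injected clock: the testability seam
        self.window_start = 0.0
        self.count = 0

    def allow(self) -> bool:
        t = self.now()
        if t - self.window_start >= 1.0:   # start a new one-second window
            self.window_start, self.count = t, 0
        if self.count < self.max_per_sec:
            self.count += 1
            return True
        return False
```

A reviewer would flag the opposite shape, e.g. a component that calls `time.monotonic()` or opens its own connections internally, as a testability concern.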
## Step 5: Generate Review Report

Create a structured report with:
- **Summary**: Overall assessment and key findings
- **Strengths**: What the architecture does well
- **Concerns**: Issues requiring attention (prioritized)
- **Recommendations**: Specific improvements with rationale
- **Trade-offs**: Acknowledge valid design trade-offs
</execution_process>
<best_practices>
- **Be Constructive**: Focus on improvements, not criticism
- **Prioritize Issues**: Not all problems are equally important
- **Consider Context**: Understand constraints and trade-offs
- **Suggest Alternatives**: Don't just identify problems
- **Reference Patterns**: Cite established patterns when relevant
</best_practices>
**Example request:** Review the architecture of src/services/ for scalability and maintainability

**Example Response Structure:**
**Architecture Review: src/services/**

**Summary**

The service layer follows a reasonable structure but has some coupling issues...

**Strengths**

- Clear separation between API handlers and business logic
- Good use of dependency injection

**Concerns**

- **High Priority**: UserService has 15 methods (God Class)
- **Medium Priority**: Circular dependency between OrderService and InventoryService
- **Low Priority**: Some magic numbers in validation logic

**Recommendations**

- Split UserService into UserAuthService and UserProfileService
- Introduce EventBus to decouple Order and Inventory
- Extract validation constants to configuration
</usage_example>
## Iron Laws

- **ALWAYS** review architecture before COMPLEX or EPIC implementation starts — architectural problems discovered after implementation cost 10-100x more to fix; review in the design phase.
- **NEVER** approve an architecture with undocumented single points of failure — every SPOF must be explicitly identified and have a documented mitigation plan.
- **ALWAYS** evaluate against non-functional requirements — performance, security, scalability, maintainability, and observability are non-optional; functional correctness without NFR compliance is incomplete.
- **NEVER** treat architectural trade-offs as implicit — every trade-off must be explicitly documented with rationale; hidden trade-offs become future surprises and unowned technical debt.
- **ALWAYS** check for circular dependencies before approving a design — circular module dependencies make testing, refactoring, and deployment order unpredictable and progressively harder to resolve.
## Anti-Patterns

| Anti-Pattern | Why It Fails | Correct Approach |
|---|---|---|
| No NFR evaluation | Design may pass functional tests but fail at scale or under attack | Always evaluate performance, security, scalability, observability |
| Reviewing only the happy path | Systems fail at error boundaries, not in the happy path | Review failure modes, retry behavior, and circuit breakers |
| Approving without trade-off documentation | Hidden trade-offs become future surprises | Explicitly document all trade-offs with rationale |
| Single point of failure left undocumented | System has silent fragility that surfaces under load | Map all SPOFs; require mitigation plans for each |
| Checking SOLID without anti-pattern catalog | Principle adherence doesn't guarantee absence of anti-patterns | Check both principles AND concrete anti-patterns (God Class, Shotgun Surgery, etc.) |
| Architecture review after implementation starts | Too late to fix structural issues without major rework | Review in design phase, before any code is written |
## Rules

- Always provide constructive feedback with actionable recommendations
- Prioritize issues by impact and effort to fix
- Consider existing constraints and trade-offs
## Related Workflow

This skill has a corresponding workflow for complex multi-agent scenarios:

- **Workflow**: .claude/workflows/architecture-review-skill-workflow.md
- **When to use workflow**: For comprehensive audits or multi-phase analysis requiring coordination between multiple agents (developer, architect, security-architect, code-reviewer)
- **When to use skill directly**: For quick reviews or single-agent execution
## Memory Protocol (MANDATORY)

Before starting: `cat .claude/context/memory/learnings.md`

After completing:

- New pattern -> .claude/context/memory/learnings.md
- Issue found -> .claude/context/memory/issues.md
- Decision made -> .claude/context/memory/decisions.md

ASSUME INTERRUPTION: Your context may reset. If it's not in memory, it didn't happen.
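The memory protocol can be sketched as a pair of shell steps. The `mkdir`/`touch` lines only make the sketch self-contained; in a real session the memory files already exist, and the appended entry text is purely illustrative:

```shell
# Self-contained sketch of the memory protocol (setup lines are for the
# sketch only; real sessions assume the files exist).
mkdir -p .claude/context/memory
touch .claude/context/memory/learnings.md

# Before starting: read prior learnings
cat .claude/context/memory/learnings.md

# After completing: append findings to the matching memory file
echo "- decision: decouple OrderService and InventoryService via an EventBus" \
  >> .claude/context/memory/decisions.md
```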