# Claude Code Subagent Creation

## When to use this skill

- Defining specialized AI experts for specific tasks
- Creating reusable agent configurations for team workflows
- Implementing task delegation patterns
- Setting up automated code review, debugging, or analysis workflows
- Creating agents with custom prompts and tool permissions
## Instructions

### Step 1: Understanding Subagents

Claude Code subagents are pre-configured AI experts to which the main Claude session delegates work.

Core benefits:

- Context isolation: each agent works in its own separate 200K-token context
- Specialized expertise: focused prompts for specific domains
- Reusability: share agents across projects via Git
- Flexible permissions: control which tools each agent can use
- No nesting: subagents cannot spawn further subagents, which prevents infinite delegation loops

Agent types:

- Built-in agents: Explore (read-only, Haiku), Plan (research), General-purpose (Sonnet)
- Custom agents: user-defined, with custom prompts and permissions
### Step 2: Agent Configuration File

Subagents are defined as Markdown files with YAML frontmatter.

File location (priority order):

1. Project-level: `.claude/agents/{agent-name}.md`
2. User-level: `~/.claude/agents/{agent-name}.md`
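To make the priority order concrete, here is a minimal sketch in Python (the `resolve_agent` helper is hypothetical, not part of Claude Code) showing how a project-level agent file of the same name shadows a user-level one:

```python
from pathlib import Path

def resolve_agent(name: str, project_root: str, home: str):
    """Illustrative only: return the agent file that wins under the
    documented priority (project-level beats user-level)."""
    candidates = [
        Path(project_root) / ".claude" / "agents" / f"{name}.md",  # project-level
        Path(home) / ".claude" / "agents" / f"{name}.md",          # user-level
    ]
    for path in candidates:
        if path.is_file():
            return path
    return None  # no such agent defined at either level
```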
File format:

```markdown
---
name: code-reviewer
description: Review code changes for quality, security, and best practices. Use immediately after code changes.
tools: [Read, Grep, Glob, Bash, LSP]
model: inherit
---

# Code Reviewer

You are a senior code reviewer with expertise in:

- Code quality and maintainability
- Security vulnerabilities
- Performance optimization
- Best practices and design patterns

## Review Checklist

- Does the code follow project conventions?
- Are there any security vulnerabilities?
- Is the code readable and maintainable?
- Are there performance concerns?
- Are tests adequate?
- Is documentation complete?

## Review Guidelines

1. Prioritize critical issues: security bugs, data races, memory leaks
2. Be constructive: provide clear explanations and suggestions
3. Consider trade-offs: don't optimize prematurely
4. Reference standards: link to relevant docs/style guides

## Output Format

### Critical Issues (Must Fix)

- Issue: description
  File: path/to/file.ts:123
  Suggestion: how to fix

### Suggestions (Should Fix)

...

### Nice to Have

...
```
### Step 3: YAML Frontmatter Configuration

Required fields:

| Field | Type | Description |
|---|---|---|
| `name` | string | Agent identifier (kebab-case) |
| `description` | string | When to use this agent (1-2 sentences) |
| `tools` | list | Tools the agent can access (omit to inherit) |

Optional fields:

| Field | Type | Description | Default |
|---|---|---|---|
| `model` | string | Model to use | `inherit` |
| `version` | string | Agent version | none |

Model options:

- `inherit`: use the same model as the main Claude session
- `sonnet`: Sonnet (balanced capability and cost)
- `haiku`: Haiku (faster, cheaper)
- `opus`: Opus (most capable)

Tool options: `Read`, `Write`, `Edit`, `Grep`, `Glob`, `Bash`, and LSP tools.
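Before committing an agent file, a small script can lint the frontmatter fields above. This is an illustrative sketch, not part of Claude Code; it uses a simplified `key: value` parser, so real frontmatter with richer YAML may need an actual YAML library:

```python
import re

# Tools documented above; anything else is flagged
KNOWN_TOOLS = {"Read", "Write", "Edit", "Grep", "Glob", "Bash", "LSP"}

def validate_agent(text: str):
    """Lint an agent file's frontmatter. Returns a list of problems
    (empty when the file looks valid)."""
    match = re.match(r"---\n(.*?)\n---", text, re.S)
    if not match:
        return ["missing YAML frontmatter block"]
    fields = {}
    for line in match.group(1).splitlines():
        key, sep, value = line.partition(":")
        if sep:
            fields[key.strip()] = value.strip()
    errors = []
    if not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", fields.get("name", "")):
        errors.append("name must be kebab-case")
    if not fields.get("description"):
        errors.append("description is required")
    # An omitted tools field is fine: the agent inherits all tools
    unknown = set(re.findall(r"\w+", fields.get("tools", ""))) - KNOWN_TOOLS
    if unknown:
        errors.append(f"unknown tools: {sorted(unknown)}")
    return errors
```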
### Step 4: System Prompt Writing

Best practices:

1. Define the role clearly: "You are a [role] with expertise in [domains]"
2. Include a checklist: concrete steps for evaluation/execution
3. Provide examples: show input-output pairs
4. Specify the format: define exactly how results should be presented
5. Set boundaries: state what the agent should NOT do

Example prompt structure:

```markdown
You are a [ROLE] specializing in [DOMAIN].

## Responsibilities

- Task 1
- Task 2

## Process

1. Step 1: ...
2. Step 2: ...

## Checklist

- Item 1
- Item 2

## Output Format

[Template]

## Constraints

- Constraint 1
- Constraint 2
```
### Step 5: Creating Common Agent Types

#### Agent 1: Code Reviewer

`.claude/agents/code-reviewer.md`:

```markdown
---
name: code-reviewer
description: Review code changes for quality, security, and best practices.
tools: [Read, Grep, Glob, LSP]
model: inherit
---

# Code Reviewer

Review code changes focusing on:

- Security: authentication, authorization, injection risks
- Quality: clean code principles, maintainability
- Performance: time/space complexity, bottlenecks
- Tests: coverage, edge cases

## Priority Levels

1. Critical: security vulnerabilities, data corruption
2. High: performance issues, logic errors
3. Medium: code smells, missing tests
4. Low: style, minor improvements
```
#### Agent 2: Debugger

`.claude/agents/debugger.md`:

```markdown
---
name: debugger
description: Analyze errors and implement fixes. Use when encountering bugs or failures.
tools: [Read, Write, Edit, Bash, Grep, Glob, LSP]
model: inherit
---

# Debugger

Systematically debug issues:

1. Understand the problem: read error messages and logs
2. Reproduce: try to recreate the issue
3. Analyze: identify the root cause
4. Fix: implement a minimal fix
5. Verify: confirm the fix works
6. Test: check for regressions

## Debugging Strategy

- Use `grep` to search for error-related code
- Check recent changes with `git log`
- Add logging if needed to trace execution
- Fix one issue at a time
- Verify the fix doesn't break existing functionality
```
#### Agent 3: Test Writer

`.claude/agents/test-writer.md`:

```markdown
---
name: test-writer
description: Write comprehensive unit and integration tests for new code.
tools: [Read, Write, Edit, Grep, Glob]
model: inherit
---

# Test Writer

Write tests following these principles:

- AAA pattern: Arrange, Act, Assert
- Descriptive names: test names explain what they verify
- One assertion per test: clear failure reasons
- Test the happy path: main functionality
- Test edge cases: boundary conditions, nulls, errors
- Mock external dependencies: isolate the code under test

## Test Coverage Goals

- Unit tests: 80%+ coverage
- Integration tests: critical user flows
- E2E tests: key user journeys

## Framework-Specific Guidelines

- Jest: use `describe`, `test`, `expect`, `beforeEach`, `afterEach`
- Pytest: use `def test_*`, `assert`, `@pytest.fixture`
- Go testing: use `TestXxx`, `t.Run`, `assert.Equal`
```
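To illustrate the AAA pattern and descriptive naming this prompt asks for, here is a minimal pytest-style example; `apply_discount` is a hypothetical stand-in for the code under test:

```python
def apply_discount(price: float, rate: float) -> float:
    """Hypothetical function under test."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

def test_apply_discount_reduces_price_by_rate():
    # Arrange: set up the inputs
    price, rate = 100.0, 0.2
    # Act: call the code under test
    result = apply_discount(price, rate)
    # Assert: one clear expectation per test
    assert result == 80.0
```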
#### Agent 4: Performance Analyzer

`.claude/agents/performance-analyzer.md`:

```markdown
---
name: performance-analyzer
description: Analyze code performance and identify optimization opportunities.
tools: [Read, Grep, Glob, LSP]
model: sonnet
---

# Performance Analyzer

Focus on:

- Time complexity: algorithm efficiency
- Space complexity: memory usage
- Database queries: N+1 queries, missing indexes
- I/O operations: file system, network calls
- Caching: missed caching opportunities

## Analysis Steps

1. Profile code to find hotspots
2. Review algorithm choices
3. Check database query patterns
4. Look for redundant computations
5. Identify parallelization opportunities

## Report Format

### Performance Issues Found

#### Critical (High Impact)

- Impact: [description]
  Suggestion: [optimization]

#### Moderate (Medium Impact)

...

#### Low (Minor)

...
```
#### Agent 5: Documentation Writer

`.claude/agents/doc-writer.md`:

```markdown
---
name: doc-writer
description: Write clear, comprehensive documentation for code, APIs, and features.
tools: [Read, Write, Edit, Grep, Glob]
model: inherit
---

# Documentation Writer

Write documentation that is:

- Clear: simple language, no unexplained jargon
- Complete: covers all use cases
- Accurate: kept in sync with the code
- Actionable: includes examples
- Searchable: uses consistent terminology

## Documentation Types

- README: project overview, setup, usage
- API docs: endpoints, parameters, examples
- Code comments: why, not what
- Changelog: version history, breaking changes

## Writing Guidelines

1. Start with user goals
2. Provide examples for each feature
3. Link related documentation
4. Update docs when code changes
5. Use active voice and present tense
```
### Step 6: CLI Configuration

Create agents via the CLI for automation.

Single agent:

```bash
claude --agents '{
  "code-reviewer": {
    "description": "Review code changes for quality and security",
    "tools": ["Read", "Grep", "Glob", "LSP"],
    "model": "inherit"
  }
}'
```

Multiple agents:

```bash
claude --agents '{
  "code-reviewer": {"description": "...", "tools": ["Read"]},
  "debugger": {"description": "...", "tools": ["Read", "Write", "Edit"]},
  "test-writer": {"description": "...", "tools": ["Read", "Write"]}
}'
```
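Hand-quoting a JSON blob inside single quotes is error-prone; one option is to build it with a short script and pass the serialized result to the flag. A sketch (the agent entries here are illustrative placeholders):

```python
import json

# Illustrative agent definitions; adjust descriptions/tools to your project
agents = {
    "code-reviewer": {
        "description": "Review code changes for quality and security",
        "tools": ["Read", "Grep", "Glob", "LSP"],
        "model": "inherit",
    },
    "debugger": {
        "description": "Analyze errors and implement fixes",
        "tools": ["Read", "Write", "Edit", "Bash"],
    },
}

# json.dumps guarantees syntactically valid JSON for the --agents flag
payload = json.dumps(agents)
print(payload)
```

A shell wrapper could then pass the output along, e.g. `claude --agents "$(python gen_agents.py)"` (the script name is hypothetical).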
Add hooks for automation:

```bash
# PostToolUse hook: automatically invoke the debugger agent on errors
claude --hooks '{ "PostToolUse": { "onError": "debugger" } }'
```
### Step 7: Using Subagents

#### Explicit Invocation

Directly call an agent by name:

- "Use code-reviewer to review the recent authentication changes."
- "Invoke the debugger agent to fix the failing test."

#### Automatic Delegation

Claude delegates automatically based on agent descriptions:

- "Refactor the authentication logic for better security." → Claude delegates to code-reviewer (security expertise)
- "Fix the database connection timeout error." → Claude delegates to debugger (error fixing)

#### Agent Chaining

Chain multiple agents for complex tasks:

- "Use performance-analyzer to identify bottlenecks, then debugger to fix them."
- "Let code-reviewer check the changes, then doc-writer update the documentation."

#### Resume Previous Context

Resume a previous agent session:

- "Resume the code-reviewer session with agentId abc123 to continue where we left off."
### Step 8: Version Control

Share agents via Git.

Commit agents:

```bash
cd /path/to/project
git add .claude/agents/
git commit -m "feat: add code-reviewer and debugger agents"
git push
```

Clone a project with agents:

```bash
git clone https://github.com/myorg/project.git
```

Agents in `.claude/agents/` are then automatically available.
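A fresh checkout can be inspected for the agents it ships. A minimal sketch (the `list_agents` helper is hypothetical, not a Claude Code command):

```python
from pathlib import Path

def list_agents(project_root: str):
    """Return the names of project-level agents (one .md file per agent)."""
    agents_dir = Path(project_root) / ".claude" / "agents"
    if not agents_dir.is_dir():
        return []
    return sorted(path.stem for path in agents_dir.glob("*.md"))
```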
## Examples

### Example 1: Complete Agent Creation Workflow

```bash
# 1. Create the project-level agents directory
mkdir -p .claude/agents

# 2. Create the code-reviewer agent
cat > .claude/agents/code-reviewer.md << 'EOF'
---
name: code-reviewer
description: Review code changes for quality, security, and best practices.
tools: [Read, Grep, Glob, LSP]
model: inherit
---

# Code Reviewer

Review code focusing on security, quality, performance, and tests.

## Priority

1. Critical: security vulnerabilities, data corruption
2. High: performance issues, logic errors
3. Medium: code smells, missing tests
4. Low: style improvements
EOF

# 3. Commit to Git
git add .claude/agents/code-reviewer.md
git commit -m "feat: add code-reviewer subagent"
git push
```
### Example 2: Automated Code Review Workflow

Scenario: after completing a feature, automatically review the code.

Setup:

```bash
# Create a post-commit hook
cat > .git/hooks/post-commit << 'EOF'
#!/bin/bash
claude --agents '{
  "code-reviewer": {
    "description": "Review HEAD commit for quality and security",
    "tools": ["Read", "Grep", "Glob", "LSP"],
    "model": "inherit"
  }
}' << INPUT
Review the changes in the most recent commit.
INPUT
EOF
chmod +x .git/hooks/post-commit
```

Usage:

```bash
git commit -m "feat: add user authentication"
# The post-commit hook automatically invokes code-reviewer
```
### Example 3: Multi-Agent Pipeline

Scenario: code → review → test → document.

In Claude Code:

> "Here's the new payment processing code I just wrote.
> Use the following agents in sequence:
> 1. code-reviewer - check for security issues and quality
> 2. test-writer - write unit tests for the payment flow
> 3. doc-writer - update the API documentation
> Return a summary of all findings."
### Example 4: Specialized Domain Agent

`.claude/agents/database-expert.md`:

```markdown
---
name: database-expert
description: Design and optimize database schemas, queries, and migrations.
tools: [Read, Write, Edit, Grep, Glob, Bash]
model: sonnet
---

# Database Expert

Expertise in:

- SQL design (PostgreSQL, MySQL, SQLite)
- NoSQL design (MongoDB, Redis, DynamoDB)
- Query optimization and indexing
- Migration strategies
- Data modeling and normalization

## Review Checklist

- Is the schema normalized? (3NF for relational DBs)
- Are there appropriate indexes?
- Is query performance acceptable?
- Are foreign keys/constraints defined?
- Is the migration reversible?
- Is a backup strategy documented?

## Optimization Tips

- Use `EXPLAIN ANALYZE` to inspect query plans
- Create composite indexes for multi-column WHERE clauses
- Avoid `SELECT *` in production code
- Use connection pooling
- Implement read replicas for read-heavy workloads
```
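The N+1 pattern such an agent should flag can be shown with the stdlib `sqlite3` module. Both approaches below compute the same per-user totals, but the second uses one aggregated query instead of one query per user (the schema and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# N+1 pattern to flag: one extra query per user row
totals_n_plus_1 = {
    name: conn.execute(
        "SELECT COALESCE(SUM(total), 0) FROM orders WHERE user_id = ?", (uid,)
    ).fetchone()[0]
    for uid, name in conn.execute("SELECT id, name FROM users")
}

# Preferred: a single aggregated query (one round trip, index-friendly)
totals_joined = dict(conn.execute("""
    SELECT u.name, COALESCE(SUM(o.total), 0)
    FROM users u LEFT JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
"""))
```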
## Design Principles

1. Single responsibility: each agent has one focused role
2. Minimal permissions: grant only the tools actually needed
3. Detailed prompts: include checklists, steps, and examples
4. Model selection: choose the model based on task complexity
5. Version control: track agent configurations in Git
6. Reusability: design agents to work across projects
7. Clear descriptions: help Claude understand when to delegate
## Common Patterns

| Pattern | Use Case | Example |
|---|---|---|
| Code → Review → Fix | Iterative improvement | Make changes, review, implement fixes |
| Analyze → Optimize | Performance | Profile, identify issues, optimize |
| Design → Implement → Test | Feature development | Design, write code, test |
| Document → Deploy | Release | Update docs, deploy |
## Best Practices

1. Start with tools: define what the agent actually needs, not what you think it might need
2. Inherit the model: use `model: inherit` unless you need specific behavior
3. Test prompts: verify agents work as expected before relying on them
4. Iterate: improve agent prompts based on real usage
5. Document agent usage: add README notes on when and how to use each agent
6. Use built-in agents: Explore, Plan, and General-purpose cover many common cases

## Common Pitfalls

1. Too broad a scope: agents should focus on one domain, not do everything
2. Missing constraints: specify what agents should NOT do
3. Over-permissive tools: grant only the minimal tools needed for the task
4. Unclear descriptions: make descriptions specific so Claude knows when to delegate
5. No examples: show input/output patterns in prompts
## References

- Claude Code Documentation
- Agent Configuration Guide
- YouTube: Subagents Tutorial