agents-analyze

Analyze the plugin collection to identify where sub-agents would improve workflows by isolating verbose output, enforcing constraints, or specializing behavior.


/agents:analyze


Context

  • Plugin directories: !find . -maxdepth 1 -type d -name '*-plugin'

  • Existing agents: !find agents-plugin/agents -maxdepth 1 -name '*.md'

  • Skills: !find . -path '*/skills/*/SKILL.md'

  • Skills (user-invocable): !find . -path '*/skills/*/SKILL.md' -not -path './agents-plugin/*'

Parameters

  • $1 : Optional --focus <plugin-name> to analyze a single plugin in depth

Your Task

Perform a systematic analysis of the plugin collection to identify sub-agent opportunities.

Step 1: Inventory Current State

Scan the repository to build an inventory:

  • List all plugins with their skill/command counts

  • Read existing agents in agents-plugin/agents/ to understand current coverage

  • If --focus is provided, restrict analysis to that plugin only
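
The inventory step can be sketched as a pure function over the file paths returned by the Context commands. The plugin names and directory layout below are illustrative only, not the actual repository contents.

```python
# Build the Step 1 inventory from a flat list of repository file paths.
# The plugin names and layout below are illustrative; in practice the
# paths would come from the find commands listed under Context.
from collections import defaultdict

def inventory(paths):
    counts = defaultdict(lambda: {"skills": 0, "commands": 0})
    for p in paths:
        parts = p.lstrip("./").split("/")
        plugin = parts[0]
        if not plugin.endswith("-plugin"):
            continue  # ignore files outside plugin directories
        if "skills" in parts and parts[-1] == "SKILL.md":
            counts[plugin]["skills"] += 1
        elif "commands" in parts and p.endswith(".md"):
            counts[plugin]["commands"] += 1
    return dict(counts)

inv = inventory([
    "./docker-plugin/skills/build/SKILL.md",
    "./docker-plugin/commands/analyze.md",
    "./terraform-plugin/skills/plan/SKILL.md",
])
```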

Step 2: Identify Sub-Agent Opportunities

For each plugin (or focused plugin), evaluate skills and commands against these criteria:

Context Isolation (Primary Value)

Operations that produce verbose output benefiting from isolation:

| Indicator | Examples |
|---|---|
| Build tools | docker build, cargo build, webpack, tsc |
| Infrastructure ops | terraform plan/apply, kubectl describe |
| Test runners | Full test suite output, coverage reports |
| Profiling tools | Flame graphs, benchmark results |
| Security scanners | Vulnerability reports, audit output |
| Log analysis | Application logs, system logs |
| Package managers | Dependency trees, audit results |

Constraint Enforcement

Operations that should be limited to specific tools:

| Constraint | Rationale |
|---|---|
| Read-only analysis | Security audit, code review; no writes needed |
| No network | Pure code analysis tasks |
| Limited bash | Tasks that shouldn't execute arbitrary commands |

Model Selection Opportunities

| Assign opus when... | Assign haiku when... |
|---|---|
| Complex reasoning required | Structured/mechanical task |
| Security analysis | Status checks |
| Architecture decisions | Output formatting |
| Debugging methodology | Configuration generation |
| Performance analysis | File operations |
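
As a toy illustration of the routing logic behind this table: open-ended reasoning keywords go to opus, everything else defaults to haiku. The keyword list is illustrative, not a real routing policy.

```python
# Toy keyword heuristic mirroring the opus/haiku table. The keyword set
# is illustrative only; a real analysis would weigh task complexity.
REASONING_KEYWORDS = {"security", "architecture", "debugging", "performance", "reasoning"}

def pick_model(task_description):
    words = set(task_description.lower().split())
    return "opus" if words & REASONING_KEYWORDS else "haiku"
```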

Step 3: Gap Analysis

Compare identified opportunities against existing agents:

  • Missing agents: Skills that have no corresponding agent

  • Model mismatches: Agents using wrong model for their task complexity

  • Tool over-permissions: Agents with tools they don't need

  • Consolidation opportunities: Multiple agents that could be merged

  • Delegation mapping: Check if /delegate references agents that don't exist
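
The gap analysis above reduces to set arithmetic over three name sets: skills, agents, and agents referenced by /delegate. The skill and agent names in this sketch are illustrative.

```python
# Toy version of the Step 3 gap analysis: set arithmetic over skill names,
# agent names, and agents referenced by /delegate. All names are illustrative.
def gap_analysis(skills, agents, delegate_refs):
    return {
        "missing_agents": sorted(skills - agents),        # skills with no agent
        "dangling_refs": sorted(delegate_refs - agents),  # /delegate -> nonexistent agent
        "unused_agents": sorted(agents - skills - delegate_refs),
    }

report = gap_analysis(
    skills={"docker-build", "terraform-plan", "log-analysis"},
    agents={"docker-build", "security-audit"},
    delegate_refs={"docker-build", "perf-profiler"},
)
```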

Step 4: Produce Recommendations

For each recommended new agent, specify:

Proposed: <agent-name>

  • Model: opus | haiku
  • Covers plugins: <list>
  • Context value: <what verbose output it isolates>
  • Tools: <minimal set>
  • Constraint: <read-only, no-network, etc.>
  • Priority: HIGH | MEDIUM | LOW
  • Rationale: <why this is better than inline execution>

For model/tool corrections to existing agents:

Fix: <agent-name>

  • Current model: X → Recommended: Y
  • Reason: <why the change improves things>

Step 5: Implementation Check

If new agents are recommended, check:

  • Agent name doesn't conflict with existing

  • Agent fills a gap referenced by /delegate command

  • Model selection follows haiku-for-mechanical, opus-for-reasoning

  • Tool set is minimal (principle of least privilege)

  • Agent has clear "does / does NOT do" boundaries
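
The checklist above can be sketched as a single validation pass over a proposal. The field names and the tool-count threshold are illustrative assumptions, not the real agent schema.

```python
# Sketch of the Step 5 checks for one proposed agent. Field names and the
# tool-count threshold are illustrative, not the real agent schema.
def check_proposal(proposal, existing_names, delegate_refs):
    issues = []
    if proposal["name"] in existing_names:
        issues.append("name conflicts with an existing agent")
    if proposal["name"] not in delegate_refs:
        issues.append("fills no gap referenced by /delegate")
    if proposal["model"] not in {"opus", "haiku"}:
        issues.append("model must be opus or haiku")
    if len(proposal["tools"]) > 4:  # arbitrary least-privilege threshold
        issues.append("tool set looks broader than necessary")
    return issues
```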

Output Format

Sub-Agent Analysis Report

Scope: [All plugins | focused plugin name]
Date: [today]
Plugins analyzed: N
Existing agents: N
Skills without agent coverage: N

Current Coverage Map

| Domain | Agent | Skills Covered | Gaps |
|---|---|---|---|
| ... | ... | ... | ... |

Recommended New Agents

[Proposals from Step 4]

Recommended Fixes

[Model/tool corrections from Step 4]

Delegation Mapping Updates

[Any updates needed for /delegate command's agent reference table]

Priority Summary

| Priority | Count | Top Recommendation |
|---|---|---|
| HIGH | N | ... |
| MEDIUM | N | ... |
| LOW | N | ... |

Post-Actions

After presenting the analysis:

  • Ask the user if they want to implement any of the recommendations

  • If yes, create the agent files following the existing patterns in agents-plugin/agents/

  • Update agents-plugin/README.md with new agents

  • Update /delegate command's agent reference table if needed
