kata-debug

Systematically debug issues: investigate bugs, troubleshoot problems, and track down errors with persistent state across context resets. Triggers include "debug", "investigate bug", "troubleshoot", "find the problem", "why isn't this working", and "debug session".


Install skill "kata-debug" with this command: npx skills add gannonh/kata-skills/gannonh-kata-skills-kata-debug

<objective> Debug issues using scientific method with subagent isolation.

Orchestrator role: Gather symptoms, spawn kata-debugger agent, handle checkpoints, spawn continuations.

Why subagent: Investigation burns context fast (reading files, forming hypotheses, testing). Fresh 200k context per investigation. Main context stays lean for user interaction. </objective>

<context> User's issue: $ARGUMENTS

Check for active sessions:

find .planning/debug -maxdepth 1 -name "*.md" 2>/dev/null | grep -v resolved | head -5
</context>

<process>

0. Resolve Model Profile

Read model profile for agent spawning:

MODEL_PROFILE=$(node scripts/kata-lib.cjs read-config "model_profile" "balanced")

Default to "balanced" if not set.

Model lookup table:

| Agent | quality | balanced | budget |
|---|---|---|---|
| kata-debugger | opus | sonnet | sonnet |

Store resolved model for use in Task calls below.
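The lookup above can be sketched as a simple `case` statement (variable name `DEBUGGER_MODEL` is illustrative; profile and model names come from the table):

```shell
# Map the resolved profile to a concrete model for kata-debugger.
case "$MODEL_PROFILE" in
  quality)  DEBUGGER_MODEL="opus" ;;
  balanced) DEBUGGER_MODEL="sonnet" ;;
  budget)   DEBUGGER_MODEL="sonnet" ;;
  *)        DEBUGGER_MODEL="sonnet" ;;  # unknown profile: treat as balanced
esac
```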

1. Check Active Sessions

If active sessions exist AND no $ARGUMENTS:

  • List sessions with status, hypothesis, next action
  • User picks number to resume OR describes new issue

If $ARGUMENTS provided OR user describes new issue:

  • Continue to symptom gathering

2. Gather Symptoms (if new issue)

Use AskUserQuestion for each:

  1. Expected behavior - What should happen?
  2. Actual behavior - What happens instead?
  3. Error messages - Any errors? (paste or describe)
  4. Timeline - When did this start? Ever worked?
  5. Reproduction - How do you trigger it?

After all gathered, confirm ready to investigate.
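Once the five answers are gathered, they are interpolated into the `<symptoms>` block of the agent prompt. A sketch of that assembly (the variable names and sample values are illustrative, not prescribed by the skill):

```shell
# Illustrative answers gathered via AskUserQuestion.
expected="login succeeds" actual="500 error" errors="TypeError in auth.js"
reproduction="POST /login with valid creds" timeline="started after v2.3 deploy"

# Fill the <symptoms> block used in the debugger prompt.
symptoms=$(cat <<EOF
<symptoms>
expected: $expected
actual: $actual
errors: $errors
reproduction: $reproduction
timeline: $timeline
</symptoms>
EOF
)
```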

3. Read Instruction Files

Before spawning agents, read agent instructions using the Read tool:

  • references/debugger-instructions.md (relative to skill base directory) — store as debugger_instructions_content

4. Spawn Debugger Agent

Fill prompt and spawn:

<objective>
Investigate issue: {slug}

**Summary:** {trigger}
</objective>

<symptoms>
expected: {expected}
actual: {actual}
errors: {errors}
reproduction: {reproduction}
timeline: {timeline}
</symptoms>

<mode>
symptoms_prefilled: true
goal: find_and_fix
</mode>

<debug_file>
Create: .planning/debug/{slug}.md
</debug_file>
Task(
  prompt="<agent-instructions>\n{debugger_instructions_content}\n</agent-instructions>\n\n" + filled_prompt,
  subagent_type="general-purpose",
  model="{debugger_model}",
  description="Debug {slug}"
)
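The `{slug}` used for the debug file is a filesystem-safe identifier derived from the issue summary. The exact convention is not specified by the skill; one plausible sketch:

```shell
# Derive a slug from the trigger summary: lowercase, non-alphanumerics to
# hyphens, squeezed, with leading/trailing hyphens trimmed.
trigger="Login page 500s after deploy"
slug=$(echo "$trigger" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-//; s/-$//')
mkdir -p .planning/debug
echo "$slug"
```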

5. Handle Agent Return

If ## ROOT CAUSE FOUND:

  • Display root cause and evidence summary
  • Offer options:
    • "Fix now" - spawn fix subagent
    • "Plan fix" - suggest /kata-plan-phase --gaps
    • "Manual fix" - done

If ## CHECKPOINT REACHED:

  • Present checkpoint details to user
  • Get user response
  • Spawn continuation agent (see step 6)

If ## INVESTIGATION INCONCLUSIVE:

  • Show what was checked and eliminated
  • Offer options:
    • "Continue investigating" - spawn new agent with additional context
    • "Manual investigation" - done
    • "Add more context" - gather more symptoms, spawn again

6. Spawn Continuation Agent (After Checkpoint)

When user responds to checkpoint, spawn fresh agent:

<objective>
Continue debugging {slug}. Evidence is in the debug file.
</objective>

<prior_state>
Debug file: @.planning/debug/{slug}.md
</prior_state>

<checkpoint_response>
**Type:** {checkpoint_type}
**Response:** {user_response}
</checkpoint_response>

<mode>
goal: find_and_fix
</mode>
Task(
  prompt="<agent-instructions>\n{debugger_instructions_content}\n</agent-instructions>\n\n" + continuation_prompt,
  subagent_type="general-purpose",
  model="{debugger_model}",
  description="Continue debug {slug}"
)
</process>

<success_criteria>

  • Active sessions checked
  • Symptoms gathered (if new)
  • kata-debugger spawned with context
  • Checkpoints handled correctly
  • Root cause confirmed before fixing </success_criteria>

