
Clarify

Resolve ambiguity before planning so implementation starts from explicit decisions, not assumptions.

Quick Start

```shell
cwf:clarify <requirement>          # Research-first (default)
cwf:clarify <requirement> --light  # Direct Q&A, no sub-agents
```

Mode Selection

Before entering Default or --light mode, assess clarify depth based on input specificity:

  • If next-session.md exists in the active session directory or current workspace, read it as optional prior-session context.

  • Always parse the user's task description (required input).

  • Check if the combined input provides: target file paths, expected changes per file, and BDD-style success criteria.

  • Apply this heuristic:

| Input specificity | Clarify depth | Rationale |
| --- | --- | --- |
| All 3 present (files + changes + criteria) | AskUserQuestion only — ask 2-3 binary/choice questions for remaining ambiguities, skip Phases 2-2.5 | Prior session retro effectively served as clarify |
| 1-2 present | --light mode — iterative Q&A without sub-agents | Partial clarity, direct questions suffice |
| None present (vague requirement) | Default mode — full research + expert analysis | Scope is open, exploration needed |

This heuristic can be overridden by explicit --light flag or user instruction.
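The three-row heuristic above can be sketched as a small shell helper. The function name and the depth labels here are illustrative only, not part of the skill's interface:

```shell
# Map the number of specificity signals present (target files, expected
# changes, success criteria) to a clarify depth. Labels are illustrative.
select_depth() {
  case "$1" in
    3)   echo "ask-only" ;;   # AskUserQuestion only, skip Phases 2-2.5
    1|2) echo "light"    ;;   # --light mode: direct Q&A, no sub-agents
    0)   echo "default"  ;;   # full research + expert analysis
  esac
}
```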

Default Mode

Phase 0: Update Live State

Use the live-state helper for scalar fields, then edit list fields in the resolved live-state file:

```shell
bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh set . \
  phase="clarify" \
  task="{requirement summary}"
live_state_file=$(bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh resolve)
```

Set live.key_files in {live_state_file} to files relevant to the requirement.
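For orientation, the resolved live-state file after Phase 0 might look roughly like this. The exact schema is owned by cwf-live-state.sh; the field layout and file paths below are assumptions for illustration:

```yaml
# Hypothetical live-state shape after Phase 0 (illustrative values)
live:
  phase: clarify
  task: "add OAuth login flow"
  key_files:
    - src/auth/login.ts
    - src/auth/session.ts
```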

Phase 0.5: Resolve Session Directory

Resolve the effective live-state file, then read live.dir before any {session_dir} placeholder use:

```shell
live_state_file=$(bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh resolve)
session_dir=$(bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh get . dir)
```

session_dir: "{live.dir value from resolved live-state file}"

Phase 1: Capture & Decompose

  • Record the original requirement verbatim

  • Decompose into concrete decision points — specific questions that need answers before implementation can begin

  • Frame as questions, not categories ("Which auth library?" not "Authentication")

  • Focus on decisions that affect implementation

  • Present the decision points to the user before proceeding

Original Requirement

"{user's original request verbatim}"

Decision Points

  1. {specific question}
  2. {specific question} ...

Phase 2: Research

Context recovery (before launching)

Apply the context recovery protocol to these files:

  • {session_dir}/clarify-codebase-research.md

  • {session_dir}/clarify-web-research.md

Launch sub-agents simultaneously using the Task tool (only for missing or invalid results).

  • Shared output persistence contract: agent-patterns.md § Sub-agent Output Persistence Contract.

Sub-agent A: Codebase Researcher

```yaml
Task tool:
  subagent_type: Explore
  max_turns: 20
  prompt: |
    Read {SKILL_DIR}/references/research-guide.md and follow Section 1:
    Codebase Research exactly. Report evidence only — do not make decisions.

    Decision points:
    {list from Phase 1}

    Write your complete findings to: {session_dir}/clarify-codebase-research.md
    The file MUST exist when you finish. End your output file with the exact
    line `<!-- AGENT_COMPLETE -->` as the last line.
```

Sub-agent B: Web Researcher

```yaml
Task tool:
  subagent_type: general-purpose
  max_turns: 20
  prompt: |
    Read {SKILL_DIR}/references/research-guide.md and follow Section 2:
    Web / Best Practice Research exactly. Also follow the "Web Research
    Protocol" section in {CWF_PLUGIN_DIR}/references/agent-patterns.md for
    tool order and fallback behavior. Report findings only — do not make
    decisions.

    Decision points:
    {list from Phase 1}

    Write your complete findings to: {session_dir}/clarify-web-research.md
    The file MUST exist when you finish. End your output file with the exact
    line `<!-- AGENT_COMPLETE -->` as the last line.
```

Both sub-agents run in parallel. Wait for both to complete. Read the output files from session dir (not the in-memory Task return values).
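The completion contract above can be checked mechanically before reusing or re-launching agents. A minimal sketch, assuming the sentinel is always the final line of a valid output file (the helper name is hypothetical):

```shell
# Return success only if the research output file exists and its last
# line is the AGENT_COMPLETE sentinel required by the persistence contract.
is_complete() {
  [ -f "$1" ] && [ "$(tail -n 1 "$1")" = "<!-- AGENT_COMPLETE -->" ]
}
```

Re-launch only the sub-agents whose files fail this check, consistent with "only for missing or invalid results" above.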

Phase 2.5: Expert Analysis

Context recovery (before launching)

Apply the context recovery protocol to these files:

  • {session_dir}/clarify-expert-alpha.md

  • {session_dir}/clarify-expert-beta.md

Launch two domain expert sub-agents simultaneously using the Task tool (only for missing or invalid results).

Expert selection:

  • Read expert_roster from cwf-state.yaml

  • Analyze decision points for domain keywords; match against each roster entry's domain field

  • Select 2 experts with contrasting frameworks — different analytical lenses on the same problem

  • If roster has < 2 domain matches, fill remaining slots via independent selection (prioritize well-known figures with contrasting methodological approaches)
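As an illustration of the domain-keyword matching step, here is a hedged sketch. It assumes a flattened roster of tab-separated name/domain pairs, which is not the real cwf-state.yaml format:

```shell
# Print roster experts whose domain keyword appears (case-insensitively)
# in the decision-point text. Input formats here are illustrative only.
match_experts() {
  local points_file=$1 roster_file=$2   # roster lines: "name<TAB>domain"
  while IFS=$'\t' read -r name domain; do
    if grep -qi -- "$domain" "$points_file"; then
      echo "$name"
    fi
  done < "$roster_file"
}
```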

Expert α

```yaml
Task tool:
  subagent_type: general-purpose
  max_turns: 12
  prompt: |
    Read {CWF_PLUGIN_DIR}/references/expert-advisor-guide.md. You are
    Expert α, operating in clarify mode.

    Your identity: {selected expert name}
    Your framework: {expert's domain from roster or independent selection}

    Decision points:
    {list from Phase 1}

    Research findings summary:
    {summarized outputs from Phase 2 codebase + web research}

    Analyze which decisions are most critical through your published framework.
    Use web search to verify your expert identity and cite published work.
    Output your analysis in the clarify mode format from the guide.

    Write your complete findings to: {session_dir}/clarify-expert-alpha.md
    The file MUST exist when you finish. End your output file with the exact
    line `<!-- AGENT_COMPLETE -->` as the last line.
```

Expert β

```yaml
Task tool:
  subagent_type: general-purpose
  max_turns: 12
  prompt: |
    Read {CWF_PLUGIN_DIR}/references/expert-advisor-guide.md. You are
    Expert β, operating in clarify mode.

    Your identity: {selected expert name — contrasting framework from Expert α}
    Your framework: {expert's domain from roster or independent selection}

    Decision points:
    {list from Phase 1}

    Research findings summary:
    {summarized outputs from Phase 2 codebase + web research}

    Analyze which decisions are most critical through your published framework.
    Use web search to verify your expert identity and cite published work.
    Output your analysis in the clarify mode format from the guide.

    Write your complete findings to: {session_dir}/clarify-expert-beta.md
    The file MUST exist when you finish. End your output file with the exact
    line `<!-- AGENT_COMPLETE -->` as the last line.
```

Both expert sub-agents run in parallel. Wait for both to complete. Read the output files from session dir (not the in-memory Task return values).

--light mode: Phase 2.5 is skipped (consistent with --light skipping all sub-agents).

Phase 3: Classify & Decide

Read {SKILL_DIR}/references/aggregation-guide.md for full classification rules.

For each decision point, classify using three evidence sources: codebase research (Phase 2), web research (Phase 2), and expert analysis (Phase 2.5).

  • T1 (Codebase-resolved) — codebase has clear evidence → decide autonomously, cite files

  • T2 (Best-practice-resolved) — best practice consensus → decide autonomously, cite sources

  • T3 (Requires human) — evidence conflicts, all silent, or subjective → queue

Constructive tension: When sources conflict, classify as T3. Expert analysis provides additional signal but does not override direct codebase or best-practice evidence.
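The tier rules and the constructive-tension override can be sketched as a single decision function. The evidence labels ("clear", "silent") and the agreement flag are assumptions for illustration, not the aggregation guide's actual vocabulary:

```shell
# Classify one decision point. Disagreement between clear sources forces
# T3; otherwise codebase evidence (T1) wins over best practice (T2).
classify() {
  local codebase=$1 web=$2 sources_agree=$3   # clear|silent, clear|silent, yes|no
  if [ "$codebase" = "clear" ] && [ "$web" = "clear" ] && [ "$sources_agree" = "no" ]; then
    echo "T3"   # constructive tension: conflicting evidence goes to a human
  elif [ "$codebase" = "clear" ]; then
    echo "T1"
  elif [ "$web" = "clear" ]; then
    echo "T2"
  else
    echo "T3"   # all silent: requires human
  fi
}
```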

Present the classification:

Agent Decisions (T1 & T2)

| # | Decision Point | Tier | Decision | Evidence |
| --- | --- | --- | --- | --- |
| 1 | ... | T1 | ... | file paths |
| 2 | ... | T2 | ... | sources |

Requires Human Decision (T3)

| # | Decision Point | Reason |
| --- | --- | --- |
| 3 | ... | conflict / no evidence / subjective |

If zero T3 items: Skip Phases 3.5 and 4 entirely. Go to Phase 5.

Phase 3.5: Advisory (T3 only)

Context recovery (before launching)

Apply the context recovery protocol to these files:

  • {session_dir}/clarify-advisor-alpha.md

  • {session_dir}/clarify-advisor-beta.md

Launch two advisory sub-agents simultaneously (only for missing or invalid results):

Advisor α

```yaml
Task tool:
  subagent_type: general-purpose
  model: haiku
  max_turns: 12
  prompt: |
    Read {SKILL_DIR}/references/advisory-guide.md. You are Advisor α.

    Tier 3 decision points:
    {list of T3 items}

    Research context:
    {codebase findings for these items}
    {web research findings for these items}

    Argue for the first perspective per the guide's side-assignment rules.

    Write your complete findings to: {session_dir}/clarify-advisor-alpha.md
    The file MUST exist when you finish. End your output file with the exact
    line `<!-- AGENT_COMPLETE -->` as the last line.
```

Advisor β

```yaml
Task tool:
  subagent_type: general-purpose
  model: haiku
  max_turns: 12
  prompt: |
    Read {SKILL_DIR}/references/advisory-guide.md. You are Advisor β.

    Tier 3 decision points:
    {list of T3 items}

    Research context:
    {codebase findings for these items}
    {web research findings for these items}

    Argue for the opposing perspective per the guide's side-assignment rules.

    Write your complete findings to: {session_dir}/clarify-advisor-beta.md
    The file MUST exist when you finish. End your output file with the exact
    line `<!-- AGENT_COMPLETE -->` as the last line.
```

Both advisors run in parallel. Wait for both to complete. Read the output files from session dir (not the in-memory Task return values).

Phase 4: Persistent Questioning (T3 only)

Read {SKILL_DIR}/references/questioning-guide.md for full methodology.

For each T3 item, use AskUserQuestion with:

  • Research context: What codebase and web research found

  • Advisor α's position: Their argument (brief)

  • Advisor β's position: Their argument (brief)

  • The question: With 2-4 concrete options from advisory perspectives

After each answer:

  • Why-dig 2-3 times on surface-level answers (see questioning-guide.md)

  • Detect tensions between this answer and previous answers

  • Check for new ambiguities revealed by the answer → classify → repeat if T3

Phase 5: Output

Requirement Clarification Summary

Before (Original)

"{original request verbatim}"

After (Clarified)

  • Goal: {precise description}

  • Scope: {what is included and excluded}

  • Constraints: {limitations and requirements}

Expert Analysis

(Only in default mode, omitted in --light)

| Expert | Framework | Key Insight |
| --- | --- | --- |
| {Expert α name} | {framework} | {1-line summary of analysis} |
| {Expert β name} | {framework} | {1-line summary of analysis} |

All Decisions

| # | Decision Point | Decision | Decided By | Evidence |
| --- | --- | --- | --- | --- |
| 1 | ... | ... | Agent (T1) | file paths |
| 2 | ... | ... | Agent (T2) | sources |
| 3 | ... | ... | Human | advisory context |

Save this clarified requirement to a project-appropriate location with a descriptive filename (required).

Completion tracking (after writing the required summary file):

  • Run: `bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh set . clarify_completed_at="{ISO 8601 UTC timestamp}"`

  • Run: `bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh set . clarify_result_file="{saved summary file path}"`

This state is what cwf:impl Phase 1.0 checks as a pre-condition.
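The ISO 8601 UTC timestamp that clarify_completed_at expects can be produced with `date`. A minimal sketch; the trailing comment shows the intended use, with the result-file path left as a placeholder:

```shell
# Produce an ISO 8601 UTC timestamp, e.g. 2024-05-01T12:00:00Z.
ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)
echo "$ts"
# Then record the completion field, e.g.:
#   bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh set . clarify_completed_at="$ts"
```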

Follow-up suggestions (when CWF plugin is loaded):

  • cwf:review --mode clarify — Multi-perspective review of the clarified requirement before implementation

  • cwf:handoff --phase — Generate a phase handoff document that captures HOW context (protocols, rules, must-read references, constraints) for the implementation phase. Recommended when context will be cleared before implementation, as plan.md carries WHAT but not HOW.

Present both suggestions. If context is getting large or the user is about to clear context, emphasize the phase handoff suggestion.

--light Mode

Fast, direct clarification without sub-agents. The original clarify behavior with added persistence.

Phase 1: Capture

  • Record the requirement verbatim

  • Identify ambiguities using the categories in questioning-guide.md

Phase 2: Iterative Q&A

Read {SKILL_DIR}/references/questioning-guide.md for methodology.

Loop using AskUserQuestion:

```
while ambiguities remain:
    pick the most critical ambiguity
    ask with 2-4 concrete options
    why-dig on surface-level answers (2-3 levels)
    detect tensions with prior answers
    check for new ambiguities
```

Phase 3: Output

Requirement Clarification Summary

Before (Original)

"{original request verbatim}"

After (Clarified)

  • Goal: {precise description}

  • Reason: {the ultimate purpose or jobs-to-be-done}

  • Scope: {what is included and excluded}

  • Constraints: {limitations, requirements, preferences}

  • Success Criteria: {how to verify correctness}

Decisions Made

| Question | Decision |
| --- | --- |
| ... | ... |

Save the summary to a project-appropriate location with a descriptive filename (required).

Completion tracking (after writing the required summary file):

  • Run: `bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh set . clarify_completed_at="{ISO 8601 UTC timestamp}"`

  • Run: `bash {CWF_PLUGIN_DIR}/scripts/cwf-live-state.sh set . clarify_result_file="{saved summary file path}"`

This state is what cwf:impl Phase 1.0 checks as a pre-condition.

Expert Roster Update (when experts were used in Phase 2.5): Follow the Roster Maintenance procedure in {CWF_PLUGIN_DIR}/references/expert-advisor-guide.md.

Follow-up (when CWF plugin is loaded): Suggest cwf:handoff --phase if the user plans to clear context before implementation.

Rules

  • Research first, ask later (default mode): Exhaust research before asking

  • Cite evidence: Every autonomous decision must include specific evidence

  • Respect the tiers: Do not ask about T1/T2; do not auto-decide T3

  • Constructive tension: Conflicts between sources are signals, not problems

  • Persistent but not annoying: Why-dig on vague answers, accept clear ones

  • Preserve intent: Refine the requirement, don't redirect it

  • Grounded experts: Best practice research must cite real published work

  • Honest advisors: Advisory opinions argue in good faith, not strawman

  • Expert evidence is supplementary: Expert analysis enriches classification but does not override direct codebase or best-practice evidence

References

  • references/research-guide.md — Default sub-agent research methodology

  • references/aggregation-guide.md — Tier classification rules

  • references/advisory-guide.md — Advisor α/β methodology

  • references/questioning-guide.md — Persistent questioning

  • expert-advisor-guide.md — Expert sub-agent identity, grounding, and analysis format
