Beads From Plan
Convert markdown implementation plans into structured beads tasks with full coverage guarantees.
| Mode | Triggers | Action |
|---|---|---|
| DECOMPOSE | "create tasks from plan", "plan to beads", "break down plan" | Analyze markdown -> JSON task plan -> create beads |
| VERIFY | "check plan coverage", "verify tasks" | Validate existing plan JSON against source markdown |
Purpose: Ensure every section of a plan becomes a trackable, dependency-ordered beads task with quality gates.
Execute autonomously. Never skip sections.
Script Path
The bd-from-plan script is at scripts/bd-from-plan relative to this skill's base directory.
Use the base directory provided at skill activation to construct the full path:
# The base directory is shown as "Base directory for this skill: <path>" when the skill loads.
# Create a plan directory with mktemp
PLAN_DIR=$(mktemp -d /tmp/task-plan-XXXXXXXX)
# Write _plan.json and epic-*.json files into PLAN_DIR (see steps below)
<base_directory>/scripts/bd-from-plan "$PLAN_DIR"
The Process
Overview
Markdown Plan (2000+ lines)
|
v
AI Analysis (per-epic, parallelizable)
- Parse all headings (##, ###, ####)
- Identify epics (top-level sections)
- Identify tasks (sub-sections)
- Map dependencies between tasks
- Verify 100% section coverage
|
v
Plan Directory (mktemp -d)
plan-dir/
_plan.json Global: prefix, workflow, coverage
epic-auth.json Epic + tasks (full details)
epic-payment.json Epic + tasks (full details)
|
v
bd-from-plan script
- Merges _plan.json + epic-*.json files
- Validates structure and coverage
- Rejects if unmapped sections exist
- Detects circular dependencies
- Topological sort by dependencies
- Creates epics and tasks in order
- Wires dependencies via bd dep add
- Reports summary
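The validate-sort-create pipeline can be sketched in Python. Below is a minimal Kahn's-algorithm ordering that doubles as cycle detection; it illustrates the technique only, and the `topo_order` helper name is hypothetical, not the script's actual code.

```python
from collections import deque

def topo_order(tasks):
    """Order task IDs so every task comes after its dependencies.

    `tasks` maps task ID -> list of dependency IDs. Raises ValueError
    when a cycle exists (hypothetical helper; the real script's
    internals may differ).
    """
    indegree = {t: 0 for t in tasks}
    dependents = {t: [] for t in tasks}
    for task, deps in tasks.items():
        for dep in deps:
            indegree[task] += 1
            dependents[dep].append(task)
    # Seed with all tasks that have no dependencies, sorted for determinism.
    ready = deque(sorted(t for t, d in indegree.items() if d == 0))
    order = []
    while ready:
        current = ready.popleft()
        order.append(current)
        for nxt in dependents[current]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(tasks):
        # Some tasks never reached indegree 0: a cycle is present.
        raise ValueError("circular dependency detected")
    return order

print(topo_order({"model": [], "service": ["model"], "tests": ["service"]}))
# → ['model', 'service', 'tests']
```

Seeding the queue in sorted order keeps the output deterministic when several tasks are simultaneously ready.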
Critical Rules
100% Coverage Guarantee (STRICT)
Every content section in the markdown MUST map to at least one task.
This is the most important rule. A plan section that doesn't become a task will be forgotten.
| Section Type | Action |
|---|---|
| Implementation section | Map to a task |
| Overview/Introduction | Mark as context_only in coverage |
| Table of Contents | Mark as context_only |
| References/Links | Mark as context_only |
| Everything else | MUST become a task |
The script rejects plans with unmapped sections. Fix coverage before proceeding.
Dependency Accuracy (STRICT)
Dependencies must reflect real implementation order, not document order.
- A task that uses a model depends on the task that creates it
- A task that writes tests depends on the task that creates the code
- A task that configures something depends on the task that installs it
- Document order (section 1 before section 2) is NOT a dependency
Detecting Dependencies
Ask for each task:
- "What must exist before I can start this?"
- "What would break if I did this first?"
If the answer to both is "nothing" -> no dependencies.
Dependency Format
The script uses smart resolution — both formats work transparently:
| Dependency Type | Format | Example |
|---|---|---|
| Same-epic | Just task ID | "depends_on": ["create-model"] |
| Cross-epic | epicId-taskId | "depends_on": ["model-create-model"] |
Resolution: tries exact match against all epicId-taskId first, then falls back to same-epic.
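The two-step resolution described above can be sketched as follows (the `resolve_dep` helper name is hypothetical; the script's actual implementation may differ):

```python
def resolve_dep(dep, current_epic, all_task_ids):
    """Resolve a depends_on entry to a fully qualified epicId-taskId.

    Tries an exact match against every known epicId-taskId first,
    then falls back to prefixing the current epic (illustrative
    sketch; the script's actual code may differ).
    """
    if dep in all_task_ids:
        return dep  # cross-epic: already fully qualified
    candidate = f"{current_epic}-{dep}"  # same-epic fallback
    if candidate in all_task_ids:
        return candidate
    raise KeyError(f"unresolved dependency: {dep}")

all_ids = {"auth-user-model", "auth-login-flow", "model-create-model"}
print(resolve_dep("user-model", "auth", all_ids))          # → auth-user-model
print(resolve_dep("model-create-model", "auth", all_ids))  # → model-create-model
```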
Circular Dependencies
The script detects and rejects circular dependencies. If you find a cycle:
- Break it by splitting one task into two
- The setup part has no dependency; the integration part depends on the other task
Atomic Task Decomposition (STRICT)
Each task MUST be completable by an AI agent in a single execution AND expressible as one commit.
This is the second most important rule (after 100% coverage). These tasks are designed for AI agent execution (including parallel agents), not human sessions. Over-broad tasks cause:
- Context rot — accuracy drops 20-50% as agent context grows from 10K→100K tokens (Chroma research)
- Success cliff — SWE-bench: <15 min tasks = 70%+ success, 1+ hour = 23% success
- Poor commits — impossible to create atomic commits from broad tasks
- Tracking failure — "50% done" tasks are invisible in beads
- Parallelism blocked — coarse tasks can't be distributed across parallel agents
Rule 1: Single Commit Test
If you can't describe the task's output in ONE commit message, split it.
| Task Title | Commit Message | Result |
|---|---|---|
| "Create User model" | feat(User): create model with migration | PASS |
| "Create config, migration, model, service" | Can't fit in one message | FAIL — split into 4 |
Rule 2: One File Rule
Each new file creation = separate task.
If a task creates 3 new files, it should be 3 tasks.
| Files Created | Tasks | Why |
|---|---|---|
| Model.php | 1 task | Single file |
| Model.php + ModelTest.php | 1 task | Code + its test = one concern |
| Model.php + Migration.php + Factory.php | 3 tasks | Different concerns |
| Service.php + ServiceInterface.php | 1 task | Compile-time dependency |
Exception: A source file + its direct test file = one task (they share one concern).
Rule 3: Maximum 15 Minutes
Implementation tasks MUST NOT exceed 15 minutes.
Tasks are executed by AI agents, not humans. There is no minimum — a 1-minute task is perfectly valid. The goal is maximum atomicity for agent success and parallelization.
| Estimate | Action |
|---|---|
| 1–15 min | Ideal agent task — high success rate, parallelizable |
| 16–30 min | MUST split — agent accuracy degrades significantly |
| > 30 min | MUST split aggressively — this is multiple tasks disguised as one |
Why 15 minutes? Data-driven: METR shows Claude 50% success at ~50 min with non-linear degradation. SWE-bench shows <15 min tasks achieve 70%+ success. Setting the max at 15 minutes keeps each task well within the high-success zone.
Rule 4: Verb-Object Test
A good task title has ONE verb and ONE object.
| Title | Analysis | Result |
|---|---|---|
| "Create MachineStateLock model" | create + model | PASS |
| "Add config and create migration" | add + config, create + migration | FAIL — 2 tasks |
| "Implement service with exception handling" | implement + service (exception is part of it) | PASS |
Red flag words: "and", "+", commas separating nouns. These usually indicate multiple concerns jammed into one task.
Rule 5: Count the Files
If a task implies creating or modifying >2 files, it's too broad.
Count the files mentioned or implied in the description. Source + test = 1 logical file.
Rule 6: Acceptance Criteria Count
If acceptance criteria lists >3 distinct checkpoints, the task combines multiple concerns.
| Acceptance | Criteria | Result |
|---|---|---|
| "Model exists. Migration runs." | 2 | PASS |
| "Manager acquires. Handle releases. Stale healed. Migration publishable." | 4 | FAIL — split |
Rule 7: Noun Count in Title
Count the distinct nouns (objects being created/modified) in the title. More than 2 = split.
| Title | Nouns | Result |
|---|---|---|
| "Create MachineLockManager service" | 1 (MachineLockManager) | PASS |
| "Lock infrastructure: config, migration, model, service, exception" | 5 | FAIL — 5 tasks |
Recursive Decomposition Algorithm
After initial task identification, the agent MUST run this loop:
FOR each task:
1. Single Commit Test → "Can I write ONE commit message for this?"
2. Verb-Object Test → "Does the title have ONE verb + ONE object?"
3. Noun Count → "How many distinct things am I creating?"
4. File Count → "How many files will this create/modify?"
5. Time Check → "Is this ≤ 15 minutes?"
6. Acceptance Count → "Are there ≤ 3 acceptance criteria?"
IF any check fails:
→ Split the task along the failing dimension
→ Re-run ALL checks on each sub-task
REPEAT until every task passes every check.
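The mechanically checkable rules can be expressed as a single pass over each task. This is a heuristic sketch: the single-commit and verb-object tests still need judgment, and the `files` field is an illustrative stand-in for counting the files a description implies.

```python
def failing_checks(task):
    """Return the names of atomicity checks a task fails.

    Only mechanically checkable rules are covered (time, acceptance
    count, file count, red-flag words in the title); field names
    mirror the JSON schema but are illustrative.
    """
    failures = []
    if task.get("estimate_minutes", 0) > 15:
        failures.append("time")
    # Count sentence-like acceptance criteria separated by periods.
    criteria = [c for c in task.get("acceptance", "").split(".") if c.strip()]
    if len(criteria) > 3:
        failures.append("acceptance_count")
    if len(task.get("files", [])) > 2:
        failures.append("file_count")
    title = task.get("title", "")
    if " and " in title or "," in title or "+" in title:
        failures.append("red_flag_words")
    return failures

broad = {"title": "Add config and create migration", "estimate_minutes": 40,
         "acceptance": "A. B. C. D.", "files": ["a.php", "b.php", "c.php"]}
print(failing_checks(broad))
# → ['time', 'acceptance_count', 'file_count', 'red_flag_words']
```

A task that fails any check is split along the failing dimension and re-checked, exactly as the loop above prescribes.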
Decomposition Example
BEFORE (1 broad task, 120 min):
{
"id": "lock",
"title": "Lock infrastructure: config, migration, model, service, exception",
"estimate_minutes": 120,
"acceptance": "MachineLockManager acquires/blocks/times out. MachineLockHandle releases/extends. Stale locks self-healed. Migration publishable."
}
Failures: Single Commit ❌, Verb-Object ❌, Noun Count ❌ (5), File Count ❌ (5+), Time ❌ (120m), Acceptance ❌ (4+)
AFTER (6 atomic tasks):
[
{"id": "config", "title": "Add parallel_dispatch config section", "estimate_minutes": 5},
{"id": "migration", "title": "Create machine_locks migration", "estimate_minutes": 5},
{"id": "model", "title": "Create MachineStateLock Eloquent model", "estimate_minutes": 10},
{"id": "lock-manager", "title": "Create MachineLockManager service", "estimate_minutes": 15},
{"id": "lock-handle", "title": "Create MachineLockHandle value object", "estimate_minutes": 10},
{"id": "lock-ex", "title": "Create LockTimeoutException class", "estimate_minutes": 5}
]
Each task: one commit, one verb, one file, ≤ 15 min. Parallelizable where dependencies allow.
Expected Task Counts
Use this as calibration — if your task count falls significantly below these ranges, you're under-decomposing. With a 15-minute max, expect more tasks than traditional approaches:
| Plan Size | Expected Tasks |
|---|---|
| 100 lines | 12–25 tasks |
| 500 lines | 40–70 tasks |
| 1000 lines | 70–120 tasks |
| 2000 lines | 120–200 tasks |
Quality Gates
The quality gate is a single executable command that combines all quality checks for the project. The agent discovers available commands from the project (composer.json, package.json, Makefile, CI config) and combines them with &&.
Examples:
# PHP/Laravel project
composer lint && composer test && composer larastan
# Node.js project
npm run lint && npm run test && npm run typecheck
# Python project
ruff check . && pytest && mypy .
# Documentation-only tasks (no gate)
(leave quality_gate empty)
The agent MUST verify the quality gate command runs successfully before including it in the plan.
Commit Strategy
Each task's commit_strategy determines HOW the agent commits after the quality gate passes.
| Strategy | Agent Action |
|---|---|
| agentic-commits | Invoke the /agentic-commits skill — it splits changes into atomic one-file-per-commit hunks with structured messages |
| conventional | git add changed files + git commit with type(scope): message format |
| manual | Do NOT commit — leave changes staged for user to handle |
Default: agentic-commits for all code tasks. The workflow-level default applies unless a task overrides it.
MODE 1: DECOMPOSE
Step 0: Ask User Preferences (MANDATORY)
Before reading the plan, ask the user two questions. Do NOT skip this step.
Question 1: Quality Gate Command
Ask: "What quality check commands should run after each task?"
Discovery approach: First, try to discover existing quality commands from the project:
- Check `composer.json` scripts (e.g., `lint`, `test`, `larastan`, `infection`)
- Check `package.json` scripts (e.g., `lint`, `test`, `typecheck`)
- Check `Makefile` targets
- Check CI config (`.github/workflows/`, `.gitlab-ci.yml`)
Present discovered commands to the user, or ask them to specify:
I found these quality commands in your project:
- composer lint
- composer test
- composer larastan
Should I combine all of these as the quality gate, or do you want to customize?
The quality gate is a single executable command — combine multiple checks with &&:
composer lint && composer test && composer larastan
Step 0.5: Verify Quality Gate Command (MANDATORY)
Before generating the JSON plan, RUN the quality gate command to verify it works:
# Run the combined command
composer lint && composer test && composer larastan
If the command fails:
- Ask the user to fix the issue or adjust the command
- Do NOT proceed with JSON generation until the command succeeds
- This prevents writing a broken command into every task
Question 2: Commit Strategy
Ask: "How should completed tasks be committed?"
Present options:
Commit Strategy options:
[1] agentic-commits — atomic, one-file-per-commit, structured format (recommended)
[2] conventional — conventional commit messages (feat:, fix:, etc.)
[3] manual — no auto-commit, handle manually
Store in JSON
Record the user's choices in the workflow field of the JSON plan:
{
"workflow": {
"quality_gate": "composer lint && composer test && composer larastan",
"commit_strategy": "agentic-commits",
"checklist_note": "- [ ] Run quality gate: composer lint && composer test && composer larastan\n- [ ] Commit IMMEDIATELY after gate passes (do NOT batch with other tasks)\n- [ ] Commit using agentic-commits"
}
}
The checklist_note is a human-readable summary of the workflow. The script appends it to every task's description as a checklist.
Individual tasks can override the workflow defaults via their own quality_gate and commit_strategy fields. If not overridden, the workflow defaults apply.
Step 1: Read the Plan
Delegate plan reading to keep the main agent's context clean.
For large plans (500+ lines), use this approach:
1. Extract headings first — get the structural skeleton without reading content: `grep -n '^#' plan.md`
2. Delegate full reading to a subagent — spawn a single Agent (`subagent_type: "general-purpose"`) with a clear prompt:
   - Read the full plan file
   - Extract epics, tasks, dependencies, and coverage mapping
   - Return a structured summary (not the raw content)
3. For smaller plans (<500 lines) — reading directly is fine, but prefer the Read tool over `cat`.
Why? Large plans (2000+ lines) consume 30-50K tokens of context. Delegating to a subagent keeps the main context free for JSON generation and validation. Chunked parallel reading was tested and rejected — cross-chunk dependency loss outweighs the speed gain.
Step 2: Extract Structure
Parse all headings and build a section tree:
# Title -> context_only
## 1. Authentication -> epic: auth
### 1.1 User Model -> task: auth-user-model
### 1.2 Login Flow -> task: auth-login-flow
#### 1.2.1 JWT Tokens -> task: auth-jwt-tokens
### 1.3 Password Reset -> task: auth-password-reset
## 2. Authorization -> epic: authz
### 2.1 Role System -> task: authz-roles
...
Rules for section-to-task mapping:
- `#` (h1) = Plan title -> context_only
- `##` (h2) = Epic candidates
- `###` (h3) = Task candidates
- `####` (h4) = Sub-task candidates (merge into parent task or create separate task)
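A sketch of this heading-to-role mapping (illustrative only; the real analysis also reads section content, not just heading levels, and the `section_tree` name is hypothetical):

```python
import re

def section_tree(markdown):
    """Map h1-h4 markdown headings to their decomposition role.

    Returns (level, heading, role) tuples following the rules above.
    """
    roles = {1: "context_only", 2: "epic", 3: "task", 4: "sub-task"}
    out = []
    for line in markdown.splitlines():
        m = re.match(r"^(#{1,4})\s+(.*)", line)
        if m:
            level = len(m.group(1))
            out.append((level, m.group(2), roles[level]))
    return out

plan = "# Title\n## 1. Authentication\n### 1.1 User Model\n#### 1.2.1 JWT Tokens\n"
for level, heading, role in section_tree(plan):
    print(level, heading, "->", role)
```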
Step 3: Identify Dependencies
For each task, scan the plan for:
- "requires X", "depends on X", "after X"
- References to entities created in other tasks
- Logical ordering (create before use, define before implement)
Build a dependency list per task.
Step 4: Build Coverage Map
Create a table mapping EVERY heading to a task or context_only:
| Section | Mapped To | Status |
|---------|-----------|--------|
| # Plan Title | - | context_only |
| ## Overview | - | context_only |
| ## 1. Auth | epic:auth | mapped |
| ### 1.1 User Model | task:auth-user-model | mapped |
| ### 1.2 Login Flow | task:auth-login-flow | mapped |
| ## Appendix | - | context_only |
If ANY section is unmapped and not context_only -> STOP and fix.
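The coverage check itself reduces to set arithmetic. A sketch, using the `source_sections` field from the epic-file schema (the `unmapped_sections` function name is hypothetical):

```python
def unmapped_sections(headings, tasks, context_only):
    """Return headings not covered by any task and not context_only.

    `tasks` is a list of dicts each carrying a `source_sections`
    list, matching the epic-file schema (illustrative sketch).
    """
    mapped = {s for t in tasks for s in t.get("source_sections", [])}
    return [h for h in headings if h not in mapped and h not in context_only]

headings = ["# Plan Title", "## 1. Auth", "### 1.1 User Model", "## Appendix"]
tasks = [{"source_sections": ["## 1. Auth", "### 1.1 User Model"]}]
print(unmapped_sections(headings, tasks, {"# Plan Title"}))
# → ['## Appendix']
```

A non-empty result here is exactly the STOP condition: add tasks for the leftovers or mark them context_only.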
Step 5: Generate Plan Directory
Write the plan as a directory with separate files. This keeps each file small and recoverable.
PLAN_DIR=$(mktemp -d /tmp/task-plan-XXXXXXXX)
Step 5a: Write _plan.json (global metadata)
cat > "$PLAN_DIR/_plan.json" << 'EOF'
{
"version": 1,
"source": "docs/plans/feature-x.md",
"prefix": "feat",
"workflow": {
"quality_gate": "composer lint && composer test && composer type",
"commit_strategy": "agentic-commits",
"checklist_note": "- [ ] Run quality gate: composer lint && composer test && composer type\n- [ ] Commit IMMEDIATELY after gate passes (do NOT batch with other tasks)\n- [ ] Commit using agentic-commits"
},
"coverage": {
"total_sections": 12,
"mapped_sections": 10,
"unmapped": [],
"context_only": ["# Feature X Plan", "## Overview"]
}
}
EOF
Step 5b: Write one epic-{id}.json per epic
Write each epic as a separate file. Each file is small (~1-3K tokens), minimizing AI output errors.
cat > "$PLAN_DIR/epic-auth.json" << 'EOF'
{
"id": "auth",
"title": "Authentication System",
"description": "Implement user authentication with JWT tokens and password reset",
"priority": 1,
"labels": ["auth", "security"],
"source_sections": ["## 1. Authentication"],
"tasks": [
{
"id": "user-model",
"title": "Create User model and migration",
"description": "Define User model with email, password_hash, timestamps. Create migration with proper indexes.",
"type": "feature",
"priority": 1,
"estimate_minutes": 10,
"labels": ["model"],
"depends_on": [],
"source_sections": ["### 1.1 User Model"],
"source_lines": "15-42",
"acceptance": "User model exists with migration. Factory and seeder work. PHPStan passes.",
"commit_strategy": "agentic-commits"
},
{
"id": "login-flow",
"title": "Implement login endpoint with JWT",
"description": "POST /api/login accepts email+password, returns JWT.",
"type": "feature",
"priority": 1,
"estimate_minutes": 15,
"depends_on": ["user-model"],
"source_sections": ["### 1.2 Login Flow", "#### 1.2.1 JWT Tokens"],
"source_lines": "43-98",
"acceptance": "Login endpoint returns valid JWT. Invalid credentials return 401.",
"commit_strategy": "agentic-commits"
}
]
}
EOF
File naming convention: epic-{id}.json where {id} matches the epic's id field. Files are read in alphabetical order.
Step 6: Execute Plan
<base_directory>/scripts/bd-from-plan "$PLAN_DIR"
The script will:
- Validate the JSON
- Check coverage (fail if unmapped sections)
- Detect circular dependencies (fail if cycles)
- Create epics in order
- Create tasks in topological order
- Wire up dependencies
- Print summary with `bd ready` output
Step 7: Verify
bd ready --pretty # See what's ready to work on
bd graph # Visualize dependency graph
bd epic status # Check epic completion status
MODE 2: VERIFY
Validate an existing plan JSON against its source markdown.
Step 1: Load Plan Directory
# Read the global metadata
cat "$PLAN_DIR/_plan.json" | jq .
# Read the source markdown
SOURCE=$(cat "$PLAN_DIR/_plan.json" | jq -r '.source')
# List all epic files
ls "$PLAN_DIR"/epic-*.json
Step 2: Extract Markdown Headings
grep -n '^#' "$SOURCE" | head -50
Step 3: Cross-Reference
For each heading in the markdown:
- Check if it appears in any task's `source_sections`
- Check if it appears in `coverage.context_only`
- If neither -> report as unmapped
Step 4: Report
Coverage Report:
Total sections: 12
Mapped to tasks: 10
Context only: 2
Unmapped: 0
Status: PASS
ID Naming Convention
IDs follow a hierarchical pattern:
prefix-epicId-taskId
| Component | Format | Example |
|---|---|---|
| prefix | lowercase alpha | feat, auth, fix |
| epicId | kebab-case | auth, data-layer, ui |
| taskId | kebab-case | user-model, login-flow |
| Full ID | prefix-epic-task | feat-auth-user-model |
The script combines these automatically:
- Epic ID: `{prefix}-{epicId}` -> `feat-auth`
- Task ID: `{prefix}-{epicId}-{taskId}` -> `feat-auth-user-model`
Keep IDs short but descriptive. Avoid abbreviations that aren't obvious.
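The composition is plain string joining; a sketch with a hypothetical helper name:

```python
def full_ids(prefix, epic_id, task_ids):
    """Compose beads IDs from the hierarchical pattern above (sketch)."""
    epic = f"{prefix}-{epic_id}"
    return epic, [f"{epic}-{t}" for t in task_ids]

print(full_ids("feat", "auth", ["user-model", "login-flow"]))
# → ('feat-auth', ['feat-auth-user-model', 'feat-auth-login-flow'])
```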
Dry Run
Always do a dry run first for large plans:
<base_directory>/scripts/bd-from-plan --dry-run "$PLAN_DIR"
This validates everything and shows what WOULD be created without actually creating anything.
Error Recovery
| Error | Action |
|---|---|
| Unmapped sections | Add missing tasks or mark as context_only |
| Circular dependency | Split the cycle-causing task |
| Duplicate IDs | Rename conflicting task IDs |
| bd create fails | Check bd is initialized (bd info), check prefix |
| Partial creation | Script tracks created IDs, re-run skips existing |
bd CLI Reference
CRITICAL: NEVER use bd edit — it opens $EDITOR which blocks the agent. Use bd update with flags instead.
Priority Format
Priorities are integers 0–4, never strings. Using "high" or "medium" will error.
| Value | Meaning |
|---|---|
| 0 | Critical |
| 1 | High |
| 2 | Medium (default) |
| 3 | Low |
| 4 | Backlog |
Task Lifecycle
After tasks are created, the agent works through them using this cycle:
1. FIND → bd ready --pretty # What can I work on?
2. READ → bd show <id> # Understand the task
3. CLAIM → bd update <id> --claim # Atomic claim (fails if taken)
4. WORK → implement the task
5. GATE → run quality gate command # Must pass before commit
6. COMMIT → commit using commit_strategy # IMMEDIATELY after gate passes
7. CLOSE → bd close <id> --reason="..." # Mark complete
8. NEXT → bd ready --pretty # What's next?
Commit After Every Task (STRICT)
Each task MUST be committed IMMEDIATELY after its quality gate passes. Do NOT batch commits.
| Pattern | Result |
|---|---|
| Task 1 done → commit → Task 2 done → commit → Task 3 done → commit | CORRECT |
| Task 1 done → Task 2 done → Task 3 done → commit all | WRONG |
Why? Batching commits defeats the purpose of atomic tasks:
- Impossible to revert a single task
- `bd close` with no matching commit breaks traceability
bd closewith no matching commit breaks traceability- Parallel agents can't see each other's progress
- Context loss mid-session loses all uncommitted work
The commit strategy (from workflow.commit_strategy or task-level override) determines the format.
For agentic-commits: use the /agentic-commits skill to split changes into atomic, one-file-per-commit hunks.
Finding Work
bd ready --pretty # Tasks with all deps satisfied (no blockers)
bd list --status=open # All open tasks
bd blocked # Tasks waiting on dependencies
bd search "query" # Full-text search across all tasks
Claiming and Working
bd update <id> --claim # Atomic claim — fails if already claimed
bd update <id> --status=in_progress # Manual status change
bd update <id> --notes="progress update" # Add notes during work
Completing
bd close <id> --reason="Implemented with tests. All passing."
bd close <id1> <id2> <id3> # Batch close (more efficient)
Issue Management
bd create "Title" --type=task --priority=2
bd create "Title" --type=bug --parent=<epic-id>
bd update <id> --title="New title" # NEVER use bd edit
bd update <id> --add-label=foo
bd update <id> --defer="+2d" # Hide from ready until date
bd rename <old-id> <new-id> # Change issue ID
Dependencies
bd dep add <issue> <depends-on> # issue depends on depends-on
bd dep tree <id> # Text dependency tree
bd graph <id> --compact # Visual dependency graph
Epics and Hierarchy
bd epic status # Epic completion percentages
bd children <id> # List epic's children
Session End Protocol
Before ending a session:
- `bd close` all completed tasks
- Check `bd ready --pretty` — report what's next
- `bd sync --from-main` if on an ephemeral branch
Issue Types
task | bug | feature | epic | chore
Issue Statuses
| Status | Meaning |
|---|---|
| open | Not started |
| in_progress | Being worked on |
| blocked | Waiting on dependency |
| deferred | Hidden until defer date |
| closed | Completed |
Quick Reference
| Command | Purpose |
|---|---|
| bd-from-plan plan-dir/ | Create tasks from plan directory |
| bd-from-plan --dry-run plan-dir/ | Preview without creating |
| bd ready --pretty | Show next available tasks |
| bd show <id> | Task details |
| bd update <id> --claim | Claim a task before working |
| bd close <id> --reason="..." | Complete a task |
| bd graph | Dependency visualization |
| bd dep tree <id> | Show task dependency tree |
| bd epic status | Epic completion overview |
| bd blocked | Tasks waiting on dependencies |
| bd search "query" | Full-text search |