Code Forge — Plan
Generate an implementation plan from a feature document or a requirement prompt.
Iron Law
ALL OUTPUT GOES INTO {output_dir}/{feature-name}/ AS SEPARATE FILES — overview.md, plan.md, tasks/*.md, state.json.
Anti-Rationalization Table
| Thought | Reality |
|---|---|
| "I'll put everything in one plan.md for simplicity" | Multi-file structure is how impl/status/review find individual tasks. One file breaks all downstream skills. |
| "docs/plan is close enough" | Output dir is {output_dir} (default: planning/). docs/plan, docs/plans are ALL wrong. |
| "I'll create the tasks inline in plan.md" | Tasks go in tasks/{name}.md as separate files. Step 8 sub-agent creates them. |
| "Numeric prefixes help with ordering" | Execution order is in overview.md and state.json. Files are setup.md, not 01-setup.md. |
| "I can skip state.json" | state.json drives impl, status, fixbug. Without it, no downstream skill works. |
| "The overview files are optional" | Both project-level and feature-level overview.md are mandatory outputs. |
| "The input looks like a path but has no @, I'll treat it as a prompt" | Run the Step 2.0 path-like input guard. Paths without @ are almost always user mistakes — ask before proceeding. |
| "I'll add FE-01- prefixes to feature directories for clarity" | Feature directory names must match the source filename exactly in kebab-case. core-dispatcher, not FE-01-core-dispatcher. |
| "I'll generate all features as flat files in one directory" | Each feature gets its own subdirectory with the full multi-file structure. Flat files break all downstream skills. |
When to Use
- Have a feature document that needs to be broken into development tasks
- Have a requirement idea (text prompt) that needs planning
- Need a structured plan with TDD task breakdown
- Use `--tmp` to avoid adding plan files to the project (writes to `.code-forge/tmp/`, auto-gitignored)
Workflow
Input (Document or Prompt) → Analysis → Planning → Task Breakdown → Status Tracking
Context Management
Steps 4, 7, and 8 are offloaded to sub-agents via the Agent tool to prevent context window exhaustion on large projects. The main context retains only concise summaries returned by each sub-agent, while full document analysis, file generation, and code implementation happen in isolated sub-agent contexts that are discarded after completion.
Actual execution order: Steps 0 through 13, in sequential order.
Step 9 (overview.md) executes after Steps 7 and 8 because it references task files generated by those steps.
Detailed Steps
@../shared/configuration.md
Plan-specific additions to Step 0:
- 0.2 additional defaults: `reference_docs.sources=[]`, `reference_docs.exclude=[]`
- 0.3 additional validation:
  - `reference_docs.sources` must be an array of strings (fall back to `[]` on error)
  - `reference_docs.sources` entries must NOT contain `..` (security risk)
  - `reference_docs.sources` entries must NOT point to system directories (`node_modules/`, `.git/`, `build/`)
  - `reference_docs.exclude` must be an array of strings (fall back to `[]` on error)
- 0.4 additional display: Resolved file creation path: `{output_dir}/{feature_name}/`
- 0.4 error handling:
- Config file not found → note "using defaults" and continue
- Config file parse error → show error, fall back to defaults, continue
- Invalid config values → show warnings, fall back to defaults for invalid fields, continue
- 0.6 path resolution notes: `base_dir` empty string means project root; `input_dir` default: `docs/features/`; `output_dir` default: `planning/`
Step 1: Resolve and Summarize Reference Docs
This step only runs when reference_docs.sources is non-empty in the merged configuration.
If reference_docs.sources is empty or not configured, skip directly to Step 2.
1.1 Resolve Glob Patterns
- Resolve each pattern in `config.reference_docs.sources` against `project_root`
- Apply `config.reference_docs.exclude` patterns to filter results
- Auto-exclude `{output_dir}/**` to prevent circular references
- Deduplicate results (same file matched by multiple patterns)
- If 0 files matched → display: `Reference docs: 0 files matched for configured patterns. Continuing without reference context.` → skip to Step 2
- If > 30 files matched → display file list, use `AskUserQuestion`: "Found {N} reference docs. This will spawn {N} parallel sub-agents."
  - "Proceed with all {N} files"
  - "Let me refine the patterns" → show current `sources`/`exclude` config, stop and let user update `.code-forge.json`
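As a rough illustration, the resolution steps above could be sketched like this (the `resolve_reference_docs` helper and its signature are hypothetical, not part of any shipped API):

```python
from pathlib import Path
from fnmatch import fnmatch

def resolve_reference_docs(project_root, sources, exclude, output_dir):
    """Resolve glob patterns against project_root, apply excludes, deduplicate."""
    root = Path(project_root)
    # Auto-exclude the output directory to prevent circular references.
    exclude = list(exclude) + [f"{output_dir.rstrip('/')}/**"]
    matched = []
    for pattern in sources:
        for path in root.glob(pattern):
            rel = path.relative_to(root).as_posix()
            if path.is_file() and not any(fnmatch(rel, ex) for ex in exclude):
                matched.append(rel)
    # Deduplicate (same file matched by multiple patterns), stable order.
    return sorted(set(matched))
```

Auto-excluding `{output_dir}/**` keeps previously generated plans from feeding back into new ones as "reference docs".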
1.2 Display Matched Files
Display the matched file list:
Reference docs: {count} files matched
{path_1}
{path_2}
...
Proceed directly — no confirmation needed (unless > 30 files triggered 1.1 step 6).
1.3 Parallel Sub-agent Summarization
Spawn N parallel sub-agents via Agent tool, one per matched file:
- subagent_type: "general-purpose"
- description: "Summarize reference doc: (unknown)"
Each sub-agent prompt:
- The file path (sub-agent reads it from disk)
- Instruction to return ONLY a structured summary in this exact format:
DOC_PATH: {file_path}
DOC_TYPE: <architecture | api | requirements | conventions | data-model | other>
SUMMARY: <2-3 sentence summary of what this document describes>
KEY_DECISIONS: <bulleted list of important technical decisions, constraints, or patterns>
RELEVANCE_TAGS: <comma-separated keywords for matching against feature docs>
Target summary size: ~300-500 bytes per doc.
Error handling: If a sub-agent fails to summarize a file, log a warning and skip that file:
Warning: Failed to summarize {path} — skipping
Reference docs: {success_count} of {total_count} files summarized successfully
1.4 Store Reference Summaries
Collect all successful sub-agent results into a reference_summaries list (ordered by file path). Store in memory for use by Steps 4, 7, and 8.
1.5 Deduplicate Against Input Doc
After the input document path is known (after Step 3), remove it from reference_summaries if present — the feature doc is already read directly by Steps 4 and 7. This deduplication happens lazily: the summaries are stored now, deduplication is applied when injecting into sub-agent prompts.
Step 2: Prompt Mode — Delegate to spec-forge:feature
This step only runs when the input is NOT a file path (does NOT start with @).
If the input starts with @, skip directly to Step 3.
2.0 Path-Like Input Guard
Before treating input as a prompt, check if it looks like a file/directory path. If the input matches ANY of these patterns, it is almost certainly a path the user forgot to prefix with @:
- Contains `/` (e.g., `../apcore-cli`, `docs/features/auth.md`)
- Starts with `.` (e.g., `./src`, `../other-project`)
- Ends with `.md` (e.g., `user-auth.md`)
- Matches an existing file or directory on disk
Action: Do NOT silently proceed as prompt mode. Instead, use AskUserQuestion:
Your input looks like a file/directory path: "{input}"
Did you mean to use file mode? (paths require an @ prefix)
- Options:
  - "Yes, use as file path" → prepend `@` and skip to Step 3
  - "No, treat as text prompt" → continue with Step 2.1
This guard prevents the common mistake of forgetting @, which causes the entire workflow to bypass Directory/File Mode and produce incorrect output.
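A minimal sketch of this heuristic, assuming exactly the four checks listed above (`looks_like_path` is an illustrative name, not an existing function):

```python
import os

def looks_like_path(text: str) -> bool:
    """Return True when a prompt-mode input is probably a forgotten @ path."""
    return (
        "/" in text                # e.g. ../apcore-cli, docs/features/auth.md
        or text.startswith(".")    # e.g. ./src, ../other-project
        or text.endswith(".md")    # e.g. user-auth.md
        or os.path.exists(text)    # matches an existing file or directory
    )
```

A `True` result triggers the `AskUserQuestion` above rather than silently switching modes.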
When a user provides a text prompt instead of a file path, code-forge:plan delegates feature spec creation to spec-forge:feature. This maintains the separation of concerns: spec-forge owns specification, code-forge owns implementation planning.
2.1 Generate Slug
Convert the prompt text to a kebab-case slug for the feature name:
- ASCII text: lowercase, replace spaces/special chars with hyphens (e.g., "User Login Feature" → `user-login-feature`)
- Non-ASCII text (Chinese, Japanese, etc.): use `AskUserQuestion` to let the user confirm or provide a custom slug. Suggest a reasonable English slug based on the prompt meaning.
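For the ASCII case, the conversion can be sketched as follows (`to_slug` is a hypothetical helper):

```python
import re

def to_slug(prompt: str) -> str:
    """Kebab-case slug for ASCII prompts (non-ASCII goes to AskUserQuestion)."""
    # Collapse every run of non-alphanumeric characters into a single hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", prompt.lower())
    return slug.strip("-")
```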
2.2 Check for Existing Feature Spec
Check if {input_dir}/{slug}.md already exists:
- Exists → use it directly, skip to 2.4
- Does not exist → continue to 2.3
2.3 Auto-Delegate to spec-forge:feature
Invoke spec-forge:feature to generate the feature spec:
Launch Agent(`subagent_type="general-purpose"`):
- Sub-agent prompt: "Invoke the spec-forge:feature skill for '{slug}'. The user's requirement is: '{original prompt text}'. Use standalone mode — generate the feature spec at docs/features/{slug}.md based on this requirement description. Keep the Q&A minimal since the user already provided context in the prompt."
- Wait for completion → verify `docs/features/{slug}.md` exists
If spec-forge:feature is not available (skill not installed), fall back to generating a minimal feature document directly:
# {Feature Title}
> Feature spec for code-forge implementation planning.
> Source: auto-generated from prompt
> Created: {date}
## Purpose
{user's original prompt text, verbatim}
## Notes
- Generated from prompt by code-forge (spec-forge:feature not available)
- Consider running `/spec-forge:feature {slug}` for a more detailed spec
2.4 Set File Path
Set {input_dir}/{slug}.md as the current input document path (prefixed with @), then continue to Step 3.
Step 3: Validate Input Document
3.1 Check Document Path
User should provide an @ path pointing to a file or directory:
# File mode — plan a single feature
/code-forge:plan @docs/features/user-auth.md
# Directory mode — list features and let user pick
/code-forge:plan @docs/features/
/code-forge:plan @../../aipartnerup/apcore
Note: Use configured path ({input_dir}/). Also accepts spec-forge tech-design files directly: /code-forge:plan @docs/user-auth/tech-design.md
3.1.1 Directory Mode
If the @ path resolves to a directory (not a file):
- Scan for feature spec files in this order (stop at first match):
  - `<path>/docs/features/*.md`
  - `<path>/features/*.md`
  - `<path>/*.md`
- Exclude non-feature files from results: filter out `overview.md`, `README.md`, `index.md`, and any file that is clearly not a feature spec (e.g., changelog, license)
- If no `.md` files found: display error `"No feature specs found in {path}"` with the paths tried, then stop
- If exactly 1 file found: use it directly (skip selection)
- If multiple files found: display the list and use `AskUserQuestion` to let the user select:

      Feature specs found in {path}:
      1. acl-system
      2. core-executor
      3. schema-system
      ...
      N. [Plan all — generate plans for all features sequentially]

  - Options: one per file (show filename without `.md`), plus "Plan all" as the last option
- Set the selected file as the input document path, then continue to Step 3.2
"Plan all" batch mode: When the user selects "Plan all":
- Store the full list of feature file paths as `batch_queue`
- For each file in `batch_queue`, execute Steps 3.2 through 13 sequentially (one complete plan per feature)
- Between features, display a brief progress line: `Completed {n}/{total}: {feature_name}. Next: {next_feature_name}`
- After all features are planned, display a batch summary:

      Batch planning complete
      Features planned: {total}
      {feature_1} — {task_count} tasks
      {feature_2} — {task_count} tasks
      ...
      Project overview: {output_dir}/overview.md
      Next: /code-forge:impl {feature_name}

- The project-level overview (Step 11) is generated/updated after EACH feature, so it always reflects the latest state
- If a feature fails during planning, log the error, skip it, and continue with the next feature. Display skipped features in the batch summary.
- Step 5 answer reuse: In batch mode, Step 5 (tech stack, testing strategy, granularity) is only asked for the first feature. Subsequent features reuse the same answers automatically — unless a feature document explicitly specifies a different tech stack, in which case only that question is re-asked for that feature. This avoids asking the same 3 questions N times.
- Context management: Batch planning accumulates context across features. For batches of more than 5 features, display a warning before starting: "Planning {N} features sequentially. For very large batches (10+), consider splitting into multiple /code-forge:plan invocations to avoid context exhaustion." Proceed regardless — this is informational only.
Path resolution: Both relative and absolute paths are supported. Relative paths are resolved from the current working directory. External project paths (e.g., @../../other-project) are valid — the feature spec does not need to be inside the current project.
3.2-3.4 Validate Document and Handle Errors
Perform these checks on the provided document:
- File exists — if not found, list available files in `{input_dir}/` and suggest corrections (check for typos)
- File is not empty — if empty, suggest adding requirements content with a minimal example
- File is Markdown — if not `.md`, warn and ask whether to continue as plain text
If no document is provided and Step 2 was not triggered: display usage instructions with examples.
On any error: display the issue, suggest a fix, and stop.
3.5 Detect Existing Plan
Check whether `{output_dir}/{feature_name}/` already exists:
- Has `state.json` → Resume mode: show progress summary (task statuses), ask via `AskUserQuestion`:
  - Continue (recommended) — resume from current progress
  - Restart — delete all files and regenerate
  - View plan — open plan.md
  - Cancel
- Directory exists but no `state.json` → Conflict mode: warn about existing files, ask:
  - Backup and overwrite — move to `.backup/` then regenerate
  - Force overwrite — overwrite directly
  - Cancel — handle manually then rerun
Step 4: Analyze Document Content (via Sub-agent)
Offload to sub-agent to keep the full document content out of the main context.
Spawn an Agent tool call with:
- subagent_type: "general-purpose"
- description: "Analyze feature document"
Sub-agent prompt must include:
- The input document file path (so the sub-agent reads it, NOT the main context)
- Instruction to return ONLY a structured summary
- If `reference_summaries` is non-empty (from Step 1), include a `## Reference Context` section:

      ## Reference Context
      The following project documents provide architectural context. Use these to
      align your analysis with existing project decisions and patterns.

      {reference_summaries — all summaries concatenated, separated by blank lines}
Sub-agent must analyze and return:
- Feature Name — extracted from the source filename (kebab-case, without `.md` extension). Always use the filename, never the document title. Example: source file `security.md` → feature name `security`, even if the document title is "Security Manager".
- Technical Requirements — tech stack, frameworks, languages mentioned
- Functional Scope — 2-3 sentence summary of what needs to be implemented
- Constraints — performance, security, compatibility requirements
- Testing Requirements — testing strategy mentioned, or "not specified"
- Key Components — major modules/components to build (bulleted list)
- Estimated Complexity — low/medium/high with brief rationale
Main context retains: Only the structured summary returned by the sub-agent (~1-2KB). The full document content stays in the sub-agent's context and is discarded.
Important: Store the returned summary for use in Steps 5 and 7.
Step 5: Ask for Additional Information
If not clearly specified in the document, use a single AskUserQuestion combining up to 3 questions. Skip any question already answered by the document:
Question 1: Technology Stack Confirmation
- "Use {extracted_tech} mentioned in document"
- "Use existing project tech stack" — analyze project code, use existing frameworks
- "Custom" — user specifies
Question 2: Testing Strategy
- "Strict TDD (Recommended)" — write tests first for each task
- "Tests After" — implement first, write tests at end
- "Minimal Testing" — test only core logic
Question 3: Task Granularity
- "Fine-grained (5-10 tasks)" — each task 1-2 hours
- "Medium-grained (3-5 tasks)" — each task half day
- "Coarse-grained (2-3 tasks)" — each task 1-2 days
Step 6: Create Directory Structure
Extract feature name from the source filename (convert to kebab-case, strip .md extension). Always use the filename — never derive the feature name from the document title, as titles may differ from filenames.
6.0 Output Path Assertion (Hard Gate)
Before creating any directory or file, run these assertions. If ANY fails, STOP immediately — do not proceed.
ASSERT: resolved output path does NOT contain "docs/plan"
ASSERT: resolved output path does NOT contain "docs/plans"
ASSERT: resolved output path does NOT contain "docs/planning"
ASSERT: resolved output path equals "{project_root}/{output_dir}/{feature_name}/"
where {output_dir} is the value resolved in Step 0 (default: "planning/")
ASSERT: feature_name does NOT contain numeric prefixes (e.g., "FE-01-", "01-")
ASSERT: feature_name is kebab-case and matches the source document filename
(e.g., source "core-dispatcher.md" → feature_name "core-dispatcher")
On assertion failure: display the violation and the correct path, then stop. Example:
OUTPUT PATH VIOLATION: about to write to "docs/plans/FE-01-core-dispatcher.md"
Expected: "planning/core-dispatcher/"
Fix: use the resolved output_dir from Step 0 configuration
Output directory: {output_dir} defaults to planning/ — NEVER docs/plan/, docs/plans/, docs/planning/, or any other invented path. If you are about to write to any path other than {output_dir}/{feature_name}/, STOP — you are making a mistake. Always use the resolved output_dir from Step 0 configuration.
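The hard gate could be expressed as assertions roughly like these (`assert_output_path` is an illustrative sketch, not prescribed tooling):

```python
import re

def assert_output_path(resolved: str, project_root: str, output_dir: str,
                       feature_name: str, source_filename: str) -> None:
    """Fail fast before any file is written (Step 6.0 hard gate)."""
    # Forbidden invented paths — "docs/plan" also covers "docs/planning".
    for bad in ("docs/plan", "docs/plans"):
        assert bad not in resolved, f"OUTPUT PATH VIOLATION: {resolved}"
    expected = f"{project_root.rstrip('/')}/{output_dir.strip('/')}/{feature_name}/"
    assert resolved == expected, f"Expected {expected}, got {resolved}"
    # No numeric or ticket-style prefixes on the feature directory.
    assert not re.match(r"^(\d+-|[A-Z]+-\d+-)", feature_name), \
        "feature_name must not carry a numeric prefix"
    # Feature name must match the source document filename exactly.
    assert feature_name == source_filename.removesuffix(".md"), \
        "feature_name must match the source filename"
```

Any raised `AssertionError` corresponds to the STOP condition above.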
Create directory structure and proceed directly — no confirmation needed:
{output_dir}/{feature_name}/
├── overview.md
├── plan.md
├── tasks/
└── state.json
Example with defaults: planning/user-auth/, planning/user-auth/tasks/, etc.
Step 7: Generate plan.md (via Sub-agent)
Offload to sub-agent to keep plan generation output out of the main context.
Spawn an Agent tool call with:
- subagent_type: "general-purpose"
- description: "Generate implementation plan"
Sub-agent prompt must include:
- The input document file path (sub-agent re-reads the original for full context)
- The structured summary from Step 4 (paste it into the prompt)
- User answers from Step 5 (tech stack choice, testing strategy, task granularity)
- The output file path: `{output_dir}/{feature_name}/plan.md`
- Instructions to write the plan file AND return a concise task list summary
- If `reference_summaries` is non-empty, include a `## Reference Context` section:

      ## Reference Context
      The following project documents provide architectural context. Ensure the
      implementation plan is consistent with existing architecture and conventions.

      {reference_summaries — all summaries concatenated, separated by blank lines}
Sub-agent must write plan.md with these required sections:
- Goal — one sentence describing what to implement
- Architecture Design — component structure, data flow, technical choices with rationale
- Task Breakdown — dependency graph (mermaid `graph TD`) + task list with estimated time and dependencies
- Risks and Considerations — identified technical challenges
- Acceptance Criteria — checklist (tests pass, code review, docs, performance)
- References — related technical docs and examples
Task ID naming rules (critical): Task IDs must be descriptive names without numeric prefixes. Use setup, models, api — NOT 01-setup, 02-models, 03-api. Execution order is controlled by overview.md and state.json, not by filename ordering or numeric prefixes.
Sub-agent must return (as response text, separate from the file it writes) a concise task list summary:
TASK_COUNT: <number>
TASKS:
- <task_id>: <task_title> [depends on: <deps or "none">] (~<estimated_time>)
- <task_id>: <task_title> [depends on: <deps or "none">] (~<estimated_time>)
...
EXECUTION_ORDER: <task_id_1>, <task_id_2>, ...
Main context retains: Only the task list summary (~1-2KB). The full plan content is on disk.
Step 8: Task Breakdown (via Sub-agent)
Offload to sub-agent to keep task file generation out of the main context.
Spawn an Agent tool call with:
- subagent_type: "general-purpose"
- description: "Generate task breakdown files"
Sub-agent prompt must include:
- The plan file path: `{output_dir}/{feature_name}/plan.md` (sub-agent reads it from disk)
- The task list summary returned by Step 7 (paste it into the prompt)
- The tasks directory path: `{output_dir}/{feature_name}/tasks/`
- All the principles and format requirements below
- If `reference_summaries` is non-empty, include a `## Reference Context` section:

      ## Reference Context
      The following project documents provide architectural context. Ensure task
      steps follow project conventions and integrate with existing components.

      {reference_summaries — all summaries concatenated, separated by blank lines}
Sub-agent must create tasks/{name}.md for each task, following these principles:
- TDD first: test → implement → verify
- Concrete steps: include code examples and commands
- Traceable: annotate dependencies (depends on / required by)
Each task file must include:
- Goal — what this task accomplishes
- Files Involved — files to create/modify
- Steps — numbered, with code examples where helpful
- Acceptance Criteria — checklist
- Dependencies — depends on / required by
- Estimated Time
Naming (critical): Use descriptive filenames: setup.md, models.md, api.md — NO numeric prefixes (01-setup.md, 02-models.md are WRONG). Execution order is defined in overview.md Task Execution Order table and state.json execution_order array, never in filenames.
Sub-agent must return (as response text) the list of generated files:
GENERATED_FILES:
- tasks/<task_id>.md: <task_title>
- tasks/<task_id>.md: <task_title>
...
Main context retains: Only the file list (~0.5KB). All task file content is on disk.
Step 9: Generate overview.md
Execution order: This step executes AFTER Steps 7 and 8. Use the task list summary returned by the Step 7 sub-agent and the file list returned by the Step 8 sub-agent to populate task-related sections.
Generate feature overview with these required sections:
- Overview — extract or summarize from source document
- Scope — included and excluded items
- Technology Stack — language/framework, key dependencies, testing tools
- Task Execution Order — table: #, Task File (linked to `./tasks/`), Description, Status
- Progress — total/completed/in_progress/pending counts
- Reference Documents — link to source document
Step 10: Initialize state.json
Create state.json with these required fields:
| Field | Description |
|---|---|
| feature | Feature name (string) |
| created, updated | ISO timestamps |
| status | "pending" initially |
| execution_order | Array of task IDs in execution order |
| progress | { total_tasks, completed, in_progress, pending } |
| tasks | Array of task objects (see below) |
| metadata | { source_doc, created_by: "code-forge", version: "1.0" } |
Each task object in the tasks array:
| Field | Description |
|---|---|
| id | Task identifier (matches filename without .md) |
| file | Relative path: tasks/{id}.md |
| title | Human-readable task title |
| status | "pending" initially |
| started_at, completed_at | ISO timestamps or null |
| assignee | null initially |
| commits | Empty array [] initially |
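Putting the two tables together, the initial file might be produced by a sketch like this (`init_state` and its argument shapes are illustrative, not a mandated interface):

```python
import json
from datetime import datetime, timezone

def init_state(feature, tasks, execution_order, source_doc):
    """Build initial state.json content. tasks: list of (task_id, title) pairs."""
    now = datetime.now(timezone.utc).isoformat()
    state = {
        "feature": feature,
        "created": now,
        "updated": now,
        "status": "pending",
        "execution_order": execution_order,
        "progress": {"total_tasks": len(tasks), "completed": 0,
                     "in_progress": 0, "pending": len(tasks)},
        "tasks": [
            {"id": tid, "file": f"tasks/{tid}.md", "title": title,
             "status": "pending", "started_at": None, "completed_at": None,
             "assignee": None, "commits": []}
            for tid, title in tasks
        ],
        "metadata": {"source_doc": source_doc,
                     "created_by": "code-forge", "version": "1.0"},
    }
    return json.dumps(state, indent=2)
```

Note that task `id` values match the descriptive filenames (`setup`, not `01-setup`), and ordering lives only in `execution_order`.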
Step 11: Generate/Update Project-Level Overview
After initializing state.json, generate or update {output_dir}/overview.md — a bird's-eye view of all features.
@../shared/overview-generation.md
11.3 When to Regenerate
- After creating a new feature plan (this step)
- After feature completion
Display: Project overview updated: {output_dir}/overview.md
Step 12: Verify Output Structure
Mandatory — do NOT proceed to Step 13 until all checks pass. Fix failures before continuing.
- `{output_dir}/{feature_name}/` exists
- `plan.md` exists and is non-empty
- `tasks/` contains `.md` files with descriptive names (no numeric prefixes)
- `overview.md` exists and is non-empty
- `state.json` is valid JSON with fields: `feature`, `status`, `execution_order`, `progress`, `tasks`
- Task count in `state.json` matches files in `tasks/`
- `{output_dir}/overview.md` (project-level) exists
- No files in `docs/plan/`, `docs/plans/`, `docs/planning/` — move if found
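The checklist could be mechanized roughly as follows (`verify_output` is a hypothetical helper; it collects failures instead of raising, so every problem can be reported at once):

```python
import json
from pathlib import Path

def verify_output(output_dir, feature_name):
    """Return a list of Step 12 check failures (empty list means all passed)."""
    base = Path(output_dir) / feature_name
    errors = []
    if not base.is_dir():
        errors.append(f"{base}/ missing")
    for name in ("plan.md", "overview.md"):
        f = base / name
        if not f.is_file() or f.stat().st_size == 0:
            errors.append(f"{f} missing or empty")
    tasks_dir = base / "tasks"
    task_files = list(tasks_dir.glob("*.md")) if tasks_dir.is_dir() else []
    if any(t.name[0].isdigit() for t in task_files):
        errors.append("numeric-prefixed task files found")
    try:
        state = json.loads((base / "state.json").read_text())
        for field in ("feature", "status", "execution_order", "progress", "tasks"):
            if field not in state:
                errors.append(f"state.json missing field: {field}")
        if len(state.get("tasks", [])) != len(task_files):
            errors.append("task count mismatch between state.json and tasks/")
    except (OSError, json.JSONDecodeError):
        errors.append("state.json missing or invalid JSON")
    return errors
```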
Step 13: Display Plan and Next Steps
Output plan summary:
Implementation plan generated
Location: {output_dir}/{feature_name}/
Total Tasks: {count}
Estimated Total Time: {estimate}
Task Overview:
{id} - {title} [{status}]
...
Next steps:
/code-forge:impl {feature_name} Execute tasks
/code-forge:status {feature_name} View progress
cat {output_dir}/{feature_name}/plan.md View detailed plan
Integration with Claude Code Tasks
Optionally synchronize tasks to Claude Code's Task system:
- For each task in `execution_order`, call `TaskCreate` with:
  - subject: `"<task_id>: <task_title>"`
  - description: contents of the task file
  - activeForm: `"Implementing <task_title>"`
Coordination with Other Skills
- With spec-forge:feature: Generate feature spec first → `/code-forge:plan @docs/features/{feature}.md`
- With spec-forge tech-design: Plan directly from tech-design → `/code-forge:plan @docs/{feature}/tech-design.md`
- With /brainstorming: Brainstorm design first → generate feature spec → `/code-forge:plan @docs/features/{feature}.md`
- With /code-forge:impl: After plan generated → `/code-forge:impl {feature}` to execute
- With /code-forge:review: After implementation → `/code-forge:review {feature}` to review
Notes
- Document Quality: The more detailed the input document, the more accurate the generated plan
- Prompt Mode: When using prompt mode, the generated document is minimal. Step 4 sub-agent analysis handles expansion.
- Git Commits: Recommend committing the planning directory and `.code-forge.json` to Git for team visibility
- State Files: `state.json` can be optionally committed or added to .gitignore
- Task Granularity: Recommend 1-3 hours per task for easy tracking
- Dependency Management: Dependencies between tasks affect execution order
- Project Overview: The project-level `overview.md` in `{output_dir}/` is auto-generated and shows all features, dependencies, and recommended implementation order
- Tool Discovery: `.code-forge.json` contains a `_tool` section with the plugin URL — new team members can find and install the tool from there
- Status Definitions: `pending`, `in_progress`, `completed`, `blocked`, `skipped`
- Directory Structure:

      docs/
      └── features/          # Input: feature specs (owned by spec-forge)
          └── user-auth.md   # Generated by /spec-forge:feature or extracted from tech-design
      planning/              # Output: implementation plans (owned by code-forge)
      ├── overview.md        # Project-level overview (auto-generated)
      └── {feature}/         # Per-feature directory
          ├── overview.md    # Feature overview + task execution order
          ├── plan.md        # Implementation plan
          ├── tasks/         # Task breakdown files
          └── state.json     # Status tracking

  This structure is mandatory, not a suggestion. Every file listed above must exist after plan generation completes.
- Naming Conventions: Feature directories use kebab-case (`user-auth`). Task files use descriptive names (`setup.md`). No "claude-" or tool prefixes. Suitable for Git commits.
- Reference Docs: Configure `reference_docs.sources` in `.code-forge.json` to auto-discover project documentation. Each doc is summarized by a parallel sub-agent and injected as context into Steps 4, 7, and 8. Reference context is baked into generated plan.md and task files — downstream skills do not re-read reference docs.
Common Mistakes
- Writing a single document instead of the multi-file structure (`overview.md` + `plan.md` + `tasks/*.md` + `state.json`)
- Using `docs/plan/`, `docs/plans/`, or `docs/planning/` instead of `{output_dir}`
- Putting task content inside `plan.md` instead of separate `tasks/{name}.md` files
- Using numeric prefixes on task files (`01-setup.md` instead of `setup.md`)
- Using numeric prefixes on feature directories (`FE-01-core-dispatcher` instead of `core-dispatcher`)
- Generating flat files instead of per-feature subdirectories with multi-file structure
- Treating a path-like input without `@` as a prompt instead of asking the user (Step 2.0 guard)
- Skipping `state.json` — downstream skills (impl, status, fixbug, finish) cannot operate without it
- Skipping project-level `overview.md` (Step 11)
- Running Steps 4, 7, 8 inline instead of delegating to sub-agents via `Agent` tool
- Proceeding to Step 13 without running Step 12 verification