impl

Execute pending tasks for a feature — TDD-driven implementation with sub-agent isolation and progress tracking. Use when starting to build, implement, or code a planned feature, resuming partially completed work, or running the next task in a code-forge plan.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "impl" with this command: npx skills add tercel/code-forge/tercel-code-forge-impl

Code Forge — Impl

Execute pending implementation tasks for a feature, following the plan generated by /code-forge:plan.

When to Use

  • Have a generated plan (state.json + tasks/ directory) ready for execution
  • Need to resume a partially completed feature
  • Need task-by-task execution with TDD and progress tracking

Examples

/code-forge:impl user-auth          # Execute tasks for user-auth feature
/code-forge:impl                    # Auto-detect pending feature

Workflow

Locate Feature → Confirm Execution → Task Loop (sub-agents) → Verify → Complete

Context Management

Step 3 dispatches a dedicated sub-agent for each task, so code changes from one task don't pollute the context of the next. The main context only handles coordination: reading state, dispatching sub-agents, and updating status.

Detailed Steps

@../shared/configuration.md


Step 1: Locate Feature

1.1 With Feature Name Argument

If the user provided a feature name (e.g., /code-forge:impl user-auth):

  1. Look for {output_dir}/{feature_name}/state.json
  2. If not found, search {output_dir}/*/state.json for a feature whose feature field matches
  3. If not found in output_dir, also search .code-forge/tmp/{feature_name}/state.json and .code-forge/tmp/*/state.json (plan may have been created with --tmp)
  4. If still not found, show error: "Feature '{feature_name}' not found. Run /code-forge:status to see available features."

If found in .code-forge/tmp/, set output_dir to .code-forge/tmp/ and tmp_mode to true for the rest of the session.

1.2 Without Argument

If no feature name is provided:

  1. Scan both {output_dir}/*/state.json and .code-forge/tmp/*/state.json for all features
  2. Filter to features with status = "pending" or "in_progress" (exclude "completed")
  3. If none found: "No features ready for execution. Run /code-forge:plan to create one."
  4. If one found: use it automatically
  5. If multiple found: display table (mark tmp features with [tmp] suffix) and use AskUserQuestion to let user select
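The lookup in 1.1 and 1.2 can be sketched as a single helper. This is a minimal illustration, not the skill's actual implementation; the function name and the exact `state.json` fields beyond `feature` and `status` are assumptions:

```python
import json
from pathlib import Path

def locate_feature(output_dir, feature_name=None):
    """Find a feature's state.json under output_dir, falling back to
    .code-forge/tmp/ (plans created with --tmp). Hypothetical sketch."""
    roots = [Path(output_dir), Path(".code-forge/tmp")]
    candidates = []
    for root in roots:
        # glob on a nonexistent root simply yields nothing
        for state_path in root.glob("*/state.json"):
            state = json.loads(state_path.read_text())
            candidates.append((state_path, state, root.name == "tmp"))
    if feature_name:
        for path, state, is_tmp in candidates:
            # match by directory name first, then by the "feature" field
            if path.parent.name == feature_name or state.get("feature") == feature_name:
                return path, is_tmp
        raise FileNotFoundError(
            f"Feature '{feature_name}' not found. "
            "Run /code-forge:status to see available features."
        )
    # no argument: keep only features still pending or in progress
    return [c for c in candidates
            if c[1].get("status") in ("pending", "in_progress")]
```

With a feature name the helper returns the single match (or raises with the error message from 1.1); without one it returns the runnable candidates, leaving auto-selection or the AskUserQuestion prompt to the caller.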

1.3 Validate Feature State

After locating the feature:

  1. Read state.json
  2. Check that tasks array is non-empty
  3. Check that task files in tasks/ directory exist
  4. Show feature progress summary: completed/in_progress/pending counts
  5. If all tasks are "completed": "All tasks already completed. Run /code-forge:review {feature} to review."

Step 2: Ask for Execution Method

Use AskUserQuestion:

  • "Start Execution Now (Recommended)" — execute tasks one by one, auto-track progress → enter Step 3
  • "Manual Execution Later" — save plan, show resume instructions (/code-forge:impl {feature})
  • "Team Collaboration Mode" — show guidelines: commit plan to Git, claim tasks via assignee, sync state.json
  • "View Plan Details" — display plan.md contents for review before executing

Step 3: Task Execution Loop (via Sub-agents)

Each task is executed by a dedicated sub-agent so that code changes, test output, and file reads from one task never accumulate in the context of the next; the main context is reserved for coordination.

3.1 Coordination Loop (Main Context)

  1. Read state.json
  2. Find the next task in execution_order that is "pending" with no unmet dependencies
  3. If no such task exists: display "All tasks completed!" and exit loop
  4. Display: "Starting task: {id} - {title}"
  5. Update task status to "in_progress" in state.json
  6. Dispatch sub-agent for this task (see 3.2)
  7. Review the sub-agent's execution summary
  8. Ask user via AskUserQuestion: "Is the task completed?"
    • "Completed, continue to next" → update status to "completed", continue loop
    • "Encountered issue, pause" → keep "in_progress", exit loop
    • "Skip this task" → update status to "skipped", continue loop
  9. Repeat from step 1
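The scheduling decision in step 2 of the loop can be sketched as follows. The `depends_on` field is an assumption about the task schema (the document only says tasks may have unmet dependencies); treating "skipped" tasks as satisfying a dependency is also a sketch-level choice:

```python
def next_runnable_task(state):
    """Pick the next task in execution_order that is pending and whose
    dependencies are all resolved. Hypothetical sketch of loop step 2."""
    done = {t["id"] for t in state["tasks"]
            if t["status"] in ("completed", "skipped")}
    by_id = {t["id"]: t for t in state["tasks"]}
    for task_id in state["execution_order"]:
        task = by_id[task_id]
        if task["status"] == "pending" and set(task.get("depends_on", [])) <= done:
            return task
    return None  # nothing runnable: all tasks done, or remaining ones blocked
```

A `None` result corresponds to loop step 3: display "All tasks completed!" and exit.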

3.2 Task Execution Sub-agent

Spawn an Agent tool call with:

  • subagent_type: "general-purpose"
  • description: "Execute task: {task_id}"

Sub-agent prompt must include:

  • The task file path: {output_dir}/{feature_name}/tasks/{task_id}.md (sub-agent reads it)
  • The project root path
  • Tech stack and testing strategy (from state.json metadata or plan.md)
  • Instruction to follow TDD: write tests → run tests → implement → verify
  • Coding standards (mandatory): include the following standards in the sub-agent prompt so it writes quality code from the start:

@../shared/coding-standards.md

  • Instruction to return ONLY a concise execution summary

Sub-agent executes:

  1. Read the task file from disk
  2. Follow the task steps (TDD: write tests → run tests → implement → verify)
  3. Commit changes if all tests pass (with descriptive commit message)

Sub-agent must return a concise execution summary:

STATUS: completed | partial | blocked
FILES_CHANGED:
- path/to/file.ext (created | modified)
- ...
TEST_RESULTS: X passed, Y failed
SUMMARY: <1-2 sentence description of what was done>
ISSUES: <any blockers or concerns, or "none">

Main context retains: Only the execution summary (~0.5-1KB per task). All code changes, test outputs, and file reads stay in the sub-agent's context and are discarded.

3.3 Parallel Execution (Optional)

When multiple pending tasks have no mutual dependencies (none depends on another), they may be dispatched as parallel sub-agents using multiple Agent tool calls in a single message. Each sub-agent works in isolation on its own task.

Use parallel execution only when:

  • Tasks modify different files (no overlap in "Files Involved")
  • Tasks have no dependency relationship (neither depends on the other)
  • User has agreed to parallel execution

After all parallel sub-agents complete, review each summary and update state.json for all completed tasks before continuing the loop.
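The first two eligibility conditions can be checked mechanically before asking the user. This sketch assumes each task record exposes its "Files Involved" as a `files` list and its dependencies as `depends_on`; both field names are assumptions about the schema:

```python
def can_parallelize(task_a, task_b):
    """Two tasks may run as parallel sub-agents only if they touch
    disjoint files and neither depends on the other. Hypothetical sketch."""
    files_overlap = set(task_a.get("files", [])) & set(task_b.get("files", []))
    a_needs_b = task_b["id"] in task_a.get("depends_on", [])
    b_needs_a = task_a["id"] in task_b.get("depends_on", [])
    return not files_overlap and not a_needs_b and not b_needs_a
```

The third condition, explicit user agreement, still has to be gathered via AskUserQuestion.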

Step 4: Verify Generated Files

Before completion summary, verify all generated files:

Checks:

  1. Required files exist and are non-empty: overview.md, plan.md, state.json
  2. tasks/ directory exists and contains .md files with descriptive names
  3. state.json is valid JSON with required fields (feature, status, tasks, execution_order); task count matches task files; all IDs in execution_order match tasks entries
  4. plan.md contains: title heading, ## Goal, ## Task Breakdown, ## Acceptance Criteria
  5. overview.md contains ## Task Execution Order table
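Checks 1-3 are mechanical and can be sketched as a verifier that separates hard errors from warnings (checks 4-5, which inspect markdown headings, are omitted here for brevity; the function name is hypothetical):

```python
import json
from pathlib import Path

def verify_feature_files(feature_dir):
    """Run the structural checks of Step 4; return (errors, warnings)."""
    feature_dir = Path(feature_dir)
    errors, warnings = [], []
    # check 1: required files exist and are non-empty
    for name in ("overview.md", "plan.md", "state.json"):
        p = feature_dir / name
        if not p.exists() or p.stat().st_size == 0:
            errors.append(f"{name} missing or empty")
    # check 2: tasks/ directory with .md files
    task_files = list((feature_dir / "tasks").glob("*.md"))
    if not task_files:
        errors.append("tasks/ missing or contains no .md files")
    # check 3: state.json validity and cross-consistency
    try:
        state = json.loads((feature_dir / "state.json").read_text())
        for field in ("feature", "status", "tasks", "execution_order"):
            if field not in state:
                errors.append(f"state.json missing field: {field}")
        ids = {t["id"] for t in state.get("tasks", [])}
        if set(state.get("execution_order", [])) != ids:
            errors.append("execution_order does not match task IDs")
        if len(state.get("tasks", [])) != len(task_files):
            warnings.append("task count does not match task files")
    except (OSError, json.JSONDecodeError):
        errors.append("state.json unreadable or invalid JSON")
    return errors, warnings
```

Non-empty `errors` triggers the auto-fix path below; `warnings` alone lets execution continue by default.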

On pass: Show checklist with all items passing, continue.

On error (missing required files): Show what's missing, then attempt auto-fix:

  • Empty overview.md → generate template from plan data
  • Missing tasks/ → create directory
  • Missing state.json → generate initial state from the task files found

Then re-verify.

On warnings (count mismatch, missing optional section): Show warnings, continue by default.


Step 5: Completion Summary

After all tasks are completed:

  1. Update state.json with final status
  2. Regenerate the project-level overview ({output_dir}/overview.md)
  3. Display the completion summary:

Feature implementation completed!

Completed tasks: {completed}/{total}
Location: {output_dir}/{feature_name}/
Total time: {actual_time}

Next steps:
  /code-forge:review {feature_name}   Review code quality
  /code-forge:status {feature_name}   View final status

