sparc-workflow

Systematic software development through Specification, Pseudocode, Architecture, Refinement (TDD), and Completion phases.


Install skill "sparc-workflow" with this command: npx skills add vamseeachanta/workspace-hub/vamseeachanta-workspace-hub-sparc-workflow

SPARC Workflow Skill


Quick Start

  • Run the full SPARC development cycle

  • Run the TDD-focused workflow

  • List available SPARC modes

When to Use

  • Implementing a new feature from scratch

  • Complex problem requiring structured analysis before coding

  • Building production-quality code with comprehensive tests

  • Refactoring existing code systematically

  • API or UI development requiring clear specifications

Prerequisites

  • Understanding of TDD (Test-Driven Development)

  • Project with .agent-os/ directory structure

  • Access to testing framework (pytest, jest, etc.)

Overview

SPARC is a systematic methodology for software development that ensures quality through structured phases. Each phase builds on the previous, creating well-documented, well-tested code.

SPARC Phases

```
┌─────────────────────────────────────────────────────────────────┐
│                        S → P → A → R → C                        │
│                                                                 │
│  Specification → Pseudocode → Architecture → Refinement → Done  │
└─────────────────────────────────────────────────────────────────┘
```

Phase Overview

| Phase | Focus | Output |
|---------------|----------------|------------------------|
| Specification | What to build | Requirements document |
| Pseudocode | How it works | Algorithm design |
| Architecture | How it fits | System design |
| Refinement | Make it work | Tested implementation |
| Completion | Make it right | Production-ready code |

Phase 1: Specification

Purpose

Define what needs to be built with clear, measurable requirements.

Process

  • Gather requirements from user prompt

  • Identify acceptance criteria

  • Define scope (in-scope and out-of-scope)

  • Document constraints and assumptions

Output Template

Feature Specification

Overview

[One paragraph describing the feature]

Requirements

Functional Requirements

  1. FR-1: [Requirement]
  2. FR-2: [Requirement]
  3. FR-3: [Requirement]

Non-Functional Requirements

  1. NFR-1: Performance - [Requirement]
  2. NFR-2: Security - [Requirement]
  3. NFR-3: Usability - [Requirement]

Scope

In Scope

  • [Item 1]
  • [Item 2]

Out of Scope

  • [Item 1]
  • [Item 2]

Acceptance Criteria

  • AC-1: [Testable criterion]
  • AC-2: [Testable criterion]
  • AC-3: [Testable criterion]

Constraints

  • [Technical constraint]
  • [Business constraint]

Assumptions

  • [Assumption 1]
  • [Assumption 2]

Specification Checklist

  • All requirements are clear and unambiguous

  • Each requirement is testable

  • Scope is explicitly defined

  • Constraints are documented

  • User stories follow "As a... I want... So that..." format

Phase 2: Pseudocode

Purpose

Design the algorithm and logic before implementation.

Process

  • Break down requirements into logical steps

  • Write language-agnostic pseudocode

  • Identify edge cases

  • Design error handling

Pseudocode Guidelines

```
FUNCTION process_data(input_data):
    // Validate input
    IF input_data is empty:
        RAISE ValidationError("Input cannot be empty")

    // Initialize result
    result = EMPTY_LIST

    // Process each item
    FOR EACH item IN input_data:
        // Check conditions
        IF item.meets_criteria():
            processed_item = transform(item)
            APPEND processed_item TO result

    RETURN result

FUNCTION transform(item):
    // Apply transformation logic
    new_value = item.value * MULTIPLIER
    RETURN Item(new_value, item.metadata)
```
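As a sketch, the pseudocode above could translate to Python like this. The `Item` dataclass, the `meets_criteria` rule, and the `MULTIPLIER` constant are illustrative stand-ins, not part of any real API:

```python
from dataclasses import dataclass

MULTIPLIER = 2  # illustrative constant


class ValidationError(Exception):
    """Raised when input fails validation."""


@dataclass
class Item:
    value: int
    metadata: str

    def meets_criteria(self) -> bool:
        # Illustrative criterion: only process positive values
        return self.value > 0


def transform(item: Item) -> Item:
    # Apply transformation logic
    return Item(item.value * MULTIPLIER, item.metadata)


def process_data(input_data: list) -> list:
    # Validate input
    if not input_data:
        raise ValidationError("Input cannot be empty")
    # Process each item that meets the criteria
    return [transform(item) for item in input_data if item.meets_criteria()]
```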

Pseudocode Best Practices

  • Be explicit: Show all decision points

  • Include error handling: Show how errors are managed

  • Note complexity: O(n), O(n²), etc.

  • Identify data structures: Lists, maps, trees

  • Show edge cases: Empty input, single item, maximum size

Output Template

Pseudocode Design

Main Algorithm

```
FUNCTION main_feature(params):
    [Algorithm steps]
```

Helper Functions

```
FUNCTION helper_one(input):
    [Steps]

FUNCTION helper_two(input):
    [Steps]
```

Error Handling

```
TRY:
    [Main logic]
CATCH ValidationError:
    [Handle validation]
CATCH ProcessingError:
    [Handle processing]
FINALLY:
    [Cleanup]
```
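In Python, this error-handling skeleton could look like the following sketch. The exception types, the doubling logic, and the log-based cleanup are all placeholders:

```python
class ValidationError(Exception):
    pass


class ProcessingError(Exception):
    pass


def run_pipeline(data, log):
    try:
        # Main logic
        if not data:
            raise ValidationError("input cannot be empty")
        return [x * 2 for x in data]
    except ValidationError as exc:
        # Handle validation: record it and fall back to an empty result
        log.append(f"validation failed: {exc}")
        return []
    except ProcessingError:
        # Handle processing: propagate after any logging
        raise
    finally:
        # Cleanup always runs, success or failure
        log.append("cleanup")
```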

Edge Cases

| Case | Input | Expected Output |
|---------|-------------|-------------------|
| Empty | `[]` | `[]` |
| Single | `[1]` | `[processed_1]` |
| Maximum | `[1..10000]` | `[processed_all]` |

Complexity Analysis

  • Time: O(n)
  • Space: O(n)

Phase 3: Architecture

Purpose

Design how the feature fits into the system architecture.

Process

  • Identify affected components

  • Design interfaces and contracts

  • Plan data flow

  • Consider dependencies

Architecture Considerations

Component Design

New Components

  • ComponentA: [Purpose]
  • ComponentB: [Purpose]

Modified Components

  • ExistingComponent: [Changes needed]

Interface Design

```python
from typing import Protocol


class IProcessor(Protocol):
    def process(self, data: InputData) -> OutputData:
        """Process input and return output."""
        ...

    def validate(self, data: InputData) -> bool:
        """Validate input data."""
        ...
```
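A concrete class satisfies a `Protocol` structurally, with no inheritance required. A minimal sketch, using plain lists as stand-ins for `InputData`/`OutputData` and `@runtime_checkable` so `isinstance` checks work:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class IProcessor(Protocol):
    def process(self, data: list) -> list: ...
    def validate(self, data: list) -> bool: ...


class DoublingProcessor:
    """Satisfies IProcessor structurally; no inheritance needed."""

    def process(self, data: list) -> list:
        return [x * 2 for x in data]

    def validate(self, data: list) -> bool:
        return all(isinstance(x, int) for x in data)


processor = DoublingProcessor()
assert isinstance(processor, IProcessor)  # enabled by @runtime_checkable
```

Note that `runtime_checkable` `isinstance` checks only verify that the methods exist, not their signatures.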

Data Flow

```
Input → Validator → Processor → Transformer → Output
            ↓            ↓
          Logger       Cache
```
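One lightweight way to realize a flow like this is a pipeline of callables; all of the stage names and their behavior below are illustrative:

```python
from typing import Callable, List


def validator(data: list) -> list:
    if not all(isinstance(x, int) for x in data):
        raise ValueError("all items must be integers")
    return data


def processor(data: list) -> list:
    return [x * 2 for x in data]


def transformer(data: list) -> list:
    return [str(x) for x in data]


def run(data: list, stages: List[Callable]) -> list:
    # Feed the output of each stage into the next
    for stage in stages:
        data = stage(data)
    return data


result = run([1, 2, 3], [validator, processor, transformer])
# result is ["2", "4", "6"]
```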

Dependencies

Internal

  • module_a (version ≥ 1.2.0)
  • module_b

External

  • library_x (version 2.0.0)

File Structure

```
src/
└── feature_name/
    ├── __init__.py
    ├── processor.py    # Main processing logic
    ├── validator.py    # Input validation
    ├── transformer.py  # Data transformation
    └── models.py       # Data models
```

Phase 4: Refinement (TDD)

Purpose

Implement the feature using Test-Driven Development.

TDD Cycle

```
┌──────────────┐
│   1. RED     │  Write failing test
└──────┬───────┘
       │
       ▼
┌──────────────┐
│   2. GREEN   │  Write minimal code to pass
└──────┬───────┘
       │
       ▼
┌──────────────┐
│ 3. REFACTOR  │  Improve code quality
└──────┬───────┘
       │
       └──────────► Repeat
```

TDD Process

Write Test First

```python
def test_process_valid_input():
    """Test processing with valid input."""
    processor = Processor()
    result = processor.process([1, 2, 3])
    assert result == [2, 4, 6]
```

Run Test (Should Fail)

```bash
pytest tests/test_processor.py -v
```

Expected: FAILED

Write Minimal Implementation

```python
class Processor:
    def process(self, data):
        return [x * 2 for x in data]
```

Run Test (Should Pass)

```bash
pytest tests/test_processor.py -v
```

Expected: PASSED

Refactor

```python
from typing import List


class Processor:
    def __init__(self, multiplier: int = 2):
        self.multiplier = multiplier

    def process(self, data: List[int]) -> List[int]:
        return [x * self.multiplier for x in data]
```

Test Categories

Unit Tests

```python
class TestProcessor:
    def test_process_valid_input(self):
        """Test with valid input."""
        ...

    def test_process_empty_input(self):
        """Test with empty input."""
        ...

    def test_process_invalid_input(self):
        """Test with invalid input raises error."""
        ...
```

Integration Tests

```python
class TestProcessorIntegration:
    def test_end_to_end_workflow(self):
        """Test complete workflow."""
        ...
```

Performance Tests

```python
class TestProcessorPerformance:
    def test_large_dataset_performance(self):
        """Test performance with large dataset."""
        ...
```

Phase 5: Completion

Purpose

Finalize for production: documentation, cleanup, and verification.

Completion Checklist

Code Quality

  • All tests passing
  • Test coverage ≥ 80%
  • No linting errors
  • Type hints complete
  • Docstrings complete

Documentation

  • README updated
  • API documentation
  • Usage examples
  • Changelog entry

Security

  • Input validation
  • Error messages safe
  • No hardcoded secrets
  • Dependencies audited

Performance

  • Benchmarks run
  • Memory usage checked
  • No N+1 queries
  • Caching implemented (if needed)

Deployment

  • Configuration documented
  • Migration scripts (if needed)
  • Rollback plan
  • Monitoring in place

Using SPARC with Claude Flow

Start SPARC Workflow

Available Modes

| Mode | Focus |
|----------|------------------------|
| dev | Full development cycle |
| api | API development |
| ui | UI development |
| test | Testing focus |
| refactor | Code improvement |

TDD Mode

SPARC File Locations

```
.agent-os/
├── specs/
│   └── feature-name/
│       ├── spec.md              # Specification
│       ├── tasks.md             # Task breakdown
│       └── sub-specs/
│           ├── pseudocode.md    # Pseudocode
│           ├── architecture.md  # Architecture
│           ├── tests.md         # Test spec
│           └── api-spec.md      # API spec (if applicable)
└── product/
    └── decisions.md             # Decision log
```

Execution Checklist

  • Requirements gathered and documented in spec.md

  • Pseudocode designed with edge cases identified

  • Architecture defined with clear interfaces

  • Tests written BEFORE implementation (TDD)

  • Implementation passes all tests

  • Code refactored for quality

  • Documentation complete

  • Code review completed

  • Deployed to staging/production

Integration with Agent OS

Creating a Spec

Use the create-spec workflow

Reference: @~/.agent-os/instructions/create-spec.md

Executing Tasks

Use the execute-tasks workflow

Reference: @~/.agent-os/instructions/execute-tasks.md

Error Handling

Specification Phase Issues

  • Unclear requirements: Ask clarifying questions before proceeding

  • Scope creep: Document out-of-scope items explicitly

  • Missing acceptance criteria: Derive from requirements

TDD Phase Issues

  • Tests too complex: Break into smaller units

  • Flaky tests: Isolate external dependencies with mocks

  • Low coverage: Add edge case tests
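For the flaky-test item, the standard library's `unittest.mock` can replace an external dependency with a deterministic stand-in. A sketch with hypothetical names (`client.get_data` is not a real API):

```python
from unittest.mock import Mock


def fetch_and_double(client):
    """Hypothetical function that depends on an external client."""
    data = client.get_data()  # a network call in production
    return [x * 2 for x in data]


def test_fetch_and_double_isolated():
    client = Mock()
    client.get_data.return_value = [1, 2, 3]  # deterministic stand-in
    assert fetch_and_double(client) == [2, 4, 6]
    client.get_data.assert_called_once()


test_fetch_and_double_isolated()
```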

Completion Phase Issues

  • Documentation gaps: Review against checklist

  • Performance issues: Profile and optimize hot paths

  • Security concerns: Run security audit tools

Metrics & Success Criteria

  • Test Coverage: >= 80% for all new code

  • Code Quality: Zero linting errors, all type hints present

  • Documentation: 100% of public APIs documented

  • Performance: Meets defined NFR benchmarks

  • TDD Adherence: Tests written before implementation

Best Practices

Specification

  • Write requirements from user perspective

  • Make every requirement testable

  • Explicitly define boundaries

  • Get stakeholder approval

Pseudocode

  • Stay language-agnostic

  • Show all decision branches

  • Include error paths

  • Note time/space complexity

Architecture

  • Keep components loosely coupled

  • Design for testability

  • Plan for scalability

  • Document dependencies

Refinement

  • One test per behavior

  • Test edge cases first

  • Keep tests isolated

  • Maintain fast test suite

Completion

  • Review all checklist items

  • Run full test suite

  • Update documentation

  • Plan deployment

Integration Points

MCP Tools

```javascript
// Start SPARC mode
  mode: "dev",
  task_description: "Implement user authentication"
})

// Orchestrate tasks
  task: "Complete SPARC refinement phase",
  strategy: "sequential",
  priority: "high"
})
```

Related Skills

  • agent-orchestration - Multi-agent coordination

  • compliance-check - Standards verification

  • repo-sync - Repository management

References

  • Agent OS Create Spec

  • Agent OS Execute Tasks

Version History

  • 1.1.0 (2026-01-02): Upgraded to SKILL_TEMPLATE_v2 format - added Quick Start, When to Use, Execution Checklist, Error Handling, Metrics, Integration Points, MCP hooks

  • 1.0.0 (2024-10-15): Initial release with 5 SPARC phases, TDD integration, Claude Flow support, Agent OS integration
