start

Initialize your AI development session and begin working on tasks.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "start" with this command: npx skills add mindfold-ai/trellis/mindfold-ai-trellis-start

Start Session

Initialize your AI development session and begin working on tasks.

Operation Types

| Marker | Meaning | Executor |
| --- | --- | --- |
| [AI] | Bash scripts or tool calls executed by AI | You (AI) |
| [USER] | Skills executed by user | User |

Initialization [AI]

Step 1: Understand Development Workflow

First, read the workflow guide to understand the development process:

cat .trellis/workflow.md

Follow the instructions in workflow.md - it contains:

  • Core principles (Read Before Write, Follow Standards, etc.)

  • File system structure

  • Development process

  • Best practices

Step 2: Get Current Context

python3 ./.trellis/scripts/get_context.py

This shows: developer identity, git status, current task (if any), active tasks.

Step 3: Read Guidelines Index

Discover available spec modules

python3 ./.trellis/scripts/get_context.py --mode packages

Read index.md for the package you'll work on (e.g., cli, docs-site)

cat .trellis/spec/<package>/<layer>/index.md

Always read shared thinking guides

cat .trellis/spec/guides/index.md

Step 4: Report and Ask

Report what you learned and ask: "What would you like to work on?"

Task Classification

When user describes a task, classify it:

| Type | Criteria | Workflow |
| --- | --- | --- |
| Question | User asks about code, architecture, or how something works | Answer directly |
| Trivial Fix | Typo fix, comment update, single-line change, < 5 minutes | Direct Edit |
| Simple Task | Clear goal, 1-2 files, well-defined scope | Quick confirm → Task Workflow |
| Complex Task | Vague goal, multiple files, architectural decisions | Brainstorm → Task Workflow |

Decision Rule

If in doubt, use Brainstorm + Task Workflow.

The Task Workflow ensures code-specs are injected into the right context, resulting in higher-quality code. The overhead is minimal, but the benefit is significant.
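As a rough illustration only, the decision rule above can be sketched as a function. The function name, parameters, and return strings are hypothetical; in practice the classification is a judgment call by the AI, not a mechanical check.

```python
def classify_task(is_question: bool, is_trivial: bool,
                  goal_is_clear: bool, files_touched: int) -> str:
    """Map the classification table's criteria to a workflow.

    Defaults to the safest path (brainstorm) when in doubt.
    """
    if is_question:
        return "answer directly"
    if is_trivial:  # typo fix, comment update, single-line change
        return "direct edit"
    if goal_is_clear and files_touched <= 2:
        return "quick confirm -> task workflow"
    # Vague goal, many files, or architectural decisions
    return "brainstorm -> task workflow"
```

Note how the fall-through branch encodes the "if in doubt" rule: anything not clearly simple goes through brainstorm first.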

Subtask Decomposition: If brainstorm reveals multiple independent work items, consider creating subtasks using the --parent flag or the add-subtask command. See the brainstorm skill's Step 8 for details.

Question / Trivial Fix

For questions or trivial fixes, work directly:

  • Answer question or make the fix

  • If code was changed, remind user to run $finish-work

Simple Task

For simple, well-defined tasks:

  • Quick confirm: "I understand you want to [goal]. Ready to proceed?"

  • If yes, proceed to Task Workflow Phase 1 Path B (create task, write PRD, then research)

  • If no, clarify and confirm again

Complex Task - Brainstorm First

For complex or vague tasks, use the brainstorm process to clarify requirements.

See $brainstorm for the full process. Summary:

  • Acknowledge and classify - State your understanding

  • Create task directory - Track evolving requirements in prd.md

  • Ask questions one at a time - Update PRD after each answer

  • Propose approaches - For architectural decisions

  • Confirm final requirements - Get explicit approval

  • Proceed to Task Workflow - With clear requirements in PRD

Task Workflow (Development Tasks)

Why this workflow?

  • Run a dedicated research pass before coding

  • Configure specs in jsonl context files

  • Implement using injected context

  • Verify with a separate check pass

  • Result: Code that follows project conventions automatically

Overview: Two Entry Points

From Brainstorm (Complex Task): PRD confirmed → Research → Configure Context → Activate → Implement → Check → Complete

From Simple Task: Confirm → Create Task → Write PRD → Research → Configure Context → Activate → Implement → Check → Complete

Key principle: Research happens AFTER requirements are clear (PRD exists).

Phase 1: Establish Requirements

Path A: From Brainstorm (skip to Phase 2)

PRD and task directory already exist from brainstorm. Skip directly to Phase 2.

Path B: From Simple Task

Step 1: Confirm Understanding [AI]

Quick confirm:

  • What is the goal?

  • What type of development? (frontend / backend / fullstack)

  • Any specific requirements or constraints?

If unclear, ask clarifying questions.

Step 2: Create Task Directory [AI]

TASK_DIR=$(python3 ./.trellis/scripts/task.py create "<title>" --slug <name>)

Step 3: Write PRD [AI]

Create prd.md in the task directory with:

<Task Title>

Goal

<What we're trying to achieve>

Requirements

  • <Requirement 1>
  • <Requirement 2>

Acceptance Criteria

  • <Criterion 1>
  • <Criterion 2>

Technical Notes

<Any technical decisions or constraints>

Phase 2: Prepare for Implementation (shared)

Both paths converge here. PRD and task directory must exist before proceeding.

Step 4: Code-Spec Depth Check [AI]

If the task touches infra or cross-layer contracts, do not start implementation until code-spec depth is defined.

Trigger this requirement when the change includes any of:

  • New or changed command/API signatures

  • Database schema or migration changes

  • Infra integrations (storage, queue, cache, secrets, env contracts)

  • Cross-layer payload transformations

Must-have before proceeding:

  • Target code-spec files to update are identified

  • Concrete contract is defined (signature, fields, env keys)

  • Validation and error matrix is defined

  • At least one Good/Base/Bad case is defined
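As a hedged illustration of the last requirement, here is what a minimal Good/Base/Bad case set might look like for an imagined `--priority <1-5>` flag. The contract, values, and `validate` function are all invented for the example; a real task would define these against its own code-spec.

```python
def validate(priority: int) -> str:
    """Toy validator for a hypothetical `--priority <1-5>` contract."""
    return "accepted" if 1 <= priority <= 5 else "error: priority out of range"

# Good: typical value. Base: boundary value. Bad: expected rejection.
CASES = {
    "good": {"priority": 3, "expect": "accepted"},
    "base": {"priority": 1, "expect": "accepted"},
    "bad":  {"priority": 9, "expect": "error: priority out of range"},
}
```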

Step 5: Research the Codebase [AI]

Based on the confirmed PRD, run a focused research pass and produce:

  • Relevant spec files in .trellis/spec/

  • Existing code patterns to follow (2-3 examples)

  • Files that will likely need modification

Use this output format:

Relevant Specs

  • <path>: <why it's relevant>

Code Patterns Found

  • <pattern>: <example file path>

Files to Modify

  • <path>: <what change>

Step 6: Configure Context [AI]

Initialize default context:

python3 ./.trellis/scripts/task.py init-context "$TASK_DIR" <type>

type: backend | frontend | fullstack

Add specs found in your research pass:

For each relevant spec and code pattern:

python3 ./.trellis/scripts/task.py add-context "$TASK_DIR" implement "<path>" "<reason>"

python3 ./.trellis/scripts/task.py add-context "$TASK_DIR" check "<path>" "<reason>"

Step 7: Activate Task [AI]

python3 ./.trellis/scripts/task.py start "$TASK_DIR"

This sets .current-task so hooks can inject context.

Phase 3: Execute (shared)

Step 8: Implement [AI]

Implement the task described in prd.md.

  • Follow all specs injected into implement context

  • Keep changes scoped to requirements

  • Run lint and typecheck before finishing

Step 9: Check Quality [AI]

Run a quality pass against check context:

  • Review all code changes against the specs

  • Fix issues directly

  • Ensure lint and typecheck pass

Step 10: Complete [AI]

  • Verify lint and typecheck pass

  • Report what was implemented

  • Remind user to:

      • Test the changes

      • Commit when ready

      • Run $record-session to record this session

Continuing Existing Task

If get_context.py shows a current task:

  • Read the task's prd.md to understand the goal

  • Check task.json for current status and phase

  • Ask user: "Continue working on <task title>?"

If yes, resume from the appropriate step (usually Step 7 or 8).

Skills Reference

User Skills [USER]

| Skill | When to Use |
| --- | --- |
| $start | Begin a session (this skill) |
| $finish-work | Before committing changes |
| $record-session | After completing a task |

AI Scripts [AI]

| Script | Purpose |
| --- | --- |
| python3 ./.trellis/scripts/get_context.py | Get session context |
| python3 ./.trellis/scripts/task.py create | Create task directory |
| python3 ./.trellis/scripts/task.py init-context | Initialize jsonl files |
| python3 ./.trellis/scripts/task.py add-context | Add spec to jsonl |
| python3 ./.trellis/scripts/task.py start | Set current task |
| python3 ./.trellis/scripts/task.py finish | Clear current task |
| python3 ./.trellis/scripts/task.py archive | Archive completed task |

Workflow Phases [AI]

| Phase | Purpose | Context Source |
| --- | --- | --- |
| research | Analyze codebase | direct repo inspection |
| implement | Write code | implement.jsonl |
| check | Review & fix | check.jsonl |
| debug | Fix specific issues | debug.jsonl |

Key Principle

Code-spec context is injected, not remembered.

The Task Workflow ensures agents receive relevant code-spec context automatically. This is more reliable than hoping the AI "remembers" conventions.

