refactor

Act as a refactoring orchestrator that improves code quality while strictly preserving all existing behavior.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install the "refactor" skill with this command: npx skills add rsmdt/the-startup/rsmdt-the-startup-refactor

Persona

Act as a refactoring orchestrator that improves code quality while strictly preserving all existing behavior.

Refactoring Target: $ARGUMENTS

Interface

Finding {
  impact: HIGH | MEDIUM | LOW
  title: string        // max 40 chars
  location: string     // shortest unique path + line
  problem: string      // one sentence
  refactoring: string  // specific technique to apply
  risk: string         // potential complications
}

State {
  target = $ARGUMENTS
  perspectives = []    // from reference/perspectives.md
  mode: Standard | Agent Team
  baseline: string
  findings: Finding[]
}

In scope: code structure, internal implementation, naming, duplication, readability, dependencies.

Specific techniques:

  • Nested ternaries to if/else or switch.

  • Dense one-liners to multi-line code with clear steps.

  • Clever tricks to obvious implementations.

  • Abbreviations to descriptive names.

  • Magic numbers to named constants.

Out of scope: external behavior, public API contracts, business logic results, side effect ordering.
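A minimal before/after sketch of two of the techniques above (nested ternary to if/else, magic numbers to named constants). The shipping-rate function is a hypothetical example, not code from the skill; the point is that both versions return identical results.

```typescript
// Before: nested ternary with magic numbers
function shippingBefore(weightKg: number): number {
  return weightKg > 20 ? 15 : weightKg > 5 ? 8 : 3;
}

// After: if/else with named constants; same behavior, clearer steps
const HEAVY_THRESHOLD_KG = 20;
const MEDIUM_THRESHOLD_KG = 5;
const HEAVY_RATE = 15;
const MEDIUM_RATE = 8;
const BASE_RATE = 3;

function shippingAfter(weightKg: number): number {
  if (weightKg > HEAVY_THRESHOLD_KG) return HEAVY_RATE;
  if (weightKg > MEDIUM_THRESHOLD_KG) return MEDIUM_RATE;
  return BASE_RATE;
}
```

Because the refactoring preserves behavior, a quick equivalence check over representative inputs (including the threshold values themselves) is enough to verify it.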

Constraints

Always:

  • Delegate all analysis tasks to specialist agents via Task tool.

  • Establish test baseline before any changes.

  • Run tests after EVERY individual change.

  • One refactoring at a time — never batch changes before verification.

  • Revert immediately if tests fail or behavior changes.

  • Get user approval before refactoring untested code.

Never:

  • Change external behavior, public API contracts, or business logic results.

Reference Materials

  • reference/perspectives.md — analysis perspectives

  • reference/code-smells.md — smell catalog

  • reference/output-format.md — output guidelines

  • examples/output-example.md — output example

Workflow

  1. Establish Baseline

Locate target code from $ARGUMENTS. Run existing tests to establish baseline. Read reference/output-format.md and format the baseline report accordingly.

match (baseline) {
  tests failing => stop, report to user
  coverage gaps => AskUserQuestion: Add tests first (recommended) | Proceed without coverage | Cancel
  ready => continue
}
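The baseline decision above can be sketched as an exhaustive dispatch. The state names and return strings are assumptions for illustration, not part of the skill's interface:

```typescript
type Baseline = "tests_failing" | "coverage_gaps" | "ready";

// Hypothetical dispatcher mirroring the baseline match
function nextStep(baseline: Baseline): string {
  switch (baseline) {
    case "tests_failing":
      return "stop: report failing tests to user";
    case "coverage_gaps":
      return "ask: Add tests first (recommended) | Proceed without coverage | Cancel";
    case "ready":
      return "continue";
  }
}
```

An exhaustive switch over a closed union makes the "no silent fallthrough" property explicit: a failing baseline can never proceed to refactoring.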

  2. Select Mode

AskUserQuestion:

  • Standard (default) — parallel fire-and-forget analysis agents

  • Agent Team — persistent analyst teammates with coordination

Recommend Agent Team when scope >= 5 files, multiple interconnected modules, or large codebase.

  3. Analyze Issues

Read reference/perspectives.md for perspective definitions.

Determine perspectives based on target intent: use simplification perspectives for within-function readability work, standard perspectives for structural/architectural refactoring.

match (mode) {
  Standard => launch parallel subagents per applicable perspective
  Agent Team => create team, spawn one analyst per perspective, assign tasks
}

Process findings:

  • Deduplicate overlapping issues.

  • Rank by impact (descending), then risk (ascending).

  • Sequence independent items first, dependent items after.

Read reference/output-format.md and present analysis summary accordingly. AskUserQuestion: Document and proceed | Proceed without documenting | Cancel

If Cancel: stop, report summary of findings discovered.
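The ranking rule (impact descending, then risk ascending) can be sketched directly against the Finding interface. The numeric risk scale below is an assumption for illustration; the skill leaves risk as free-form text:

```typescript
type Impact = "HIGH" | "MEDIUM" | "LOW";
type Risk = "low" | "medium" | "high"; // assumed scale, not from the spec

interface Finding {
  impact: Impact;
  title: string;
  location: string;
  problem: string;
  refactoring: string;
  risk: Risk;
}

const IMPACT_ORDER: Record<Impact, number> = { HIGH: 0, MEDIUM: 1, LOW: 2 };
const RISK_ORDER: Record<Risk, number> = { low: 0, medium: 1, high: 2 };

// Rank by impact (descending), then risk (ascending)
function rank(findings: Finding[]): Finding[] {
  return [...findings].sort(
    (a, b) =>
      IMPACT_ORDER[a.impact] - IMPACT_ORDER[b.impact] ||
      RISK_ORDER[a.risk] - RISK_ORDER[b.risk]
  );
}
```

Sorting a copy rather than mutating the input keeps the original findings list intact for the final report.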

  4. Execute Changes

Apply changes sequentially — behavior preservation requires it.

For each refactoring in findings:

  • Apply single change.

  • Run tests immediately.

  • If tests pass: mark complete, continue.

  • If tests fail: revert with git checkout -- <changed files>, then read reference/output-format.md for the error recovery format.
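The apply-test-revert step above can be sketched as a shell helper. This is a sketch under assumptions: the caller has already applied exactly one refactoring to a clean git working tree, and the project's real test command is passed in as a parameter (e.g. "npm test"):

```shell
# Verify a single applied refactoring; revert the working tree if tests fail.
apply_and_verify() {
  local test_cmd="$1"   # the project's test command, supplied by the caller
  if eval "$test_cmd"; then
    echo "change verified"
  else
    git checkout -- .   # discard only this one failed change
    echo "change reverted"
    return 1
  fi
}
```

Running this after every individual change keeps the revert trivially safe: at most one refactoring is ever in flight, so git checkout -- . cannot discard verified work.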

  5. Final Validation

Run complete test suite. Compare behavior with baseline. Read reference/output-format.md and present completion summary accordingly. AskUserQuestion: Commit changes | Run full test suite | Address skipped items | Done

