test

Auto-detects your test framework, runs the full suite, diagnoses failures, and fixes them. Use after making code changes to verify tests pass, or when you encounter test failures you want diagnosed and fixed automatically.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "test" with this command: npx skills add skinnyandbald/fish-skills/skinnyandbald-fish-skills-test

Run Tests and Fix Failures

Detect test framework, run tests, diagnose and fix failures automatically.

Instructions

Step 1: Detect Test Framework

Check for test configuration files in this order:

| File | Framework | Run Command |
| --- | --- | --- |
| `vitest.config.*` or `vite.config.*` with `test` | Vitest | `npx vitest run` |
| `jest.config.*` or `package.json` `jest` key | Jest | `npx jest` |
| `phpunit.xml` | PHPUnit | `./vendor/bin/phpunit` |
| `pest.php` or Pest in `composer.json` | Pest | `./vendor/bin/pest` |
| `pytest.ini` / `pyproject.toml` `[tool.pytest]` | pytest | `pytest` |
| `go.mod` with `_test.go` files | Go | `go test ./...` |
| `Cargo.toml` | Rust | `cargo test` |
| `Gemfile` with rspec | RSpec | `bundle exec rspec` |

Also check `package.json` scripts for `test`, `test:run`, `test:unit`, and `test:integration`.

If CLAUDE.md specifies test commands, use those instead.
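The detection order above can be sketched as a shell function. This is a minimal sketch, not part of the skill itself: it checks only top-level config files, and the name `detect_test_command` is illustrative.

```shell
#!/bin/sh
# Minimal sketch of the detection order above. Checks only top-level
# config files; nested configs and monorepos need more work.
detect_test_command() {
  dir="${1:-.}"
  if ls "$dir"/vitest.config.* >/dev/null 2>&1; then
    echo "npx vitest run"          # Vitest (a vite.config.* with a test block also counts)
  elif ls "$dir"/jest.config.* >/dev/null 2>&1 || grep -q '"jest"' "$dir/package.json" 2>/dev/null; then
    echo "npx jest"                # Jest
  elif [ -f "$dir/phpunit.xml" ]; then
    echo "./vendor/bin/phpunit"    # PHPUnit
  elif [ -f "$dir/pest.php" ] || grep -qi 'pest' "$dir/composer.json" 2>/dev/null; then
    echo "./vendor/bin/pest"       # Pest
  elif [ -f "$dir/pytest.ini" ] || grep -q '\[tool\.pytest' "$dir/pyproject.toml" 2>/dev/null; then
    echo "pytest"                  # pytest
  elif [ -f "$dir/go.mod" ] && ls "$dir"/*_test.go >/dev/null 2>&1; then
    echo "go test ./..."           # Go
  elif [ -f "$dir/Cargo.toml" ]; then
    echo "cargo test"              # Rust
  elif grep -q rspec "$dir/Gemfile" 2>/dev/null; then
    echo "bundle exec rspec"       # RSpec
  else
    echo "unknown"
  fi
}
```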

Step 2: Run Tests

Run the detected test command with `run_in_background=true`. Use verbose output flags where available.
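Where the runner supports it, the verbose variants of the commands above look like this (flags taken from each tool's current CLI; confirm against your installed versions):

```shell
npx vitest run --reporter=verbose         # Vitest: one line per test
npx jest --verbose                        # Jest: report each test individually
./vendor/bin/phpunit --testdox            # PHPUnit: human-readable test names
pytest -v                                 # pytest: one line per test
go test -v ./...                          # Go: print test names and log output
cargo test -- --nocapture                 # Rust: show stdout from passing tests
bundle exec rspec --format documentation  # RSpec: nested doc-style output
```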

Step 3: Analyze Results

If all tests pass, report the summary and stop.

If tests fail, for each failure:

  1. Read the failing test to understand what it expects
  2. Read the source code being tested
  3. Classify the failure:
    • Test is wrong — test expectations don't match intended behavior (update test)
    • Source is wrong — code has a bug (fix source)
    • Both need updating — behavior changed intentionally (update both)
    • Environment issue — missing dependency, stale cache, wrong config (fix setup)

Step 4: Fix and Re-run

For each failure:

  1. Make the minimal fix
  2. Re-run the specific failing test to verify
  3. Move to the next failure

After all individual fixes, run the full suite once more to catch regressions.
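Re-running a single failing test (step 2 above) uses each framework's filter flag; the paths and test names here are placeholders:

```shell
npx vitest run src/foo.test.ts -t "name of failing test"  # Vitest
npx jest src/foo.test.ts -t "name of failing test"        # Jest
./vendor/bin/phpunit --filter testMethodName              # PHPUnit (Pest accepts --filter too)
pytest tests/test_foo.py::test_name                       # pytest node ID
go test -run 'TestName' ./pkg/...                         # Go (regex match)
cargo test test_name                                      # Rust (substring match)
bundle exec rspec spec/foo_spec.rb:42                     # RSpec (file:line)
```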

Step 5: Report

## Test Results

**Framework:** [detected framework]
**Command:** [command used]
**Result:** X passed, Y fixed, Z remaining

### Fixed
- [test name]: [what was wrong and what was fixed]

### Still Failing (if any)
- [test name]: [diagnosis and why it couldn't be auto-fixed]

Rules

  • Never delete or skip a failing test without explicit user approval
  • Prefer fixing source over fixing tests (tests define expected behavior)
  • If a fix requires changing more than ~20 lines, show the proposed change and ask before applying
  • If the same root cause produces multiple failures, fix the root cause once

