Test-Driven Development (TDD)

Philosophy

Core principle: If you didn't watch the test fail, you don't know if it tests the right thing.

Tests verify behavior through public interfaces, not implementation details. Code can change entirely; tests shouldn't. A good test reads like a specification — "user can checkout with valid cart" tells you exactly what capability exists. These tests survive refactors because they don't care about internal structure.
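As a sketch of what such a specification-style test looks like (`createCart` and `checkout` are hypothetical names standing in for whatever public API the module exposes):

// Exercises only the public interface, so it survives internal refactors.
test('user can checkout with valid cart', async () => {
  const cart = createCart([{ sku: 'book-1', qty: 1 }]);
  const order = await checkout(cart);
  expect(order.status).toBe('confirmed');
});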

Violating the letter of the rules is violating the spirit of the rules.

See tests.md for good/bad examples, mocking.md for mocking guidelines, interface-design.md for testable design, and deep-modules.md for module design.

When to Use

Always:

  • New features

  • Bug fixes

  • Refactoring

  • Behavior changes

Exceptions (ask your human partner):

  • Throwaway prototypes

  • Generated code

  • Configuration files

Thinking "skip TDD just this once"? Stop. That's rationalization.

The Iron Law

NO PRODUCTION CODE WITHOUT A FAILING TEST FIRST

Write code before the test? Delete it. Start over.

No exceptions:

  • Don't keep it as "reference"

  • Don't "adapt" it while writing tests

  • Don't look at it

  • Delete means delete

Implement fresh from tests. Period.

Anti-Pattern: Horizontal Slices

DO NOT write all tests first, then all implementation. This is "horizontal slicing" — treating RED as "write all tests" and GREEN as "write all code."

This produces bad tests:

  • Tests written in bulk test imagined behavior, not actual behavior

  • You end up testing the shape of things (data structures, function signatures) rather than user-facing behavior

  • Tests become insensitive to real changes — they pass when behavior breaks, fail when behavior is fine

  • You outrun your headlights, committing to test structure before understanding the implementation

Correct approach: Vertical slices via tracer bullets. One test → one implementation → repeat.

WRONG (horizontal):
RED: test1, test2, test3, test4, test5
GREEN: impl1, impl2, impl3, impl4, impl5

RIGHT (vertical):
RED→GREEN: test1→impl1
RED→GREEN: test2→impl2
RED→GREEN: test3→impl3

Workflow

  1. Planning

Before writing any code:

  • Confirm with user what interface changes are needed

  • Confirm which behaviors to test (prioritize — you can't test everything)

  • Identify opportunities for deep modules (small interface, deep implementation)

  • Design interfaces for testability

  • List behaviors to test (not implementation steps)

  • Get user approval on the plan

Ask: "What should the public interface look like? Which behaviors are most important to test?"

  2. Tracer Bullet

Write ONE test that confirms ONE thing about the system:

RED: Write test for first behavior → test fails
GREEN: Write minimal code to pass → test passes

This is your tracer bullet — proves the path works end-to-end.
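For the retry feature used in the RED/GREEN examples below, the tracer bullet might be nothing more than the happy path (a sketch; `retryOperation` does not exist yet, which is the point):

// First test, written before any implementation exists.
test('returns the result when the operation succeeds immediately', async () => {
  const result = await retryOperation(async () => 'success');
  expect(result).toBe('success');
});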

  3. Red-Green-Refactor Loop

For each remaining behavior:

RED — Write Failing Test

Write one minimal test showing what should happen.

<Good>

test('retries failing operation until it succeeds', async () => {
  let attempts = 0;
  const operation = async () => {
    attempts += 1;
    if (attempts < 3) throw new Error('transient failure');
    return 'success';
  };

  const result = await retryOperation(operation);

  expect(result).toBe('success');
  expect(attempts).toBe(3);
});

Clear name, tests real behavior, one thing
</Good>

<Bad>

test('retry works', async () => {
  const mock = jest.fn()
    .mockRejectedValueOnce(new Error())
    .mockRejectedValueOnce(new Error())
    .mockResolvedValueOnce('success');
  await retryOperation(mock);
  expect(mock).toHaveBeenCalledTimes(3);
});

Vague name, tests mock not code
</Bad>

Requirements:

- One behavior

- Clear name

- Real code (no mocks unless unavoidable — see mocking.md)

Verify RED — Watch It Fail

MANDATORY. Never skip.

npm test path/to/test.test.ts

Confirm:

- Test fails (not errors)

- Failure message is expected

- Fails because feature missing (not typos)

Test passes? You're testing existing behavior. Fix test.
Test errors? Fix error, re-run until it fails correctly.

GREEN — Minimal Code

Write simplest code to pass the test.

Don't add features, refactor other code, or "improve" beyond the test.
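Continuing the retry example, a minimal implementation might look like this sketch; the retry count is hard-coded at 3 because that is all the current test demands:

async function retryOperation<T>(operation: () => Promise<T>): Promise<T> {
  let lastError: unknown;
  // Three attempts: exactly enough to satisfy the current test, nothing more.
  for (let attempt = 0; attempt < 3; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
    }
  }
  throw lastError;
}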

Verify GREEN — Watch It Pass

MANDATORY.

npm test path/to/test.test.ts

Confirm:

- Test passes

- Other tests still pass

- Output pristine (no errors, warnings)

Test fails? Fix code, not test.
Other tests fail? Fix now.

REFACTOR — Clean Up

After green only. Look for refactor candidates:

- Remove duplication

- Improve names

- Extract helpers

- Deepen modules (move complexity behind simple interfaces)

Keep tests green. Don't add behavior. Never refactor while RED.
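For instance, duplicated test setup is a classic extraction candidate. A sketch with a hypothetical User type and buildUser helper:

interface User { id: string; email: string; active: boolean; }

// Extracted helper: removes setup duplication across tests.
// Tests stay green; no behavior is added.
function buildUser(overrides: Partial<User> = {}): User {
  return { id: 'u1', email: 'user@example.com', active: true, ...overrides };
}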

Repeat

Next failing test for next behavior.

Cycle Checklist

[ ] Test describes behavior, not implementation
[ ] Test uses public interface only
[ ] Test would survive internal refactor
[ ] Watched test fail before implementing
[ ] Test failed for expected reason (feature missing, not typo)
[ ] Code is minimal for this test
[ ] No speculative features added
[ ] All tests pass
[ ] Output pristine (no errors, warnings)

Why Order Matters

"I'll write tests after to verify it works"

Tests written after code pass immediately. Passing immediately proves nothing — might test the wrong thing, might test implementation not behavior, might miss edge cases you forgot. You never saw it catch the bug.

Test-first forces you to see the test fail, proving it actually tests something.

"I already manually tested all the edge cases"

Manual testing is ad-hoc. No record of what you tested, can't re-run when code changes, easy to forget cases under pressure. "It worked when I tried it" ≠ comprehensive.

Automated tests are systematic. They run the same way every time.

"Deleting X hours of work is wasteful"

Sunk cost fallacy. The time is already gone. Delete and rewrite with TDD (X more hours, high confidence) vs. keep it and add tests after (30 min, low confidence, likely bugs). The "waste" is keeping code you can't trust.

"TDD is dogmatic, being pragmatic means adapting"

TDD IS pragmatic: finds bugs before commit, prevents regressions, documents behavior, enables refactoring. "Pragmatic" shortcuts = debugging in production = slower.

"Tests after achieve the same goals — it's spirit not ritual"

No. Tests-after answer "What does this do?" Tests-first answer "What should this do?" Tests-after are biased by your implementation. You test what you built, not what's required. Tests-first force edge case discovery before implementing.

Common Rationalizations

| Excuse | Reality |
| --- | --- |
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
| "Already manually tested" | Ad-hoc ≠ systematic. No record, can't re-run. |
| "Deleting X hours is wasteful" | Sunk cost fallacy. Keeping unverified code is technical debt. |
| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
| "Need to explore first" | Fine. Throw away exploration, start with TDD. |
| "Test hard = design unclear" | Listen to test. Hard to test = hard to use. Redesign with interface-design.md. |
| "TDD will slow me down" | TDD faster than debugging. Pragmatic = test-first. |
| "Manual test faster" | Manual doesn't prove edge cases. You'll re-test every change. |
| "Existing code has no tests" | You're improving it. Add tests for existing code. |

Red Flags — STOP and Start Over

- Code before test

- Test after implementation

- Test passes immediately

- Can't explain why test failed

- Tests added "later"

- Rationalizing "just this once"

- "I already manually tested it"

- "Tests after achieve the same purpose"

- "It's about spirit not ritual"

- "Keep as reference" or "adapt existing code"

- "Already spent X hours, deleting is wasteful"

- "TDD is dogmatic, I'm being pragmatic"

- "This is different because..."

All of these mean: Delete code. Start over with TDD.

Example: Bug Fix

Bug: Empty email accepted

RED

test('rejects empty email', async () => {
  const result = await submitForm({ email: '' });
  expect(result.error).toBe('Email required');
});

Verify RED

$ npm test
FAIL: expected 'Email required', got undefined

GREEN

function submitForm(data: FormData) {
  if (!data.email?.trim()) {
    return { error: 'Email required' };
  }
  // ...
}

Verify GREEN

$ npm test
PASS

REFACTOR
Extract validation for multiple fields if needed.
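A sketch of where that extraction could go (the FormData shape here is hypothetical; only the email check is demanded by tests so far):

// Hypothetical shape; the real FormData type lives in the production code.
type FormData = { email?: string; name?: string };

// Extracted validator: one place for required-field checks.
function requireField(data: FormData, field: keyof FormData, label: string) {
  if (!data[field]?.trim()) {
    return { error: `${label} required` };
  }
  return null;
}

function submitForm(data: FormData) {
  const error = requireField(data, 'email', 'Email');
  if (error) return error;
  // ...
}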

When Stuck

| Problem | Solution |
| --- | --- |
| Don't know how to test | Write wished-for API. Write assertion first. Ask your human partner. |
| Test too complicated | Design too complicated. Simplify interface. See interface-design.md. |
| Must mock everything | Code too coupled. See mocking.md. Use dependency injection. |
| Test setup huge | Extract helpers. Still complex? Simplify design. See deep-modules.md. |
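On the "must mock everything" row: dependency injection usually just means passing collaborators in rather than importing them. A sketch with a hypothetical Clock dependency:

interface Clock { now(): number; }

// The clock is injected, so tests can pass a fake instead of mocking modules.
function isExpired(expiresAt: number, clock: Clock = { now: () => Date.now() }): boolean {
  return clock.now() > expiresAt;
}

// In a test: expect(isExpired(100, { now: () => 101 })).toBe(true);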

Debugging Integration

Bug found? Write failing test reproducing it. Follow TDD cycle. Test proves fix and prevents regression.

Never fix bugs without a test.

Verification Checklist

Before marking work complete:

[ ] Every new function/method has a test
[ ] Watched each test fail before implementing
[ ] Each test failed for expected reason (feature missing, not typo)
[ ] Wrote minimal code to pass each test
[ ] All tests pass
[ ] Output pristine (no errors, warnings)
[ ] Tests use real code (mocks only at system boundaries)
[ ] Edge cases and errors covered
[ ] Tests verify through public interfaces, not implementation details

Can't check all boxes? You skipped TDD. Start over.

Final Rule

Production code → test exists and failed first
Otherwise → not TDD

No exceptions without your human partner's permission.
