fp-generate

Generate E2E test files from E2E_TESTS.md specifications


Install skill "fp-generate" with this command: npx skills add endorhq/flightplanner/endorhq-flightplanner-fp-generate

Generate E2E Tests

Generate E2E test files from E2E_TESTS.md specifications. This creates new test files for all specified suites, overwriting any existing generated tests.

Additional instructions from the user: "$ARGUMENTS". Ignore if empty.

Warning

Before proceeding, warn the user:

Warning: This skill will delete all existing E2E test files and regenerate them from scratch based on the E2E_TESTS.md specifications. Any manual edits or customizations in existing test files will be lost.

If you only need to bring existing tests in sync with updated specs — without discarding them — use /fp-update instead. That skill analyzes the diff between specs and tests, and applies incremental changes.

Ask the user to confirm they want to proceed with full regeneration. Do not continue unless they explicitly confirm.

This command has four phases. Complete all four in order.

Phase 1: Discover

  1. Find all E2E_TESTS.md files by searching recursively from the project root.
  2. If E2E tests already exist, run them first to establish a baseline of current pass/fail status. Note any pre-existing failures.
  3. Read the root-level spec first to understand project-wide testing constraints.
  4. Read each package-level E2E_TESTS.md and extract all suites, features, preconditions, postconditions, and metadata.
  5. Identify the project's:
    • Programming language and test framework (from package files, existing tests, config)
    • Existing test utilities (look for e2e-utils files)
    • Test runner configuration (look for E2E-specific config files)
    • Existing patterns and conventions

Phase 2: Plan

For each suite in each spec:

  1. Determine the target test file path following the project's naming conventions.
  2. Map each feature to a test case, grouping by category.
  3. Map Preconditions sections (at whatever heading level they appear) to per-test or per-suite setup hooks (the framework's equivalent of running code before each test or before all tests in a suite).
  4. Map Postconditions sections (at whatever heading level they appear) to assertions and per-test teardown hooks (the framework's equivalent of running cleanup code after each test).
  5. Identify shared utilities needed (cleanup, mock helpers, git init).
  6. Plan the mock strategy for each external dependency.
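The mapping in steps 1–4 can be sketched as a pure planning function. The spec shape, plan shape, and `<name>.e2e.test.ts` naming convention below are assumptions for illustration; the real shapes come from whatever the parsed E2E_TESTS.md files contain:

```typescript
// Illustrative shape of one parsed suite from an E2E_TESTS.md file.
interface SpecSuite {
  name: string;
  features: { title: string; category: string }[];
  preconditions: string[];
  postconditions: string[];
}

// Illustrative shape of the plan for one generated test file.
interface TestFilePlan {
  filePath: string;
  setupSteps: string[]; // becomes the per-test setup hook
  teardownChecks: string[]; // becomes assertions and teardown
  casesByCategory: Map<string, string[]>;
}

export function planSuite(suite: SpecSuite, testDir: string): TestFilePlan {
  // Kebab-case the suite name to follow a <name>.e2e.test.ts convention.
  const slug = suite.name.toLowerCase().replace(/[^a-z0-9]+/g, "-");
  const casesByCategory = new Map<string, string[]>();
  for (const f of suite.features) {
    const bucket = casesByCategory.get(f.category) ?? [];
    bucket.push(f.title);
    casesByCategory.set(f.category, bucket);
  }
  return {
    filePath: `${testDir}/${slug}.e2e.test.ts`,
    setupSteps: suite.preconditions,
    teardownChecks: suite.postconditions,
    casesByCategory,
  };
}
```

Keeping planning separate from generation makes the plan easy to present to the user before any files are touched.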

Present the plan to the user:

  • List of files to be created
  • Number of test cases per file
  • Any shared utilities that need to be created or updated

Ask the user if they want to proceed or adjust the plan.

Phase 3: Generate

Delete existing autogenerated E2E tests

Before generating anything, find and delete all existing autogenerated E2E test files. Autogenerated files are identified by the presence of an autogenerated header comment. Remove these files so that tests are created from a clean slate.

Create new test files

For each planned test file:

  1. Add the autogenerated header with references to the source spec files.
  2. Import required dependencies and shared utilities.
  3. Create the outer suite/group block for the suite (e.g., describe() in vitest/jest, suite in other frameworks).
  4. Implement the per-test setup hook (e.g., beforeEach in vitest/jest) from preconditions:
    • Create temp directory
    • Save and set environment variables
    • Initialize git repository if needed
    • Set up mock tools
  5. Implement the per-test teardown hook (e.g., afterEach in vitest/jest) for cleanup:
    • Restore CWD and environment variables
    • Terminate background processes
    • Safe cleanup of temp directory
  6. For each feature, create a test case:
    • Add category as a comment
    • Implement Setup-Execute-Verify pattern
    • Handle skip metadata with skipIf and a comment documenting why the test is skipped
  7. Add the autogenerated footer.
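Put together, a generated file might take the shape below. Rendering it as a template string keeps the sketch self-contained; the header text, helper names (`makeTempDir`, `safeCleanup`), and `e2e-utils` path are assumptions, and the real output must follow the project's own framework and conventions:

```typescript
// Render the skeleton of one generated test file (vitest flavor).
export function renderTestFile(suiteName: string, features: string[]): string {
  const cases = features
    .map(
      (title) => `  it(${JSON.stringify(title)}, async () => {
    // Setup – Execute – Verify
  });`,
    )
    .join("\n\n");

  return `// AUTO-GENERATED by fp-generate — do not edit by hand.
// Source spec: E2E_TESTS.md
import { describe, it, beforeEach, afterEach } from "vitest";
import { makeTempDir, safeCleanup } from "./e2e-utils";

describe(${JSON.stringify(suiteName)}, () => {
  let tmp: string;

  beforeEach(async () => {
    tmp = await makeTempDir(); // preconditions go here
  });

  afterEach(async () => {
    await safeCleanup(tmp); // postconditions + cleanup go here
  });

${cases}
});
// END AUTO-GENERATED
`;
}
```

The header and footer markers are what the deletion pass in this phase keys on, so they must appear verbatim in every generated file.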

Generation Rules

  • Follow existing patterns: If the project already has E2E tests, match their style exactly (imports, assertion style, helper usage).
  • Create shared utilities: If e2e-utils doesn't exist, create it with safeCleanup, mock helpers, and common constants.
  • Use real implementations: Real file systems, real git repos. Mock only external CLI tools and services.
  • Sequential execution: Configure the test runner for sequential execution (single fork) if not already configured.
  • Framework adaptation: Use the project's actual test framework, not pseudocode. Adapt all patterns to the real language and tooling.

Phase 4: Verify

  1. Run the E2E tests for each package that was generated.
  2. If tests fail:
    • Read the failure output
    • Fix the test implementation (not the spec)
    • Re-run until passing
  3. Run the project's formatter/linter on generated files.
  4. Present a summary:
    • Files created
    • Test cases generated per file
    • Category breakdown
    • Any issues encountered and how they were resolved

