[IMPORTANT] Use TaskCreate to break ALL work into small tasks BEFORE starting — including a task for each file read. This prevents context loss from long files. For simple tasks, the AI MUST ask the user whether task breakdown can be skipped.
Prerequisites: MUST READ before executing:

- .claude/skills/shared/scan-and-update-reference-doc-protocol.md
- .claude/skills/shared/understand-code-first-protocol.md
## Quick Summary
Goal: Scan E2E test codebase and populate docs/project-reference/e2e-test-reference.md with architecture, base classes, page objects, step definitions, configuration, and best practices.
Workflow:

- Read — Load current target doc, detect init vs sync mode
- Detect — Identify E2E framework(s) and tech stack
- Scan — Discover E2E patterns via parallel sub-agents
- Report — Write findings to external report file
- Generate — Build/update reference doc from report
- Verify — Validate code examples reference real files
Key Rules:

- Generic — works with any E2E framework (Selenium, Playwright, Cypress, WebdriverIO, Puppeteer, etc.)
- BDD frameworks (SpecFlow, Cucumber, Behave) are E2E — scan feature files, step definitions, contexts
- Detect framework first, then scan for framework-specific patterns
- Every code example must come from actual project files with file:line references
- Use docs/project-config.json e2eTesting section if available for project-specific paths
Be skeptical. Apply critical and sequential thinking. Every claim needs traced proof and a confidence percentage (only include findings with confidence above 80%).
## Scan E2E Tests
### Phase 0: Read & Assess

- Read docs/project-reference/e2e-test-reference.md
- Detect mode: init (placeholder) or sync (populated)
- If sync: extract existing sections and note what's already well-documented
- Read the docs/project-config.json e2eTesting section if it exists — use it as hints for paths and framework
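The init-vs-sync decision can be sketched in shell. The "TODO" placeholder marker below is an assumption — adjust it to whatever marker the real doc template uses:

```shell
# Sketch: decide init vs sync mode for the reference doc.
# Assumes a freshly scaffolded doc still contains a "TODO" marker.
DOC="docs/project-reference/e2e-test-reference.md"
if [ ! -f "$DOC" ] || grep -q "TODO" "$DOC"; then
  MODE="init"   # missing or placeholder: build the doc from scratch
else
  MODE="sync"   # already populated: update existing sections only
fi
echo "mode=$MODE"
```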
### Phase 1: Detect E2E Framework

Detect the E2E framework and tech stack from project files:

#### .NET / C#

Selenium + SpecFlow (BDD):

```shell
grep -rlE "Selenium\.WebDriver|SpecFlow" --include="*.csproj" .
find . -name "*.feature" -type f | head -10
grep -rlE "\[Binding\]|\[Given|\[When|\[Then" --include="*.cs" . | head -10
```

Playwright .NET:

```shell
grep -rl "Microsoft.Playwright" --include="*.csproj" .
```

#### TypeScript / JavaScript

Playwright:

```shell
ls playwright.config.* 2>/dev/null
grep -l "playwright" package.json */package.json 2>/dev/null
```

Cypress:

```shell
ls cypress.config.* 2>/dev/null
grep -l "cypress" package.json */package.json 2>/dev/null
```

WebdriverIO:

```shell
ls wdio.conf.* 2>/dev/null
```

Puppeteer:

```shell
grep -l "puppeteer" package.json */package.json 2>/dev/null
```

#### Python

Selenium + Behave (BDD):

```shell
grep -E "selenium|behave" requirements*.txt setup.py pyproject.toml 2>/dev/null
find . -name "*.feature" -type f | head -10
```

Playwright Python:

```shell
grep "playwright" requirements*.txt pyproject.toml 2>/dev/null
```

#### Java

Selenium + Cucumber (BDD):

```shell
grep -rlE "selenium|cucumber" --include="pom.xml" --include="build.gradle" . 2>/dev/null
find . -name "*.feature" -type f | head -10
```

Output: Detected framework(s), language, BDD framework (if any), test runner
### Phase 2: Execute Scan (Parallel Sub-Agents)

Launch 3 Explore agents in parallel:

#### Agent 1: E2E Framework & Architecture

- Find E2E project structure (test directories, page object directories)
- Find base classes for tests and page objects
- Find DI/startup configuration for test projects
- Find WebDriver/browser management (driver creation, lifecycle, options)
- Find settings/configuration classes (URLs, credentials, timeouts)
- Count test files, feature files, page objects
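The counting step can be sketched as below; the glob patterns are illustrative and should be adapted to the stack detected in Phase 1:

```shell
# Sketch: rough file counts for the scan report (globs are illustrative).
FEATURES=$(find . -name "*.feature" -type f 2>/dev/null | wc -l)
TESTS=$(find . \( -name "*Tests.cs" -o -name "*.spec.ts" -o -name "test_*.py" \) -type f 2>/dev/null | wc -l)
PAGES=$(find . -ipath "*pages*" \( -name "*.cs" -o -name "*.ts" \) -type f 2>/dev/null | wc -l)
echo "features=$FEATURES tests=$TESTS pageObjects=$PAGES"
```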
#### Agent 2: Page Object Model & Components

- Find page object classes and their hierarchy
- Find UI component wrappers (reusable element abstractions)
- Find selector patterns (CSS, data-testid, XPath, BEM)
- Find navigation helpers (page transitions, URL routing)
- Find wait/retry patterns (explicit waits, polling, retry logic)
- Find assertion helpers and validation patterns
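One way to gauge which selector strategy dominates is a tally like the one below; the identifiers are common conventions across frameworks, not guaranteed to appear in any given project:

```shell
# Sketch: tally occurrences of common selector strategies in the codebase.
for PATTERN in "data-testid" "By.CssSelector" "By.XPath" "page.locator" "cy.get"; do
  COUNT=$(grep -r "$PATTERN" --include="*.cs" --include="*.ts" --include="*.js" . 2>/dev/null | wc -l)
  echo "$PATTERN: $COUNT"
done
```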
#### Agent 3: BDD & Test Patterns (if BDD detected)

- Find feature files (.feature) — count, categorize by area
- Find step definition classes — count, list patterns
- Find context/state sharing between steps (ScenarioContext, World, IBddStepsContext)
- Find hooks (Before/After scenario, BeforeAll/AfterAll)
- Find test data patterns (fixtures, factories, unique generators)
- Find test account/credential management patterns
- Find environment configuration (per-env settings, CI headless mode)
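A grep sketch for locating hook definitions across the common BDD frameworks; hook names vary by framework, so extend the pattern as needed:

```shell
# Sketch: find BDD hook definitions (SpecFlow, Cucumber, Behave spellings).
HOOK_RE='\[BeforeScenario\]|\[AfterScenario\]|@Before|@After|before_scenario|after_scenario'
grep -rnE "$HOOK_RE" --include="*.cs" --include="*.java" --include="*.py" . 2>/dev/null | head -20
```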
Write all findings to: plans/reports/scan-e2e-tests-{YYMMDD}-{HHMM}-report.md
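The {YYMMDD}-{HHMM} stamp in the report path can be produced with date:

```shell
# Sketch: build the timestamped report path.
STAMP=$(date +%y%m%d-%H%M)
REPORT="plans/reports/scan-e2e-tests-${STAMP}-report.md"
echo "$REPORT"
```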
### Phase 3: Generate Reference Doc

Build docs/project-reference/e2e-test-reference.md with these sections:

#### Required Sections (all frameworks)

- Architecture Overview — Layer diagram, project dependencies
- Project Structure — Directory tree with annotations
- Key Dependencies — Package versions table
- Base Classes — Test/page object hierarchies with code examples
- Page Object Pattern — How to create page objects, component wrappers
- Wait & Assertion Patterns — Resilient waits, retry, assertion helpers
- Navigation & Page Discovery — URL routing, page transitions
- Configuration — Settings files, environment variants, CI setup
- Running Tests — Commands for all, filtered, headed, CI modes
- Best Practices — Project-specific conventions
#### Conditional Sections (framework-specific)

- BDD Pattern (SpecFlow/Cucumber/Behave) — Feature file conventions, step definitions, context sharing, tags
- Test Account System (if credential management found) — Account types, numbered variants
- Common Patterns (if shared steps/helpers found) — Login flows, error assertions, reusable steps
- Environment Variants (if multi-env found) — Abstract/concrete page pattern, env-specific configs
#### Section Template

Each section should include:

- Brief description of the pattern
- Code example from actual project files (with file:line reference)
- Key class/method names for searchability
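A hypothetical section following this template might look like the sketch below; the class, method, and file names are invented for illustration (real sections must cite actual project files):

```markdown
### Wait Patterns

All page objects wait through `WaitHelper.Until(...)` rather than raw
sleeps, so flaky timing is retried under a shared timeout policy.

    // src/E2E/Support/WaitHelper.cs:18
    public static T Until<T>(Func<T> condition, TimeSpan? timeout = null) { ... }

Key names: `WaitHelper`, `Until`, `DefaultTimeout`
```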
### Phase 4: Update project-config.json

If docs/project-config.json exists, update/create the e2eTesting section:

```json
{
  "e2eTesting": {
    "framework": "<detected>",
    "language": "<detected>",
    "guideDoc": "docs/project-reference/e2e-test-reference.md",
    "runCommands": { ... },
    "bestPractices": [ ... ],
    "entryPoints": [ ... ],
    "stats": { "featureFiles": N, "stepDefinitionFiles": N, "featureAreas": N },
    "dependencies": { ... },
    "architecture": { ... }
  }
}
```
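One way to apply the update non-destructively is jq, assuming it is installed; the detected values below are placeholders, and a temp file stands in for the real docs/project-config.json:

```shell
# Sketch: merge a detected e2eTesting block into project-config.json.
# A temp file stands in here; point CONFIG at the real config instead.
CONFIG=$(mktemp)
printf '{"name": "demo-project"}\n' > "$CONFIG"
jq '.e2eTesting = {
      framework: "playwright",
      language: "typescript",
      guideDoc: "docs/project-reference/e2e-test-reference.md"
    }' "$CONFIG" > "$CONFIG.new" && mv "$CONFIG.new" "$CONFIG"
jq -r '.e2eTesting.framework' "$CONFIG"
```

The assignment form preserves every existing key in the file and only replaces the e2eTesting section.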
### Phase 5: Verify

- Spot-check 3-5 code examples — do the file:line references exist?
- Verify class names match actual code (grep for each)
- Verify dependency versions against .csproj / package.json / requirements.txt
- Verify file counts (feature files, step defs, page objects) are accurate
- Run schema validation if project-config.json was updated
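The spot-check can be partially automated; this sketch assumes references appear in the doc as plain path:line tokens:

```shell
# Sketch: extract file:line references from the doc and check the files exist.
DOC="docs/project-reference/e2e-test-reference.md"
REF_RE='[A-Za-z0-9_./-]+\.(cs|ts|js|py|java|feature):[0-9]+'
grep -oE "$REF_RE" "$DOC" 2>/dev/null | sort -u | head -5 \
  | while IFS=: read -r FILE LINE; do
      if [ -f "$FILE" ]; then echo "OK   $FILE:$LINE"; else echo "MISS $FILE:$LINE"; fi
    done
```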
### Output

Report what changed:

- Sections created vs updated
- Framework detected and version
- File counts discovered
- Any patterns not documented (gaps)
## IMPORTANT Task Planning Notes (MUST FOLLOW)

- Always plan and break work into many small todo tasks
- Always add a final review todo task to verify work quality and identify fixes/enhancements