e2e-testing

AI-powered E2E testing for any app — Flutter, React Native, iOS, Android, Electron, Tauri, KMP, .NET MAUI. Test 8 platforms with natural language through MCP. No test code needed. Just describe what to test and the agent sees screenshots, taps elements, enters text, scrolls, and verifies UI state automatically.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy this command and send it to your AI assistant to install the skill:

Install skill "e2e-testing" with this command: npx skills add ai-dashboad/flutter-skill/ai-dashboad-flutter-skill-e2e-testing

AI E2E Testing — 8 Platforms, Zero Test Code

Give your AI agent eyes and hands inside any running app.

flutter-skill is an MCP server that connects AI agents to running apps. The agent can see screenshots, tap elements, enter text, scroll, navigate, inspect UI trees, and verify state — all through natural language.

Supported Platforms

| Platform | Setup |
| --- | --- |
| Flutter (iOS/Android/Web) | flutter pub add flutter_skill |
| React Native | npm install flutter-skill-react-native |
| Electron | npm install flutter-skill-electron |
| iOS (Swift/UIKit) | SPM: FlutterSkillSDK |
| Android (Kotlin) | Gradle: flutter-skill-android |
| Tauri (Rust) | cargo add flutter-skill-tauri |
| KMP Desktop | Gradle dependency |
| .NET MAUI | NuGet package |

Test scorecard: 562/567 (99.1%) across all 8 platforms.

Install

# npm (recommended)
npm install -g flutter-skill

# Homebrew
brew install ai-dashboad/flutter-skill/flutter-skill

# Or download binary from GitHub Releases

MCP Configuration

Add to your AI agent's MCP config (Claude Desktop, Cursor, Windsurf, OpenClaw, etc.):

{
  "mcpServers": {
    "flutter-skill": {
      "command": "flutter-skill",
      "args": ["server"]
    }
  }
}
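If your MCP client stores this config as JSON on disk (Claude Desktop does, at a path that varies by OS), the entry above can be merged in programmatically instead of edited by hand. A minimal Python sketch, assuming the config has already been loaded into a dict with `json.load`; the helper name is ours, not part of flutter-skill:

```python
import json

def add_flutter_skill_server(config: dict) -> dict:
    """Merge the flutter-skill entry into an MCP config dict,
    preserving any servers that are already registered."""
    servers = config.setdefault("mcpServers", {})
    servers["flutter-skill"] = {"command": "flutter-skill", "args": ["server"]}
    return config

# Example: an existing config that already registers one unrelated server.
existing = {"mcpServers": {"other": {"command": "other-tool"}}}
merged = add_flutter_skill_server(existing)
print(json.dumps(merged, indent=2))
```

Write the result back with `json.dump` to the config file your client actually reads.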

OpenClaw

If using OpenClaw, add to your gateway config under mcp.servers:

mcp:
  servers:
    flutter-skill:
      command: flutter-skill
      args: ["server"]

Quick Start

1. Initialize your app (one-time)

cd /path/to/your/app
flutter-skill init

Auto-detects project type and patches your app with the testing bridge.

2. Launch and connect

flutter-skill launch .

3. Test with natural language

Tell the agent what to test:

"Test the login flow — enter admin@test.com and password123, tap Login, verify Dashboard appears"

The agent will automatically:

  1. screenshot() → see the current screen
  2. inspect_interactive() → discover all tappable/typeable elements with semantic refs
  3. tap(ref: "button:Login") → tap using stable semantic reference
  4. enter_text(ref: "input:Email", text: "admin@test.com") → type into field
  5. wait_for_element(key: "Dashboard") → verify navigation
  6. screenshot() → confirm final state

Available MCP Tools

Core Actions

| Tool | Description |
| --- | --- |
| screenshot | Capture current screen as image |
| tap | Tap element by key, text, ref, or coordinates |
| enter_text | Type text into a field |
| scroll | Scroll up/down/left/right |
| swipe | Swipe gesture between points |
| long_press | Long-press an element |
| drag | Drag from point A to B |
| go_back | Navigate back |
| press_key | Send keyboard key events |

Inspection (v0.8.0+)

| Tool | Description |
| --- | --- |
| inspect_interactive | NEW: get all interactive elements with semantic ref IDs |
| get_elements | List all elements on screen |
| find_element | Find element by key or text |
| wait_for_element | Wait for element to appear (with timeout) |
| get_element_properties | Get detailed properties of an element |

Text Manipulation

| Tool | Description |
| --- | --- |
| set_text | Replace text in a field |
| clear_text | Clear a text field |
| get_text | Read text content |

App Control

| Tool | Description |
| --- | --- |
| get_logs | Read app logs |
| clear_logs | Clear log buffer |

Semantic Refs (v0.8.0)

inspect_interactive returns elements with stable semantic reference IDs:

button:Login          → Login button
input:Email           → Email text field
toggle:Dark Mode      → Dark mode switch
button:Submit[1]      → Second Submit button (disambiguated)

Format: {role}:{content}[{index}]

7 roles: button, input, toggle, slider, select, link, item

Use refs for reliable element targeting that survives UI changes:

tap(ref: "button:Login")
enter_text(ref: "input:Email", text: "test@example.com")
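The ref grammar is simple enough to parse directly, which is useful if you post-process inspect_interactive output yourself. A minimal Python sketch of a parser for the {role}:{content}[{index}] format described above (illustrative only, not part of the flutter-skill API):

```python
import re
from typing import NamedTuple

# The 7 roles and the {role}:{content}[{index}] grammar come from the docs
# above; this parser itself is an illustrative sketch.
REF_RE = re.compile(r"^(button|input|toggle|slider|select|link|item):(.*?)(?:\[(\d+)\])?$")

class Ref(NamedTuple):
    role: str
    content: str
    index: int  # 0 for the first (or only) matching element

def parse_ref(ref: str) -> Ref:
    m = REF_RE.match(ref)
    if not m:
        raise ValueError(f"not a semantic ref: {ref!r}")
    role, content, idx = m.groups()
    return Ref(role, content, int(idx) if idx else 0)

print(parse_ref("button:Submit[1]"))  # → Ref(role='button', content='Submit', index=1)
```

Refs without a trailing [index] parse with index 0, matching the disambiguation convention shown above.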

Testing Workflow

Basic Flow

screenshot() → inspect_interactive() → tap/enter_text → screenshot() → verify

Comprehensive Testing

"Explore every screen of this app. Test all buttons, forms, navigation, and edge cases. Report any bugs you find."

The agent will systematically:

  • Navigate every screen via tab bars, menus, links
  • Interact with every interactive element
  • Test form validation (empty, invalid, valid inputs)
  • Test edge cases (long text, special characters, emoji)
  • Verify navigation flows (forward, back, deep links)
  • Take screenshots at each step for verification
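For the edge-case pass, it helps to hand the agent a concrete set of inputs to push through each field. A small illustrative Python sketch; the case names and the cases_for helper are hypothetical, not a flutter-skill API:

```python
# Illustrative edge-case inputs for form testing. Feed each value through
# enter_text and check that validation messages appear where expected.
EDGE_CASES = {
    "empty": "",
    "whitespace": "   ",
    "long": "x" * 500,
    "special": "'\";<script>alert(1)</script>",
    "emoji": "🙂🚀 name",
    "unicode": "Ünïcødé テスト",
}

def cases_for(field_role: str) -> dict:
    """Pick the cases worth trying for a given semantic-ref role;
    only text inputs get the empty-string case."""
    if field_role == "input":
        return EDGE_CASES
    return {k: v for k, v in EDGE_CASES.items() if k != "empty"}

print(sorted(cases_for("input")))
```

Extend the dict with whatever your app's validation rules care about (max lengths, RTL text, null-like strings, and so on).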

Example Prompts

Quick smoke test:

"Tap every tab and screenshot each page"

Form testing:

"Fill the registration form with edge case data — emoji name, very long email, short password — and verify error messages"

Navigation:

"Test the complete user journey: sign up → create post → like → comment → delete → sign out"

Accessibility:

"Check every screen for missing labels, small tap targets, and contrast issues"

Tips

  1. Always start with screenshot() — see before you act
  2. Use inspect_interactive() to discover elements — don't guess at selectors
  3. Prefer ref: selectors — more stable than text or coordinates
  4. wait_for_element() after navigation — apps need time to transition
  5. Screenshot after every action — verify the expected effect
  6. Use press_key for keyboard shortcuts — test keyboard navigation

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
