test-generator

Mode: Cognitive/Prompt-Driven — No standalone utility script; use via agent context.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

To install the "test-generator" skill, copy this command and send it to your AI assistant:

npx skills add oimiragieo/agent-studio/oimiragieo-agent-studio-test-generator

Step 1: Identify Test Type

Determine what type of test is needed:

  • Unit Test: Component/function testing

  • Integration Test: Service/API integration

  • E2E Test: Full user flow testing

  • API Test: Endpoint testing

Step 2: Analyze Target Code

Examine the code under test (use Read/Grep/Glob in parallel):

  • Read component/function code

  • Identify test cases

  • Understand dependencies

  • Note edge cases
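For illustration, this analysis can be captured as comments on the target itself. The `parsePrice` function below is hypothetical (not from this skill's repository); reading it surfaces its dependencies, happy path, and edge cases:

```typescript
// Hypothetical target under analysis. Reading it yields the
// test cases listed in the comments below.
function parsePrice(input: string): number {
  const value = Number(input.trim());
  if (input.trim() === "" || Number.isNaN(value)) {
    throw new Error(`Invalid price: "${input}"`);
  }
  if (value < 0) {
    throw new Error("Price cannot be negative");
  }
  return Math.round(value * 100) / 100; // normalize to 2 decimals
}

// Test cases identified during analysis:
// - happy path: "19.99"  -> 19.99
// - rounding:   "19.999" -> rounds to 2 decimals
// - edge: ""    -> throws (empty input)
// - edge: "abc" -> throws (non-numeric)
// - edge: "-5"  -> throws (negative)
```

The comment block becomes the checklist that Step 4 turns into actual test cases.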

Step 3: Analyze Test Patterns

Review existing tests:

  • Read similar test files

  • Identify testing patterns

  • Note testing framework usage

  • Understand mocking strategies

Step 4: Generate Test Code

Create test following patterns:

  • Use appropriate testing framework

  • Follow project conventions

  • Include comprehensive coverage

  • Add edge cases and error scenarios

Step 5: Coverage Analysis

After generating tests, analyze coverage:

Check that generated tests cover all requirements:

  • Verify all functions/methods are tested

  • Check all branches are covered (if/else, switch, etc.)

  • Ensure all edge cases are tested

  • Validate error scenarios are covered

Validate tests are runnable:

  • Check test syntax is valid

  • Verify imports are correct

  • Ensure test framework is properly configured

  • Validate test setup/teardown is correct

Report coverage percentage:

  • Calculate line coverage (if possible)

  • Calculate branch coverage (if possible)

  • Report uncovered code paths

  • Suggest additional tests for uncovered areas
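The percentage report reduces to covered/total ratios. Below is a hypothetical helper (not part of this skill) sketching that calculation, with a guard for files that have no branches:

```typescript
interface CoverageInput {
  coveredLines: number;
  totalLines: number;
  coveredBranches: number;
  totalBranches: number;
  uncoveredPaths: string[]; // human-readable descriptions of gaps
}

// Returns percentages rounded to one decimal place, treating a
// zero-total denominator (e.g. a file with no branches) as 100%.
function coverageReport(c: CoverageInput) {
  const pct = (covered: number, total: number) =>
    total === 0 ? 100 : Math.round((covered / total) * 1000) / 10;
  return {
    line: pct(c.coveredLines, c.totalLines),
    branch: pct(c.coveredBranches, c.totalBranches),
    gaps: c.uncoveredPaths,
  };
}
```

The `gaps` list feeds directly into the "suggest additional tests" step above.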

Coverage Validation Checklist:

  • All public functions/methods have tests

  • All error paths are tested

  • All edge cases are covered

  • Tests are syntactically valid

  • Tests can be executed successfully

  • Coverage meets project thresholds (if defined)

</execution_process>

<code_example> Unit Test (React Component)

import { render, screen, waitFor } from '@testing-library/react'
import { describe, it, expect, vi } from 'vitest'
import { UserProfile } from './user-profile'

describe('UserProfile', () => {
  it('renders user information', async () => {
    const mockUser = { id: '1', name: 'John', email: 'john@example.com' }

    render(<UserProfile user={mockUser} />)

    await waitFor(() => {
      expect(screen.getByText('John')).toBeInTheDocument()
      expect(screen.getByText('john@example.com')).toBeInTheDocument()
    })
  })

  it('handles loading state', () => {
    render(<UserProfile user={null} loading />)
    expect(screen.getByTestId('loading')).toBeInTheDocument()
  })

  it('handles error state', () => {
    render(<UserProfile user={null} error="Failed to load" />)
    expect(screen.getByText('Failed to load')).toBeInTheDocument()
  })
})

</code_example>

<code_example> Integration Test (API)

import { describe, it, expect, beforeAll, afterAll } from 'vitest';
import { createTestClient, type TestClient } from './test-client';

describe('Users API', () => {
  let client: TestClient;

  beforeAll(() => {
    client = createTestClient();
  });

  afterAll(async () => {
    await client.cleanup();
  });

  it('creates a user', async () => {
    const response = await client.post('/api/users', {
      email: 'test@example.com',
      name: 'Test User',
    });

    expect(response.status).toBe(201);
    expect(response.data).toHaveProperty('id');
    expect(response.data.email).toBe('test@example.com');
  });

  it('validates required fields', async () => {
    const response = await client.post('/api/users', {});

    expect(response.status).toBe(400);
    expect(response.data).toHaveProperty('errors');
  });
});

</code_example>

<code_example> E2E Test (Cypress)

describe('User Authentication Flow', () => {
  beforeEach(() => {
    cy.visit('/login');
  });

  it('allows user to login', () => {
    cy.get('[data-testid="email-input"]').type('user@example.com');
    cy.get('[data-testid="password-input"]').type('password123');
    cy.get('[data-testid="login-button"]').click();

    cy.url().should('include', '/dashboard');
    cy.get('[data-testid="user-menu"]').should('be.visible');
  });

  it('shows error for invalid credentials', () => {
    cy.get('[data-testid="email-input"]').type('invalid@example.com');
    cy.get('[data-testid="password-input"]').type('wrong');
    cy.get('[data-testid="login-button"]').click();

    cy.get('[data-testid="error-message"]')
      .should('be.visible')
      .and('contain', 'Invalid credentials');
  });
});

</code_example>

Integration with QA Agent:

  • Creates comprehensive test suites

  • Generates test plans

  • Validates test quality

<best_practices>

  • Follow Patterns: Match existing test structure

  • Comprehensive Coverage: Test happy paths and edge cases

  • Clear Test Names: Descriptive test descriptions

  • Isolate Tests: Each test should be independent

  • Mock Dependencies: Use appropriate mocking strategies </best_practices>

<usage_example> Generate tests for a file

node .claude/skills/test-generator/scripts/main.cjs src/components/UserProfile.tsx

The tool will analyze the file and generate the appropriate tests.

</usage_example>

Iron Laws

  • ALWAYS analyze existing test patterns and framework conventions before generating any test code

  • NEVER generate tests that inspect implementation details — test only public behavior and outputs

  • ALWAYS include edge cases: null/undefined inputs, boundary values, and error scenarios for every tested unit

  • NEVER produce tests with shared mutable state — use beforeEach/afterEach to isolate every test

  • ALWAYS verify generated tests are syntactically valid and runnable before marking generation complete

Anti-Patterns

  • Testing implementation details. Why it fails: breaks on refactor even when behavior is unchanged. Correct approach: test public API behavior and observable outputs.

  • No assertions in test body. Why it fails: the test always passes and catches nothing. Correct approach: add explicit assertions for every test case.

  • Shared mutable state between tests. Why it fails: tests pass or fail depending on execution order. Correct approach: use beforeEach/afterEach for full isolation.

  • Magic numbers in assertions. Why it fails: expected values are unclear and tests are brittle. Correct approach: use named constants or descriptive fixture data.

  • Missing error path tests. Why it fails: half coverage and silent failures in production. Correct approach: test both success and failure scenarios.
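To make the magic-number row concrete, here is a minimal sketch of replacing a bare expected value with named constants; the `applyTax` function, rate, and price are illustrative assumptions, not part of this skill:

```typescript
// Anti-pattern (shown as a comment): the reader cannot tell why
// 125 would be the expected value.
// expect(applyTax(100)).toBe(125);

// Correct approach: derive the expectation from named constants,
// so a failing assertion explains itself.
const TAX_RATE = 0.25;     // illustrative rate
const BASE_PRICE = 100;    // illustrative fixture value
const EXPECTED_TOTAL = BASE_PRICE * (1 + TAX_RATE);

function applyTax(price: number): number {
  return price * (1 + TAX_RATE);
}
```

An assertion written as `expect(applyTax(BASE_PRICE)).toBe(EXPECTED_TOTAL)` now documents both the input and the rule that produced the expectation.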

Memory Protocol (MANDATORY)

Before starting: Read .claude/context/memory/learnings.md

After completing:

  • New pattern -> .claude/context/memory/learnings.md

  • Issue found -> .claude/context/memory/issues.md

  • Decision made -> .claude/context/memory/decisions.md

ASSUME INTERRUPTION: If it's not in memory, it didn't happen.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals. No summaries were provided by the upstream source; all entries are marked "Needs Review".

  • filesystem (Automation)

  • slack-notifications (Automation)

  • chrome-browser (Automation)

  • diagram-generator (Automation)