A11y Checker CI

Automated accessibility testing in CI/CD pipelines with comprehensive reporting.

Overview

To enforce accessibility standards in continuous integration, this skill configures automated WCAG compliance checks using industry-standard tools and generates detailed reports for every pull request.

When to Use

Use this skill when:

  • Adding accessibility testing to CI/CD pipelines

  • Enforcing WCAG compliance in automated builds

  • Generating accessibility reports for pull requests

  • Setting up quality gates based on accessibility

  • Automating accessibility audits

  • Tracking accessibility improvements over time

  • Ensuring new features meet accessibility standards

Supported Tools

@axe-core/playwright

Industry-standard accessibility testing engine with Playwright integration.

Advantages:

  • Comprehensive WCAG rule coverage

  • Fast execution in parallel with E2E tests

  • Detailed violation reporting

  • Active maintenance and updates

pa11y-ci

Command-line accessibility testing tool for multiple URLs.

Advantages:

  • Simple configuration

  • Standalone execution (no browser automation needed)

  • Multiple URL scanning

  • Custom rule configuration

Implementation Steps

  1. Choose Testing Approach

To select the appropriate tool:

Use @axe-core/playwright when:

  • Already using Playwright for E2E tests

  • Need integration with existing test suites

  • Want to test dynamic/authenticated pages

  • Require detailed test context

Use pa11y-ci when:

  • Need simple URL-based scanning

  • Want standalone accessibility checks

  • Testing static pages or public URLs

  • Prefer configuration-based approach

  2. Install Dependencies

For @axe-core/playwright:

npm install -D @axe-core/playwright

For pa11y-ci:

npm install -D pa11y-ci

  3. Create Test Configuration

Option A: @axe-core/playwright

Create test file using assets/a11y-test.spec.ts:

```typescript
import { test, expect } from '@playwright/test'
import AxeBuilder from '@axe-core/playwright'

test.describe('Accessibility Tests', () => {
  test('homepage meets WCAG standards', async ({ page }) => {
    await page.goto('/')

    const accessibilityScanResults = await new AxeBuilder({ page })
      .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
      .analyze()

    expect(accessibilityScanResults.violations).toEqual([])
  })
})
```

Option B: pa11y-ci

Create configuration using assets/pa11y-config.json:

```json
{
  "defaults": {
    "timeout": 30000,
    "chromeLaunchConfig": {
      "executablePath": "/usr/bin/chromium-browser",
      "args": ["--no-sandbox"]
    },
    "standard": "WCAG2AA",
    "runners": ["axe", "htmlcs"],
    "ignore": []
  },
  "urls": [
    "http://localhost:3000",
    "http://localhost:3000/entities",
    "http://localhost:3000/timeline"
  ]
}
```

  4. Generate Report Script

Create report generator using scripts/generate_a11y_report.py:

```bash
python scripts/generate_a11y_report.py \
  --input test-results/a11y-results.json \
  --output accessibility-report.md \
  --format github
```

The script generates markdown reports with:

  • Executive summary with pass/fail status

  • Violation count by severity (critical, serious, moderate, minor)

  • Detailed violation list with:

      • Rule ID and description

      • WCAG criteria

      • Impact level

      • Affected elements

      • Remediation guidance

  • Historical comparison (if available)

  5. Configure CI Pipeline

GitHub Actions

Use template from assets/github-actions-a11y.yml:

```yaml
name: Accessibility Tests

on:
  pull_request:
    branches: [main, master]
  push:
    branches: [main, master]

jobs:
  a11y:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Build application
        run: npm run build

      - name: Start server
        run: npm start &

      - name: Wait for server
        run: npx wait-on http://localhost:3000 -t 60000

      - name: Run accessibility tests
        run: npm run test:a11y

      - name: Generate report
        if: always()
        run: |
          python scripts/generate_a11y_report.py \
            --input test-results/a11y-results.json \
            --output accessibility-report.md \
            --format github

      - name: Comment PR
        if: github.event_name == 'pull_request' && always()
        uses: actions/github-script@v7
        with:
          script: |
            const fs = require('fs')
            const report = fs.readFileSync('accessibility-report.md', 'utf8')

            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: report
            })

      - name: Upload report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: accessibility-report
          path: |
            accessibility-report.md
            test-results/

      - name: Fail on violations
        if: failure()
        run: exit 1
```

GitLab CI

Use template from assets/gitlab-ci-a11y.yml:

```yaml
accessibility-test:
  stage: test
  image: mcr.microsoft.com/playwright:v1.40.0-focal
  script:
    - npm ci
    - npm run build
    - npm start &
    - npx wait-on http://localhost:3000 -t 60000
    - npm run test:a11y
    - python scripts/generate_a11y_report.py --input test-results/a11y-results.json --output accessibility-report.md --format gitlab
  artifacts:
    when: always
    paths:
      - accessibility-report.md
      - test-results/
    reports:
      junit: test-results/junit.xml
  only:
    - merge_requests
    - main
```

  6. Add Package Scripts

Add to package.json:

{ "scripts": { "test:a11y": "playwright test a11y.spec.ts", "test:a11y:ci": "playwright test a11y.spec.ts --reporter=json", "pa11y": "pa11y-ci --config .pa11yci.json" } }

Report Format

Executive Summary

Accessibility Test Report

Status: [ERROR] Failed
Total Violations: 12
Pages Tested: 5
WCAG Level: AA
Date: 2025-01-15

Summary by Severity

  • [CRITICAL] Critical: 2
  • [SERIOUS] Serious: 5
  • [MODERATE] Moderate: 3
  • [MINOR] Minor: 2

Violation Details

Violations

[CRITICAL] Critical (2)

1. Form elements must have labels (label)

WCAG Criteria: 3.3.2 (Level A)
Impact: Critical
Occurrences: 3 elements

Description: Every form field should have an associated label element.

Affected Elements:

  • Line 45: <input type="text" name="entity-name">
  • Line 67: <input type="email" name="user-email">
  • Line 89: <select name="entity-type">

How to Fix: Add a <label> element with a for attribute matching the input's id:

```html
<label for="entity-name">Entity Name</label>
<input id="entity-name" type="text" name="entity-name">
```

More Info: https://dequeuniversity.com/rules/axe/4.7/label


Historical Comparison

Progress

| Metric | Previous | Current | Change |
| --- | --- | --- | --- |
| Total Violations | 15 | 12 | [OK] -3 |
| Critical | 3 | 2 | [OK] -1 |
| Serious | 7 | 5 | [OK] -2 |
| Moderate | 4 | 3 | [OK] -1 |
| Minor | 1 | 2 | [ERROR] +1 |

Quality Gates

Blocking Violations

To fail builds on specific violations, configure thresholds:

```typescript
const results = await new AxeBuilder({ page }).analyze()

// Fail on any critical violations
const critical = results.violations.filter(v => v.impact === 'critical')
expect(critical).toHaveLength(0)

// Allow up to 5 moderate violations
const moderate = results.violations.filter(v => v.impact === 'moderate')
expect(moderate.length).toBeLessThanOrEqual(5)
```

Configuration File

Use assets/a11y-thresholds.json:

```json
{
  "thresholds": {
    "critical": 0,
    "serious": 0,
    "moderate": 5,
    "minor": 10
  },
  "allowedViolations": [
    "color-contrast"
  ],
  "ignoreSelectors": [
    "#third-party-widget",
    "[data-testid='external-embed']"
  ]
}
```

Advanced Configuration

Custom Rules

To disable specific rules, or restrict a scan to an explicit set of rules:

```typescript
// Turn individual rules off for the whole scan
const results = await new AxeBuilder({ page })
  .disableRules(['color-contrast'])
  .analyze()

// Or run only the listed rule IDs (withRules takes an array of rule ID strings)
const scoped = await new AxeBuilder({ page })
  .withRules(['label', 'image-alt'])
  .analyze()
```

Page-Specific Tests

Test different page types:

```typescript
const pages = [
  { url: '/', name: 'Homepage' },
  { url: '/entities', name: 'Entity List' },
  { url: '/timeline', name: 'Timeline View' }
]

for (const { url, name } of pages) {
  test(`${name} accessibility`, async ({ page }) => {
    await page.goto(url)
    const results = await new AxeBuilder({ page }).analyze()
    expect(results.violations).toEqual([])
  })
}
```

Authenticated Pages

Test pages requiring authentication:

```typescript
test.use({ storageState: 'auth.json' })

test('dashboard accessibility', async ({ page }) => {
  await page.goto('/dashboard')
  const results = await new AxeBuilder({ page }).analyze()
  expect(results.violations).toEqual([])
})
```
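
The storageState file has to exist before the suite runs. A minimal sketch of a Playwright global-setup script that signs in once and saves auth.json; the login URL, selectors, and credential environment variables are assumptions about your app:

```typescript
// global-setup.ts (register via globalSetup in playwright.config.ts)
import { chromium, type FullConfig } from '@playwright/test'

async function globalSetup(config: FullConfig) {
  const browser = await chromium.launch()
  const page = await browser.newPage()

  // Hypothetical login flow; adjust the URL, selectors, and env vars to your app
  await page.goto('http://localhost:3000/login')
  await page.fill('input[name="email"]', process.env.TEST_USER_EMAIL ?? '')
  await page.fill('input[name="password"]', process.env.TEST_USER_PASSWORD ?? '')
  await page.click('button[type="submit"]')
  await page.waitForURL('**/dashboard')

  // Persist cookies and local storage for test.use({ storageState: 'auth.json' })
  await page.context().storageState({ path: 'auth.json' })
  await browser.close()
}

export default globalSetup
```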

Report Customization

Custom Templates

Create custom report templates in assets/report-templates/:

  • github-template.md: GitHub PR comments

  • gitlab-template.md: GitLab MR comments

  • slack-template.md: Slack notifications

  • html-template.html: HTML reports

Report Destinations

Configure report distribution:

```bash
python scripts/generate_a11y_report.py \
  --input results.json \
  --output-dir reports/ \
  --formats github gitlab slack html \
  --slack-webhook $SLACK_WEBHOOK \
  --github-token $GITHUB_TOKEN
```

Monitoring and Tracking

Historical Data

Store results for trend analysis:

```bash
# Save results with timestamp
python scripts/save_a11y_results.py \
  --input test-results/a11y-results.json \
  --database a11y-history.db

# Generate trend report
python scripts/generate_trend_report.py \
  --database a11y-history.db \
  --days 30 \
  --output a11y-trends.md
```

Metrics Dashboard

Generate metrics for dashboards:

{ "timestamp": "2025-01-15T10:30:00Z", "commit": "abc123", "branch": "feature/new-ui", "violations": { "critical": 2, "serious": 5, "moderate": 3, "minor": 2 }, "wcagCompliance": { "a": false, "aa": false, "aaa": false }, "pagesTested": 5, "totalElements": 1247, "testedElements": 1247 }

Resources

Consult the following resources for detailed information:

  • scripts/generate_a11y_report.py: Report generator

  • scripts/save_a11y_results.py: Historical data storage

  • scripts/generate_trend_report.py: Trend analysis

  • assets/a11y-test.spec.ts: Playwright test template

  • assets/pa11y-config.json: pa11y-ci configuration

  • assets/github-actions-a11y.yml: GitHub Actions workflow

  • assets/gitlab-ci-a11y.yml: GitLab CI configuration

  • assets/a11y-thresholds.json: Violation thresholds

  • references/wcag-criteria.md: WCAG standards reference

  • references/common-violations.md: Common issues and fixes

Best Practices

  • Run accessibility tests on every pull request

  • Set appropriate thresholds for violations

  • Generate readable reports for developers

  • Track accessibility metrics over time

  • Test authenticated and dynamic pages

  • Include accessibility in definition of done

  • Review and update ignored rules periodically

  • Provide remediation guidance in reports

  • Celebrate accessibility improvements
