quality-auditor

Code quality gatekeeper and auditor. Enforces strict quality gates, resolves the AI verification gap, and evaluates codebases across 12 critical dimensions with evidence-based scoring. Use when auditing code quality, reviewing AI-generated code, scoring codebases against industry standards, or enforcing pre-commit quality gates. Use for quality audit, code review, codebase evaluation, security assessment, technical debt analysis.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy the command below and send it to your AI assistant to install this skill.

Install skill "quality-auditor" with this command: pnpm dlx skills add oakoss/agent-skills

Quality Auditor

Overview

Evaluates tools, frameworks, systems, and codebases against the highest industry standards across 12 weighted dimensions. Produces evidence-based scores, identifies anti-patterns, and generates prioritized improvement roadmaps. Applies extra scrutiny to AI-generated code through the verification gap protocol, ensuring velocity does not compromise integrity.

When to use: Auditing code quality, reviewing AI-generated code, scoring codebases against industry benchmarks, enforcing pre-commit quality gates, comparing tools or frameworks, assessing technical debt.

When NOT to use: Quick code reviews without scoring, style-only linting (use a linter), feature implementation, routine PR reviews that do not require a full audit.

Quick Reference

| Dimension | Weight | What to Evaluate |
| --- | --- | --- |
| Code Quality | 10% | Structure, patterns, SOLID, duplication, complexity, error handling |
| Architecture | 10% | Design, modularity, scalability, coupling/cohesion, API design |
| Documentation | 10% | Completeness, clarity, accuracy, examples, troubleshooting |
| Usability | 10% | Learning curve, installation ease, error messages, ergonomics |
| Performance | 8% | Speed, resource usage, caching, bundle size, Core Web Vitals |
| Security | 10% | OWASP Top 10, input validation, auth, secrets, dependencies |
| Testing | 8% | Coverage (unit/integration/e2e), quality, automation, organization |
| Maintainability | 8% | Technical debt, readability, refactorability, versioning |
| Developer Experience | 10% | Setup ease, debugging, tooling, hot reload, IDE integration |
| Accessibility | 8% | WCAG compliance, keyboard nav, screen readers, cognitive load |
| CI/CD | 5% | Automation, pipelines, deployment, rollback, monitoring |
| Innovation | 3% | Novel approaches, forward-thinking design, unique value |
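The weights above sum to 100%, so the overall score is a weighted average on the same 1-10 scale as the individual dimensions. A minimal TypeScript sketch of that arithmetic (the names here are illustrative, not part of the skill):

```typescript
// Dimension weights from the Quick Reference table; they sum to 100.
const DIMENSION_WEIGHTS: Record<string, number> = {
  codeQuality: 10,
  architecture: 10,
  documentation: 10,
  usability: 10,
  performance: 8,
  security: 10,
  testing: 8,
  maintainability: 8,
  developerExperience: 10,
  accessibility: 8,
  cicd: 5,
  innovation: 3,
};

// Each dimension is scored 1-10; the overall score is the
// weight-averaged result, still on a 1-10 scale.
function overallScore(scores: Record<string, number>): number {
  let total = 0;
  for (const [dim, weight] of Object.entries(DIMENSION_WEIGHTS)) {
    const s = scores[dim];
    if (s === undefined) throw new Error(`missing score for ${dim}`);
    total += s * weight;
  }
  return total / 100;
}
```

A codebase scoring 7 on every dimension therefore lands at 7.0 overall; raising a 10%-weight dimension by one point moves the overall score by 0.1.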

Audit Phases

| Phase | Name | Purpose |
| --- | --- | --- |
| 0 | Resource Completeness | Verify registry/filesystem parity; the audit fails if this check fails |
| 1 | Discovery | Read docs, examine code, test the system, review supporting materials |
| 2 | Evaluation | Score each dimension with evidence, strengths, and weaknesses |
| 3 | Synthesis | Executive summary, detailed scores, recommendations, risk matrix |

Scoring Scale

| Score | Rating | Meaning |
| --- | --- | --- |
| 10 | Exceptional | Industry-leading, sets new standards |
| 8-9 | Excellent | Exceeds expectations significantly |
| 6-7 | Good | Meets expectations with improvements needed |
| 5 | Acceptable | Below average; significant improvements needed |
| 3-4 | Poor | Major gaps and fundamental problems |
| 1-2 | Critical | Barely functional or non-functional |
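The scale above reads as simple numeric bands. A hypothetical TypeScript helper mapping a score to its rating label (not defined by the skill itself):

```typescript
// Map a 1-10 score to its rating label from the Scoring Scale table.
function rating(score: number): string {
  if (score >= 10) return "Exceptional";
  if (score >= 8) return "Excellent";
  if (score >= 6) return "Good";
  if (score >= 5) return "Acceptable";
  if (score >= 3) return "Poor";
  return "Critical";
}
```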

Common Mistakes

| Mistake | Correct Pattern |
| --- | --- |
| Giving inflated scores without evidence | Every score must cite specific files, metrics, or code examples as evidence |
| Skipping the Phase 0 resource completeness check | Always verify registry completeness first; missing resources cap the overall score at 6/10 |
| Evaluating only code quality, ignoring other dimensions | Score all 12 dimensions with their weights; architecture, security, and DX matter equally |
| Accepting superficial "LGTM" reviews | Perform deep semantic audits checking contract integrity, security sanitization, and performance hygiene |
| Trusting AI-generated code without verification | Apply the verification gap protocol: critic agents, verifiable goals, human oversight for critical paths |
| Proceeding after an audit failure without re-auditing | Stop, analyze the deviation, remediate, then restart the checklist from step 1 |
| Awarding 10/10 without exceptional evidence | Reserve 10/10 for truly industry-leading work; most quality tools score 6-7 |
| Surface-level static analysis only | Combine linting with architectural-fit checks, risk-based PR categorization, and context-aware validation |
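The Phase 0 rule above (missing registry resources cap the overall score at 6/10) amounts to a simple clamp applied after weighted scoring. A hypothetical sketch:

```typescript
// Apply the Phase 0 gate: if registry/filesystem parity failed,
// the overall score is capped at 6/10 no matter how well the
// individual dimensions scored.
function applyResourceCap(
  overall: number,
  resourcesComplete: boolean
): number {
  return resourcesComplete ? overall : Math.min(overall, 6);
}
```

So an otherwise-excellent 8.2 audit with missing resources reports 6.0, which keeps incomplete submissions out of the "Excellent" band by construction.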

Delegation

  • Discover codebase structure and gather audit evidence: Use Explore agent to survey file organization, dependencies, test coverage, and documentation
  • Execute targeted quality checks across dimensions: Use Task agent to run linters, security scanners, performance profilers, and accessibility audits
  • Design quality improvement roadmap: Use Plan agent to prioritize quick wins, short-term, and long-term recommendations from audit findings

For stylistic cleanup of AI-generated prose and code (em-dash overuse, slop vocabulary, over-commenting, verbose naming), use the de-slopify skill.

If the usability-tester skill is available, delegate usability dimension evaluation and user flow validation to it. Otherwise, recommend: pnpm dlx skills add oakoss/agent-skills -s usability-tester -a claude-code -y

References

  • Audit Rubric -- pass/warn/fail thresholds, weighted scoring methodology, automated vs manual checklists, score caps, report format
  • Dimension Rubrics -- detailed scoring criteria, evidence requirements, and rubric tables for all 12 dimensions
  • Audit Report Template -- structured report format, executive summary, recommendations, risk assessment
  • Anti-Patterns Guide -- code, architecture, security, testing, and process anti-patterns to identify during audits
  • Verification Gap Protocol -- AI code verification methodology, critic agents, rejection protocol, risk-based review strategies

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • application-security (Security) -- no summary provided by upstream source (Repository Source, Needs Review)
  • database-security (Security) -- no summary provided by upstream source (Repository Source, Needs Review)
  • playwright (Automation) -- no summary provided by upstream source (Repository Source, Needs Review)
  • ui-ux-polish (Automation) -- no summary provided by upstream source (Repository Source, Needs Review)