quality-gate

Right-side-of-V verification/validation orchestration for a change or PR. Produces a single Quality Gate Report with evidence covering: code review, security audit, performance regression risk, requirements traceability, acceptance/UAT scenarios, and (when UI changes) visual regression testing. Use when preparing a PR for merge/release, doing a “ready?” check, or enforcing an engineering quality gate.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install the "quality-gate" skill with: `npx skills add terraphim/terraphim-skills/terraphim-terraphim-skills-quality-gate`

Quality Gate

Overview

You are a verification-and-validation lead. Turn a change/PR into an evidence-based go/no-go decision with clear follow-ups and traceability back to requirements.

Core Principles

  1. Evidence over vibes: If it can't be shown, it doesn't count.
  2. Risk-based gating: Apply stricter gates to riskier changes.
  3. Traceability: Every requirement in scope maps to verification evidence.
  4. Actionable outputs: Every finding includes "what to do next".
  5. No scope creep: Evaluate the change; don't redesign the product.
  6. Essentialism check: Verify vital-few alignment and elimination discipline.

Inputs You Need (Ask If Missing)

  • Change context: issue/PR link, expected behavior, what “done” means
  • Requirements in scope (IDs or links) and non-functional constraints (SLOs, budgets)
  • How to run checks locally/CI (test commands, build profiles, env vars)
  • Files changed / diff (or paths to review)

Gate Selection (Decision Rules)

Always run:

  • Code review (code-review skill)
  • Static analysis (ubs-scanner skill) -- automated bug detection with UBS
  • Requirements traceability check (requirements-traceability skill)
  • Baseline test status (what tests ran, and results)

Conditionally run:

  • Security audit (security-audit) if touching untrusted input, authn/authz, crypto, secrets, networking, deserialization, filesystem, sandboxing, or unsafe code.
  • Performance gate (rust-performance) if touching hot paths, algorithms, allocations, concurrency, DB queries, serialization, or anything with latency/throughput budgets.
  • Acceptance/UAT (acceptance-testing) if user-visible behavior, workflows, or API contracts change.
  • Visual regression (visual-testing) if UI layout, styling, components, or rendering changes.

If unsure, default to "run the gate" and document assumptions.
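The decision rules above can be sketched as a small path-pattern matcher. This is an illustrative assumption, not part of the skill: the patterns, the `baseline-tests` name, and the `unsure` flag are invented for the sketch.

```python
# Hedged sketch of the gate-selection rules; patterns are illustrative.
from fnmatch import fnmatch

ALWAYS = ["code-review", "ubs-scanner", "requirements-traceability", "baseline-tests"]

CONDITIONAL = {
    "security-audit": ["*auth*", "*crypto*", "*net*", "*deserial*", "*unsafe*"],
    "rust-performance": ["*bench*", "*query*", "*serialize*", "*concurren*"],
    "acceptance-testing": ["*api*", "*workflow*"],
    "visual-testing": ["*.css", "*component*", "*ui/*"],
}

def select_gates(changed_paths, unsure=False):
    """Return the gates to run for a set of changed file paths.

    Pass unsure=True to default to running every conditional gate,
    mirroring the "if unsure, run the gate" rule above.
    """
    gates = list(ALWAYS)
    for gate, patterns in CONDITIONAL.items():
        hit = any(fnmatch(p.lower(), pat) for p in changed_paths for pat in patterns)
        if hit or unsure:
            gates.append(gate)
    return gates
```

In practice the pattern table would be tuned per repository; the point is that the always-run set is unconditional and the conditional set is keyed off what the diff touches.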

Essentialism Review

Every quality gate run includes an essentialism check. Before running specialist passes, evaluate:

Pre-Gate Questions

  1. Is this change in the vital few? (If not, should it be rejected?)
  2. Could this be simpler? (Look for over-engineering)
  3. What was eliminated? (Verify scope discipline was maintained)

Essentialism Checklist

| Check | Question | Status |
| --- | --- | --- |
| Vital Few | Is this change essential to core goals? | |
| Scope Discipline | Was the "Avoid At All Cost" list honored? | |
| Simplicity | Is this the simplest solution that works? | |
| Elimination | Were alternatives properly rejected? | |

Red Flags (Automatic Review)

  • Non-essential features included
  • Heroic effort required during implementation
  • More than 5 major changes without justification
  • Missing elimination documentation in design
  • Friction log shows systemic issues

ZDP Quality Gates (Optional)

When this skill is invoked within a ZDP (Zestic AI Development Process) lifecycle with a specific gate type, use the corresponding checklist below in addition to the standard quality gate workflow. This section can be ignored for standalone usage.

Each checklist item can be assessed with an epistemic status:

  • Known/Sufficient -- evidence exists and is adequate
  • Partially Known -- some evidence, gaps identified
  • Contested -- stakeholders disagree; escalate to mediation
  • Underdetermined -- insufficient evidence; request more data
  • Out-of-Scope -- requires domain expertise beyond this review

Contested or Underdetermined items trigger escalation rather than forced pass/fail. Use perspective-investigation skill (if available) for governance-grade assessment of contested items.
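The triage rule above can be sketched as follows. Only the five status values come from the list; the enum names and the `triage` helper are illustrative assumptions.

```python
# Hedged sketch of epistemic-status triage for ZDP checklist items.
from enum import Enum

class Epistemic(Enum):
    KNOWN = "known/sufficient"
    PARTIAL = "partially known"
    CONTESTED = "contested"
    UNDERDETERMINED = "underdetermined"
    OUT_OF_SCOPE = "out-of-scope"

# These two statuses trigger escalation rather than forced pass/fail.
ESCALATE = {Epistemic.CONTESTED, Epistemic.UNDERDETERMINED}

def triage(items):
    """Split (name, status) checklist items into (assessable, escalated)."""
    assessable, escalated = [], []
    for name, status in items:
        (escalated if status in ESCALATE else assessable).append(name)
    return assessable, escalated
```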

PFA (Problem Framing Agreement) -- Discovery exit

  • Strategic context and business drivers documented
  • Stakeholder map complete
  • Problem hypotheses stated and testable
  • High-level risk scan performed (if available: /via-negativa-analysis)
  • Constraints and assumptions logged

LCO (Lifecycle Objectives) -- Define exit

  • Product Vision & Value Hypothesis approved (if available: /product-vision)
  • Personas and JTBD validated
  • End-to-end business scenarios drafted (if available: /business-scenario-design)
  • Domain model baselined
  • UX prototypes reviewed
  • Budget and timeline estimate drafted
  • Stakeholder sign-off obtained

LCA (Lifecycle Assessment) -- Design exit

  • AI & System Design Brief finalized (if available: /architecture)
  • Software & ML Architecture Doc approved
  • UX Flows & Interaction Contracts complete
  • Data Flows & Event Model released
  • UAT test strategy finalized (if available: /acceptance-testing)
  • Responsible AI Risk Register populated (if available: /responsible-ai)
  • CM Specification produced (if formal config governance required; if available: /ai-config-management)
  • Quality standards defined
  • Architect and stakeholder sign-off obtained

IOC (Initial Operational Capability) -- Develop exit

  • Slice deployed to staging
  • All critical tests passing
  • Monitoring configured (if available: /mlops-monitoring)
  • Prompt & Agent Specs documented (if available: /prompt-agent-spec)
  • Responsible-AI validation passed
  • Security review passed

FOC (Full Operational Capability) -- Deploy exit

  • All critical use cases live in production
  • Accessibility audit passed
  • Release notes published
  • Monitoring and alerting active
  • Incident runbooks created
  • Post-deploy review completed

CLR (Continuous Learning Release) -- Drive gate

  • Retrained model meets KPI thresholds
  • Bias and usability tests passed
  • Drift report reviewed (if available: /mlops-monitoring)
  • Decision log updated (iterate/retrain/retire)
  • Deployment approved

Workflow

  1. Intake + Risk Profile

    • Summarize scope (what changed, where, who impacted).
    • Classify risk: Security / Data integrity / Performance / UX / Operational.
    • Identify requirements in scope (explicit or inferred; label inferred).
    • Essentialism filter: Should this change exist at all? Challenge non-essential work.
  2. Run the Specialist Passes

    • Use/coordinate the relevant specialist skills listed above.
    • Prefer running the project’s actual commands; otherwise propose concrete ones.
    • Record evidence (commands + key outputs) for anything you “verify”.
  3. Synthesize

    • Deduplicate findings across passes.
    • Convert findings into a prioritized action list.
    • Decide gate status: Pass, Pass with Follow-ups, or Fail.
  4. Produce the Quality Gate Report

    • Use the template below.
    • Link requirements → tests → evidence.
    • Clearly mark what was not verified (and why).
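Step 3's deduplicate-then-decide rule can be sketched as below, assuming each finding carries an `id` and a `blocking` flag (both invented field names for this sketch):

```python
# Hedged sketch of the synthesis step: dedupe findings, then decide.
def gate_status(findings):
    """Return "Pass", "Pass with Follow-ups", or "Fail"."""
    unique = list({f["id"]: f for f in findings}.values())  # dedupe across passes
    if any(f["blocking"] for f in unique):
        return "Fail"
    return "Pass with Follow-ups" if unique else "Pass"
```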

Quality Gate Report Template

# Quality Gate Report: {change-title}

## Decision
**Status**: ✅ Pass | ⚠️ Pass with Follow-ups | ❌ Fail

### Top Risks (max 5)
- {risk} -- {why it matters} -- {mitigation}

### Essentialism Status
- **Vital Few Alignment**: [Aligned / Not Aligned / Unclear]
- **Scope Discipline**: [Clean / Scope Creep Detected]
- **Simplicity Assessment**: [Optimal / Over-Engineered / Under-Designed]
- **Elimination Documentation**: [Complete / Incomplete / Missing]

## Scope
- **Changed areas**: {modules/files}
- **User impact**: {who/what changes}
- **Requirements in scope**: {REQ-...}
- **Out of scope**: {explicitly not covered}

## Verification Results

### Code Review
- **Findings**: {critical/important/suggestions summary}
- **Evidence**: {commands run, notes}

### Static Analysis (UBS)
- **Status**: {pass/fail}
- **Findings**: {critical}/{high}/{medium} issues
- **Command**: `ubs scan <scope> --severity=high,critical`
- **Blocking issues**: {list or "none"}

### Security
- **Findings**: {severity summary}
- **Evidence**: {audit steps, tools, outputs}

### Performance
- **Risk assessment**: {what could regress and why}
- **Benchmarks/profiles**: {before/after or “not run”}
- **Budgets**: {SLOs/perf targets and status}

### Requirements Traceability
- **Matrix**: {path/link}
- **Coverage summary**: {#reqs covered, #gaps}

### Acceptance (UAT)
- **Scenarios**: {count + reference}
- **Status**: {pass/fail/not run}

### Visual Regression
- **Screens covered**: {list}
- **Status**: {pass/fail/not run}

## Follow-ups
### Must Fix (Blocking)
- {item}

### Should Fix (Non-blocking)
- {item}

## Evidence Pack
- {logs, reports, commands, screenshots}
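The "Coverage summary" field in the traceability section can be computed mechanically from a requirements-to-evidence map; a minimal sketch, with invented requirement IDs:

```python
# Hedged sketch: count covered requirements and list gaps.
def coverage_summary(in_scope, matrix):
    """matrix maps requirement ID -> list of evidence links (tests, logs)."""
    covered = [r for r in in_scope if matrix.get(r)]
    gaps = [r for r in in_scope if not matrix.get(r)]
    return {"covered": len(covered), "gaps": gaps}
```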

Constraints

  • Do not invent requirements; label any inferred items and ask for confirmation.
  • Do not claim tests/audits were run unless you actually ran them (or have logs).
  • Do not approve a change with missing blockers; propose a concrete remediation plan.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • security-audit (Security; Repository Source, Needs Review) -- no summary provided by upstream source
  • local-knowledge (Research; Repository Source, Needs Review) -- no summary provided by upstream source
  • requirements-traceability (General; Repository Source, Needs Review) -- no summary provided by upstream source