Install: npx skills add meta-pytorch/openenv/meta-pytorch-openenv-alignment-review

Alignment Review

Review code changes for alignment with OpenEnv principles using a two-tier model.

Instructions

Run automated checks first:

  • Execute bash .claude/hooks/lint.sh and capture any lint issues

  • Execute bash .claude/hooks/check-debug.sh and capture any debug code it finds
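The hook invocations above can be sketched as a small shell helper. The hook paths come from the instructions; the `run_hook` name and the missing-file fallback are illustrative assumptions, not part of the skill:

```shell
# Run a hook script if it exists, capturing its combined output.
run_hook() {
  local hook="$1"
  if [ -f "$hook" ]; then
    bash "$hook" 2>&1
  else
    echo "hook not found: $hook"
    return 1
  fi
}

lint_output=$(run_hook .claude/hooks/lint.sh)         # capture lint issues
debug_output=$(run_hook .claude/hooks/check-debug.sh) # capture debug code
```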

Read alignment documents:

  • .claude/docs/PRINCIPLES.md - design principles

  • .claude/docs/INVARIANTS.md - system invariants

Read open RFCs:

  • Scan rfcs/ directory for all RFC files

  • Note the status of each RFC (Draft, In Review, Accepted, Implemented)

  • Pay special attention to Draft and In Review RFCs - these represent active design discussions
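A minimal sketch of that scan, assuming each RFC file carries a `Status:` line and lives at `rfcs/*.md` (both the field name and the layout are assumptions about the repository, not a documented format):

```shell
# Extract the first "Status:" field from an RFC file (case-insensitive).
rfc_status() {
  grep -m1 -iE '^status:' "$1" | sed -E 's/^[^:]*:[[:space:]]*//'
}

# Flag Draft / In Review RFCs as active design discussions.
for rfc in rfcs/*.md; do
  [ -f "$rfc" ] || continue
  s=$(rfc_status "$rfc")
  case "$s" in
    Draft|"In Review") printf 'ACTIVE DISCUSSION: %s (%s)\n' "$rfc" "$s" ;;
    *)                 printf '%s (%s)\n' "$rfc" "${s:-no status}" ;;
  esac
done
```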

Analyze changes (use git diff or provided diff):

  • Identify mechanical issues (Tier 1)

  • Flag alignment concerns (Tier 2)

  • Flag conflicts with open RFCs (Tier 2)
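As a sketch, the diff can be piped through a small helper to enumerate the files under review; the `changed_files` name and the `main` base branch are assumptions for illustration:

```shell
# List files touched by a unified diff read from stdin.
changed_files() {
  sed -n 's|^+++ b/||p'
}

# Typical usage against the default branch:
#   git diff main...HEAD | changed_files
```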

Tier 1: Uncontentious Issues (Fix Immediately)

These are issues to fix without human input:

  • Lint failures from hook output

  • Debug code from hook output (print statements, breakpoints)

  • Uninitialized variables, type errors

  • Missing imports, syntax errors

  • Security issues (credential exposure, injection vulnerabilities)
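For illustration, a grep-based scan like the one below approximates the debug-code check; the exact patterns the real check-debug.sh hook applies are not documented here, so these are assumptions:

```shell
# Report line-numbered hits for common Python debug leftovers.
debug_hits() {
  grep -nE 'breakpoint\(\)|pdb\.set_trace\(\)|^[[:space:]]*print\(' "$@"
}
```

Conveniently, `grep -n` output already matches the `path/file.py:123` shape used in the Tier 1 report below.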

Tier 2: Alignment Discussion Points

For each potential alignment concern, format as:

ALIGNMENT FLAG: [Brief description]

  • Principle/RFC at stake: [Which principle from PRINCIPLES.md or RFC number]
  • The concern: [What seems misaligned or in conflict]
  • Suggested reviewer: @darktex [replace with the actual reviewers: run git blame on the specific lines of PRINCIPLES.md and INVARIANTS.md at stake, and/or use the authors of the conflicting RFCs]
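The reviewer lookup can be sketched with `git blame --line-porcelain`, whose `author` records are machine-readable; the `blame_authors` helper and the line range in the usage comment are illustrative assumptions:

```shell
# Collect unique author names from `git blame --line-porcelain` output on stdin.
blame_authors() {
  sed -n 's/^author //p' | sort -u
}

# Usage (inside the repository):
#   git blame -L 10,20 --line-porcelain .claude/docs/PRINCIPLES.md | blame_authors
```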

Examples of Tier 2 Issues

Principle conflicts:

  • Adding external reward computation (violates "rewards in environment")

  • Client importing server code (violates client-server separation)

  • New API that differs from Gymnasium pattern

RFC conflicts (flag even for Draft/In Review RFCs):

  • Change conflicts with design proposed in an open RFC

  • Change pre-empts a decision being discussed in an RFC

  • Change implements something differently than an RFC proposes

  • Change affects an area covered by an RFC under review

Why flag RFC conflicts? Even if an RFC isn't finalized, flagging conflicts helps focus design discussions. The change might be correct and the RFC might need updating, or vice versa - either way, the team should discuss.

Output Format

Alignment Review Report

Automated Checks

  • Lint: [PASS/FAIL] - [summary]
  • Debug code: [CLEAN/FOUND] - [details]

Open RFCs Context

[List any RFCs in Draft or In Review status that might be relevant to these changes]

Tier 1: Fixes Required

  • path/file.py:123 - [issue description]
  • path/file.py:456 - [issue description]

Tier 2: Alignment Discussion

Principle Conflicts

[ALIGNMENT FLAGS for principle violations, or "None identified"]

RFC Conflicts

[ALIGNMENT FLAGS for RFC conflicts, or "None identified"]

Summary

  • X mechanical issues to fix
  • Y alignment points for human review
  • Z RFC conflicts to discuss
