
Adversarial Review

Spawn reviewers on the opposite model to challenge work. Reviewers attack from distinct lenses grounded in brain principles. The deliverable is a synthesized verdict — do NOT make changes.

Hard constraint: Reviewers MUST run via the opposite model's CLI (codex exec or claude -p). Do NOT use subagents, the Agent tool, or any internal delegation mechanism as reviewers — those run on your own model, which defeats the purpose.

Step 1 — Load Principles

Read brain/principles.md. Follow every [[wikilink]] and read each linked principle file. These govern reviewer judgments.

Step 2 — Determine Scope and Intent

Identify what to review from context (recent diffs, referenced plans, user message).

Determine the intent — what the author is trying to achieve. This is critical: reviewers challenge whether the work achieves the intent well, not whether the intent is correct. State the intent explicitly before proceeding.

Assess change size:

| Size   | Threshold               | Reviewers                            |
|--------|-------------------------|--------------------------------------|
| Small  | < 50 lines, 1-2 files   | 1 (Skeptic)                          |
| Medium | 50-200 lines, 3-5 files | 2 (Skeptic + Architect)              |
| Large  | 200+ lines or 5+ files  | 3 (Skeptic + Architect + Minimalist) |
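The size-to-lens mapping above can be sketched as a small shell function. The line and file counts would normally come from `git diff --stat`; the function name and the boundary handling at exactly 200 lines are illustrative assumptions, not part of the skill:

```shell
#!/usr/bin/env bash
# Map change size to reviewer lenses, following the thresholds in the table.
# pick_reviewers is a hypothetical helper; boundary cases (exactly 200 lines,
# exactly 5 files) are resolved here toward the smaller tier as an assumption.
pick_reviewers() {
  local lines=$1 files=$2
  if [ "$lines" -lt 50 ] && [ "$files" -le 2 ]; then
    echo "skeptic"
  elif [ "$lines" -le 200 ] && [ "$files" -le 5 ]; then
    echo "skeptic architect"
  else
    echo "skeptic architect minimalist"
  fi
}

pick_reviewers 30 1    # → skeptic
pick_reviewers 120 4   # → skeptic architect
pick_reviewers 500 9   # → skeptic architect minimalist
```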

Read references/reviewer-lenses.md for lens definitions.

Step 3 — Detect Model and Spawn Reviewers

Create a temp directory for reviewer output:

REVIEW_DIR=$(mktemp -d /tmp/adversarial-review.XXXXXX)

Determine which model you are, then spawn reviewers on the opposite:

If you are Claude — spawn Codex reviewers via codex exec:

codex exec --skip-git-repo-check -o "$REVIEW_DIR/skeptic.md" "prompt" 2>/dev/null

Use --profile edit only if the reviewer needs to run tests. Default to read-only. Run with run_in_background: true, monitor via TaskOutput with block: true, timeout: 600000.

If you are Codex — spawn Claude reviewers via claude CLI:

claude -p "prompt" > "$REVIEW_DIR/skeptic.md" 2>/dev/null

Run with run_in_background: true.

Name each output file after the lens: skeptic.md, architect.md, minimalist.md.

Build each reviewer's prompt using the template in references/reviewer-prompt.md.
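Either spawn direction reduces to the same fan-out: one CLI invocation per lens, each writing its own file in $REVIEW_DIR. A minimal sketch of that loop follows; the DRY_RUN switch, MODEL_CLI variable, and the "review as $lens" prompt text are illustrative assumptions (the real prompt comes from references/reviewer-prompt.md):

```shell
#!/usr/bin/env bash
# Sketch: fan out one reviewer per lens, writing $REVIEW_DIR/<lens>.md each.
# DRY_RUN=1 (the default here, for illustration) prints the command instead
# of invoking the CLI; MODEL_CLI selects the opposite model's CLI.
REVIEW_DIR=$(mktemp -d /tmp/adversarial-review.XXXXXX)
DRY_RUN=${DRY_RUN:-1}
MODEL_CLI=${MODEL_CLI:-codex}   # set to "claude" when you are Codex

spawned=0
for lens in skeptic architect minimalist; do
  out="$REVIEW_DIR/$lens.md"
  if [ "$MODEL_CLI" = codex ]; then
    cmd=(codex exec --skip-git-repo-check -o "$out" "review as $lens")
  else
    cmd=(claude -p "review as $lens")
  fi
  if [ "$DRY_RUN" = 1 ]; then
    printf 'would run: %s\n' "${cmd[*]}"
  elif [ "$MODEL_CLI" = codex ]; then
    "${cmd[@]}" 2>/dev/null &      # codex writes the file itself via -o
  else
    "${cmd[@]}" > "$out" 2>/dev/null &   # claude -p writes to stdout
  fi
  spawned=$((spawned + 1))
done
[ "$DRY_RUN" = 1 ] || wait   # block until all background reviewers finish
```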

Step 4 — Verify and Synthesize Verdict

Before reading reviewer output, log which CLI was used and confirm the output files exist:

echo "reviewer_cli=codex|claude"
ls "$REVIEW_DIR"/*.md

If any output file is missing or empty, note the failure in the verdict — do not silently skip a reviewer.
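The existence check can be sketched as a loop that collects missing or empty lenses rather than aborting. The two printf lines simulating reviewer output are illustrative scaffolding so the sketch is self-contained:

```shell
#!/usr/bin/env bash
# Sketch: confirm each expected reviewer file exists and is non-empty;
# collect failures so they can be noted in the verdict, never silently skipped.
REVIEW_DIR=$(mktemp -d /tmp/adversarial-review.XXXXXX)
printf 'finding\n' > "$REVIEW_DIR/skeptic.md"   # simulate a completed reviewer
: > "$REVIEW_DIR/architect.md"                  # simulate a failed (empty) one

missing=()
for lens in skeptic architect; do
  f="$REVIEW_DIR/$lens.md"
  [ -s "$f" ] || missing+=("$lens")   # -s: file exists and has size > 0
done

echo "reviewer_cli=codex"
if [ "${#missing[@]}" -gt 0 ]; then
  echo "missing_or_empty: ${missing[*]}"   # carry this into the verdict
fi
```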

Read each reviewer's output file from $REVIEW_DIR/. Deduplicate overlapping findings. Produce a single verdict using the format in references/verdict-format.md.
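A crude first pass at deduplication is to flag findings whose heading line is byte-identical across reviewer files; real synthesis still requires judgment. The "## " heading convention and the sample findings below are assumptions for illustration, not guaranteed by the reviewer prompt format:

```shell
#!/usr/bin/env bash
# Sketch: flag exact-duplicate finding headings across reviewer files.
# Sample data stands in for real reviewer output.
REVIEW_DIR=$(mktemp -d /tmp/adversarial-review.XXXXXX)
printf '## Race in cache init\n## Missing error path\n' > "$REVIEW_DIR/skeptic.md"
printf '## Race in cache init\n## Layering violation\n' > "$REVIEW_DIR/architect.md"

# Count how many reviewers raised each heading; counts > 1 are overlaps
# to merge into a single finding in the verdict.
grep -h '^## ' "$REVIEW_DIR"/*.md | sort | uniq -c | sort -rn
```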

Step 5 — Render Judgment

After synthesizing the reviewers, apply your own judgment. Using the stated intent and brain principles as your frame, state which findings you would accept and which you would reject — and why. Reviewers are adversarial by design; not every finding warrants action. Call out false positives, overreach, and findings that mistake style for substance.

Append the Lead Judgment section to the verdict (see references/verdict-format.md).
