red-team-review

An iterative review loop where research is bundled via context-bundler and dispatched to one or more adversarial reviewers. The loop continues until the red team approves.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "red-team-review" with this command: npx skills add richfrem/agent-plugins-skills/richfrem-agent-plugins-skills-red-team-review

Red Team Review Loop

When to Use

  • Architecture or design decisions that need adversarial scrutiny

  • Research findings that need epistemic validation

  • Security analysis that needs independent verification

  • Any work product where "more eyes" reduce risk

Process Flow

  1. Research & Analyze — Deep-dive into the problem domain. Create analysis docs and capture sources.

  2. Review Packet Generation — Prepare the context for the reviewer:

      • Create Prompt: Write or update a red-team-prompt.md explaining exactly what is being reviewed and what the reviewer should focus on.

      • Define Manifest: Update a manifest.json (or equivalent list) dictating which source files and research artifacts to include.

      • Bundle Context: Run the context-bundler plugin on the manifest and prompt to compile a single cohesive review packet.

      • Isolate Iteration Directories: Save the bundled output to an explicitly isolated directory (e.g., .history/review-iteration-1/) so that when the red team forces a rewrite, the baseline artifact is never destructively overwritten.
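The packet generation and isolation described above can be sketched in Python. The `files` array in manifest.json and the bundling behavior are assumptions for illustration; `bundle_iteration` is a hypothetical stand-in for invoking the context-bundler plugin, not its real API:

```python
import json
import shutil
from pathlib import Path

def bundle_iteration(iteration: int, manifest_path: str = "manifest.json",
                     prompt_path: str = "red-team-prompt.md",
                     history_root: str = ".history") -> Path:
    """Concatenate the prompt plus every manifest-listed file into one
    review packet, saved in an isolated per-iteration directory so a
    forced rewrite never overwrites the previous baseline."""
    out_dir = Path(history_root) / f"review-iteration-{iteration}"
    out_dir.mkdir(parents=True, exist_ok=True)

    manifest = json.loads(Path(manifest_path).read_text())
    sources = [prompt_path] + manifest["files"]  # assumed manifest schema

    packet = out_dir / "review-packet.md"
    with packet.open("w") as out:
        for src in sources:
            out.write(f"\n<!-- source: {src} -->\n")
            out.write(Path(src).read_text())
            # Also keep a raw copy of each input next to the packet
            # (paths are flattened to file names in this sketch).
            shutil.copy(src, out_dir / Path(src).name)
    return packet
```

Each call with a new iteration number yields a fresh `.history/review-iteration-N/` directory, so earlier packets stay intact for diffing against the red team's objections.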

  3. Dispatch to Reviewers — Send the bundle to:

      • Human reviewers (paste-to-chat or browser)

      • CLI agents with adversarial personas (security auditor, devil's advocate)

      • Browser-based agents for interactive review
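One minimal way to fan a packet out to several reviewers is to prepend each adversarial persona file onto the bundle, producing one self-contained message per reviewer that can be pasted into a chat or piped to a CLI agent. This is a sketch assuming personas live as plain markdown in personas/; `build_review_messages` is illustrative, not part of any plugin API:

```python
from pathlib import Path

def build_review_messages(packet_path: str, persona_names: list[str],
                          personas_dir: str = "personas") -> dict[str, str]:
    """Return one persona-prefixed review message per reviewer,
    each carrying the full packet and explicit verdict instructions."""
    packet = Path(packet_path).read_text()
    messages = {}
    for name in persona_names:
        persona = Path(personas_dir) / f"{name}.md"
        messages[name] = (
            persona.read_text()
            + "\n---\nAdversarially review the packet below. Reply with a "
              "verdict of 'Approved' or 'More Research Needed' plus "
              "targeted questions.\n---\n"
            + packet
        )
    return messages
```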

  4. Receive Feedback — Capture the red team's verdict:

      • "More Research Needed" → Loop back to step 1 with targeted questions.

      • Capture Benchmark Metrics: Log the total_tokens and duration_ms consumed by the adversarial agent during this iteration to an evals/timing.json file, so the true cost of approval can be calculated.
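The benchmark capture could look like the following sketch, assuming evals/timing.json holds a JSON array of per-iteration records (the schema here is an assumption, not a documented format):

```python
import json
from pathlib import Path

def log_iteration_metrics(iteration: int, total_tokens: int, duration_ms: int,
                          path: str = "evals/timing.json") -> dict:
    """Append one iteration's cost record to the timing file and
    return running totals, making the cumulative cost of approval visible."""
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    records = json.loads(p.read_text()) if p.exists() else []
    records.append({"iteration": iteration,
                    "total_tokens": total_tokens,
                    "duration_ms": duration_ms})
    p.write_text(json.dumps(records, indent=2))
    return {"total_tokens": sum(r["total_tokens"] for r in records),
            "duration_ms": sum(r["duration_ms"] for r in records)}
```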

  5. Completion & Handoff — Once the red team's verdict is "Approved":

      • Terminate the review loop.

      • Pass the final, approved research and feedback documents back to the Orchestrator.

      • DO NOT attempt to seal the session or run a retrospective; the Orchestrator handles that.
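Putting the steps together, the control flow of the loop can be sketched as a plain function. `bundle` and `review` are injected callables standing in for the context-bundler run and the reviewer dispatch; the max-iterations guard and the escalation status are assumptions added for the sketch, not part of the skill:

```python
def red_team_loop(bundle, review, max_iterations=5):
    """Iterate bundle -> review until the red team approves, feeding each
    round's feedback back into the next research/bundling pass."""
    feedback = None
    for iteration in range(1, max_iterations + 1):
        packet = bundle(iteration, feedback)  # isolated per-iteration output
        verdict, feedback = review(packet)    # "Approved" / "More Research Needed"
        if verdict == "Approved":
            # Hand off to the Orchestrator; do not seal or run a retrospective here.
            return {"status": "approved", "iterations": iteration, "packet": packet}
    return {"status": "needs-human-escalation", "iterations": max_iterations}
```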

Dependencies

  • context-bundler — Required for creating review packets

  • Personas — Adversarial personas in the personas/ directory (e.g., security-audit.md, architect-review.md)

Diagram

See: red_team_review_loop.mmd

Related Skills

Related by shared tags or category signals.

  • spec-kitty-research (Research)

  • markdown-to-msword-converter (Automation)

  • zip-bundling (Automation)

  • spec-kitty-checklist (Automation)