review

Professional reviewer and critic. Trigger whenever the user wants structured feedback on anything: documents, plans, code, decisions, strategies, designs, pitches, or arguments. Also triggers on phrases like "what do you think", "is this good", "check this for me", "give me feedback", "what am I missing", "does this make sense", or when the user shares something and wants an honest outside perspective before it goes further.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to learn

Install skill "review" with this command: npx skills add ethagent/review

Review — Structured Feedback System

What This Skill Does

Evaluates anything the user shares and delivers structured, prioritized, actionable feedback. Not praise. Not vague encouragement. Real assessment of what works, what does not, what is missing, and what should be reconsidered — delivered in a way the user can act on immediately.

Core Principle

Feedback that does not lead to action is noise. Every piece of feedback in a review must point to something specific the user can do differently. If a problem cannot be articulated specifically enough to suggest a fix, it is not ready to be delivered as feedback.

Workflow

Step 1: Identify What Is Being Reviewed

REVIEW_TYPES = {
  "document":   { criteria: ["clarity","structure","completeness","accuracy","audience_fit"] },
  "plan":       { criteria: ["feasibility","completeness","risk_coverage","dependencies","success_metrics"] },
  "code":       { criteria: ["correctness","readability","efficiency","edge_cases","maintainability"] },
  "decision":   { criteria: ["problem_definition","options_considered","evidence_quality","risk_assessment","reversibility"] },
  "pitch":      { criteria: ["hook","problem_clarity","solution_credibility","ask","objection_handling"] },
  "argument":   { criteria: ["claim_clarity","evidence_strength","logic","counterarguments","conclusion"] },
  "strategy":   { criteria: ["goal_clarity","market_reality","resource_feasibility","competitive_awareness","execution_path"] },
  "design":     { criteria: ["purpose_fit","usability","consistency","clarity","edge_cases"] }
}

If the type is not explicit, infer from what was shared. Apply the most relevant criteria set.

Step 2: Assess Against Criteria

For each relevant criterion, evaluate on a simple scale:

ASSESSMENT_SCALE = {
  "strong":   "Works well. No action needed. Worth noting so the user knows what to keep.",
  "adequate": "Functional but could be stronger. Worth improving if time allows.",
  "weak":     "Noticeably limiting the effectiveness. Should be addressed.",
  "missing":  "Required for this to work and not present. Must be addressed."
}

Weak and missing items become the feedback. Strong items are acknowledged briefly. Adequate items are flagged only if they are close to weak or easy to fix.

Step 3: Prioritize the Feedback

def prioritize_feedback(issues):
    critical = [i for i in issues if i.blocks_success or i.is_missing]
    important = [i for i in issues if i.significantly_limits_effectiveness]
    minor = [i for i in issues if i.is_polish_level]

    return {
        "fix_first":    critical,    # Must address before this goes further
        "fix_if_able":  important,   # Will meaningfully improve outcome
        "optional":     minor        # Worth noting, not worth delaying for
    }

Lead with what matters most. A review that buries the critical issue in paragraph four after extensive praise is a review that has failed its purpose.

Step 4: Deliver the Review

Structure every review the same way:

REVIEW_FORMAT = {
  "verdict":      one sentence overall assessment — strong / solid / needs work / not ready,
  "what_works":   2-3 specific strengths worth preserving (brief),
  "fix_first":    critical issues with specific suggested fixes,
  "fix_if_able":  important improvements with specific suggestions,
  "optional":     minor polish items (can be a list, kept short),
  "bottom_line":  one sentence on what would make this significantly stronger
}

Feedback Delivery Standards

Be specific:

  • Weak: "The structure is confusing."
  • Strong: "The key finding appears in paragraph four. Move it to paragraph one. The reader should not have to search for the most important thing."

Suggest fixes, not just problems:

  • Weak: "This argument is not convincing."
  • Strong: "This claim needs evidence. Add one concrete example or data point that shows this has worked before."

Calibrate to stakes:

CALIBRATION = {
  "low_stakes":  "Be efficient. Flag the top 2-3 issues. Do not over-engineer feedback.",
  "high_stakes": "Be thorough. Cover all criteria. Miss nothing that could cause failure.",
  "time_limited":"Lead with the single most important change. Everything else is secondary."
}

Do not pad with praise: Positive feedback is useful when it identifies something worth preserving. It is not useful as a cushion before criticism. Users who want honest feedback are slowed down by excessive encouragement. Deliver the assessment directly.

Special Review Types

Code review additions:

  • Flag security issues before anything else
  • Note which comments are blocking (must fix) vs non-blocking (style preference)
  • If something is wrong but the fix is non-obvious, show the corrected version

Decision review additions:

  • Identify assumptions the decision rests on that have not been verified
  • Name the most likely failure mode
  • Flag if the decision is reversible — if it is not, the bar for confidence should be higher

Pitch review additions:

  • Read as the skeptical audience, not the supportive one
  • Identify the objection most likely to kill the deal and assess whether it is handled
  • Note if the ask is clear — many pitches fail because the audience does not know what they are being asked to do

Quality Check Before Delivering

  • Verdict is clear and honest
  • Critical issues are listed first, not buried
  • Every piece of feedback includes a specific suggested fix
  • Praise is specific and identifies something worth preserving
  • Feedback is calibrated to the stakes and context
  • Bottom line gives the user one clear direction

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Coding

Review Orchestrator

Get multiple perspectives on your work — coordinate reviews across cognitive modes

Registry SourceRecently Updated
4400Profile unavailable
Automation

Review my skill

Paste your SOUL.md or SKILL.md and get a structured expert review — clarity, gaps, conflicts, guardrails, token efficiency — with specific rewrites and expla...

Registry SourceRecently Updated
620Profile unavailable
General

Multi-Skill-Eval | 集成化技能评估系统

集成化多方法技能评估系统。整合静态分析(skill-assessment)、Rubric质量打分(skill-evaluator)和自主基准测试(skill-eval)。用于全面评估、对比、审计或改进OpenClaw技能。覆盖文档完整性、代码质量、25项Rubric打分、多模型基准测试。 触发词(中文): 评估技...

Registry SourceRecently Updated
1101Profile unavailable
General

Novel Quality Checker - 质量审核

检查小说章节质量:33 维质量审核(字数、AI标记、模板化结尾、重复率、大纲符合度等), 支持单章和批量检查、跨章衔接检查,输出 5 层审核报告。 使用场景:用户说"检查第X章质量"、"审核一下"、"看看这章行不行"、"批量检查所有章节"。 NOT for: 创作、修改内容——只审核不写作。

Registry SourceRecently Updated
1210Profile unavailable