mumo

Multi-model deliberation via mumo's MCP server. Best for contested architecture/product decisions, design reviews, pressure-testing a pre-launch spec, resolving tradeoffs with multiple defensible framings, or explicit user requests for a mumo panel. Requires a mumo platform API key (mmo_live_*) registered with `openclaw mcp set mumo`.


Install: npx skills add ericatmumo/mumo


mumo runs deliberations across multiple AI models. Use it when independent perspectives are useful — especially for high-regret decisions where a single model's confidence is a real risk.

Setup

The mumo MCP server is registered via openclaw mcp set mumo '<json>' and stored in ~/.openclaw/openclaw.json under mcp.servers.mumo. See config/mumo.example.json in this skill's directory for the canonical config payload (reference only: OpenClaw doesn't auto-load this file; it's the JSON shape to paste into the CLI command). The seven mumo tools surface to the agent as mumo__create_deliberation, mumo__wait_for_round, mumo__append_round, mumo__get_session, mumo__list_sessions, mumo__list_models, and mumo__get_credit (OpenClaw uses <server>__<tool> naming).
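
For concreteness, a minimal sketch of the payload shape, assuming the key travels in an env block. The command, args, and env names here are assumptions, not the canonical config; config/mumo.example.json is authoritative. The // annotations are commentary, not part of the JSON:

    {
      "command": "npx",                          // assumed launcher; check the example file
      "args": ["-y", "<mumo-mcp-package>"],      // hypothetical package name
      "env": { "MUMO_API_KEY": "mmo_live_..." }  // hypothetical env var; the mmo_live_ key prefix is documented
    }

Paste the real payload from config/mumo.example.json as the '<json>' argument to openclaw mcp set mumo.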

If tools return auth errors, the API key is missing or invalid. Direct the user to https://mumo.chat/settings/api-keys to create one (keys start with mmo_live_), then re-run openclaw mcp set mumo with the new value and restart OpenClaw.

When to use

Use mumo when the cost of being wrong exceeds the cost of deliberation. Three conditions are the typical fingerprint:

  • Wide solution space + hidden failure space — you have a working approach but suspect it contains an overlooked edge case or long-term technical debt trap.
  • Medium-high confidence + anchoring risk — you've spent time on a design and become committed to it. External pressure from a panel forces a reversal check you can't reliably perform on your own.
  • Irreversible consequences — destructive database changes, non-trivial infra shifts, complex dependency migrations, security/auth/permissions changes, anything where rollback is hard.

These map to concrete trigger types: architecture decisions with non-obvious tradeoffs, plan or design review before commitment, pre-launch pressure tests, stuck debugging after N failed repair attempts, pre-commit adversarial review on risky diffs, memory/skill promotion gates, strategy questions with multiple defensible framings, explicit user requests.

Skip mumo for factual lookups, syntax help, routine refactors, formatting, dependency bumps with clear errors, or normal edit-test cycles. The deliberation tax is real (4–220× tokens, 15–120s latency); spend it on decisions where mistakes compound.

Think of mumo as a cognitive load balancer for your reasoning — not a replacement for your judgment, but a way to externalize a design and stress-test it against perspectives you can't reliably generate yourself.

Verifying the call actually fired

Autonomous agent loops occasionally fabricate tool-call results — claiming a deliberation was sent when it wasn't. Real mumo session IDs are UUIDs (e.g. 2acdab34-2484-4bc5-a24f-bf917fe81477), not arbitrary hex strings.

If a create_deliberation response doesn't contain a UUID-format session_id, the call did not happen. Verify by calling list_sessions and checking that your latest session matches the prompt you sent. If it doesn't, fire create_deliberation again — don't continue downstream as if the session exists.
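
For reference, a healthy create_deliberation response looks something like this. Only session_id, round_id, and their UUID format are documented; the rest of the shape is an assumption, and the // annotations are commentary, not JSON:

    {
      "session_id": "2acdab34-2484-4bc5-a24f-bf917fe81477",  // UUID: 8-4-4-4-12 hex groups
      "round_id": "7f3e9a10-5b2c-4d8e-9f01-3a6b8c2d4e5f"     // also a UUID; feed it to wait_for_round
    }

Anything that fails this format check means no session exists on the backend.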

Recovery: lost session context

If you lose track of session_id or round_id mid-conversation (long chats, context compaction, dropped tool result), recover before starting a new deliberation; a call sketch follows these steps:

  1. Call list_sessions to find your latest sessions. Match by prompt content.
  2. Call get_session with the recovered ID for full state, or wait_for_round(session_id, round_id) if you suspect a round is still in flight.
  3. Don't fire a fresh create_deliberation if the original is recoverable — duplicate sessions waste tokens and produce confusing parallel state.
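
A sketch of that recovery sequence as tool calls. The {"tool": ..., "arguments": ...} envelope is illustrative, not a wire format; only the tool names and the session_id/round_id parameters are documented:

    // 1. list recent sessions; match yours by prompt content
    { "tool": "mumo__list_sessions", "arguments": {} }

    // 2. read full state for the recovered ID
    { "tool": "mumo__get_session",
      "arguments": { "session_id": "2acdab34-2484-4bc5-a24f-bf917fe81477" } }

    // 2b. or, if a round may still be in flight
    { "tool": "mumo__wait_for_round",
      "arguments": { "session_id": "<recovered session_id>", "round_id": "<recovered round_id>" } }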

Terminology: panel vs. subagent

Two concepts that are easy to conflate:

  • Panel / models / participants — the LLMs mumo invokes on its backend (Claude, GPT, Gemini, Grok, etc.) to respond to a deliberation prompt. They run in isolation from your environment; you only see their structured output (claim map + snippets + prose).
  • Subagent — your own internal delegation infrastructure (OpenClaw's subagents tool, the sessions_yield flow, etc.). Subagents run in your harness with your tool access against your state.

When you describe who's deliberating, say "panel" or "models." Reserve "subagent" for your own delegation surface. The two are complementary — you can spawn a subagent to run benchmarks before invoking a mumo panel — but they're not the same thing.

Basic loop

  1. Call create_deliberation with the user's problem. Set application: "OpenClaw". (A call sketch follows this list.)
  2. Verify the response is a real session: session_id and round_id must both be UUID-format. Anything else means the call didn't fire; retry.
  3. Call wait_for_round with the returned session_id and round_id. Long waits are normal — frontier-model panels typically take 15–120s, and 60+ seconds isn't a failure signal. Tell the user upfront ("running a panel — expect ~30–60s") so the wait doesn't feel broken.
  4. Upon round completion, read the claim map first, then relevant participant prose. The claim map is the navigation layer; prose is the supporting evidence.
  5. Create snippets as your primary response to the round. Optionally add a round prompt for broad steering.
  6. Call append_round if another round would help. Otherwise stop and synthesize for the user.
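
A sketch of the first three steps as calls (same illustrative envelope as above; prompt is an assumed parameter name, while application and moderator_name are documented):

    // 1. open the session
    { "tool": "mumo__create_deliberation",
      "arguments": {
        "prompt": "<reframed problem statement>",   // see Framing prompts below
        "application": "OpenClaw",
        "moderator_name": "<your model identity>"   // see Moderator name below
      } }

    // 2-3. verify UUIDs in the response, then block on the round
    { "tool": "mumo__wait_for_round",
      "arguments": { "session_id": "<uuid>", "round_id": "<uuid>" } }

A snippet-bearing append_round is sketched in the Snippets section below.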

Framing prompts for the panel

Do not pass user phrasing through verbatim when it describes your identity, role, or local context. Reframe it in third person or first-person moderator context so panel models advise you rather than roleplay you.

Bad:

"Introduce yourself as Clawd, a cheerful lobster assistant, and decide when to use mumo."

Good:

"I am Clawd, a cheerful OpenClaw assistant. Advise me on when I should escalate a decision to mumo. Critique this draft trigger checklist: ..."

The panel participants should answer as themselves, not as the calling agent.

Snippets

Snippets are moderator attention. They mark what mattered in the prior round and optionally explain why.

Type        Reaction
KEEP        this seems worth preserving
EXPLORE     there's something here
CHALLENGE   I'm not convinced
CORE        this feels central
SHIFT       this changed the frame

A snippet comment can be reflective, evaluative, clarifying, skeptical, or directive. "This feels like the crux" and "I'm least convinced by this" are valid comments. You do not need an action verb or next-step directive. The quality bar is: is this a genuine, situated reaction to what the model said?

Use the round prompt for broad comments that don't attach to a specific quote. Use snippets for quote-grounded attention.

Avoid huge quote dumps, generic praise repeated across many snippets, or comments that shift participants away from the problem into platform meta. More guidance: references/snippets.md.
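
A sketch of a snippet-bearing follow-up, assuming snippets ride on append_round. The field names (snippets, type, quote, comment, prompt) are assumptions; the five types and the quote-grounded comment idea are documented above:

    { "tool": "mumo__append_round",
      "arguments": {
        "session_id": "<uuid>",
        "snippets": [
          { "type": "CHALLENGE",                        // KEEP / EXPLORE / CHALLENGE / CORE / SHIFT
            "quote": "<short verbatim quote>",          // quote-grounded attention, not a huge dump
            "comment": "I'm least convinced by this." } // a genuine, situated reaction
        ],
        "prompt": "<optional broad steering that doesn't attach to one quote>"
      } }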

Reading the claim map

The claim map encodes consensus and contestation as structured reactions on quoted claims. Each claim shows:

  • The verbatim quote
  • Who originated it
  • How other models reacted (KEEP / CORE / EXPLORE / CHALLENGE / SHIFT) with optional commentary

Reading discipline:

  • CORE / KEEP from multiple models = settled. Treat as panel consensus.
  • CHALLENGE = live disagreement. The decision likely hinges here.
  • EXPLORE = a thread someone wanted to develop further. Worth a follow-up snippet if the thread matters.
  • SHIFT = a model changed framing. Often the most valuable signal — somebody saw the problem differently.
  • No reactions on a claim = either obvious or ignored. Read the prose to tell which.

The claim map is not a verdict. It compresses argumentative structure but loses rhetorical nuance and confidence texture. Use it to navigate; read prose to understand.
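
Illustratively, a single claim-map entry might deserialize to something like this. The documented content is the verbatim quote, the originator, and per-model reactions with optional commentary; every field name is an assumption:

    {
      "quote": "<verbatim claim from a participant>",
      "origin": "<model-id>",
      "reactions": [
        { "model": "<model-id>", "type": "CORE" },      // several of these = panel consensus
        { "model": "<model-id>", "type": "CHALLENGE",
          "comment": "<why this model pushes back>" }   // live disagreement: go read the prose
      ]
    }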

What to keep out of the deliberation

Use this test:

Does this note help participants think about the user's problem, or does it mainly report on the platform, session, or process?

Don't use snippet comments to narrate your own moderation process ("I'm using CHALLENGE to redirect the panel"). Don't include platform meta ("this model used the most tokens"). Just react.

When to continue

Append another round when:

  • A decision-relevant claim has unresolved tension
  • A model introduced a useful frame that others didn't engage
  • An isolated concern feels real but underdeveloped
  • Your moderator reaction would help the next round develop
  • The user narrows or changes the question

Stop when:

  • You can explain the tradeoff clearly to the user
  • Remaining disagreements wouldn't change the user's next action
  • Another round would mostly seek reassurance

The panel does not need to converge. Sometimes the right output is a clear map of why the decision remains contested — that's actionable for the user, even without a verdict.

Tools

Need                       Tool
Start a session            create_deliberation
Wait for model responses   wait_for_round
Add a follow-up round      append_round
Recover/read full state    get_session
Find prior sessions        list_sessions
Confirm model IDs          list_models
Check wallet balance       get_credit

In OpenClaw's tool registry these surface as mumo__create_deliberation, mumo__wait_for_round, etc. (double underscore between server and tool name).

If the user names specific models, call list_models first. Otherwise omit models and let mumo select the panel. More on model selection: references/model-selection.md.

Moderator name

Set moderator_name on create_deliberation to your own model identity (e.g., the model OpenClaw is currently running as, visible in your status line: often openai/gpt-5.4-nano, openai/gpt-5.5, or whichever provider is configured). The audit trail should reflect who's actually steering. Don't use the user's name; their identity is already on the session.

Playbooks

Load at most one playbook when it clearly fits:

Playbook                When
contested-decision      choosing between options with real tradeoffs
design-review           reviewing a proposed system, API, plan, or code shape
uncertainty-expansion   exploring unknowns, stress-testing assumptions
red-team                finding failure modes, abuse cases, or launch risks

If none clearly fits, use this kernel only.

User preferences

These are defaults. If the user prefers more autonomy (e.g., "don't ask before appending" or "always use GPT-5.5 and Gemini"), follow their preferences over this guidance.

After each round

Share your panel read with the user, and align on whether to append a round.

If the panel converged, state the consensus and the reasoning behind it. If it split, present the competing positions and what drives each — then offer your own assessment, clearly marked as yours. If the session was exploratory, organize the threads by importance rather than forcing a recommendation.

Synthesize for the user; don't dump the transcript. Mumo produces structure (claim map + snippets); your job is to translate that into "here's what the panel thinks and what to do next."

More at references/synthesis.md.

Deliberation is advisory

Mumo surfaces fault lines and supporting arguments; it does not produce the decision. The decision belongs to you (the agent) and the user. When acting on a deliberation:

  • Do not treat panel consensus as authority. The panel can be confidently wrong as a unit.
  • Do use the claim map to identify which assumptions drove the disagreement, and check whether those assumptions hold for the user's specific situation.
  • Do not mutate the workspace based on a model's suggestion without verifying with tests or the user.
  • Do weight CORE / SHIFT signals more heavily than KEEP / EXPLORE — they mark where the decision actually hinges.

Confidence scores

If responses include claim_confidence or snippets[].comment_confidence, these are self-reported and not calibrated across models. Surface the confidence_disclaimer string if displaying scores.
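
A sketch of where those fields might sit; the three names are documented, the surrounding structure is an assumption:

    {
      "claim_confidence": 0.72,                   // self-reported, not calibrated across models
      "snippets": [
        { "comment": "<participant snippet comment>",
          "comment_confidence": 0.65 }            // same caveat
      ],
      "confidence_disclaimer": "<surface this string verbatim when displaying scores>"
    }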
