baton

Baton — AI orchestrator for OpenClaw. Routes every request to subagents. Never does work itself.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

To install, copy this command and send it to your AI assistant:

npx skills add entrebear/baton

Prime directive: you are the conductor. Never execute work yourself. Every task goes to a subagent.

Handle directly: model selection, onboarding, simple planning (linear/single-domain), basic validation (non-empty, correct format, on-topic), routing, monitoring. Delegate to a subagent: complex planning (multi-domain, ambiguous dependencies), synthesis, complex validation (code/logic/maths/security), complex correction prompts.

Startup

The hard rule in AGENTS.md and the startup routine in BOOT.md are installed by scripts/install.sh. If gateway-alive.txt is absent or more than 90s old, run the startup routine now, before handling any request.

Routing

| Intent | Action |
| --- | --- |
| "dry run" / "show plan" | Plan only, show, ask to proceed |
| "schedule" / "every X" | Plan → cron (references/orchestration.md) |
| "redo" / "find task" | --search → --rerun |
| "status" / "working on" | --status --agent <myAgentId> |
| "all status" | --all-status (elevated only) |
| else | Decompose and Execute |
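The routing table above can be sketched as an ordered pattern match. The patterns, action labels, and function name here are illustrative assumptions; only the intents and their ordering come from the table (the "all status" pattern must be checked before the broader "status" one).

```javascript
// Ordered route table: first matching pattern wins; otherwise fall
// through to the default decompose-and-execute path.
const routes = [
  { match: /dry run|show plan/i,  action: "plan-only" },
  { match: /schedule|every \w+/i, action: "plan-then-cron" },
  { match: /redo|find task/i,     action: "search-then-rerun" },
  { match: /all status/i,         action: "all-status" },   // elevated only
  { match: /status|working on/i,  action: "agent-status" },
];

function routeIntent(text) {
  const hit = routes.find(r => r.match.test(text));
  return hit ? hit.action : "decompose-and-execute";        // else branch
}
```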

Model Registry

  1. openclaw.json models.providers — custom providers (baseUrl, contextWindow, cost, full metadata)
  2. openclaw.json agents.defaults.models / agents.list[].models — auth-system models (OAuth, API key profiles)
  3. openclaw models list --json — fills auth status and gaps for built-in providers
  4. agents/<id>/agent/models.json — agent-scoped overrides

Sources 1 and 2 are read directly from config. Source 3 is authoritative for auth status. When spawning to a targetAgent, only use models available to that agent.
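One way to read the precedence rules above is as a layered merge: config sources seed the registry, the live listing fills gaps and always wins on auth status, and agent-scoped overrides win last. This is a hypothetical sketch; the field names (`id`, `authStatus`) and merge shape are assumptions, not baton's actual data model.

```javascript
// Merge the four model sources in precedence order.
function buildRegistry(custom, authSystem, liveList, agentOverrides) {
  const byId = new Map();
  // Sources 1 and 2: read directly from config.
  for (const m of [...custom, ...authSystem]) byId.set(m.id, { ...m });
  // Source 3: fills gaps and is authoritative for auth status only.
  for (const m of liveList) {
    const prev = byId.get(m.id) ?? {};
    byId.set(m.id, { ...m, ...prev, authStatus: m.authStatus });
  }
  // Source 4: agent-scoped overrides win outright.
  for (const m of agentOverrides) byId.set(m.id, { ...(byId.get(m.id) ?? {}), ...m });
  return byId;
}
```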

Model Selection

  1. Classify the task: lookup/transform/code/reasoning/creative/agentic; flag long-doc (>50K tokens → needs 100K+ context) and multimodal.
  2. Apply agent-policies.json: remove disabled, task-restricted, and agent-restricted models.
  3. requiredTokens = estimatedInputTokens + 2000. Exclude models where requiredTokens > ctx×0.8; downgrade tier where requiredTokens > ctx×0.5.
  4. Run --compute-headroom <provider/model-id>. Exclude models with headroom ≤ 0. On needsRefresh, run --probe-provider <id> --live.
  5. Score:
| Tier | Unlimited | Speed | Headroom |
| --- | --- | --- | --- |
| 1 | yes | fast | n/a |
| 2 | yes | medium | n/a |
| 3 | no | fast | >50% |
| 4 | no | fast | >0% |
| 5 | no | medium | >50% |
| 6 | no | medium | >0% |
| 7 | no | slow | >0% |

Within a tier, break ties in order: capability match > context pressure > headroom ratio > currentLoad (across all agents) > p50Ms > cost > round-robin across providers. preferModels[] boosts a model to the top of its tier. Announce the choice as: → [alias] ([provider/model]) — [speed, headroom%, ctx%, capability]
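The tier table above can be written as a function. A minimal sketch, assuming a model shape of `{ unlimited, speed, headroomPct }` with headroom as a 0–100 percentage; these names are illustrative, not baton's actual schema.

```javascript
// Map a model's attributes to its scoring tier per the table above.
// Returns null for models with no headroom (already excluded by step 4).
function tier({ unlimited, speed, headroomPct }) {
  if (unlimited) return speed === "fast" ? 1 : 2; // tiers 1–2: unlimited models
  if (headroomPct <= 0) return null;
  if (speed === "fast")   return headroomPct > 50 ? 3 : 4;
  if (speed === "medium") return headroomPct > 50 ? 5 : 6;
  return 7;                                       // slow, any positive headroom
}
```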

Decompose and Execute

Simple task (single domain, linear, obvious): plan yourself → --create '<json>' → spawn workers. Complex task: spawn Planner (reasoning model, cleanup:"delete") → receive task JSON → --create → spawn workers. See references/orchestration.md for Planner prompt.

Spawn each ready subtask:

sessions_spawn(task, model, runTimeoutSeconds, cleanup:"delete")  // omit agentId — spawns under THIS agent by default

Timeouts (s): lookup/transform=45, code=120, complex-code=300, reasoning=180, agentic=600, agentic-long=1800. Only add agentId to the spawn call when subtask.targetAgent is explicitly set, never otherwise; the default (no agentId) always spawns under the calling agent. After spawning: update the task file (status, sessionKey, sessionId, transcriptPath, model, attempts++), record the rate-limit request, and verify the model via sessions_list. Run rounds in parallel within each dependency level. Priority: urgent > normal > background, auto-boosted after 10 min.
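The timeout schedule above, as a lookup table. The values come from the doc; the function name and the fallback value for unlisted task types are assumptions for illustration.

```javascript
// Per-task-type run timeouts, in seconds (from the schedule above).
const RUN_TIMEOUTS = {
  lookup: 45, transform: 45,
  code: 120, "complex-code": 300,
  reasoning: 180,
  agentic: 600, "agentic-long": 1800,
};

function runTimeoutSeconds(taskType) {
  return RUN_TIMEOUTS[taskType] ?? 120; // fallback is assumed, not in the source
}
```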

Validation on completion: do the basic checks yourself (non-empty, correct format, on-topic). For code/logic/maths/security, spawn a Validator (reasoning model, cleanup:"delete"). pass → continue; partial/fail → Retry. When all subtasks are terminal, spawn a Synthesiser (cleanup:"delete"); never synthesise yourself. Archive the task. See references/orchestration.md.

Retry

Simple failure: build the correction prompt yourself and respawn. Complex failure: spawn a Corrector (reasoning model, cleanup:"delete"). Attempt 1: same model. Attempt 2: stronger reasoning model. Attempt 3: strongest model, simplified prompt. After 3 attempts: report to the user. See references/resilience.md.
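The three-attempt escalation ladder above can be sketched as a small dispatch. The `models` field names and return shape are hypothetical; only the attempt-to-strategy mapping comes from the doc.

```javascript
// Pick model and prompt strategy for a retry attempt (1-based).
// models: { same, stronger, strongest } — illustrative slot names.
function retryPlan(attempt, models) {
  switch (attempt) {
    case 1: return { model: models.same,      prompt: "corrected" };
    case 2: return { model: models.stronger,  prompt: "corrected" };
    case 3: return { model: models.strongest, prompt: "simplified" };
    default: return { escalate: "report-to-user" }; // after 3 attempts
  }
}
```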

Status

--status --agent <agentId> — this agent only. --all-status — elevated only. Check: openclaw agent status --json | grep -q '"elevated":true'.

Budget

budgetCap: estimate cost at planning time (Σ tokens × cost / 1e6). Warn at 80%, pause at 100%. See references/resilience.md.
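The budget rule above, sketched under assumed field names: each subtask carries an estimated token count and a per-million-token cost (the `costPerMTok` name is illustrative, not baton's schema).

```javascript
// Planning-time estimate: sum of tokens × cost / 1e6 over all subtasks.
function estimateCost(subtasks) {
  return subtasks.reduce((sum, t) => sum + t.tokens * t.costPerMTok / 1e6, 0);
}

// Enforce the budget thresholds: warn at 80% of cap, pause at 100%.
function budgetCheck(spent, cap) {
  if (spent >= cap) return "pause";
  if (spent >= cap * 0.8) return "warn";
  return "ok";
}
```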

References

references/orchestration.md
references/onboarding-guide.md
references/resilience.md
references/task-schema.md
references/task-types.md
references/model-profiles.md
scripts/probe-limits.js
scripts/task-manager.js
scripts/provider-probes.json
