fleet

Multi-agent fleet management for OpenClaw, Claude Code, Codex, Cursor, Windsurf, Gemini CLI, and custom agent stacks. One CLI to monitor, manage, and orchestrate your entire AI agent fleet. Fleet judges reliability, routes intelligently, tracks deltas, and keeps gateways, HTTP workers, Docker workers, and local processes accountable. Network access stays limited to loopback and explicitly configured endpoints. Operator consent required before install.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install skill "fleet" with this command: npx skills add oguzhnatly/fleet

Fleet: Multi-Agent Fleet Management

Intent, Authorization, and Trust

Who this skill is for: A coordinator AI agent that has been explicitly configured by its operator. Fleet is designed for OpenClaw, Claude Code, Codex, Cursor, Windsurf, Gemini CLI, custom agent stacks, and any shell-capable coding agent or editor that can read a skill file and run commands.

What "authorized" means here: Installing this skill authorizes local fleet visibility and read-only checks. Commands that dispatch work, steer agents, run parallel tasks, stop agents, restore backups, include credential-bearing backups, or install updates require an explicit confirmation or --yes from a current operator instruction.

Safety defaults: Setup creates ~/.fleet/config.json only. Symlink creation requires fleet init --link. Shell rc modification requires fleet init --path and confirmation. Updates are check-only unless fleet update --install is used. Credential storage prefers tokenEnv; inline token fields remain only for backwards compatibility.

What this skill never does (hard limits):

  • Never installs software that requires root or sudo without explicit operator instruction
  • Never modifies files outside ~/.fleet/ by default. ~/.local/bin/fleet is written only with fleet init --link or fleet init --path. Shell rc files are touched only with fleet init --path after confirmation
  • Never contacts external services except: (1) operator-configured agent gateways on loopback, (2) operator-configured endpoint URLs for health checks, (3) operator's GitHub repos via their authenticated gh CLI for CI status, (4) api.github.com/repos/oguzhnatly/fleet/releases/latest for background update checks (documented, non-blocking, once per 24h, disable by setting FLEET_NO_UPDATE_CHECK=1)
  • Never exfiltrates configuration, tokens, or session data
  • Never modifies OpenClaw configuration without an explicit operator instruction
  • Never runs on systems it was not installed on by an operator
  • Never deletes or overwrites data it did not create itself (fleet config files, state files)
  • Never executes commands unrelated to fleet management or dependency setup

Trust boundary: All operations are local. Fleet communicates only with 127.0.0.1 (loopback) to reach the operator's own agent gateways. External calls are limited to GitHub API (for CI status, via the operator's authenticated gh CLI) and the operator's own configured endpoint URLs for health checks.


Fleet gives operators clear visibility into agent health, live work, reliability, runtime status, dispatch history, routing decisions, and delta tracking across gateways, HTTP workers, Docker workers, and local processes.

Security Model

Fleet operates within a strictly defined trust boundary. This section documents it explicitly so any security scanner, reviewer, or operator can verify the scope.

Network scope

Fleet only makes HTTP connections to:

  1. 127.0.0.1 loopback ports: the operator's own agent gateways (configured in ~/.fleet/config.json)
  2. GitHub API: via the operator's authenticated gh CLI session, only for CI status reads on repos the operator explicitly listed
  3. URLs in endpoints[]: health checks to URLs the operator explicitly configured

Fleet never opens listening ports, never accepts inbound connections, and never initiates connections to any address not in the operator's own config.
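
The "never opens listening ports" claim is easy for an operator to spot-check. A minimal sketch, assuming lsof is available (fleet itself does not ship this check; substitute ss or netstat if needed):

```shell
# List TCP listeners and look for any fleet-owned process.
# If nothing matches, print a reassurance line instead.
lsof -iTCP -sTCP:LISTEN -P -n 2>/dev/null | grep -i fleet || echo "no fleet listeners"
```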

Filesystem scope

Fleet reads and writes only:

  • ~/.fleet/: fleet config, state, logs, backups (all created by fleet itself)
  • ~/.local/bin/fleet: optional symlink created only by fleet init --link or fleet init --path
  • Shell rc files (~/.bashrc, ~/.zshrc, ~/.profile): only with fleet init --path after confirmation
  • Session output for fleet watch: reads the specific session file fleet itself created for the named agent under ~/.openclaw*/agents/*/sessions/<fleet-session-id>.jsonl. This file may contain conversation transcript data from that agent session. Fleet never reads sessions it did not create, other users' sessions, or the full main session unless --all --yes is explicitly passed.

Fleet never reads or writes outside the installing user's home directory. Fleet never accesses sessions it did not create. Fleet reads ~/.openclaw/openclaw.json once during fleet init to auto-detect the workspace path (read-only, never written). Fleet never modifies OpenClaw config files.

Credential scope

Fleet prefers tokenEnv entries that point to environment variables owned by the operator. Legacy inline token fields are still supported for backwards compatibility, but fleet audit warns about inline tokens, placeholder tokens, missing tokenEnv values, and loose config permissions. Tokens are used only for loopback agent gateway requests and are never transmitted outside loopback.
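
The tokenEnv pattern amounts to indirection: the config stores only the name of an environment variable, and the secret is resolved from the environment at call time. A minimal bash sketch of that mechanism (illustrative, not fleet's actual code; the variable name matches the example config in this document):

```shell
# Resolve a token from the environment variable whose *name* is given.
# ${!var_name} is bash indirect expansion.
resolve_token() {
  local var_name="$1"
  printf '%s' "${!var_name}"
}

export FLEET_CODER_TOKEN="example-secret"
resolve_token FLEET_CODER_TOKEN   # prints: example-secret
```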

Privilege scope

Fleet never calls sudo. Fleet never requests elevated permissions. All install commands that require system-level package managers (e.g., apt-get, brew) are run as the current user and listed explicitly in the Dependency Installation section below.


Quick Reference

  • Send a task to an agent → fleet task coder "add pagination to /api/spots" --yes after operator approval
  • Watch what an agent is doing → fleet watch coder
  • Watch full main session history → fleet watch coder --all --yes after operator approval
  • Send a mid-task correction → fleet steer coder "also add rate limiting" --yes after operator approval
  • Stop an agent's current task → fleet kill coder --yes after operator approval
  • See all recent dispatches and outcomes → fleet log
  • Decompose a task across multiple agents → fleet parallel "<task>" --dry-run (plan first, agents selected by trust)
  • Check if all agents are alive → fleet agents
  • Something feels wrong, need full picture → fleet sitrep
  • Quick health check → fleet health
  • Check CI across all repos → fleet ci
  • See which agents are most reliable → fleet trust
  • Drill into one agent's reliability → fleet score coder
  • Check reliability for a specific task type → fleet score coder --type code
  • Check CI for a specific repo → fleet ci <name>
  • See what skills are installed → fleet skills
  • Back up a safe config snapshot → fleet backup
  • Restore after something broke → fleet restore --yes after operator approval
  • First-time setup → fleet init, optionally fleet init --link
  • User asks "how's the fleet?" → fleet agents, summarize
  • User asks "what changed?" → fleet sitrep, report deltas
  • Scheduled morning report → fleet sitrep 12 in cron
  • Before deploying → fleet health + fleet ci
  • Register a non-OpenClaw target → fleet runtime add <name> <type> [options] (v4)
  • Probe one runtime end to end → fleet runtime test <name> (v4)
  • Live status of every runtime → fleet runtime list (v4)
  • List adapters and their bindings → fleet adapters (v4)
  • Show configured task rules → fleet policy
  • Toggle task rules → fleet policy enable, fleet policy disable
  • Require or relax rule enforcement → fleet policy require, fleet policy optional
  • Add or remove a task rule → fleet policy add "<rule>", fleet policy rm <index>
  • Choose where rules apply → fleet policy scope task,parallel,steer
  • Preview rules applied to a task → fleet policy preview <agent> "<prompt>"

Operator Constitution

If constitution.enabled is true in the Fleet config, fleet task, fleet parallel, and fleet steer prepend the configured rules to dispatched agent messages. If constitution.required is true, scoped dispatches fail until valid rules are configured. Use this for operator rules that all agents should see before they act.

Example config:

{
  "constitution": {
    "enabled": true,
    "title": "Operator Constitution",
    "mode": "prepend",
    "required": true,
    "applyTo": ["task", "parallel", "steer"],
    "rules": [
      "Read project instructions before editing files",
      "Run verification before reporting completion",
      "Do not rewrite shared git history unless the operator explicitly declares an emergency"
    ]
  }
}

Useful commands:

fleet policy
fleet policy enable
fleet policy require
fleet policy add "Run verification before reporting completion"
fleet policy scope task,parallel,steer
fleet policy rm 1
fleet policy preview coder "fix failing tests" --type code

Treat this as prompt-level enforcement. It helps compliant coding agents follow the operator's rules, but it does not replace sandboxing, git hooks, CI, or human review.

Setup

Purpose: Create a local Fleet config without surprising filesystem changes. No root access is needed or used.

Default behavior: fleet init creates ~/.fleet/config.json, sets it to mode 600, scans loopback ports for local gateways, and reads ~/.openclaw/openclaw.json once for workspace detection. It does not create a symlink and does not edit shell rc files.

Optional link: fleet init --link creates ~/.local/bin/fleet.

Optional PATH write: fleet init --path creates the symlink and may append the PATH export to the user shell rc files, but only after confirmation. In automation, use fleet init --path --yes only when the operator explicitly approved that shell rc change.

After init, prefer tokenEnv for agent auth. Inline token fields are legacy compatibility only.

Via ClawHub

clawhub install fleet

Manual

git clone https://github.com/oguzhnatly/fleet.git
fleet/bin/fleet init
# Optional after review: fleet/bin/fleet init --link

Configuration

Fleet reads ~/.fleet/config.json. Generate one automatically or create manually.

Auto-Detection Setup

fleet init

This scans loopback for running OpenClaw gateways, detects your workspace, and creates a starter config. It does not edit shell rc files unless --path is used.
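
The loopback scan can be pictured as a simple probe loop. A hedged sketch: the port sequence below matches the documented auto-detection range (48391, then every 20 ports up to 48600), but the /health probe path and one-second timeout are illustrative assumptions, not fleet's actual code.

```shell
# Probe candidate gateway ports on loopback and report any responders.
found=0
for port in $(seq 48391 20 48600); do
  if curl -s --max-time 1 "http://127.0.0.1:${port}/health" >/dev/null; then
    echo "gateway candidate on :${port}"
    found=1
  fi
done
echo "scan complete (found=${found})"
```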

Manual Configuration

Create ~/.fleet/config.json:

{
  "workspace": "~/workspace",
  "gateway": {
    "port": 48391,
    "name": "coordinator",
    "role": "coordinator",
    "model": "strategic-default"
  },
  "agents": [
    {
      "name": "coder",
      "port": 48520,
      "role": "implementation",
      "model": "coding-default",
      "tokenEnv": "FLEET_CODER_TOKEN"
    }
  ],
  "endpoints": [
    { "name": "website", "url": "https://example.com" }
  ],
  "repos": [
    { "name": "frontend", "repo": "myorg/frontend" }
  ]
}

Configuration Reference

  • workspace (string, required): path to main workspace directory
  • gateway.port (number, required): main coordinator gateway port
  • gateway.name (string): display name (default: "coordinator")
  • gateway.role (string): role description
  • gateway.model (string): model identifier
  • agents[] (array): employee agent gateways
  • agents[].name (string, required): unique agent identifier
  • agents[].port (number, required): gateway port number
  • agents[].role (string): what this agent does
  • agents[].model (string): model used
  • agents[].tokenEnv (string): environment variable containing the auth token for API calls
  • agents[].token (string): legacy inline auth token, avoid when possible
  • endpoints[] (array): URLs to health-check
  • endpoints[].name (string, required): display name
  • endpoints[].url (string, required): full URL to check
  • endpoints[].expectedStatus (number): expected HTTP code (default: 200)
  • endpoints[].timeout (number): timeout in seconds (default: 6)
  • repos[] (array): GitHub repos for CI monitoring
  • repos[].name (string, required): display name
  • repos[].repo (string, required): GitHub owner/repo format
  • services[] (array): systemd service names to check
  • linear.teams[] (array): Linear team keys for ticket counts
  • linear.apiKeyEnv (string): env var name for the API key
  • skillsDir (string): path to ClawHub skills directory

Environment Variables

  • FLEET_CONFIG: path to config file (default: ~/.fleet/config.json)
  • FLEET_WORKSPACE: override workspace path (default: the config's workspace value)
  • FLEET_STATE_DIR: state persistence directory (default: ~/.fleet/state)
  • NO_COLOR: disable colored output when set (default: unset)

Commands: Detailed Reference

fleet task <agent> "<prompt>"

Dispatches a task to a named agent and streams the response live.

Requires: Agent auth through agents[].tokenEnv or legacy agents[].token. Prefer tokenEnv.

Options:

  • --type code|review|research|deploy|qa: override task type (auto-inferred from prompt if omitted)
  • --timeout <minutes>: response timeout (default: 30)
  • --no-wait: fire and forget, return immediately

Output:

Fleet Task
──────────
  Agent     coder (port 48520)
  Type      code
  Task ID   a1b2c3d4
  Timeout   30m

  add pagination to /api/spots endpoint

  ────────────────────────────────────────
  [streams response in real time]
  ────────────────────────────────────────
  ✅  Task complete  (a1b2c3d4)

Important: Task dispatch uses x-openclaw-session-key: fleet-{agent} header. All tasks to the same agent share a session, so the agent has context of prior tasks.
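
The session key derivation is one line; fleet sends the resulting value as the x-openclaw-session-key header on every dispatch (the fleet-{agent} format is from this document; the sketch below just makes it concrete):

```shell
# Session keys follow the documented "fleet-{agent}" format, so every
# task to the same agent lands in the same session.
agent="coder"
session_key="fleet-${agent}"
echo "$session_key"   # prints: fleet-coder
```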

fleet steer <agent> "<message>"

Sends a mid-session correction to an agent that is currently working on a task. Routes to the same session as fleet task, so the agent has full context.

Output:

Fleet Steer
───────────
  Agent    coder
  Session  fleet-coder

  ────────────────────────────────────────
  [agent response to correction]
  ────────────────────────────────────────
  ✅  Steered.

fleet watch <agent> [--all]

Live tail of the agent's active fleet session output. Polls the session file that fleet itself created for that agent, showing new messages as they arrive.

  • Default: watches the fleet-{agent} session (the one fleet task created)
  • --all --yes: watches the full main session only after explicit operator approval

Output:

Watching coder
──────────────
  Session: agent:main:fleet-coder
  File: b80eb2e5.jsonl · polling every 3s · Ctrl+C to stop

  Last 2 message(s):

  you             16:37 UTC
  add pagination to /api/spots

  coder           16:37 UTC
  Starting with the cursor-based approach...

Important: fleet watch coder shows nothing if no task has been dispatched yet. Run fleet task coder "<prompt>" first to create the fleet session. Use fleet watch coder --all --yes only after approval to see full main session history.

fleet kill <agent> [--force]

Sends a graceful stop signal to the agent's fleet session. The agent acknowledges and archives the session. Marks all pending log entries for that agent as steered.

fleet parallel "<task>" [--dry-run]

Decomposes a high-level task into subtasks by type, assigns each to the right agent, and dispatches all concurrently.

Always use --dry-run first to review the decomposition plan before executing. Requires confirmation before actual dispatch.

Output with --dry-run:

Fleet Parallel
──────────────
  Task: research competitor pricing and build a pricing page

  Execution plan:

  1. researcher    [research]
     Research phase: ...

  2. coder         [code]
     Implementation: ...

  ────────────────────────────────────────
  2 subtask(s) ready to dispatch in parallel.

  ℹ️  Dry run complete. Remove --dry-run to execute.

fleet log [--agent <name>] [--outcome <status>] [--limit <n>] [--all]

Shows the dispatch history for all fleet tasks. Filterable by agent, outcome, and count.

Outcomes: success, failure, timeout, steered, pending

Output:

Fleet Log  3 entries

  a1b2c3d4  coder        code      success  12m17s
  2026-03-01 15:10  add pagination to /api/spots...

fleet trust [--window <hours>] [--json]

Shows the trust matrix for all configured agents, computed from ~/.fleet/log.jsonl.

Trust is a single composite score per agent derived from the formula:

trust_score = quality_score × speed_multiplier
  • quality_score: weighted average of per-task outcomes. success=1.0, steered=0.5, failure/timeout=0.0. Each steer within a task degrades the score by up to 30%.
  • speed_multiplier: 1.0 if avg task duration ≤5min, down to 0.5 for >30min.
  • Recency: tasks within the window (default 72h) count 2×. Tasks within 7 days count 1×. Older tasks count 0.5×.
  • Trend: compares last 7 days vs prior 7 days. improved, degraded, stable, new agent.
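
A worked example under the weights above. The outcome weights come from this document; the linear taper of the speed multiplier between 5 and 30 minutes is an interpretation (only its endpoints are documented), and recency weighting is omitted for brevity.

```shell
# Worked trust_score example with documented outcome weights
# (success=1.0, steered=0.5, failure/timeout=0.0); the linear speed
# taper is an assumption.
python3 - <<'EOF'
outcomes = [1.0, 1.0, 0.5, 0.0]          # 2 successes, 1 steered, 1 failure
quality = sum(outcomes) / len(outcomes)  # 0.625
avg_minutes = 12
speed = max(0.5, 1.0 - 0.5 * (avg_minutes - 5) / 25)  # 0.86
print(f"trust_score = {quality * speed:.2f}")
EOF
```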

Output:

Fleet Trust Matrix  | 2026-03-15 14:00 UTC | 72h window
─────────────────────────────────────────────────────────────────────

  reviewer         ████████████████░░░░   93%  3 tasks    →  review:95%
  coder            █████████████░░░░░░░   76%  12 tasks   ↑  code:78%  review:70%
  deployer         █████████░░░░░░░░░░░   55%  5 tasks    ↓  deploy:55%

Use --json for structured output (piping, scripting).

When to use: Before running fleet parallel on a critical task. Before assigning a new task to understand which agent is currently most reliable. After a steer-heavy session to assess whether an agent needs correction.

Note: fleet sitrep also shows a one-line trust summary. fleet trust gives the full matrix.


fleet score [<agent>] [--window <hours>] [--type <task_type>]

Shows a detailed per-task-type reliability breakdown for one agent (or a summary table for all).

Output (single agent):

fleet score  coder
────────────────────────────────────────────────────────────

  Overall           ██████████████░░░░░░   76%  ↑ from 68%
  Tasks: 12  Window: 72h  Avg duration: 11.2m  Speed mult: 0.90

  By task type:
  code            ████████████████░░░░   79%   9 tasks  9✓
  review          ████████████░░░░░░░░   68%   2 tasks  1✓  1⤷
  research        ██████████████░░░░░░   72%   1 task   1✓

  Recent tasks:
  ✓  aaa00001  code        2h ago    8m12s
     add pagination to /api/spots
  ⤷  bbb00002  code        5h ago    18m04s   ⤷1
     fix auth flow in mobile app

Cross-validation (v3.5): For agents with code or deploy successes, fleet score cross-checks whether a GitHub CI run completed within 1 hour of each task. Tasks with no corresponding CI activity are flagged as unverified. Requires gh CLI and repos in config.

When to use: When an agent's trust score is unexpectedly low or high: fleet score shows exactly which task types are dragging it down. Use --type code to see only code-task history.


fleet health

Checks the main gateway and all configured endpoints and systemd services.

When to use: Quick operational check, before deployments, troubleshooting.

Output:

Fleet Health Check
──────────────────
  ✅ coordinator (:48391) 12ms
  
Endpoints
  ✅ website (200) 234ms
  ✅ api (200) 89ms
  ❌ docs UNREACHABLE

Services
  ✅ openclaw-gateway
  ❌ openclaw-gateway-coder (inactive)

Status codes:

  • ✅ healthy (HTTP 200 or expected status)
  • ❌ unhealthy (wrong status, unreachable, or error)
  • Response times are shown in milliseconds

fleet agents

Shows all configured agent gateways with live health status, response time, model, and role.

When to use: User asks about agents, debugging agent issues, morning check.

Output:

Agent Fleet
───────────
  ⬢ coordinator      coordinator      strategic-default               :48391 online 13ms
  
  ⬢ coder            implementation   coding-default                       :48520 online 8ms
  ⬢ reviewer         code-review      coding-default                       :48540 online 9ms
  ⬡ deployer         deployment       coding-default                       :48560 unreachable
  ⬢ qa               quality-assurance coding-default                      :48580 online 7ms

Status indicators:

  • green = online
  • red = unreachable or error
  • yellow = auth failed (token issue)

fleet sitrep [hours]

The flagship command. Generates a structured status report with delta tracking.

When to use: Morning reports, scheduled crons, "what changed?" questions, incident response.

Arguments:

  • hours: lookback period (default: 4). It only affects display context; deltas are always computed against the last run.

What it checks:

  1. All agent gateways (online/offline)
  2. CI status for all configured repos
  3. All configured endpoint health
  4. Linear ticket counts per team
  5. VPS resource usage (memory, disk)

Delta tracking: State is saved to ~/.fleet/state/sitrep.json. Each run compares against the previous and only shows what CHANGED.
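
The delta mechanism amounts to a keyed diff between the previous and current snapshot. A minimal sketch (the real sitrep.json schema is not documented here; the keys and values below are illustrative):

```shell
# Illustrative snapshot diff: print only keys whose value changed
# since the previous run.
python3 - <<'EOF'
prev = {"agent deployer": "online", "CI backend": "green"}
curr = {"agent deployer": "offline", "CI backend": "green"}
for key, value in curr.items():
    if prev.get(key) != value:
        print(f"-> {key}: {prev.get(key)} -> {value}")
EOF
```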

Output:

SITREP | 2026-02-23 08:00 UTC | vs 2026-02-22 23:00
────────────────────────────────────────────────────────────

Agents  5/6 online
  ⬢ coordinator
  ⬢ coder
  ⬢ reviewer
  ⬡ deployer
  ⬢ qa
  ⬢ researcher

CI
  ✅ frontend
  ❌ backend
  ✅ mobile

Services
  ✅ website (200)
  ✅ api (200)

CHANGED
  → agent deployer: online → offline
  → CI backend: green → RED
  → OZZ tickets: +3

Resources  mem 45% | disk 7%
Linear    OZZ: 12 open | FRI: 8 open

Cron integration example:

{
  "schedule": { "kind": "cron", "expr": "0 8,12 * * *", "tz": "Europe/London" },
  "payload": {
    "kind": "agentTurn",
    "message": "Run fleet sitrep and post results to the team channel"
  }
}

fleet ci [filter]

Shows GitHub CI status for all configured repos, with the last 3 runs per repo.

When to use: Before pushing, after deployments, investigating failures.

Requirements: gh CLI must be installed and authenticated.

Arguments:

  • filter: optional, filters repos by name (case-insensitive)

Output:

CI Status
─────────

  frontend (myorg/frontend)
    ✅ Update homepage (main) passed 2026-02-23T08:00
    ✅ Fix footer (main) passed 2026-02-23T07:30
    ✅ Add banner (main) passed 2026-02-23T07:00

  backend (myorg/backend)
    ❌ Add endpoint (main) failed 2026-02-23T08:15
    ✅ Fix auth (main) passed 2026-02-23T07:45

fleet skills

Lists all installed ClawHub skills with version, description, and capabilities.

When to use: Inventory check, "what can I do?", planning.

Output:

Installed Skills
────────────────
from ~/workspace/skills

  ● fleet v1.0.0 [scripts]
    Multi-agent fleet management CLI for OpenClaw
  ● ontology v0.1.2 [scripts]
    Typed knowledge graph for structured agent memory
  ● self-improving-agent v1.0.11 [scripts, hooks]
    Captures learnings, errors, and corrections

fleet backup

Creates a safe backup of OpenClaw config, cron jobs, and a sanitized Fleet config. OpenClaw login profile files are excluded by default.

When to use: Before major changes, before updates, periodic safety net. Use --include-secrets or --include-auth only after operator approval.

Backup location: ~/.fleet/backups/<timestamp>/

fleet restore

Restores from the latest backup after confirmation.

When to use: After a bad config change, after a failed update.

Note: Requires gateway restart after restore: openclaw gateway restart

fleet init

Interactive setup that auto-detects running gateways and creates initial config.

When to use: First time setup, new machine, new fleet.

Auto-detection:

  • Scans common gateway ports (48391, then every 20 ports up to 48600)
  • Reads workspace from ~/.openclaw/openclaw.json
  • Discovers running employee gateways

fleet update

Release check and explicit installer. By default, fleet update only checks; nothing is installed unless --install is passed.

fleet update             Check latest release without installing
fleet update --check     Report available update without installing
fleet update --install   Install after confirmation
fleet update --install --yes  Install only after explicit operator approval

Version banner: When a newer release is available, every fleet command prints a one-line warning on stderr before its output:

fleet v4.1.0 is available. Run  fleet update --check  to review from v4.0.0.

The GitHub check runs as a detached background process once per 24 hours so there is zero latency impact on normal fleet commands. The result is cached at ~/.fleet/state/update_check.json.

fleet adapters

Lists every adapter the fleet knows about, marked verified when the adapter performs a real protocol-level handshake or inferred when it only detects presence. Also shows the binding from each agent and runtime entry to its adapter, so you can see at a glance which runtime probe runs against each target.

fleet runtime add <name> <type> [options]

Registers a new entry under the runtimes key in ~/.fleet/config.json without manual JSON editing.

Adapter types and required options:

  • openclaw: requires --port=<n>; common options: --host, --token-env, legacy --token, --role, --model
  • http: requires --url=<URL>; common options: --expected-status, --method, --token-env, legacy --token, --header K=V, --version-url
  • docker: requires --container=<name>; common options: --role, --model
  • process: requires --process=<pat>; common options: --match-full, --role

Examples:

fleet runtime add billing-api http --url=https://billing.example.com/health
fleet runtime add postgres docker --container=postgres
fleet runtime add tailscale process --process=tailscaled
fleet runtime add overflow openclaw --port=48490 --token-env=OVERFLOW_TOKEN

The command refuses a name already used by an agent and writes the config atomically with chmod 600.
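
The "atomically with chmod 600" write can be sketched as the classic temp-file-then-rename pattern (illustrative, not fleet's actual code):

```shell
# Write to a temp file, restrict permissions, then rename into place.
# rename is atomic on the same filesystem, so readers never observe a
# partially written config.
dir="$(mktemp -d)"
tmp="${dir}/config.json.tmp"
printf '%s\n' '{"workspace": "~/workspace"}' > "$tmp"
chmod 600 "$tmp"
mv "$tmp" "${dir}/config.json"
```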

fleet runtime test <name>

One-off probe of a runtime or agent. Animates a spinner while the health, info, and version probes run in parallel, then renders all three sections. Use when a runtime turns red in sitrep and you need a focused look.

fleet runtime list

Live status of every registered runtime, probed in parallel with an animated progress indicator on TTY output.

fleet runtime rm <name>

Removes a runtime from the config. Returns a clear error if the name is not found.

Custom Adapters

Drop a <type>.sh file into ~/.fleet/adapters/ (or FLEET_ADAPTERS_DIR) with these six functions: adapter_<type>_describe, adapter_<type>_verified (echo verified or inferred), adapter_<type>_required, adapter_<type>_health, adapter_<type>_info, adapter_<type>_version. Each must complete within FLEET_ADAPTER_TIMEOUT seconds (default 6) and emit JSON on stdout. The runtime registry validates required fields per adapter before saving.
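
A skeleton adapter using the six documented hooks might look like the following. The function names follow the adapter_<type>_* contract from this section; the JSON field names inside each function are assumptions, since the document only requires JSON on stdout.

```shell
# ~/.fleet/adapters/mysvc.sh (skeleton custom adapter; payload shapes
# are illustrative).
adapter_mysvc_describe() { echo '{"name": "mysvc", "doc": "example adapter"}'; }
adapter_mysvc_verified() { echo 'inferred'; }   # 'verified' only for real handshakes
adapter_mysvc_required() { echo '{"required": ["url"]}'; }
adapter_mysvc_health()   { echo '{"status": "ok"}'; }
adapter_mysvc_info()     { echo '{"role": "demo"}'; }
adapter_mysvc_version()  { echo '{"version": "0.0.1"}'; }
```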

Fleet Patterns

Fleet supports multiple organizational architectures. Choose based on your needs:

Solo Empire

One coordinator, 2-5 employees. Best for indie hackers and solo founders.

         Coordinator (strategic)
        /     |      \
    Coder  Reviewer  Deployer
   (coding)  (coding)   (coding)

Development Team

Team leads coordinating specialized developers. Best for complex products.

              Orchestrator (strategic)
            /        |         \
      FE Lead     BE Lead     QA Lead
     (review)    (review)    (review)
       / \          |           |
    Dev1  Dev2    Dev1       Tester

Research Lab

Specialized agents for knowledge work. Best for content and analysis.

            Director (strategic)
          /     |      \       \
    Scraper  Analyst  Writer  Fact-Check

See examples/ in the repo for ready-to-use config files for each pattern.

Troubleshooting

Common Issues

  • fleet: command not found: fleet is not in PATH. Fix: ln -sf path/to/fleet/bin/fleet ~/.local/bin/fleet
  • No config found: missing config file. Fix: run fleet init or create ~/.fleet/config.json
  • All agents show "unreachable": agents are not running. Fix: start the agent gateways first
  • CI shows "error": gh is not authenticated. Fix: run gh auth login
  • SITREP shows "first run": no previous state. Normal on first run; deltas appear on the second
  • Agent shows "auth failed": wrong token in config. Fix: update the token in config to match the agent's auth

Debugging

# Check if fleet can find its config
echo $FLEET_CONFIG
cat ~/.fleet/config.json

# Check if agents are reachable directly
curl -s http://127.0.0.1:48520/health

# Check state directory
ls -la ~/.fleet/state/

# Run with verbose output
bash -x fleet health

Architecture

Fleet is modular. Each component has a single responsibility:

fleet/
├── bin/fleet              # Entry point: command router only
├── lib/
│   ├── core/
│   │   ├── config.sh      # Config loading and JSON parsing
│   │   ├── output.sh      # Colors, formatting, HTTP helpers
│   │   └── state.sh       # Delta state persistence
│   └── commands/           # One file per command
│       ├── agents.sh
│       ├── backup.sh
│       ├── ci.sh
│       ├── health.sh
│       ├── init.sh
│       ├── sitrep.sh
│       └── skills.sh
├── templates/configs/      # Config templates (minimal + full)
├── examples/               # Pattern examples with configs
├── docs/                   # Extended documentation
└── tests/                  # Integration tests

Adding new commands: Create lib/commands/yourcommand.sh, define cmd_yourcommand(), add a case in bin/fleet.
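
Following that convention, a hypothetical lib/commands/ping.sh could look like this (the command name and body are illustrative, not part of fleet):

```shell
# lib/commands/ping.sh (hypothetical example command)
cmd_ping() {
  echo "pong"
}

# and in bin/fleet's command router, a matching case arm:
#   ping) cmd_ping "$@" ;;
```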

Integration with Other Skills

With ontology skill

If the ontology skill is installed, fleet can query the knowledge graph:

# After installing: clawhub install ontology
fleet ontology          # Summary of all entities
fleet ontology Project  # List all projects

With self-improving-agent skill

Log fleet operational learnings to .learnings/:

  • Agent downtime patterns → ERRORS.md
  • CI failure patterns → LEARNINGS.md
  • Configuration improvements → promote to TOOLS.md

Dependency Installation

Fleet requires three things: bash 4+, python3 3.10+, and curl. Below is the complete platform-by-platform installation reference. Install the missing dependencies, then run fleet init.

Operator action required: The commands in this section are for the operator to run manually, not for the agent to run without operator approval. The agent must not run system package managers on the operator's behalf. If a dependency is missing, the agent reports it and the operator installs it using the appropriate command below.

Where root is required (e.g., apt-get on some systems), the command is shown explicitly. Prefer user-level installs (--user, pyenv, $HOME/.local) when root is not available.

Step 1: Check What You Have

# Run these first. Know your environment.
bash --version 2>/dev/null | head -1
python3 --version 2>/dev/null
curl --version 2>/dev/null | head -1
uname -s  # Linux, Darwin, FreeBSD, etc.
cat /etc/os-release 2>/dev/null | head -3  # Distro info

Step 2: Install Missing Dependencies

bash 4+

  • macOS (ships bash 3.2): brew install bash; fleet then auto-detects /opt/homebrew/bin/bash via #!/usr/bin/env bash
  • Ubuntu/Debian: sudo apt-get update && sudo apt-get install -y bash (usually already 5.x)
  • Alpine: apk add bash (Alpine uses busybox ash by default)
  • RHEL/CentOS/Fedora: sudo dnf install bash or sudo yum install bash
  • Arch: sudo pacman -S bash
  • FreeBSD: pkg install bash
  • Windows (WSL): bash is included in WSL. Git Bash also ships bash 4+.
  • Docker (minimal images): RUN apk add --no-cache bash python3 curl (Alpine) or RUN apt-get update && apt-get install -y bash python3 curl (Debian)
  • No root access: download the bash source and compile locally: ./configure --prefix=$HOME/.local && make && make install; then export PATH="$HOME/.local/bin:$PATH"
  • NixOS: nix-env -iA nixpkgs.bash

After installing on macOS: run which bash and verify it resolves to the Homebrew bash (/opt/homebrew/bin/bash on Apple Silicon, /usr/local/bin/bash on Intel). The #!/usr/bin/env bash shebang picks up whichever bash is first in PATH.

python3 3.10+

  • macOS: brew install python@3.12
  • Ubuntu/Debian: sudo apt-get install -y python3
  • Alpine: apk add python3
  • RHEL/CentOS: sudo dnf install python3
  • Arch: sudo pacman -S python
  • No root access: use pyenv: curl https://pyenv.run | bash && pyenv install 3.12 && pyenv global 3.12

Fleet uses python3 only for JSON parsing. No pip packages needed. No virtual environment needed. Just the interpreter.
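
That stdlib-only usage looks like this in practice (illustrative; the field names come from the example config in this document):

```shell
# The style of dependency-free JSON parsing fleet relies on: JSON on
# stdin, one value on stdout, no pip packages.
echo '{"gateway": {"port": 48391}}' \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["gateway"]["port"])'
```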

curl

  • Almost everywhere: pre-installed; check with curl --version
  • Alpine: apk add curl
  • Minimal Docker: apt-get install -y curl or apk add curl

gh CLI (optional, for fleet ci)

  • macOS: brew install gh
  • Ubuntu/Debian: see cli.github.com/packages
  • Any: conda install gh --channel conda-forge

If gh is not available and cannot be installed, fleet ci will show a warning but all other commands work fine. Do not let a missing gh block fleet setup.

Step 3: If You Truly Cannot Get bash 4+

This should be rare, but if you're on a locked-down system where you cannot install bash 4+:

  1. Try running with bash 3.2 anyway. Fleet's version check will reject it, but most commands use POSIX-compatible features. Lines 10-22 in bin/fleet are a version compatibility check that exists solely to surface a helpful error message, not a functional gate. Disabling that check does not affect how any command runs. Many commands will still work on bash 3.2. Test each one and report results.

  2. Report compatibility issues. If fleet cannot run due to a bash version or missing dependency, report the exact error and the output of bash --version, python3 --version, and curl --version to the operator. Ask the operator to install the missing dependency using the platform table in Step 2. Do not attempt to replicate fleet functionality by other means.

Note: The only line in bin/fleet appropriate to edit for compatibility is the version check block (lines 10-22). That block exists purely to show a helpful error message. Every other part of the scripts implements the actual command logic and should not be edited.

Step 4: Environments Without Required Dependencies

Fleet requires bash 4+, python3 3.10+, and curl. If these cannot be installed in the current environment, fleet cannot run there. Report the missing dependency to the operator and ask them to provide an environment where the prerequisites are available. Do not attempt to replicate fleet functionality by other means.

Best Practices

  1. Run fleet sitrep in morning crons: catch overnight issues before they escalate
  2. Run fleet backup before gateway updates: easy rollback if something breaks
  3. Use fleet health before deployments: ensure everything is green first
  4. Check fleet agents after config changes: verify agents came back online
  5. Filter fleet ci by repo: avoid noise when debugging a specific service
  6. Prefer tokenEnv over inline tokens: keep agent tokens and API keys in environment variables, not in config files

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
