llm-cost-tracker

Track OpenClaw LLM token usage and cost from the OpenRouter API. Reports last 24h, 7d, 30d, 90d, and 365d with a per-model breakdown and a DB health footer. Run manually or schedule via `openclaw cron add`.

Skills that trigger this:

  • "llm cost", "token usage", "openrouter cost", "llm spend", "daily cost report" → runs run_tracker.py (report)
  • "collect usage data" → runs scripts/collect_usage.py (populates the DB silently, no output)

IMPORTANT — report triggers: after running run_tracker.py --output telegram, forward the script's output EXACTLY as your final reply — do not summarise, reformat, shorten, omit sections, or add your own commentary. The script output IS the reply.

When "collect usage data" is received (e.g. from a midnight cron job), run python3 scripts/collect_usage.py and return "Done" — do NOT send any message to Telegram or any chat.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the command below and send it to your AI assistant to install:

Install skill "llm-cost-tracker" with this command: npx skills add joeyiptk/llm-cost-tracker-openrouter-api

llm-cost-tracker

See CHANGELOG.md for version history.

Track and report LLM token usage and cost for OpenClaw sessions powered by OpenRouter.

Core Design

Source of truth: usage.cost.total from OpenRouter's API response — this is the actual billed amount after cache discounts, reasoning charges, and all other pricing adjustments. We never recompute it from token counts and price lists.

Append-only fact table: one row per completed OpenRouter request, keyed by openrouter_request_id (the generation/response ID). Duplicates from retries or stream reconnects are handled idempotently.
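The dedup rule can be sketched with SQLite's INSERT OR IGNORE against the UNIQUE key. This is a minimal sketch, not the script's actual code: the column set is trimmed to a few fields from the Schema section, and `insert_request_fact` is an illustrative name.

```python
import sqlite3

def insert_request_fact(conn: sqlite3.Connection, fact: dict) -> bool:
    """Insert one request row; return False if the ID was already recorded."""
    cur = conn.execute(
        "INSERT OR IGNORE INTO request_facts "
        "(openrouter_request_id, created_at_utc, model, billed_cost) "
        "VALUES (:openrouter_request_id, :created_at_utc, :model, :billed_cost)",
        fact,
    )
    return cur.rowcount == 1  # 0 means a retry/reconnect duplicate was skipped

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE request_facts ("
    " openrouter_request_id TEXT UNIQUE,"
    " created_at_utc TEXT, model TEXT, billed_cost REAL)"
)
fact = {"openrouter_request_id": "gen-123",
        "created_at_utc": "2026-04-26T00:00:00Z",
        "model": "minimax-m2.7", "billed_cost": 0.0042}
first = insert_request_fact(conn, fact)   # True: new row
second = insert_request_fact(conn, fact)  # False: duplicate ignored
```

Because the UNIQUE constraint does the dedup, replays of the same generation ID are harmless no matter how many times collection runs.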

Token categories (from OpenRouter usage, kept SEPARATE):

  • prompt_tokens = usage.input — raw prompt tokens before cache discount
  • cached_tokens = usage.cacheRead — cached read tokens (discounted billing)
  • cache_write_tokens = usage.cacheWrite — cache build tokens (separate cost)
  • completion_tokens = usage.output — generated output tokens
  • reasoning_tokens = usage.reasoning — thinking tokens (if model exposes)
  • total_tokens = usage.totalTokens — should equal the sum of all of the above

Billing rules:

  • Billed cost = usage.cost.total directly — do NOT use prompt_tokens * input_price + completion_tokens * output_price as the primary formula
  • OpenRouter applies cache discounts, reasoning charges, and all other pricing adjustments automatically in cost.total
  • cached_tokens and cache_write_tokens are separate billing categories from prompt_tokens — do NOT add them together for a "total prompt" count
  • Prefer cost.total; only if it is absent, fall back to cost.input + cost.output + cost.cacheRead + cost.cacheWrite
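The billing rules above can be sketched as a single function, assuming a parsed `usage` dict shaped like OpenRouter's response (`{"cost": {"total": ...}}`); `billed_cost` is an illustrative name.

```python
def billed_cost(usage: dict) -> float:
    """Return the billed amount per the rules: cost.total first, then fallback."""
    cost = usage.get("cost") or {}
    if cost.get("total") is not None:
        # Canonical: already includes cache discounts and reasoning charges
        return cost["total"]
    # Fallback only when cost.total is absent
    return sum(cost.get(k) or 0.0
               for k in ("input", "output", "cacheRead", "cacheWrite"))

billed_cost({"cost": {"total": 0.0123, "input": 9.9}})      # -> 0.0123
billed_cost({"cost": {"input": 0.01, "cacheRead": 0.002}})  # ≈ 0.012
```

Note that the fallback never multiplies token counts by price lists — it only sums cost components the API itself reported.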

Time windows:

  • last 24h = rolling, now_utc − 24 hours (UTC)
  • last 7d / 30d / 90d / 365d = calendar days (HKT), inclusive of today
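The two window rules can be sketched as follows (function names are illustrative, not the script's actual API; assumes Python 3.9+ `zoneinfo`):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

HKT = ZoneInfo("Asia/Hong_Kong")

def rolling_24h_start(now_utc: datetime) -> datetime:
    """Rolling window: exactly 24 hours before now, in UTC."""
    return now_utc - timedelta(hours=24)

def calendar_window_start(now_utc: datetime, days: int) -> datetime:
    """First HKT midnight of an N-calendar-day window, inclusive of today."""
    today_hkt = now_utc.astimezone(HKT).date()
    start_day = today_hkt - timedelta(days=days - 1)  # inclusive of today
    return datetime(start_day.year, start_day.month, start_day.day, tzinfo=HKT)

now = datetime(2026, 4, 26, 10, 34, tzinfo=timezone.utc)  # 18:34 HKT
rolling_24h_start(now)         # 2026-04-25 10:34 UTC
calendar_window_start(now, 7)  # 2026-04-20 00:00 HKT
```

The asymmetry is deliberate: "last 24h" answers "what did the last day cost?" regardless of clock time, while the longer windows align with HKT calendar days so daily reports are comparable run-to-run.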

Quick Start

cd skills/llm-cost-tracker

# First-time setup (creates DB, backfills, verifies) — run once on a new machine
python3 scripts/collect_usage.py --init

# Telegram-formatted report (default)
python3 scripts/run_tracker.py --output telegram

# Full terminal report
python3 scripts/run_tracker.py --output terminal

# Per-request debug view
python3 scripts/run_tracker.py --output debug --debug-hours 24

Optional: Scheduled Reports

If you want reports delivered automatically to Telegram every day:

# Midnight collection (silent, populates DB — run daily)
openclaw cron add \
  --name "llm-cost:collect" \
  --message "collect usage data" \
  --cron "5 0 * * *" \
  --tz "Asia/Hong_Kong" \
  --session isolated \
  --no-deliver \
  --description "Populate request_facts from session files"

# 9 AM report (delivers to Telegram)
openclaw cron add \
  --name "llm-cost:daily" \
  --message "llm cost" \
  --cron "0 9 * * *" \
  --tz "Asia/Hong_Kong" \
  --session isolated \
  --description "Daily LLM cost report to Telegram"

Report Layout

📊 LLM Cost Report — Apr 26, 2026 18:34 HKT

• Messages (24h): 124
• Est. Tokens (24h): 4.75M
• Est. Cost (24h): $0.4933
• Total Spend (API key): $93.67
• Limit Remaining: $45.74

🏆 Top Models (24h):
  1. minimax-m2.7: $10.7432
  2. minimax-m2.5:free: $0.0000
  3. delivery-mirror: $0.0000

📈 Trend:
• Last 24h: $0.4933
• Last 7 days: $0.8542
• Last 30 days: $10.7432
• Last 90 days: $10.7432
• Last 365 days: $10.7432

💾 DB: 1.6 MB · 1,872 rows · Since Apr 5, 2026

_Sent via llm-cost-tracker_

Note: "Last 7/30/90/365 days" uses calendar days (HKT), inclusive of today. "Last 24h" uses a rolling 24-hour window (UTC). The DB footer shows the oldest record in the DB — useful for deciding when to prune old data.

First-Time Setup

One command sets up everything — DB creation, backfill, and health check:

cd skills/llm-cost-tracker
python3 scripts/collect_usage.py --init

⚠️ Don't forget — set up your scheduled jobs next!
Without them, no new data gets collected after the backfill finishes. Run these two commands before you close this terminal:

# Midnight: collect usage data (silent, no output to Telegram)
openclaw cron add \
  --name "llm-cost:collect" \
  --message "collect usage data" \
  --cron "5 0 * * *" \
  --tz "Asia/Hong_Kong" \
  --session isolated \
  --no-deliver \
  --description "Populate request_facts from session files"

# 9 AM: daily cost report delivered to Telegram
openclaw cron add \
  --name "llm-cost:daily" \
  --message "llm cost" \
  --cron "0 9 * * *" \
  --tz "Asia/Hong_Kong" \
  --session isolated \
  --description "Daily LLM cost report to Telegram"

The --init command:

  1. Creates config/usage.db with the correct schema (if it doesn't exist)
  2. Finds your OpenClaw sessions directory automatically
  3. Backfills all available session JSONL files
  4. Runs a health check to verify everything is working
  5. Prints DB summary (size, row count, date range)

Ongoing / step by step (alternative to --init):

# Backfill all historical sessions
python3 scripts/collect_usage.py --backfill

# Verify DB health
python3 scripts/run_tracker.py --health

# Run first report
python3 scripts/run_tracker.py --output telegram

--init is idempotent — re-running it is safe. It will append any new sessions since the last backfill rather than overwriting anything.

DB Maintenance

The report footer (💾 DB: ...) shows current DB size, row count, and the oldest record date. Over time the DB grows. To prune old data:

# Preview how many rows would be deleted (dry run)
python3 scripts/prune_usage.py 2024-04-25 --dry-run

# Delete rows older than 2024-04-25 and reclaim disk space
python3 scripts/prune_usage.py 2024-04-25 --vacuum

Retention tip: for 2-year retention, delete rows older than ~730 days. A quarterly prune is recommended: pick a cutoff date, run with --dry-run first, then re-run without --dry-run, adding --vacuum to reclaim disk space.
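Picking the cutoff date for prune_usage.py can be sketched as below (`retention_cutoff` is an illustrative helper; using the local date is assumed acceptable for a quarterly maintenance task):

```python
from datetime import date, timedelta

def retention_cutoff(keep_days=730, today=None):
    """Return the ISO date before which rows should be pruned."""
    today = today or date.today()
    return (today - timedelta(days=keep_days)).isoformat()

retention_cutoff(730, date(2026, 4, 26))  # -> "2024-04-26"
```

The result is what you would pass as the positional date argument, first with --dry-run, then with --vacuum.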

Architecture

Session JSONL files  ──▶  collect_usage.py  ──▶  request_facts (SQLite)
                                                 │
                                                 └──▶  run_tracker.py  ──▶  Report
                                                                 │
OpenRouter API  ──▶  get_openrouter_total_usage()  ───────────────┘

Schema

Table: request_facts (one row per OpenRouter request)

Column                  Type         Notes
openrouter_request_id   TEXT UNIQUE  responseId / generation ID
created_at_utc          TEXT         ISO timestamp in UTC
model                   TEXT         model ID
provider                TEXT         provider (openrouter, etc.)
status                  TEXT         stopReason (completed, toolUse, etc.)
prompt_tokens           INTEGER      usage.input
completion_tokens       INTEGER      usage.output
total_tokens            INTEGER      usage.totalTokens
reasoning_tokens        INTEGER      usage.reasoning
cached_tokens           INTEGER      usage.cacheRead
cache_write_tokens      INTEGER      usage.cacheWrite
billed_cost             REAL         usage.cost.total (canonical)
streamed                INTEGER      1=yes, 0=no
raw_usage_json          TEXT         full usage object as JSON
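The DDL implied by the table above might look like the sketch below. This is a hedged reconstruction — the actual schema is created by collect_usage.py --init and may differ in constraints or indexes.

```python
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS request_facts (
    openrouter_request_id TEXT UNIQUE,  -- responseId / generation ID
    created_at_utc        TEXT,         -- ISO timestamp in UTC
    model                 TEXT,
    provider              TEXT,
    status                TEXT,         -- stopReason
    prompt_tokens         INTEGER,      -- usage.input
    completion_tokens     INTEGER,      -- usage.output
    total_tokens          INTEGER,      -- usage.totalTokens
    reasoning_tokens      INTEGER,      -- usage.reasoning
    cached_tokens         INTEGER,      -- usage.cacheRead
    cache_write_tokens    INTEGER,      -- usage.cacheWrite
    billed_cost           REAL,         -- usage.cost.total (canonical)
    streamed              INTEGER,      -- 1=yes, 0=no
    raw_usage_json        TEXT          -- full usage object as JSON
)
"""

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
columns = [row[1] for row in conn.execute("PRAGMA table_info(request_facts)")]
```

Keeping raw_usage_json alongside the extracted columns means new usage fields OpenRouter adds later can be backfilled without re-reading session files.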

Validation

To validate against OpenRouter's own total:

  1. OpenRouter dashboard → API Keys → check total_usage against run_tracker.py sum
  2. Run --output debug to inspect per-request billed_cost and raw_usage_json
  3. The OpenRouter API total (all-time) is shown in every report as a reference point

Known gap: session JSONL files only go back to ~April 5 2026. Earlier usage appears in OpenRouter's all-time total but not in the per-request breakdown.

