
Token Conservation Workflow

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install the "token-conservation" skill with: npx skills add athola/claude-night-market/athola-claude-night-market-token-conservation


When To Use

  • Run at the start of every session and whenever prompt sizes or tool calls begin to spike.

  • Mandatory before launching long-running analyses, wide diffs, or massive context loads.

When NOT To Use

  • Context-optimization already handles the scenario

  • Simple queries with minimal context

Required TodoWrite Items

  • token-conservation:quota-check

  • token-conservation:context-plan

  • token-conservation:delegation-check

  • token-conservation:compression-review

  • token-conservation:logging

Step 1 – Quota Check (quota-check)

  • Record the current session duration and weekly usage (from /status or your notebook). Note both the 5-hour rolling cap and the weekly cap highlighted in the Claude community notice.

  • Capture remaining budget and set a max token target for this task.
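The quota check above can be sketched as a small budget calculation. This is an illustrative sketch only: the cap values, usage numbers, and the remaining_budget helper are assumptions for demonstration, not a real Claude quota API.

```python
# Hypothetical sketch: derive a per-task token target from a weekly cap.
# Cap and usage figures below are illustrative, not real quota values.

def remaining_budget(weekly_cap, used_this_week, task_reserve=0.1):
    """Return a max token target for the current task.

    Reserves a fraction of the remaining weekly budget so that a
    single task cannot exhaust the session.
    """
    remaining = weekly_cap - used_this_week
    if remaining <= 0:
        raise RuntimeError("Weekly token cap already exhausted")
    # Cap any single task at (1 - task_reserve) of what is left.
    return int(remaining * (1 - task_reserve))

print(remaining_budget(1_000_000, 400_000))  # → 540000
```

Recording the computed target alongside the raw /status numbers makes the later logging step (Step 5) straightforward.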

Step 2 – Context Plan (context-plan)

  • Decide exactly which files/snippets to expose. Prefer Read with offset/limit parameters or the Grep tool over loading whole files. Avoid cat/sed/awk via Bash; Claude Code 2.1.21+ steers toward native file tools (Read, Edit, Write, Grep, Glob).

  • PDFs (Claude Code 2.1.30+): Use Read with pages: "1-5" for targeted PDF reading instead of loading entire documents. Large PDFs (>10 pages) return a lightweight reference when @-mentioned; use the pages parameter to read specific sections. Hard limits: 100 pages max, 20MB max per PDF. Exceeding these limits previously locked sessions permanently (fixed in 2.1.31).

  • Convert prose instructions into bullet lists before prompting so only essential info hits the model.
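The offset/limit idea above can be illustrated with a minimal sketch. The read_slice helper is hypothetical; it merely mimics the behavior of the Read tool's offset and limit parameters to show why exposing a slice beats loading a whole file.

```python
# Minimal sketch of offset/limit-style reading: expose only the lines
# you need instead of the whole file. `read_slice` is a hypothetical
# helper, not part of any real tool API.

def read_slice(path, offset=0, limit=50):
    """Return `limit` lines starting at line `offset` (0-based)."""
    with open(path) as f:
        lines = f.readlines()
    return "".join(lines[offset:offset + limit])
```

A 5,000-line file read this way contributes 50 lines of context instead of 5,000, which is the whole point of the context plan.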

Step 3 – Delegation Check (delegation-check)

  • Evaluate whether compute-intensive tasks can go to Qwen MCP or other external tooling (use qwen-delegation skill if needed).

  • For local work, favor deterministic scripts (formatters, analyzers) instead of LLM reasoning when possible.
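The delegation decision above can be sketched as a simple routing check. The tool names ("black", "ruff") and the choose_executor helper are assumptions for illustration; substitute whatever deterministic tooling your project actually uses.

```python
# Sketch: prefer a deterministic local tool over LLM reasoning when one
# exists on PATH. Tool names are illustrative assumptions.
import shutil

def choose_executor(task_kind):
    """Route formatting/analysis work to local tools when available."""
    deterministic = {"format": "black", "lint": "ruff"}
    tool = deterministic.get(task_kind)
    if tool and shutil.which(tool):
        return f"run {tool} locally"
    return "delegate to LLM/MCP"
```

Running a formatter locally costs zero tokens; asking the model to reformat a file can cost thousands.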

Step 4 – Compression Review (compression-review)

  • Summarize prior steps/results before adding new context. Remove redundant history, collapse logs, and avoid reposting identical code.

  • Use prompt caching ideas: reference prior outputs instead of restating them when the model has already processed the information (cite snippet IDs).

  • Decide whether the current thread should be compacted:

      • If only recent context is stale, use "Summarize from here" (Claude Code 2.1.32+) via the message selector to partially summarize the conversation; this preserves recent context while compressing older portions.

      • If the active workflow is finished and earlier context will not be reused, instruct the user to run /new.

      • If progress requires the existing thread but the window is bloated, prompt them to run /compact before continuing.

  • Automatic memory (Claude Code 2.1.32+): Claude now records and recalls session memories automatically. This adds minor token overhead but improves cross-session continuity. No action needed — be aware it contributes to baseline context usage.
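The "reference prior outputs instead of restating them" tactic can be sketched as a deduplication pass. The snippet-ID scheme here is a hypothetical illustration; real prompt caching works at the API level, not by string matching.

```python
# Illustrative sketch: collapse duplicate snippet bodies before
# re-sending context, replacing repeats with a reference to the
# first occurrence. Snippet IDs are an assumed convention.

def compress_context(snippets):
    """Replace repeated snippet bodies with a pointer to their first use.

    `snippets` is a list of (snippet_id, body) pairs in send order.
    """
    seen = {}   # body -> id of first occurrence
    out = []
    for sid, body in snippets:
        if body in seen:
            out.append((sid, f"[see snippet {seen[body]}]"))
        else:
            seen[body] = sid
            out.append((sid, body))
    return out
```

Even this crude pass shows the principle: identical code posted twice costs double; a one-line reference costs almost nothing.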

Step 5 – Logging (logging)

Document the conservation tactics that were applied and note the remaining token budget. If the budget is low, explicitly warn the user and propose fallback plans. Record any recommendations made about /new or /compact, or justify why neither was necessary, to inform future context-handling decisions.
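A log entry for this step might look like the sketch below. The field names and values are illustrative assumptions, not a required schema; any structured record covering tactics, remaining budget, and compaction advice serves the purpose.

```python
# Hypothetical log-entry format for the logging step; field names
# are illustrative, not a spec.
import json
from datetime import datetime, timezone

entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "tactics": ["offset/limit reads", "delegated lint to formatter"],
    "remaining_budget_tokens": 120_000,
    "compaction_advice": "recommended /compact before next large diff",
}
print(json.dumps(entry, indent=2))
```

Keeping entries machine-readable lets later sessions grep the notebook for past budget decisions instead of re-deriving them.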

Output Expectations

  • A short explanation of token-saving steps, delegated tasks, and remaining runway.

  • Concrete next-action list that keeps the conversation lean, for example:

  • "next turn: provide only failing test output lines 40-60"

  • Explicit reminder about /new or /compact whenever you determine it would save tokens (otherwise state that no reset/compaction is needed yet).

Troubleshooting

Common Issues

  • Command not found: ensure all dependencies are installed and in PATH.

  • Permission errors: check file permissions and run with appropriate privileges.

  • Unexpected behavior: enable verbose logging with the --verbose flag.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • subagent-testing (Automation)

  • workflow-monitor (Automation)

  • workflow-setup (Automation)

  • agent-teams (Automation)