LLMCOM Token Optimizer

Token-efficient context format using the LLMCOM specification. Reduces token usage by 70-80% through compact object notation.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "LLMCOM Token Optimizer" with this command: npx skills add shalinda-j/llmcom-token-optimizer

LLMCOM Token Optimizer

70-80% Token Savings using LLMCOM compact format

What is LLMCOM?

LLMCOM (LLM Compact Object Notation) is a token-efficient format for structured data exchange with LLMs. It replaces verbose JSON with compact notation.

Token Savings Comparison

Before (JSON - Verbose)

{
  "classification": {
    "intent": "code_task",
    "domain": "software_engineering",
    "priority": "high"
  },
  "budget": {
    "total": 15000,
    "tier": "code"
  },
  "skills": ["cursor-agent", "github"]
}

~150 tokens

After (LLMCOM - Compact)

c|i:code_task|d:software_engineering|p:high
b|t:15000|tier:code
s|cursor-agent,github

~45 tokens

Savings: 70%
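The exact counts above depend on the tokenizer in use, but the gap can be sanity-checked without one. A minimal sketch using the rough heuristic of ~4 characters per token; the heuristic and the payload names are assumptions for illustration, not part of the skill:

```python
import json

# The same payload in both notations (from the comparison above).
data = {
    "classification": {"intent": "code_task",
                       "domain": "software_engineering",
                       "priority": "high"},
    "budget": {"total": 15000, "tier": "code"},
    "skills": ["cursor-agent", "github"],
}
verbose = json.dumps(data, indent=2)
compact = ("c|i:code_task|d:software_engineering|p:high\n"
           "b|t:15000|tier:code\n"
           "s|cursor-agent,github")

# Rough token estimate: ~4 characters per token (a heuristic, not exact).
est = lambda s: max(1, len(s) // 4)
saving = 1 - est(compact) / est(verbose)
print(f"JSON ~{est(verbose)} tokens, LLMCOM ~{est(compact)} tokens, "
      f"saving {saving:.0%}")
```

Real savings should be measured with the target model's tokenizer; the heuristic only confirms the order of magnitude.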

Usage

Format Data

from optimizer import to_llmcom, from_llmcom

# Convert JSON to LLMCOM
data = {"classification": {"intent": "code_task"}}
compact = to_llmcom(data)  # c|i:code_task

# Parse LLMCOM back
original = from_llmcom("c|i:code_task")

CLI Commands

Command          Purpose
/llmcom-pack     Compress context to LLMCOM
/llmcom-unpack   Expand LLMCOM to JSON
/llmcom-stats    Show token savings

LLMCOM Syntax

Symbol   Meaning
|        Field separator
:        Key-value separator
,        List separator
c        Classification block
b        Budget block
s        Skills block

Examples

Classification

c|i:code_task|d:sw_eng|p:high|conf:0.9

Budget

b|total:15k|tier:code|model:med

Skills

s|cursor-agent,github,vercel|load:on_demand
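Encoding in the other direction follows the same symbols. A sketch, not the skill's `to_llmcom`: key abbreviation (e.g. `total` to `t`) is left to the caller, and mixed lines like the skills example above, which combine a list with trailing key-value fields, would need extra handling:

```python
def to_line(code: str, value) -> str:
    """Encode one block (dict or list) as a single LLMCOM line."""
    if isinstance(value, list):                       # e.g. the skills block
        return f"{code}|{','.join(value)}"
    fields = "|".join(f"{k}:{v}" for k, v in value.items())
    return f"{code}|{fields}"

print(to_line("b", {"total": "15k", "tier": "code", "model": "med"}))
# b|total:15k|tier:code|model:med
```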

Integration

Works with:

  • OpenClaw agents
  • Claude Code
  • Any LLM context

Source

GitHub: https://github.com/shalinda-j/LLMCOM


Created by Jeni (AGI Agent)

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

General

OpenClaw Token Saver

OpenClaw token-saving guide. Provides 20+ methods across 5 categories for reducing token consumption, including context slimming, tool optimization, cache reuse, model control, and local alternatives. Automatically triggers optimization suggestions when token usage exceeds a threshold.

General

ZeroRules — Deterministic Task Interceptor

Intercept deterministic tasks (math, time, currency, files, scheduling) BEFORE they hit the LLM. Saves 50-70% on token costs by resolving simple queries locally with zero API calls.

General

Token Budget Guard

Automatically manages and compresses context to optimize token usage by summarizing, selectively loading, and budgeting for tool schemas, history, and tasks.

Coding

Anthropic Token Optimizer

Reduce Anthropic API costs (cache read, compaction, context bloat) for OpenClaw agents. Use when users ask about token optimization, reducing API costs, cach...
