log

A privacy-first, local-first provenance protocol for agent workflows. Emits structured audit records for important decisions, tool calls, state changes, and errors, so the host environment can store, verify, and review them safely.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install skill "log" with this command: npx skills add agistack/log

LOG: Local-First Provenance Protocol

I. Purpose

Log standardizes how an agent emits structured provenance records for important workflow events. It does not perform persistence, encryption, approval handling, or immutability enforcement by itself. Those controls belong to the host environment.

Use this skill when a workflow needs:

  • audit-ready activity records
  • debugging traces for failures or retries
  • source-aware decision summaries
  • host-controlled approval gates for high-impact actions

Do not use this skill to:

  • record hidden chain-of-thought
  • store secrets, credentials, or tokens
  • dump raw private documents, attachments, or long transcripts
  • claim storage guarantees the host has not implemented

II. Event Triggers

Emit a log entry only for important workflow events, such as:

  1. tool or API execution
  2. significant decision or state change
  3. task completion, retry, refusal, or failure
  4. high-impact action that may require host approval

Do not emit logs for every minor conversational turn.
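
The trigger rule above can be sketched as a small predicate. This is a hypothetical helper, not part of the skill itself; the event names mirror the event_type values in the schema in section VII:

```python
# Hypothetical helper: decide whether a workflow event warrants a log entry.
# Event types mirror the schema in section VII; "observation" covers minor
# conversational turns and is only logged when marked high-impact.

LOGGABLE_EVENT_TYPES = {
    "execution",      # tool or API execution
    "decision",       # significant decision
    "state_change",   # significant state change
    "completion",     # task completion
    "error",          # failure
    "refusal",        # refusal
}

def should_emit(event_type: str, *, high_impact: bool = False) -> bool:
    """Return True if this event should produce a [LOG_ENTRY] record."""
    if high_impact:
        return True  # high-impact actions are always logged (and may need approval)
    return event_type in LOGGABLE_EVENT_TYPES

print(should_emit("execution"))    # tool call: log it
print(should_emit("observation"))  # minor conversational turn: skip
```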

III. Security & Redaction Rules

All emitted records must be minimal, factual, and privacy-safe.

Rules:

  • never include passwords, API keys, bearer tokens, cookies, session IDs, or secrets
  • replace sensitive values with [SECRET_REDACTED]
  • never include hidden chain-of-thought or full internal reasoning traces
  • prefer summaries over raw content
  • when sensitive personal data is involved, log only the category of data unless explicitly required and authorized
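
A minimal redaction pass over the rules above might look like the sketch below. The regex patterns are illustrative assumptions, not an exhaustive secret-detection scheme; only the [SECRET_REDACTED] placeholder comes from the spec:

```python
import re

# Hypothetical redaction sketch: mask common secret-bearing fields before a
# value reaches a log record. Patterns are illustrative, not exhaustive.
SECRET_PATTERNS = [
    re.compile(r"(?i)\b(api[_-]?key|token|password|secret|cookie|session[_-]?id)\b\s*[:=]\s*\S+"),
    re.compile(r"\bBearer\s+[A-Za-z0-9._\-]+"),
]

def redact(text: str) -> str:
    """Replace likely secret values with the [SECRET_REDACTED] placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[SECRET_REDACTED]", text)
    return text

print(redact("called API with api_key=sk-12345"))
# -> called API with [SECRET_REDACTED]
```

A real deployment would pair this with structural controls (never putting raw tool inputs into records at all), since regex filtering alone cannot guarantee privacy safety.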

IV. Approval Signaling

For a high-impact action, emit a log entry with:

  • "approval_required": true

The host environment may use this signal to pause execution until an approval event, user confirmation, or policy check is completed.

Log emits the signal only. The host environment decides whether to block, continue, or reject execution.
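
On the consuming side, the flag might be handled as follows. The gate function is entirely hypothetical host logic; per the spec, the skill only emits the flag and never blocks execution itself:

```python
# Hypothetical host-side gate: inspect an emitted record and decide whether
# to pause execution. The skill never blocks; that decision belongs to the host.

def gate(record: dict) -> str:
    """Return 'pause' when the record asks for approval, else 'continue'."""
    if record.get("approval_required"):
        return "pause"  # await approval event, user confirmation, or policy check
    return "continue"

entry = {"event_type": "execution", "impact": "high", "approval_required": True}
print(gate(entry))  # -> pause
```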

V. Source Provenance

When relevant, include source references that explain what the action or decision was based on.

Examples:

  • user instruction
  • local file name
  • tool result identifier
  • API response label
  • workflow state snapshot

Keep source references concise and safe. Do not include sensitive raw content.

VI. Output Contract

When logging is required, output exactly one structured record: a fenced JSON block preceded by the literal tag [LOG_ENTRY], as shown in the schema below.

VII. Required Schema

Use this exact JSON structure:

[LOG_ENTRY]
{
  "timestamp": "YYYY-MM-DDTHH:MM:SSZ",
  "event_type": "observation | decision | execution | state_change | completion | error | refusal",
  "status": "success | failed | pending | intercepted | skipped",
  "actor": "assistant | skill_name | workflow_name",
  "summary": "Concise factual description of what happened",
  "decision_basis": [
    "Key fact, constraint, or condition",
    "Key fact, constraint, or condition"
  ],
  "source_references": [
    "user_prompt",
    "local:file_a.md",
    "tool_result:search_01"
  ],
  "constraints": [
    "local_only",
    "privacy_safe",
    "approval_gate"
  ],
  "impact": "low | medium | high",
  "approval_required": false,
  "payload": {
    "action": "tool name, operation name, or null",
    "parameters_summary": "Redacted summary of relevant inputs",
    "result_summary": "Redacted summary of outputs or outcome"
  },
  "error_summary": null,
  "correlation_id": "optional task or session identifier"
}
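
Putting the contract together, a minimal emitter might validate the required keys and render the tagged fenced block. The field names match the schema above; the helper itself is a sketch, not a reference implementation:

```python
import json
from datetime import datetime, timezone

# Keys required by the section VII schema.
REQUIRED_KEYS = {
    "timestamp", "event_type", "status", "actor", "summary",
    "decision_basis", "source_references", "constraints", "impact",
    "approval_required", "payload", "error_summary", "correlation_id",
}

def emit_log_entry(record: dict) -> str:
    """Check required keys and render the [LOG_ENTRY] fenced JSON block."""
    missing = REQUIRED_KEYS - record.keys()
    if missing:
        raise ValueError(f"missing required keys: {sorted(missing)}")
    return "[LOG_ENTRY]\n```json\n" + json.dumps(record, indent=2) + "\n```"

record = {
    "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    "event_type": "execution",
    "status": "success",
    "actor": "assistant",
    "summary": "Ran search tool and summarized results",
    "decision_basis": ["User asked for recent results"],
    "source_references": ["user_prompt", "tool_result:search_01"],
    "constraints": ["privacy_safe"],
    "impact": "low",
    "approval_required": False,
    "payload": {
        "action": "search",
        "parameters_summary": "query about release notes",
        "result_summary": "3 relevant results",
    },
    "error_summary": None,
    "correlation_id": "task_001",
}
print(emit_log_entry(record))
```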
