sealvera

Tamper-evident audit trail for AI agent decisions. Use when logging LLM decisions, setting up AI compliance, auditing agents for EU AI Act, HIPAA, GDPR or SOC 2, or when a user asks about AI decision audit trails, explainability, or SealVera.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install the "sealvera" skill with this command:

```
npx skills add ahessami123/sealvera
```

SealVera Skill for OpenClaw

Cryptographically-sealed, tamper-evident audit trails for every AI decision your agents make.

SealVera is AI Decision Audit Infrastructure. This skill wires any OpenClaw agent into SealVera so every decision it makes is logged, cryptographically signed, chained, and monitored — automatically. Built for teams shipping AI in finance, healthcare, legal, and insurance.

EU AI Act · SOC 2 · HIPAA · GDPR · FINRA · ISO 42001


Why SealVera?

  • Tamper-evident — every decision is cryptographically hashed (RSA-2048) and chained; any alteration is detectable
  • Works in 2 lines — `init()` plus one patch call, and every LLM decision is logged
  • Full explainability — captures inputs, outputs, reasoning steps, confidence scores, model used
  • Real-time dashboard — search, filter, export at app.sealvera.com
  • Drift detection — alerts when agent behaviour deviates from its baseline
  • Any LLM — OpenAI, Anthropic, Gemini, Ollama, LangChain, CrewAI, AutoGen
  • Zero friction — no new infrastructure, installs in seconds

First time? Run setup

Check if SealVera is configured:

```js
const fs = require('fs');
const path = require('path');

// Config lives at <workspace>/.sealvera.json (falls back to the current directory)
const configured = fs.existsSync(
  path.join(process.env.OPENCLAW_WORKSPACE || process.cwd(), '.sealvera.json')
);
```

If .sealvera.json does NOT exist, run setup:

```
node <skill_dir>/scripts/setup.js
```

Setup will:

  1. Ask for your API key (get one free at app.sealvera.com)
  2. Verify connectivity and show org/plan info
  3. Copy sealvera-log.js into the workspace
  4. Patch AGENTS.md with the mandatory sub-agent audit rule
  5. Write .sealvera.json config
  6. Fire a test log to confirm end-to-end

One-time setup. After this, every sub-agent you spawn is audited automatically.
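After setup, `.sealvera.json` holds the workspace configuration. The exact schema is whatever `setup.js` writes; the field names below are purely illustrative, not the real file format:

```json
{
  "apiKey": "<your API key>",
  "org": "<your org>",
  "agent": "main"
}
```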

If the user says "install SealVera", "set up SealVera", or "connect SealVera" — run setup, don't just describe it.


How logging works

After setup, two logging paths are active:

1. Sub-agent logging (automatic via AGENTS.md)

Every sessions_spawn task prompt gets a MANDATORY footer — the sub-agent logs itself on completion.

2. Main-session logging (call after every significant task)

The agent running the main session must log its own work. This is what shows your day-to-day activity in the dashboard.

```js
const log = require('./sealvera-log');

await log({
  action: 'fix_login_bug',          // short description of what you did
  decision: 'COMPLETED',            // COMPLETED | FAILED | ESCALATED
  input:  { task: '...' },
  output: { result: '...' },
  reasoning: [{ factor: 'outcome', value: 'ok', signal: 'safe', explanation: '...' }]
});
```

Log after: code written, bug fixed, feature deployed, investigation done, judgment call made. Don't log: routine chat, heartbeat pings, trivial lookups.
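The pattern above can be wrapped so an outcome is always logged, success or failure. A minimal sketch — the `audited` helper and the injected `log` parameter are illustrative, not part of the SealVera SDK:

```javascript
// Sketch: run a task and always log its outcome.
// `log` is injected so any logger works, including the workspace
// sealvera-log module. Illustrative helper, not SDK API.
async function audited(log, action, task) {
  try {
    const result = await task();
    await log({
      action,
      decision: 'COMPLETED',
      input: { task: action },
      output: { result },
      reasoning: [{ factor: 'outcome', value: 'ok', signal: 'safe', explanation: 'task returned normally' }]
    });
    return result;
  } catch (err) {
    await log({
      action,
      decision: 'FAILED',
      input: { task: action },
      output: { error: String(err) },
      reasoning: [{ factor: 'outcome', value: 'error', signal: 'risky', explanation: err.message }]
    });
    throw err; // rethrow so callers still see the failure
  }
}
```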

Setup patches SOUL.md with this rule so the agent follows it automatically.

Each log entry includes: input/output, reasoning steps, RSA-2048 signature, chain link, timestamp, model, and agent name.


Decision vocabulary

Task / agent execution:

| Decision | Meaning |
|-----------|---------|
| COMPLETED | Task finished successfully |
| RESPONDED | Answered a question / returned output |
| FAILED | Task failed |
| ERROR | Unexpected exception |
| ESCALATED | Handed off to a human or higher-level agent |
| SKIPPED | Task intentionally not run |
| PARTIAL | Completed with caveats |

Approval / gating:

| Decision | Meaning |
|----------|---------|
| APPROVED | Request approved |
| REJECTED | Request blocked |
| FLAGGED | Needs human review |

Tests:

| Decision | Meaning |
|---------|---------|
| PASSED | Test passed |
| FAILED | Test failed |
| SKIPPED | Test skipped |
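A small helper can keep agents on this vocabulary. The `decisionFor` function and its category names are illustrative, not part of the SDK:

```javascript
// Illustrative: map a task category and boolean outcome to the
// decision vocabulary above. Throws on unknown categories so
// typos surface early instead of producing unlisted values.
function decisionFor(category, ok) {
  const table = {
    task: ok ? 'COMPLETED' : 'FAILED',
    test: ok ? 'PASSED' : 'FAILED',
    gate: ok ? 'APPROVED' : 'REJECTED',
  };
  const decision = table[category];
  if (!decision) throw new Error(`unknown category: ${category}`);
  return decision;
}
```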

Sub-agent audit rule

Append this footer to every sessions_spawn task:

```
---
MANDATORY: Before you finish, log your result to SealVera:
  const log = require('<workspace>/sealvera-log');
  await log({
    action: '<what you did>',
    decision: 'COMPLETED',
    input: { task: '<summary>' },
    output: { result: '<summary>' },
    reasoning: [{ factor: 'outcome', value: '<result>', signal: 'safe', explanation: '<why>' }]
  });
Vocabulary: task=COMPLETED/FAILED | test=PASSED/FAILED | gate=APPROVED/REJECTED | handoff=ESCALATED
```
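When building spawn prompts in code, the footer can be appended programmatically. A sketch — `withAuditFooter` is an illustrative name, not part of the skill:

```javascript
// Sketch: append the mandatory audit footer to a sub-agent prompt.
// `workspace` is substituted into the require() path the sub-agent runs.
function withAuditFooter(prompt, workspace) {
  const footer = [
    '---',
    'MANDATORY: Before you finish, log your result to SealVera:',
    `  const log = require('${workspace}/sealvera-log');`,
    "  await log({ action: '<what you did>', decision: 'COMPLETED', input: { task: '<summary>' }, output: { result: '<summary>' }, reasoning: [{ factor: 'outcome', value: '<result>', signal: 'safe', explanation: '<why>' }] });",
    'Vocabulary: task=COMPLETED/FAILED | test=PASSED/FAILED | gate=APPROVED/REJECTED | handoff=ESCALATED',
  ].join('\n');
  return `${prompt}\n\n${footer}`;
}
```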

Check status

```
node <skill_dir>/scripts/status.js
```

Get your API key

Sign up at app.sealvera.com — free tier includes 10,000 decisions/month.


Reference

See references/api.md for all SDK methods and log field schema. See references/compliance.md for regulation mapping (EU AI Act, FINRA, HIPAA, GDPR, SOC 2).
