security-checker

Security scanner for Python skills headed for ClawHub. Run it before publishing any skill to check for dangerous imports, hardcoded secrets, unsafe file operations, and risky calls such as eval, exec, and the subprocess APIs. Essential for maintaining trust and ensuring published skills are safe for others to install and run.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Installation

Install the skill with this command (or send it to your AI assistant to run):

npx skills add johstracke/security-checker

Security Checker

Scan Python skills for security issues before publishing to ensure code safety.

Quick Start

security_scan.py <file_or_directory>

Examples:

# Scan a single Python file
security_scan.py scripts/my_script.py

# Scan an entire skill directory
security_scan.py /path/to/skill-folder

# Scan multiple skills
security_scan.py skills/

What It Checks

Dangerous Imports

Detects imports that could be used maliciously:

  • os - System-level operations
  • subprocess - Command execution
  • shutil - File operations
  • socket - Network operations
  • urllib / requests - HTTP requests

Why dangerous? These imports enable system command execution, file manipulation, and network access that could be exploited.
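
For intuition, here is a minimal sketch of how import detection like this can be done with Python's ast module. The FLAGGED_IMPORTS set and the function name are illustrative assumptions, not the scanner's actual internals:

import ast

FLAGGED_IMPORTS = {"os", "subprocess", "shutil", "socket", "urllib", "requests"}

def find_dangerous_imports(source: str):
    """Return (line, module) pairs for flagged imports."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.split(".")[0] in FLAGGED_IMPORTS:
                    hits.append((node.lineno, alias.name))
        elif isinstance(node, ast.ImportFrom):
            if (node.module or "").split(".")[0] in FLAGGED_IMPORTS:
                hits.append((node.lineno, node.module))
    return hits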

Dangerous Functions

Detects potentially unsafe function calls:

  • os.system() - Executes shell commands
  • subprocess.call(), subprocess.run(), subprocess.Popen() - Command execution
  • eval() - Executes arbitrary code
  • exec() - Executes arbitrary code

Why dangerous? These can execute arbitrary commands or code, leading to remote code execution vulnerabilities.
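
A similar AST walk can flag these calls. Again a hedged sketch; the real scanner may match patterns differently:

import ast

FLAGGED_CALLS = {("os", "system"), ("subprocess", "call"),
                 ("subprocess", "run"), ("subprocess", "Popen")}
FLAGGED_NAMES = {"eval", "exec"}

def find_dangerous_calls(source: str):
    """Return (line, name) pairs for flagged function calls."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        if isinstance(func, ast.Name) and func.id in FLAGGED_NAMES:
            hits.append((node.lineno, func.id))
        elif (isinstance(func, ast.Attribute)
              and isinstance(func.value, ast.Name)
              and (func.value.id, func.attr) in FLAGGED_CALLS):
            hits.append((node.lineno, func.value.id + "." + func.attr))
    return hits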

Hardcoded Secrets

Detects tokens, keys, and passwords:

  • API keys
  • Auth tokens (including ClawHub tokens)
  • Passwords
  • Private keys
  • JWT-like tokens

Why dangerous? Secrets leaked in published code can be stolen and abused.
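
Secret detection is typically regex-based. A minimal sketch with illustrative patterns (real rule sets are much broader):

import re

SECRET_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|token|password|secret)\s*=\s*[\"'][^\"']{8,}[\"']"),
    re.compile(r"[\"']sk-[A-Za-z0-9]{16,}[\"']"),          # vendor-style API key
    re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\."),    # JWT-like token
]

def find_possible_secrets(source: str):
    """Return (line_number, text) pairs that match a secret pattern."""
    return [(n, line.strip())
            for n, line in enumerate(source.splitlines(), start=1)
            if any(p.search(line) for p in SECRET_PATTERNS)]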

Unsafe File Operations

Detects risky file access patterns:

  • Absolute file paths outside expected directories
  • Parent directory traversal (..)
  • Writing to system directories

Why dangerous? Could lead to unintended file access, data loss, or system modification.
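
One hedged way to catch these is to inspect string literals for traversal segments or system-directory prefixes; the prefix list below is an assumption for illustration:

import ast

SYSTEM_PREFIXES = ("/etc", "/usr", "/bin", "/sbin", "/var")

def find_unsafe_paths(source: str):
    """Flag string literals that look like traversal or system paths."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Constant) and isinstance(node.value, str):
            if ".." in node.value.split("/") or node.value.startswith(SYSTEM_PREFIXES):
                hits.append((node.lineno, node.value))
    return hits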

Usage Pattern: Pre-Publish Checklist

Before publishing any skill:

# 1. Run security scan
security_scan.py /path/to/skill

# 2. Review any warnings
# If warnings appear, fix the code or document why it's safe

# 3. Re-scan after fixes
security_scan.py /path/to/skill

# 4. Only publish if scan passes
clawhub publish /path/to/skill --slug my-skill ...

Interpretation of Results

✅ "No security issues found"

Code appears safe. Proceed with publishing.

⚠️ "Warning" (Yellow)

Potentially risky pattern detected. Review the specific line and decide:

  • Is it legitimate? Document why in code comments or SKILL.md
  • Can it be avoided? Refactor to safer alternatives
  • Is it necessary? Clearly document the risk and purpose

🔴 "Possible hardcoded secret"

Secret detected. Before publishing:

  • Remove the secret
  • Use environment variables instead: os.getenv('API_KEY')
  • Document required env variables in SKILL.md
  • Never commit real secrets

Examples

Legitimate os module usage (documented)

import os  # Used only for path.join() - safe file path construction
workspace = os.path.join(os.path.expanduser("~"), ".openclaw", "workspace")

Scan result: ⚠️ Warning about os import
Action: Document the safe usage pattern in code comments

Hardcoded secret (must fix)

API_KEY = "sk-1234567890abcdef"  # DON'T DO THIS

Scan result: 🔴 Possible hardcoded secret
Action: Remove and use an environment variable:

API_KEY = os.getenv("MY_SKILL_API_KEY")
# Document in SKILL.md: Requires MY_SKILL_API_KEY environment variable

Safe pattern (no issues)

# JSON storage for local data only
data = {"notes": [], "metadata": {}}
with open("data.json", "w") as f:
    json.dump(data, f)

Scan result: ✅ No issues

Best Practices

  1. Always scan before publishing - Make it part of your workflow
  2. Review warnings manually - The scanner can't judge context
  3. Use environment variables for secrets - Never hardcode
  4. Prefer json over eval - Safe parsing vs code execution (see the example after this list)
  5. Document necessary risks - If dangerous code is required, explain why
  6. Minimize dangerous imports - Only use what's truly necessary
  7. Keep code simple - Complex code is harder to audit
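
To illustrate practice 4: json parses data without executing it, while eval runs whatever it is given:

import json

untrusted = '{"name": "demo", "count": 3}'

# Unsafe: eval() executes arbitrary Python and is flagged by the scanner
# data = eval(untrusted)

# Safe: json.loads() only parses data, never executes code
data = json.loads(untrusted)
print(data["name"], data["count"])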

Integration with Development Workflow

Before committing to repo

# Pre-commit hook concept
# (assumes security_scan.py exits with a nonzero status when issues are found)
python3 /path/to/security_scan.py scripts/
if [ $? -ne 0 ]; then
    echo "❌ Security scan failed. Fix issues before committing."
    exit 1
fi

Automated pre-publish check

#!/bin/bash
# publish-safe.sh: refuse to publish if the security scan fails

SKILL_PATH=$1
if [ -z "$SKILL_PATH" ]; then
    echo "Usage: publish-safe.sh <skill-path>"
    exit 1
fi

echo "🔒 Running security scan..."
python3 /path/to/security_scan.py "$SKILL_PATH"

if [ $? -ne 0 ]; then
    echo "❌ Cannot publish: Security scan failed"
    exit 1
fi

echo "✅ Security scan passed"
clawhub publish "$SKILL_PATH"

Limitations

This scanner:

  • Can't judge context - Some dangerous code may be legitimate
  • Static analysis only - Doesn't execute code
  • Python-focused - Other languages need different tools
  • Basic patterns - Sophisticated obfuscation may evade detection
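
For example, this trivially obfuscated snippet runs a shell command without the literal strings os or system appearing in the source, so simple pattern checks may miss it (illustrative only):

# Equivalent to os.system("echo obfuscated"), with the names assembled at runtime
mod = __import__("o" + "s")
getattr(mod, "sys" + "tem")("echo obfuscated")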

Complement with:

  • Manual code review
  • Testing in isolated environment
  • Reading through all code before publishing
  • Using additional tools: bandit, safety

Trust Building

Publishing skills that pass security scans builds trust in the community:

  • Users know you care about safety
  • Your reputation improves
  • Skills get adopted more readily
  • ClawHub may highlight safe skills

Examples of Published Skills (All Scanned)

# research-assistant
security_scan.py /home/ubuntu/.openclaw/workspace/skills/research-assistant
# ✅ All clear

# task-runner  
security_scan.py /home/ubuntu/.openclaw/workspace/skills/task-runner
# ✅ All clear

# security-checker
security_scan.py /home/ubuntu/.openclaw/workspace/skills/security-checker
# ✅ All clear

All three skills passed security scans before publishing to ClawHub.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.


Claw Secure Auditor

Security audit tool for ClawHub/OpenClaw skills (static analysis + reputation scoring)


Security Check

🔒 Pre-installation security verification for external code and dependencies. Automated risk analysis for GitHub repos, npm packages, PyPI libraries, and she...


AxonFlow Governance Policies

Govern OpenClaw with AxonFlow — block dangerous commands, detect PII, prevent data exfiltration, protect agent config files, explain policy decisions, grant...


Skill Guard

Security scanner for OpenClaw agent skills. Pre-install check via ClawHub page, local pattern scanning via read tool (zero exec), integrity verification. Use...
