audit-code

Security-focused code review for hardcoded secrets, dangerous calls, and common vulnerabilities

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "audit-code" with this command: npx skills add itsnishi/audit-code

audit-code -- Project Code Security Review

Security-focused code review of project source code. Covers OWASP-style vulnerabilities, hardcoded secrets, dangerous function calls, and patterns relevant to AI-assisted development.

What to do

Run the auditor against the target path:

python3 "$SKILL_DIR/scripts/audit_code.py" "$ARGUMENTS"

If $ARGUMENTS is empty, default to $PROJECT_ROOT.
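
Assuming a POSIX shell, the empty-argument fallback can be handled with parameter expansion ($SKILL_DIR, $ARGUMENTS, and $PROJECT_ROOT are provided by the skill runtime):

```shell
# Fall back to the project root when no target path was supplied.
TARGET="${ARGUMENTS:-$PROJECT_ROOT}"
python3 "$SKILL_DIR/scripts/audit_code.py" "$TARGET"
```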

What it checks

  • Hardcoded secrets -- API keys (AWS, GitHub, Stripe, OpenAI, Slack), tokens, private keys, connection strings, passwords
  • Dangerous function calls -- eval, exec, subprocess with shell=True, child_process.exec, pickle deserialization, system(), gets(), etc.
  • SQL injection -- String concatenation/interpolation in SQL queries
  • Dependency risks -- Known hallucinated package names, unverified installations
  • Sensitive files -- .env files committed to git, credential files in repo
  • File permissions -- Overly permissive chmod patterns
  • Exfiltration patterns -- Base64 encode + network send, DNS exfiltration, credential file reads
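
A minimal sketch of the line-by-line pattern matching such checks typically use. The regexes below are illustrative stand-ins, not the skill's actual rule set:

```python
import re

# Illustrative detection rules: (finding name, severity, pattern).
RULES = [
    ("aws_access_key", "high", re.compile(r"AKIA[0-9A-Z]{16}")),
    ("github_token", "high", re.compile(r"ghp_[A-Za-z0-9]{36}")),
    ("shell_true", "medium", re.compile(r"subprocess\.\w+\([^)]*shell\s*=\s*True")),
    ("eval_call", "medium", re.compile(r"\beval\s*\(")),
]

def scan(source: str):
    """Return (rule name, severity, line number) for each match in source."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, severity, pattern in RULES:
            if pattern.search(line):
                findings.append((name, severity, lineno))
    return findings
```

A real auditor would also need entropy checks and allowlists to keep false positives down; a pure regex pass flags test fixtures and documentation examples just as readily as live secrets.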

Output

Structured report with severity-ranked findings, file locations, and actionable remediation steps.
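
As a sketch, severity ranking could look like the following (the severity scale and line format are assumptions, not the skill's documented output):

```python
# Assumed severity order, highest first.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def rank_findings(findings):
    """Sort (severity, path, line, message) tuples by descending severity."""
    return sorted(findings, key=lambda f: SEVERITY_RANK[f[0]])

def format_report(findings):
    """Render one '[SEVERITY] path:line message' line per finding."""
    return "\n".join(
        f"[{sev.upper()}] {path}:{line} {msg}"
        for sev, path, line, msg in rank_findings(findings)
    )
```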

When to use

  • Before committing or pushing code
  • When reviewing third-party contributions or PRs
  • As part of a periodic security audit of the codebase
  • After AI-assisted code generation to verify no secrets or vulnerabilities were introduced

Advisory hooks

The repository's .claude/settings.json includes PreToolUse hooks that warn on dangerous Bash and Write operations. These hooks are advisory only -- they produce warnings but do not block execution.

  • audit-code is the detection layer for source code security issues
  • The hooks provide supplementary runtime warnings during agent operation
  • To enforce blocking, hooks must return {"decision": "block"} instead of warning messages
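
A blocking hook could be a small script that inspects the tool call and emits a block decision. This is a sketch: the event schema and dangerous-pattern list here are assumptions, so check the Claude Code hooks documentation for the authoritative input and output format:

```python
import re

# Patterns treated as dangerous for illustration only; tune to your policy.
DANGEROUS = re.compile(r"rm\s+-rf|curl[^|]*\|\s*sh|shell\s*=\s*True")

def decide(tool_name: str, command: str) -> dict:
    """Return a blocking decision for dangerous Bash commands, else no opinion."""
    if tool_name == "Bash" and DANGEROUS.search(command):
        return {"decision": "block", "reason": f"Dangerous pattern in: {command}"}
    return {}

# In the actual hook, the event JSON arrives on stdin and the decision is
# printed to stdout, e.g. (assumed event shape):
#   event = json.load(sys.stdin)
#   print(json.dumps(decide(event["tool_name"],
#                           event["tool_input"].get("command", ""))))
```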

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Security

xfire Security PR Review

Multi-agent adversarial security review -- three AI agents debate every finding, and only real vulnerabilities survive

Security

Guardian Wall

Mitigate prompt injection attacks, especially indirect ones from external web content or files. Use this skill when processing untrusted text from the intern...

Security

War/Den Governance

Evaluates and governs all OpenClaw bot actions using YAML policies with tamper-evident audit logs to allow, deny, or require review before execution.

Security

Memory Guard

Monitors and verifies agent workspace files to detect unauthorized changes, injection attacks, personality drift, and cross-agent contamination.
