# Security Pattern Detector Skill

Safety notice: this listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and repository scripts before running.

Install the skill:

```bash
npx skills add anton-abyzov/specweave/anton-abyzov-specweave-security-patterns
```

## Overview

This skill provides real-time security pattern detection based on Anthropic's official security-guidance plugin. It identifies potentially dangerous coding patterns BEFORE they're committed.

## Detection Categories

### 1. Command Injection Risks

#### GitHub Actions Workflow Injection

```yaml
# DANGEROUS - user input interpolated directly into a run command
run: echo "${{ github.event.issue.title }}"

# SAFE - pass the value through an environment variable
env:
  TITLE: ${{ github.event.issue.title }}
run: echo "$TITLE"
```

#### Node.js Child Process Execution

```javascript
// DANGEROUS - shell command built from user input
exec(`ls ${userInput}`);
spawn('sh', ['-c', userInput]);

// SAFE - array arguments, no shell
execFile('ls', [sanitizedPath]);
spawn('ls', [sanitizedPath], { shell: false });
```

#### Python OS Commands

```python
# DANGEROUS
os.system(f"grep {user_input} file.txt")
subprocess.call(user_input, shell=True)

# SAFE
subprocess.run(['grep', sanitized_input, 'file.txt'], shell=False)
```

### 2. Dynamic Code Execution

#### JavaScript eval-like Patterns

```javascript
// DANGEROUS - all of these execute arbitrary code
eval(userInput);
new Function(userInput)();
setTimeout(userInput, 1000);  // when a string is passed
setInterval(userInput, 1000); // when a string is passed

// SAFE - use parsed data, not code
const config = JSON.parse(configString);
```

### 3. DOM-based XSS Risks

#### React dangerouslySetInnerHTML

```jsx
// DANGEROUS - renders arbitrary HTML
<div dangerouslySetInnerHTML={{ __html: userContent }} />

// SAFE - sanitize first
import DOMPurify from 'dompurify';
<div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(userContent) }} />
```

#### Direct DOM Manipulation

```javascript
// DANGEROUS
element.innerHTML = userInput;
document.write(userInput);

// SAFE
element.textContent = userInput;
element.innerText = userInput;
```

### 4. Unsafe Deserialization

#### Python Pickle

```python
# DANGEROUS - pickle can execute arbitrary code
import pickle
data = pickle.loads(user_provided_bytes)

# SAFE - use JSON for untrusted data
import json
data = json.loads(user_provided_string)
```

#### JavaScript Unsafe Deserialization

```javascript
// DANGEROUS with untrusted input
const obj = eval('(' + jsonString + ')');

// SAFE
const obj = JSON.parse(jsonString);
```

### 5. SQL Injection

#### String Interpolation in Queries

```javascript
// DANGEROUS
const query = `SELECT * FROM users WHERE id = ${userId}`;
db.query(`SELECT * FROM users WHERE name = '${userName}'`);

// SAFE - parameterized queries
const query = 'SELECT * FROM users WHERE id = $1';
db.query(query, [userId]);
```

### 6. Path Traversal

#### Unsanitized File Paths

```javascript
// DANGEROUS
const filePath = `./uploads/${userFilename}`;
fs.readFile(filePath); // user could pass "../../../etc/passwd"

// SAFE - strip directory components, then verify containment on resolved paths
const safePath = path.resolve('./uploads', path.basename(userFilename));
if (!safePath.startsWith(path.resolve('./uploads') + path.sep)) {
  throw new Error('Invalid path');
}
```

## Pattern Detection Rules

| Pattern | Category | Severity | Action |
|---------|----------|----------|--------|
| `eval(` | Code Execution | CRITICAL | Block |
| `new Function(` | Code Execution | CRITICAL | Block |
| `dangerouslySetInnerHTML` | XSS | HIGH | Warn |
| `innerHTML =` | XSS | HIGH | Warn |
| `document.write(` | XSS | HIGH | Warn |
| `exec(` with string concatenation | Command Injection | CRITICAL | Block |
| `spawn(` with `shell: true` | Command Injection | HIGH | Warn |
| `pickle.loads(` | Deserialization | CRITICAL | Warn |
| `${{ github.event` | GH Actions Injection | CRITICAL | Warn |
| Template literal in SQL | SQL Injection | CRITICAL | Block |
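A rule table like the one above could be approximated by a simple line-oriented scanner. This is only an illustrative sketch with an abbreviated rule list, not the skill's actual detector; `RULES` and `scanSource` are hypothetical names:

```javascript
// Abbreviated, hypothetical rule table mirroring a few entries above.
const RULES = [
  { pattern: /\beval\s*\(/, category: 'Code Execution', severity: 'CRITICAL', action: 'Block' },
  { pattern: /new\s+Function\s*\(/, category: 'Code Execution', severity: 'CRITICAL', action: 'Block' },
  { pattern: /dangerouslySetInnerHTML/, category: 'XSS', severity: 'HIGH', action: 'Warn' },
  { pattern: /\.innerHTML\s*=/, category: 'XSS', severity: 'HIGH', action: 'Warn' },
  { pattern: /pickle\.loads\s*\(/, category: 'Deserialization', severity: 'CRITICAL', action: 'Warn' },
];

// Scan source text line by line; report each match with a 1-based line number.
function scanSource(source) {
  const findings = [];
  source.split('\n').forEach((line, i) => {
    for (const rule of RULES) {
      if (rule.pattern.test(line)) {
        findings.push({ line: i + 1, match: line.trim(), ...rule });
      }
    }
  });
  return findings;
}
```

A real detector would also need multi-line awareness (e.g. string concatenation feeding `exec(`) and comment/string stripping, which a per-line regex pass cannot express.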

## Response Format

When detecting a pattern:

⚠️ Security Warning: [Pattern Category]

File: path/to/file.ts:123
Pattern Detected: eval(userInput)
Risk: Remote Code Execution - attacker-controlled input can execute arbitrary JavaScript

Recommendation:

  1. Never use eval() with user input
  2. Use JSON.parse() for data parsing
  3. Use safe alternatives for dynamic behavior

Safe Alternative:

```javascript
// Instead of eval(userInput), use:
const data = JSON.parse(userInput);
```

## Integration with Code Review

This skill should be invoked:
1. During PR reviews when new code is written
2. As part of security audits
3. When flagged by the code-reviewer skill

## False Positive Handling

Some patterns may be false positives:
- `dangerouslySetInnerHTML` with DOMPurify is safe
- `eval` in build tools (not user input) may be acceptable
- `exec` with hardcoded commands is lower risk

Always check the context before blocking.
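One such context check can be sketched as a post-filter on the matched line; this is an illustrative heuristic only, not the skill's real logic, and `isLikelyFalsePositive` is a hypothetical name:

```javascript
// Heuristic for one false-positive case listed above: a
// dangerouslySetInnerHTML usage that already routes through
// DOMPurify.sanitize() on the same line is likely safe to downgrade.
function isLikelyFalsePositive(line) {
  return line.includes('dangerouslySetInnerHTML') &&
         line.includes('DOMPurify.sanitize(');
}
```

A same-line check like this misses sanitization done on an earlier line, so it should only downgrade severity, never suppress the finding entirely.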

## Project-Specific Learnings

**Before starting work, check for project-specific learnings:**

```bash
# Check if skill memory exists for this skill
cat .specweave/skill-memories/security-patterns.md 2>/dev/null || echo "No project learnings yet"
```

Project learnings are automatically captured by the reflection system when corrections or patterns are identified during development. These learnings help you understand project-specific conventions and past decisions.

