# SKILL.md — NeuroSec Technical Modules

Module 1: Secret Scanner

Purpose

Detect exposed secrets, credentials, and sensitive tokens in files and git history.

Capabilities

Entropy Analysis

# Shannon entropy calculation
# Flag strings with entropy > 4.5 (high randomness = potential secret)
import math

def calculate_entropy(string):
    if not string:
        return 0.0  # guard: empty strings carry no information
    prob = [string.count(c) / len(string) for c in set(string)]
    return -sum(p * math.log2(p) for p in prob)
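As a quick sanity check on the 4.5 threshold, this standalone sketch reimplements the function above so it runs on its own:

```python
import math

def calculate_entropy(string):
    if not string:
        return 0.0
    prob = [string.count(c) / len(string) for c in set(string)]
    return -sum(p * math.log2(p) for p in prob)

# One repeated character carries no information: entropy 0
low = calculate_entropy("aaaaaaaa")

# 32 distinct characters used once each: log2(32) = 5.0 bits/char,
# comfortably above the 4.5 flagging threshold
high = calculate_entropy("abcdefghijklmnopqrstuvwxyz012345")
print(low, high)
```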

Known Pattern Detection

| Pattern | Regex | Severity |
|---|---|---|
| AWS Access Key | `AKIA[0-9A-Z]{16}` | CRITICAL |
| AWS Secret Key | `[0-9a-zA-Z/+]{40}` (contextual) | CRITICAL |
| GitHub Token | `gh[pousr]_[A-Za-z0-9_]{36,}` | CRITICAL |
| GitHub Classic | `ghp_[a-zA-Z0-9]{36}` | CRITICAL |
| Slack Token | `xox[baprs]-[0-9]{10,13}-[0-9]{10,13}-[a-zA-Z0-9]{24}` | CRITICAL |
| Slack Webhook | `https://hooks.slack.com/services/T[a-zA-Z0-9_]{8}/B[a-zA-Z0-9_]{10,}/[a-zA-Z0-9_]{24}` | HIGH |
| Generic API Key | `[aA][pP][iI][_-]?[kK][eE][yY][\s]*[=:]+[\s]*['"][a-zA-Z0-9]{16,}['"]` | HIGH |
| Private Key PEM | `-----BEGIN (RSA \|EC \|OPENSSH \|PGP )?PRIVATE KEY-----` | CRITICAL |
| Database URL | `(postgres\|mysql\|mongodb\|redis)://[^:]+:[^@]+@` | CRITICAL |
| JWT Token | `eyJ[a-zA-Z0-9_-]*\.eyJ[a-zA-Z0-9_-]*\.[a-zA-Z0-9_-]*` | MEDIUM |
| Bearer Token | `[Bb]earer\s+[a-zA-Z0-9_\-\.=]+` | MEDIUM |
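A minimal sketch of applying a subset of the patterns above to a blob of text; the `scan_text` helper name and finding schema are illustrative, not part of the module's interface:

```python
import re

# A subset of the detection table: name -> (regex, severity)
SECRET_PATTERNS = {
    'AWS Access Key': (r'AKIA[0-9A-Z]{16}', 'CRITICAL'),
    'GitHub Classic': (r'ghp_[a-zA-Z0-9]{36}', 'CRITICAL'),
    'Bearer Token': (r'[Bb]earer\s+[a-zA-Z0-9_\-\.=]+', 'MEDIUM'),
}

def scan_text(text):
    """Return one finding per regex hit, with the table's severity."""
    findings = []
    for name, (pattern, severity) in SECRET_PATTERNS.items():
        for match in re.finditer(pattern, text):
            findings.append({'type': name,
                             'severity': severity,
                             'offset': match.start()})
    return findings

# AWS's documented example key, safe to use in tests
sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\nAuthorization: Bearer abc.123'
findings = scan_text(sample)
```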

Git History Scanning

# Scan git history for secrets
git log --all --full-history --pretty=format:'%H' | \
  while read -r commit; do
    git show "$commit" --no-patch --format='%H %an %ae %ad'
    git show "$commit" -p | grep -E '(AKIA|ghp_|xoxb-|-----BEGIN)'
  done

Scan Targets

  • Files modified since last scan (mtime comparison)
  • All text files in workspace (Deep Audit)
  • Git commit messages and diffs
  • Environment files (.env, .env.local, .env.production)
  • Configuration files (config.json, secrets.yaml)
  • Source code files (contextual scanning for embedded secrets)
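The first target (mtime comparison) can be sketched as a generator over the workspace tree; the root path, timestamp handling, and helper name here are illustrative:

```python
import os
import tempfile
import time

def files_modified_since(root, last_scan_ts):
    """Yield files whose mtime is newer than the last scan's timestamp."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_mtime > last_scan_ts:
                    yield path
            except OSError:
                continue  # file removed between listing and stat

# Demo: a freshly written file shows up against an old timestamp
workspace = tempfile.mkdtemp()
with open(os.path.join(workspace, 'new.txt'), 'w') as f:
    f.write('stub')
recent = list(files_modified_since(workspace, time.time() - 60))
```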

Output

  • Alert with secret type, location, partial redaction
  • Entropy score for high-entropy strings
  • Git commit hash if found in history

Module 2: Permission Auditor

Purpose

Monitor file permissions and detect dangerous configurations.

Capabilities

Critical File Permission Baselines

| File Pattern | Expected Mode | Severity if Violated |
|---|---|---|
| `~/.ssh/*` | 600 | CRITICAL |
| `~/.ssh/` directory | 700 | HIGH |
| `~/.env*` | 600 | CRITICAL |
| `~/.aws/credentials` | 600 | CRITICAL |
| `~/.docker/config.json` | 600 | HIGH |
| `*.pem` | 600 | CRITICAL |
| `*.key` | 600 | CRITICAL |
| `id_rsa*` | 600 | CRITICAL |
| `.htpasswd` | 600 | HIGH |
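Checking a file against the expected modes above comes down to masking `st_mode` with the permission bits; a sketch (the baseline subset and `check_mode` helper are illustrative):

```python
import os
import stat
import tempfile

# Expected modes for a few patterns from the baseline table
EXPECTED_MODES = {'id_rsa': 0o600, '.htpasswd': 0o600, '.env': 0o600}

def check_mode(path, expected_mode):
    """Return None on a match, else a deviation record with a fix hint."""
    actual = stat.S_IMODE(os.stat(path).st_mode)
    if actual == expected_mode:
        return None
    return {'file': path,
            'current': oct(actual),
            'expected': oct(expected_mode),
            'fix': f'chmod {expected_mode:o} {path}'}

# Demo: a group/world-readable "key" file violates the 600 baseline
fd, keyfile = tempfile.mkstemp()
os.close(fd)
os.chmod(keyfile, 0o644)
deviation = check_mode(keyfile, EXPECTED_MODES['id_rsa'])
```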

Dangerous Permission Detection

# Find world-writable files
find /workspace -type f -perm -002

# Find SUID/SGID binaries
find /workspace -type f \( -perm -4000 -o -perm -2000 \)

# Find group-writable sensitive files
find /workspace -type f -perm -020

# Find files with no owner
find /workspace -type f -nouser

SSH Directory Audit

# Check SSH directory permissions
ls -la ~/.ssh/
# Expected: drwx------ (700) for directory
# Expected: -rw------- (600) for private keys
# Expected: -rw-r--r-- (644) for public keys

Baseline Comparison

# Compare current permissions to baseline
def audit_permissions(current, baseline):
    deviations = []
    for filepath, current_mode in current.items():
        expected_mode = baseline.get(filepath)
        if expected_mode and current_mode != expected_mode:
            deviations.append({
                'file': filepath,
                'current': current_mode,
                'expected': expected_mode
            })
    return deviations

Output

  • Alert with file path, current mode, expected mode
  • Risk assessment based on file type
  • Recommended chmod command

Module 3: Network Monitor

Purpose

Detect network exposure, unexpected listeners, and communication anomalies.

Capabilities

Listening Port Detection

# Capture listening ports
netstat -tlnp 2>/dev/null || ss -tlnp

# Output format: Proto Recv-Q Send-Q Local Address State PID/Program
# 0.0.0.0:* = binding all interfaces (HIGH risk)
# 127.0.0.1:* = localhost only (acceptable)
# 10.0.0.x:* = private network (MEDIUM risk)

Binding Address Risk Matrix

| Bind Address | Risk Level | Notes |
|---|---|---|
| 0.0.0.0 | HIGH | Accessible from all interfaces |
| :: | HIGH | IPv6 all interfaces |
| 127.0.0.1 | INFO | Localhost only (acceptable) |
| 10.x.x.x | MEDIUM | Private network |
| 172.16-31.x.x | MEDIUM | Private network |
| 192.168.x.x | MEDIUM | Private network |
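The matrix maps directly onto the standard `ipaddress` module; a sketch (the `classify_bind` name is illustrative, and treating public addresses as HIGH is an assumption not stated in the matrix):

```python
import ipaddress

def classify_bind(address):
    """Map a listener's bind address to the matrix's risk level."""
    ip = ipaddress.ip_address(address)
    if ip.is_unspecified:   # 0.0.0.0 or ::
        return 'HIGH'
    if ip.is_loopback:      # 127.0.0.1 or ::1
        return 'INFO'
    if ip.is_private:       # 10/8, 172.16/12, 192.168/16, ...
        return 'MEDIUM'
    return 'HIGH'           # public address (assumption)
```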

Docker Exposure Check

# Check for Docker socket access
test -S /var/run/docker.sock && ls -la /var/run/docker.sock

# Expected: srw-rw---- 1 root docker
# World-accessible Docker socket = CRITICAL

# Check container runtime exposure
curl -s --unix-socket /var/run/docker.sock http://localhost/containers/json

Connection Tracking

# Active connections (read-only)
netstat -tn 2>/dev/null || ss -tn

# ESTABLISHED connections to external IPs
# Flag connections to unexpected countries/TOR exit nodes

Baseline Deviation Detection

# Compare current listeners to baseline
def check_listeners(current_listeners, baseline):
    alerts = []
    for listener in current_listeners:
        if listener not in baseline:
            alerts.append({
                'port': listener.port,
                'bind': listener.address,
                'process': listener.process,
                'status': 'NEW_LISTENER'
            })
    return alerts

Output

  • Alert with port, bind address, process name
  • Risk level based on binding address
  • Recommended remediation (bind to 127.0.0.1)

Module 4: Dependency Checker

Purpose

Audit package manifests for supply chain risks and security issues.

Capabilities

Manifest Parsing

| Ecosystem | File | Lockfile |
|---|---|---|
| Node.js | package.json | package-lock.json, yarn.lock |
| Python | requirements.txt, pyproject.toml | poetry.lock, Pipfile.lock |
| Rust | Cargo.toml | Cargo.lock |
| Go | go.mod | go.sum |
| Ruby | Gemfile | Gemfile.lock |
| Java | pom.xml | N/A (embedded) |
| PHP | composer.json | composer.lock |
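For Node.js the manifest parse is plain JSON; a sketch that pulls dependency specs out of a `package.json` string (the helper name and section list are illustrative):

```python
import json

def node_dependency_specs(manifest_text):
    """Collect {package: version_spec} across common dependency sections."""
    manifest = json.loads(manifest_text)
    specs = {}
    for section in ('dependencies', 'devDependencies', 'optionalDependencies'):
        specs.update(manifest.get(section, {}))
    return specs

sample_manifest = '''{
  "dependencies": {"left-pad": "^1.3.0"},
  "devDependencies": {"jest": "29.7.0"}
}'''
specs = node_dependency_specs(sample_manifest)
```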

Lockfile Verification

# Check lockfile existence
find /workspace -name "package-lock.json" -o -name "yarn.lock" -o -name "poetry.lock"

# Verify lockfile is up-to-date
# (Compare hash of manifest with hash in lockfile)

Unpinned Version Detection

# Dangerous version patterns
import re

dangerous_patterns = [
    r'^\^',      # Caret (^1.0.0) - allows minor updates
    r'^~',       # Tilde (~1.0.0) - allows patch updates
    r'>=',       # Greater than or equal
    r'>',        # Greater than
    r'latest',   # Latest tag
    r'\*',       # Wildcard
    r'x\.x\.x',  # Wildcard versions
]

def check_pinned(version):
    for pattern in dangerous_patterns:
        # search, not match: wildcards like "1.*" can appear mid-string
        if re.search(pattern, version):
            return False  # Not pinned
    return True  # Pinned (exact version)

Git URL Detection

# Detect dependencies from git repositories
git_url_patterns = [
    r'git\+https?://',
    r'git@github\.com:',
    r'github:[^/]+/[^/]+',
    r'^git://',
]

# Risk: No integrity verification, mutable refs

Local Path Detection

# Detect local path dependencies
local_path_patterns = [
    r'^file:',
    r'^\.\./',
    r'^\.\/',
    r'^[a-zA-Z]:',  # Windows paths
]

# Risk: Non-portable, potential path traversal
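The unpinned, git-URL, and local-path checks compose into one classifier; a sketch reusing the pattern lists above (the `classify_spec` name and risk labels are illustrative):

```python
import re

# Pattern lists mirror the three detection categories above
UNPINNED = [r'^\^', r'^~', r'>=', r'>', r'latest', r'\*']
GIT_URL = [r'git\+https?://', r'git@github\.com:', r'^git://']
LOCAL_PATH = [r'^file:', r'^\.\./', r'^\./']

def classify_spec(version_spec):
    """Return the supply-chain risk type for a version spec, or None."""
    for pattern in GIT_URL:
        if re.search(pattern, version_spec):
            return 'git_url'
    for pattern in LOCAL_PATH:
        if re.match(pattern, version_spec):
            return 'local_path'
    for pattern in UNPINNED:
        if re.search(pattern, version_spec):
            return 'unpinned'
    return None  # exact, pinned version
```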

Output

  • Alert with manifest file, dependency name
  • Risk type (unpinned, git URL, local path)
  • Recommended remediation (exact version, registry package)

Module 5: Process Watcher

Purpose

Monitor running processes for suspicious activity and resource abuse.

Capabilities

Process Tree Capture

# Capture process tree with relationships
ps auxf

# Key fields: USER, PID, %CPU, %MEM, COMMAND

Suspicious Pattern Detection

| Pattern | Risk | Description |
|---|---|---|
| `/bin/sh -i` | EMERGENCY | Interactive shell (reverse shell) |
| `nc -e /bin/sh` | EMERGENCY | Netcat backdoor |
| `python.*pty.spawn` | HIGH | PTY upgrade technique |
| `socat.*exec` | HIGH | Socat backdoor |
| `/dev/tcp/` | EMERGENCY | Bash network redirection |
| `bash -c.*base64` | HIGH | Encoded payload |
| `curl.*\|.*bash` | HIGH | Pipe to shell |
| `wget.*\|.*sh` | HIGH | Pipe to shell |
| `perl.*-e.*socket` | HIGH | Perl reverse shell |
| `ruby.*-rsocket` | HIGH | Ruby reverse shell |
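The table translates directly into a regex pass over `ps` command strings; a sketch using a subset of the rows (the `assess_command` helper name is illustrative):

```python
import re

# Subset of the table, ordered most severe first so the first hit wins
SUSPICIOUS = [
    (r'/bin/sh -i', 'EMERGENCY'),
    (r'nc -e /bin/sh', 'EMERGENCY'),
    (r'/dev/tcp/', 'EMERGENCY'),
    (r'python.*pty\.spawn', 'HIGH'),
    (r'curl.*\|.*bash', 'HIGH'),
]

def assess_command(command):
    """Return the risk level of the first matching pattern, or None."""
    for pattern, risk in SUSPICIOUS:
        if re.search(pattern, command):
            return risk
    return None
```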

Resource Abuse Detection

# High CPU processes (>80% for >60 seconds)
ps aux --sort=-%cpu | head -10

# High memory processes
ps aux --sort=-%mem | head -10

# Processes in temporary directories
ls -la /proc/*/cwd 2>/dev/null | grep -E '/tmp|/dev/shm|/var/tmp'

Network-Enabled Process Detection

# Processes with network connections
lsof -i 2>/dev/null | grep -v COMMAND

# Processes listening on ports
ss -tnlp 2>/dev/null

Shell Process Detection

# Shell processes without TTY
ps aux | grep -E 'bash|sh|zsh|fish' | grep -v 'tty'

# Suspicious: Shell without terminal = potential backdoor

Output

  • Alert with PID, command, parent process
  • Risk level based on pattern match
  • Recommended action (investigate, terminate)

Module 6: Alert Writer

Purpose

Format and write alerts to filesystem with atomic guarantees.

Capabilities

Alert Formatting

  • YAML frontmatter generation
  • Content redaction for secrets
  • Markdown body formatting
  • SHA256 digest calculation
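Putting the steps above together, assembling the frontmatter and digest might look like this (the field names and layout are illustrative, not a fixed schema):

```python
import hashlib
from datetime import datetime, timezone

def format_alert(severity, title, body):
    """Assemble YAML frontmatter + markdown body; the digest covers the body."""
    digest = hashlib.sha256(body.encode()).hexdigest()
    timestamp = datetime.now(timezone.utc).isoformat()
    frontmatter = (f'---\n'
                   f'severity: {severity}\n'
                   f'title: "{title}"\n'
                   f'timestamp: "{timestamp}"\n'
                   f'sha256: {digest}\n'
                   f'---\n')
    return f'{frontmatter}\n# {title}\n\n{body}\n'

alert = format_alert('CRITICAL', 'Exposed AWS key',
                     'AKIA<REDACTED> found in config.py (line 10).')
```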

Atomic Write Protocol

def write_alert_atomic(alert_content, filepath):
    """
    Atomic write using temp file + rename.
    Prevents partial/corrupted alerts.
    """
    import tempfile
    import os

    # Create temp file in the same directory: rename is only atomic
    # within a single filesystem
    temp_fd, temp_path = tempfile.mkstemp(
        dir=os.path.dirname(filepath) or '.',
        prefix='.tmp-alert-'
    )

    try:
        # Write content and force it to disk
        with os.fdopen(temp_fd, 'w') as f:
            f.write(alert_content)
            f.flush()
            os.fsync(f.fileno())

        # Atomic rename (os.replace also overwrites existing targets on Windows)
        os.replace(temp_path, filepath)

    except Exception:
        # Cleanup on failure
        os.unlink(temp_path)
        raise

Fallback to stdout

def write_alert(alert_content, filepath):
    """
    Write alert with fallback to stdout on failure.
    """
    try:
        write_alert_atomic(alert_content, filepath)
    except Exception as e:
        # Fallback: output to stdout for capture
        print("--- ALERT_WRITE_FAILED ---")
        print(f"Target: {filepath}")
        print(f"Error: {e}")
        print("--- ALERT CONTENT ---")
        print(alert_content)
        print("--- END ALERT ---")

Content Redaction

def redact_secret(content, secret_type):
    """
    Redact sensitive content while preserving structure.
    """
    import re

    # Capture only a short identifying prefix (or affix); the secret
    # body itself is replaced and never echoed back
    redaction_patterns = {
        'aws_key': (r'(AKIA)[0-9A-Z]{16}', r'\1<SECRET_KEY>'),
        'github_token': (r'(ghp_)[a-zA-Z0-9]{36}', r'\1<SECRET_KEY>'),
        'generic': (r'([a-zA-Z0-9]{8})[a-zA-Z0-9]{20,}([a-zA-Z0-9]{8})', r'\1<SECRET>\2'),
    }

    pattern, replacement = redaction_patterns.get(secret_type, redaction_patterns['generic'])
    return re.sub(pattern, replacement, content)

Alert Deduplication

def is_duplicate(alert_key, known_alerts, window_seconds=3600):
    """
    Check if alert is duplicate within time window.
    """
    import time
    import hashlib
    
    # Generate unique key
    alert_hash = hashlib.sha256(alert_key.encode()).hexdigest()[:16]
    
    current_time = time.time()
    
    if alert_hash in known_alerts:
        last_seen = known_alerts[alert_hash]
        if current_time - last_seen < window_seconds:
            return True  # Duplicate
    
    # Update timestamp
    known_alerts[alert_hash] = current_time
    return False

Digest Generation

def generate_daily_digest(alerts, date):
    """
    Generate daily digest from alert list.
    """
    severity_counts = {'CRITICAL': 0, 'HIGH': 0, 'MEDIUM': 0, 'LOW': 0}
    for alert in alerts:
        severity_counts[alert.severity] += 1
    
    digest = f"""---
digest_date: "{date}"
alert_count: {len(alerts)}
critical: {severity_counts['CRITICAL']}
high: {severity_counts['HIGH']}
medium: {severity_counts['MEDIUM']}
low: {severity_counts['LOW']}
---

# NeuroSec Daily Digest — {date}

## Executive Summary

{len(alerts)} alerts generated across {sum(1 for c in severity_counts.values() if c > 0)} severity levels.
...
"""
    return digest

Output

  • Atomic file write to alerts/ directory
  • Fallback stdout output on failure
  • Deduplication tracking
  • Digest generation for periodic summaries

Module Integration

Execution Flow

1. Trigger (Heartbeat/PostWrite/DeepAudit)
   ↓
2. Secret Scanner → Alerts
3. Permission Auditor → Alerts
4. Network Monitor → Alerts
5. Dependency Checker → Alerts (DeepAudit only)
6. Process Watcher → Alerts
   ↓
7. Alert Writer → File output
   ↓
8. Update alert_history.log

Resource Management

  • Each module has independent timeout
  • Sequential execution (single-threaded)
  • Partial results on timeout
  • Memory limit enforced per module
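The independent-timeout rule can be sketched with `concurrent.futures`: run each module sequentially, bound its wall time, and keep whatever earlier modules produced (module names and timeouts here are illustrative):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def run_modules(modules, timeout_per_module=30.0):
    """Run (name, callable) pairs in order; bound each module's wall time."""
    results = {}
    with ThreadPoolExecutor(max_workers=1) as pool:
        for name, func in modules:
            future = pool.submit(func)
            try:
                results[name] = future.result(timeout=timeout_per_module)
            except FutureTimeout:
                results[name] = 'TIMED_OUT'  # earlier results are preserved
            except Exception as exc:
                results[name] = f'ERROR: {exc}'
    return results

results = run_modules(
    [('fast_module', lambda: 'ok'),
     ('slow_module', lambda: time.sleep(0.5))],
    timeout_per_module=0.1)
```

Note that a timed-out thread is abandoned rather than killed; a hard kill (and the per-module memory limit) would require running modules in subprocesses.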

Last updated: 2026-02-08 · Version: NeuroSec v1.0 · Classification: INTERNAL USE ONLY
