# HackerOne Bug Bounty Hunting
Automates HackerOne workflows: scope parsing → parallel testing → PoC validation → submission reports.
## Quick Start
- Input: HackerOne program URL or CSV file
- Parse scope and program guidelines
- Deploy Pentester agents in parallel (one per asset)
- Validate PoCs (poc.py + poc_output.txt required)
- Generate HackerOne-formatted reports
## Workflows

### Option 1: HackerOne URL
- Fetch program data and guidelines
- Download scope CSV
- Parse eligible assets
- Deploy agents in parallel
- Validate PoCs
- Generate submissions
### Option 2: CSV File
- Parse CSV scope file
- Extract eligible_for_submission=true assets
- Collect program guidelines
- Deploy agents
- Validate and generate reports
## Scope CSV Format

Expected columns:

| Column | Description |
| --- | --- |
| `identifier` | Asset URL/domain |
| `asset_type` | URL, WILDCARD, API, or CIDR |
| `eligible_for_submission` | Must be "true" |
| `max_severity` | critical, high, medium, or low |
| `instruction` | Asset-specific notes |
Use tools/csv_parser.py to parse.
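A minimal sketch of the filtering `tools/csv_parser.py` is assumed to perform (its real interface is not shown here): keep only rows explicitly marked `eligible_for_submission=true`.

```python
import csv
from io import StringIO

# Hypothetical stand-in for tools/csv_parser.py: parse the scope CSV and
# keep only rows explicitly marked eligible_for_submission=true.
def parse_scope(csv_text):
    rows = csv.DictReader(StringIO(csv_text))
    return [r for r in rows
            if r.get("eligible_for_submission", "").lower() == "true"]

scope_csv = """identifier,asset_type,eligible_for_submission,max_severity,instruction
app.example.com,URL,true,critical,Main web application
*.example.net,WILDCARD,false,high,Not yet eligible
"""

eligible = parse_scope(scope_csv)
print([r["identifier"] for r in eligible])  # → ['app.example.com']
```

The string comparison is deliberately strict: anything other than a literal "true" (any case) is excluded, matching the "Must be \"true\"" rule above.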
## Agent Deployment

Pentester Agent per asset:

- Passes program-specific guidelines
- Tests all vulnerability types
- Returns validated findings with PoCs

Parallel execution:

- 10 assets = 10 Pentester agents
- Each spawns 30+ specialized agents
- Total: 300+ concurrent tests
- Time: 2-4 hours vs 20-40 hours sequentially
## PoC Validation (CRITICAL)

Every finding MUST have:

- `poc.py`: executable exploit script
- `poc_output.txt`: timestamped execution proof
- `workflow.md`: manual steps (if applicable)
- Evidence screenshots/videos

Experimentation: test edge cases, verify impact, document failures.
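A skeleton of the `poc.py` / `poc_output.txt` pairing described above. The exploit steps and outcome string are placeholders; a real PoC exercises the actual vulnerability and records what was observed.

```python
from datetime import datetime, timezone

# Skeleton PoC: run the exploit, return one timestamped evidence line.
# The outcome string below is a placeholder, not a real result.
def run_poc():
    timestamp = datetime.now(timezone.utc).isoformat()
    # ... exploit steps against the in-scope asset would go here ...
    outcome = "placeholder: replace with observed evidence"
    return f"[{timestamp}] {outcome}"

line = run_poc()
with open("poc_output.txt", "w") as f:  # the proof file reviewers expect
    f.write(line + "\n")
print(line)
```

Writing the timestamped line to `poc_output.txt` on every run keeps the proof file in sync with the script, which is exactly what "Cannot Reproduce" rejections hinge on.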
## Report Format

Required sections (HackerOne standard):

- Summary (2-3 sentences)
- Severity (CVSS + business impact)
- Steps to Reproduce (numbered, clear)
- Visual Evidence (screenshots/video)
- Impact (realistic attack scenario)
- Remediation (actionable fixes)

Use tools/report_validator.py to validate.
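A hypothetical completeness check in the spirit of `tools/report_validator.py` (its real rules are not shown here): every required section heading must appear in the draft.

```python
# Hypothetical validator sketch: flag any required HackerOne section
# whose "## <name>" heading is missing from the report markdown.
REQUIRED = ["Summary", "Severity", "Steps to Reproduce",
            "Visual Evidence", "Impact", "Remediation"]

def missing_sections(report_md):
    return [s for s in REQUIRED if f"## {s}" not in report_md]

draft = "## Summary\n...\n## Severity\n...\n## Steps to Reproduce\n...\n"
print(missing_sections(draft))  # → ['Visual Evidence', 'Impact', 'Remediation']
```

An empty return value means the draft at least has every required section; it says nothing about the quality of their contents.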
## Output Structure

Per OUTPUT.md (Bug Bounty format):

```
outputs/<program>/
├── findings/
│   └── finding-001/
│       ├── report.md        # HackerOne report
│       ├── poc.py           # Validated PoC
│       ├── poc_output.txt   # Proof
│       └── workflow.md      # Manual steps
├── reports/
│   ├── submissions/
│   │   ├── H1_CRITICAL_001.md   # Ready to submit
│   │   └── H1_HIGH_001.md
│   └── SUBMISSION_GUIDE.md
└── evidence/
    ├── screenshots/
    └── http-logs/
```
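Creating that per-program skeleton can be sketched in a few lines; "example-program" and the `outputs` root are placeholders.

```python
import os

# Sketch: create the per-program output skeleton described above.
# "example-program" is a placeholder program name.
def make_skeleton(program, root="outputs"):
    for sub in ("findings", "reports/submissions",
                "evidence/screenshots", "evidence/http-logs"):
        os.makedirs(os.path.join(root, program, sub), exist_ok=True)

make_skeleton("example-program")
print(sorted(os.listdir("outputs/example-program")))  # → ['evidence', 'findings', 'reports']
```

`exist_ok=True` makes the call idempotent, so re-running against an existing program directory is safe.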
## Program Selection

High-value:

- New programs (< 30 days old)
- Fast response times (< 24 hours)
- High bounties (Critical: $5,000+)
- Large attack surface

Avoid:

- Slow response (> 1 week)
- Low bounties (Critical: < $500)
- Overly restrictive scope
## Critical Rules

MUST DO:

- Validate ALL PoCs before reporting
- Sanitize sensitive data
- Test only eligible_for_submission=true assets
- Follow program-specific guidelines
- Generate CVSS scores

NEVER:

- Report without a validated PoC
- Test out-of-scope assets
- Include real user data
- Cause service disruption
## Quality Checklist

Before submission:

- [ ] Working PoC with poc_output.txt
- [ ] Accurate CVSS score
- [ ] Step-by-step reproduction
- [ ] Visual evidence
- [ ] Impact analysis
- [ ] Remediation guidance
- [ ] Sensitive data sanitized
## Tools

- tools/csv_parser.py: parse HackerOne scope CSVs
- tools/report_validator.py: validate report completeness
- /pentest skill: core testing functionality
- Pentester agent: orchestrates testing
## Integration
Uses /pentest skill and Pentester agent. Follows OUTPUT.md for submission format.
## Common Rejections

- Out of Scope: confirm eligible_for_submission=true before testing
- Cannot Reproduce: validate the PoC and include poc_output.txt
- Duplicate: search disclosed reports, submit quickly
- Insufficient Impact: show a realistic attack scenario
## Usage

```
/hackerone <program_url_or_csv_path>
```