researchclaw

Autonomous research pipeline skill for Claude Code. Given a research topic, orchestrates 23 stages end-to-end: literature review, hypothesis generation, experiment design, code generation & sandbox execution, statistical analysis, paper writing, multi-agent peer review, and conference-ready LaTeX export. No babysitting required.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Installation

Install skill "researchclaw" with this command: npx skills add dongsheng123132/researchclaw

ResearchClaw — Autonomous Research Pipeline Skill

Description

Run ResearchClaw's 23-stage autonomous research pipeline. Given a research topic, this skill orchestrates the entire research workflow: literature review → hypothesis generation → experiment design → code generation & execution → result analysis → paper writing → peer review → final export.

Trigger Conditions

Activate this skill when the user:

  • Asks to "research [topic]", "write a paper about [topic]", or "investigate [topic]"
  • Wants to run an autonomous research pipeline
  • Asks to generate a research paper from scratch
  • Mentions "ResearchClaw" by name

Instructions

Prerequisites Check

  1. Verify config file exists:
    ls config.yaml || ls config.researchclaw.example.yaml
    
  2. If no config.yaml, create one from the example:
    cp config.researchclaw.example.yaml config.yaml
    
  3. Ensure the user's LLM API key is configured in config.yaml, either directly under llm.api_key or via the environment variable named by llm.api_key_env.
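A minimal config.yaml sketch, assembled only from the keys this document mentions (llm.api_key, llm.api_key_env, llm.base_url, experiment.mode, experiment.sandbox.python_path); the shipped config.researchclaw.example.yaml is the authoritative reference and may contain more options:

```yaml
llm:
  base_url: https://api.example.com/v1   # your LLM endpoint
  api_key_env: RESEARCHCLAW_API_KEY      # read the key from this env var
  # api_key: sk-...                      # or set the key inline instead

experiment:
  mode: simulated                        # simulated | sandbox | ssh_remote
  sandbox:
    python_path: /usr/bin/python3        # interpreter used in sandbox mode
```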

Running the Pipeline

Option A: CLI (recommended)

researchclaw run --topic "Your research topic here" --auto-approve

Options:

  • --topic / -t: Override the research topic from config
  • --config / -c: Config file path (default: config.yaml)
  • --output / -o: Output directory (default: artifacts/rc-YYYYMMDD-HHMMSS-HASH/)
  • --from-stage: Resume from a specific stage (e.g., PAPER_OUTLINE)
  • --auto-approve: Auto-approve gate stages (5, 9, 20) without human input

Option B: Python API

from researchclaw.pipeline.runner import execute_pipeline
from researchclaw.config import RCConfig
from researchclaw.adapters import AdapterBundle
from pathlib import Path

config = RCConfig.load("config.yaml", check_paths=False)
results = execute_pipeline(
    run_dir=Path("artifacts/my-run"),
    run_id="research-001",
    config=config,
    adapters=AdapterBundle(),
    auto_approve_gates=True,
)

# Check results
for r in results:
    print(f"Stage {r.stage.name}: {r.status.value}")

Option C: Iterative Pipeline (multi-round improvement)

from researchclaw.pipeline.runner import execute_iterative_pipeline

results = execute_iterative_pipeline(
    run_dir=Path("artifacts/my-run"),
    run_id="research-001",
    config=config,
    adapters=AdapterBundle(),
    max_iterations=3,
    convergence_rounds=2,
)
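The two iteration parameters suggest an early-stopping loop. Purely as a hypothetical sketch (the function and its score-based stopping rule below are illustrative, not ResearchClaw's actual API), convergence_rounds can be read as "stop after this many consecutive rounds without improvement":

```python
# Hypothetical sketch: iterate up to max_iterations, but stop early once
# the (e.g. peer-review) score has not improved for convergence_rounds
# consecutive rounds. ResearchClaw's real convergence logic may differ.
def iterate_until_converged(score_fn, max_iterations=3, convergence_rounds=2):
    best, stale, rounds = float("-inf"), 0, 0
    for i in range(max_iterations):
        rounds = i + 1
        score = score_fn(i)
        if score > best:
            best, stale = score, 0  # improved: reset the stall counter
        else:
            stale += 1              # no improvement this round
        if stale >= convergence_rounds:
            break
    return best, rounds

# Toy score function: improves once, then plateaus.
best, rounds = iterate_until_converged(lambda i: min(i, 1), max_iterations=5)
print(best, rounds)  # -> 1 4 (stops after two stale rounds)
```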

Output Structure

After a successful run, the output directory contains:

artifacts/<run-id>/
├── stage-1/                # TOPIC_INIT outputs
├── stage-2/                # PROBLEM_DECOMPOSE outputs
├── ...
├── stage-10/
│   └── experiment.py       # Generated experiment code
├── stage-12/
│   └── runs/run-1.json     # Experiment execution results
├── stage-14/
│   ├── experiment_summary.json  # Aggregated metrics
│   └── results_table.tex        # LaTeX results table
├── stage-17/
│   └── paper_draft.md      # Full paper draft
├── stage-22/
│   └── charts/             # Generated visualizations
│       ├── metric_trajectory.png
│       └── experiment_comparison.png
└── pipeline_summary.json   # Overall pipeline status
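A run can also be checked programmatically from pipeline_summary.json. The JSON shape below is a hypothetical illustration (the real schema may use different keys); the point is to pull out failed stages as candidates for a --from-stage resume:

```python
import json

# Hypothetical pipeline_summary.json content; in practice, json.load()
# the file from artifacts/<run-id>/. Key names here are assumptions.
summary = json.loads("""
{
  "run_id": "research-001",
  "stages": [
    {"name": "TOPIC_INIT", "status": "completed"},
    {"name": "PAPER_OUTLINE", "status": "failed"}
  ]
}
""")

failed = [s["name"] for s in summary["stages"] if s["status"] != "completed"]
print(failed)  # stage names you could pass to --from-stage
```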

Experiment Modes

  • simulated: LLM generates synthetic results (no code execution). Config: experiment.mode: simulated
  • sandbox: Execute generated code locally via subprocess. Config: experiment.mode: sandbox
  • ssh_remote: Execute on a remote GPU server via SSH. Config: experiment.mode: ssh_remote

Troubleshooting

  • Config validation error: Run researchclaw validate --config config.yaml
  • LLM connection failure: Check llm.base_url and API key
  • Sandbox execution failure: Verify experiment.sandbox.python_path exists and has numpy installed
  • Gate rejection: Use --auto-approve or manually approve at stages 5, 9, 20
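The sandbox checks above can be scripted. In this sketch, sys.executable stands in for the configured experiment.sandbox.python_path:

```python
import subprocess
import sys

# Stand-in for experiment.sandbox.python_path from config.yaml.
python_path = sys.executable

# 1. The interpreter exists and runs.
version = subprocess.run([python_path, "--version"],
                         capture_output=True, text=True)
print("interpreter ok:", version.returncode == 0)

# 2. numpy is importable under that interpreter.
numpy_check = subprocess.run([python_path, "-c", "import numpy"],
                             capture_output=True)
print("numpy ok:", numpy_check.returncode == 0)
```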

Tools Required

  • File read/write (for config and artifacts)
  • Bash (for CLI execution)
  • No external MCP servers required for basic operation

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • Arxiv Paper Writer (Coding): Use this skill whenever the user wants Claude Code to write, scaffold, compile, debug, or review an arXiv-style academic paper, especially survey papers with...

  • pdf-to-latex-mineru (Coding): Convert PDF documents to LaTeX source code using MinerU AI extraction. Designed for researchers, academics, and scientists who need to re-edit, re-typeset, o...

  • Beamer Pipeline Public (Research): Convert academic papers or notes into Chinese academic Beamer slides using a seven-phase local pipeline with pluggable agent execution and LaTeX compilation.

  • Auto Research Pipeline (Research): OpenClaw's native automated research pipeline. Starting from a research topic, it produces a complete paper through 23 stages. Each phase runs in an independent sub-agent (context isolation), and phases pass outputs to one another via the filesystem. Trigger phrases: "Research X", run research, literature survey, write a paper, research pipel...