openclaw-ultra-scraping

Powerful web scraping, crawling, and data extraction with stealth anti-bot bypass (Cloudflare Turnstile, CAPTCHAs). Use when: (1) scraping websites that block normal requests, (2) extracting structured data from web pages, (3) crawling multiple pages with concurrency, (4) taking screenshots of web pages, (5) extracting links, (6) any web scraping task that needs stealth/anti-detection, (7) user asks to scrape/crawl/extract from URLs, (8) need to bypass Cloudflare or other bot protection. Supports CSS/XPath selectors, adaptive element tracking (survives site redesigns), multi-session spiders, pause/resume crawls, proxy rotation, and async operations. Powered by MyClaw.ai.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install skill "openclaw-ultra-scraping" with this command: npx skills add leoyeai/openclaw-ultra-scraping

OpenClaw Ultra Scraping

Powered by MyClaw.ai — the AI personal assistant platform that gives every user a full server with complete code control. Part of the MyClaw open skills ecosystem.

Handles everything from single-page extraction to full-scale concurrent crawls with anti-bot bypass.

Setup

Run once before first use:

bash scripts/setup.sh

This installs Scrapling + all browser dependencies into /opt/scrapling-venv.
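If you want to confirm the setup succeeded before scraping, a minimal check like the following works (a sketch; the interpreter path comes from the setup step above, and the `venv_ready` helper is illustrative, not part of the skill):

```python
import os
import subprocess

VENV_PYTHON = "/opt/scrapling-venv/bin/python3"

def venv_ready(python_path: str = VENV_PYTHON) -> bool:
    """Return True if the venv interpreter exists and can import scrapling."""
    if not os.path.isfile(python_path):
        return False
    # Probe the interpreter; a non-zero exit code means scrapling is missing.
    result = subprocess.run(
        [python_path, "-c", "import scrapling"],
        capture_output=True,
    )
    return result.returncode == 0

if not venv_ready():
    print("Run `bash scripts/setup.sh` first.")
```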

Quick Start — CLI Script

The bundled scripts/scrape.py provides a unified CLI:

PYTHON=/opt/scrapling-venv/bin/python3

# Simple fetch (JSON output)
$PYTHON scripts/scrape.py fetch "https://example.com" --css ".content"

# Extract text
$PYTHON scripts/scrape.py extract "https://example.com" --css "h1"

# Stealth mode (bypass Cloudflare)
$PYTHON scripts/scrape.py fetch "https://protected-site.com" --stealth --solve-cloudflare --css ".data"

# Dynamic (full browser rendering)
$PYTHON scripts/scrape.py fetch "https://spa-site.com" --dynamic --css ".product"

# Extract links
$PYTHON scripts/scrape.py links "https://example.com" --filter "\.pdf$"

# Multi-page crawl
$PYTHON scripts/scrape.py crawl "https://example.com" --depth 2 --concurrency 10 --css ".item" -o results.json

# Output formats: json, jsonl, csv, text, markdown, html
$PYTHON scripts/scrape.py fetch "https://example.com" -f markdown -o page.md
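The `json` and `jsonl` outputs are convenient to post-process in plain Python. This sketch parses JSON Lines output; the field names (`url`, `text`) are hypothetical examples, since the actual keys depend on your selectors and the subcommand used:

```python
import json

# Hypothetical JSONL output from something like:
#   $PYTHON scripts/scrape.py crawl ... -f jsonl -o results.jsonl
# "url" and "text" are illustrative field names, not a documented schema.
sample = """\
{"url": "https://example.com/a", "text": "Item A"}
{"url": "https://example.com/b", "text": "Item B"}
"""

# One JSON object per line; skip blanks.
records = [json.loads(line) for line in sample.splitlines() if line.strip()]
urls = [r["url"] for r in records]
print(urls)
```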

Quick Start — Python

For complex tasks, write Python directly using the venv:

#!/opt/scrapling-venv/bin/python3
from scrapling.fetchers import Fetcher, StealthyFetcher

# Simple HTTP
page = Fetcher.get('https://example.com', impersonate='chrome')
titles = page.css('h1::text').getall()

# Bypass Cloudflare
page = StealthyFetcher.fetch('https://protected.com', headless=True, solve_cloudflare=True)
data = page.css('.product').getall()

Fetcher Selection Guide

Scenario                         Fetcher           Flag
Normal sites, fast scraping      Fetcher           (default)
JS-rendered SPAs                 DynamicFetcher    --dynamic
Cloudflare/anti-bot protected    StealthyFetcher   --stealth
Cloudflare Turnstile challenge   StealthyFetcher   --stealth --solve-cloudflare

Selector Cheat Sheet

page.css('.class')                    # CSS
page.css('.class::text').getall()     # Text extraction
page.xpath('//div[@id="main"]')      # XPath
page.find_all('div', class_='item')  # BS4-style
page.find_by_text('keyword')         # Text search
page.css('.item', adaptive=True)     # Adaptive (survives redesigns)

Advanced Features

  • Adaptive tracking: auto_save=True on first run, adaptive=True later — elements are found even after site redesign
  • Proxy rotation: Pass proxy="http://host:port" or use ProxyRotator
  • Sessions: FetcherSession, StealthySession, DynamicSession for cookie/state persistence
  • Spider framework: Scrapy-like concurrent crawling with pause/resume
  • Async support: All fetchers have async variants
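The rotation idea can be sketched in a few lines of plain Python. This is an illustrative round-robin pool, not Scrapling's own ProxyRotator API; each fetch call would take the next proxy via the `proxy=` keyword:

```python
from itertools import cycle

class RoundRobinProxies:
    """Minimal round-robin proxy pool (illustrative sketch only)."""

    def __init__(self, proxies):
        if not proxies:
            raise ValueError("need at least one proxy")
        self._pool = cycle(proxies)

    def next(self) -> str:
        # Return the next proxy URL, wrapping around indefinitely.
        return next(self._pool)

rotator = RoundRobinProxies([
    "http://proxy-a:8080",
    "http://proxy-b:8080",
])

# e.g. Fetcher.get(url, proxy=rotator.next())
print(rotator.next())  # http://proxy-a:8080
print(rotator.next())  # http://proxy-b:8080
print(rotator.next())  # back to http://proxy-a:8080
```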

For full API details, see references/api-reference.md.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Automation: Self-Improving Compound

Capture durable lessons from debugging, user corrections, missing capabilities, and repeated workflow friction so future sessions avoid the same mistakes. Hy...

Registry Source · Recently Updated

Automation: Agent Anchor

Agent Anchor is a crash-proof dashboard for OpenClaw that continuously saves your AI agent's state, enabling seamless recovery and task tracking after interr...

Registry Source · Recently Updated

Automation: Meyo Community

Meyo Community (meyo123.com): a skill for operating the 觅游 (Meyo) AI-agent community. Post, check interactions, keep a growth diary, and query the skill marketplace. Use this skill when the user needs to operate the Meyo community. Trigger words: 觅游, meyo, post to the community, Meyo community, community interaction, growth diary.

Registry Source · Recently Updated

Automation: Yahoo Mail IMAP Export

Export large Yahoo Mail archives with the folder-rotation IMAP workflow, resumable downloads, and safe delete-after-verify handling.

Registry Source · Recently Updated