deepreader

The default web content reader for OpenClaw agents. Automatically detects URLs in messages, fetches content using specialized parsers, and saves clean Markdown with YAML frontmatter to agent memory.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Installation

Copy this command and send it to your AI assistant:

npx skills add astonysh/openclaw-deepreeder/astonysh-openclaw-deepreeder-deepreader

DeepReader


Use when

  • A user shares a tweet, thread, X article, or X profile and you need to read its content

  • A user shares a Reddit post and you need the discussion + top comments

  • A user shares a YouTube video and you need the transcript

  • A user shares any blog, article, or documentation URL and you need the text

  • You need to batch-read multiple URLs from a single message

Supported sources

| Source | Method | API key? |
| --- | --- | --- |
| Twitter / X (tweets + profiles) | FxTwitter API + Nitter fallback | None |
| Reddit | `.json` suffix API | None |
| YouTube | youtube-transcript-api | None |
| Any URL | Trafilatura + BeautifulSoup | None |
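The Reddit row relies on a public convention: appending `.json` to a post URL returns the post and its comment tree as JSON, no API key required. A minimal sketch of that technique (helper names here are illustrative, not the skill's actual API):

```python
import json
import urllib.request

def reddit_json_url(post_url: str) -> str:
    # Drop any query string, then append Reddit's ".json" suffix.
    base = post_url.split("?", 1)[0].rstrip("/")
    return base + ".json"

def fetch_reddit_post(post_url: str) -> dict:
    # Reddit expects a descriptive User-Agent; anonymous reads need no key.
    req = urllib.request.Request(
        reddit_json_url(post_url),
        headers={"User-Agent": "deepreader-sketch/0.1"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

The response is a two-element listing (post, then comments), which a parser can flatten into Markdown.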

Usage

```python
from deepreader_skill import run

# Automatic: triggered when a message contains URLs
result = run("Check this out: https://x.com/user/status/123456")

# X profile snapshot
result = run("https://x.com/thdxr")

# Reddit post with comments
result = run("https://www.reddit.com/r/python/comments/abc123/my_post/")

# YouTube transcript
result = run("https://youtube.com/watch?v=dQw4w9WgXcQ")

# Any webpage
result = run("https://example.com/blog/interesting-article")

# Multiple URLs at once
result = run("""
https://x.com/user/status/123456
https://www.reddit.com/r/MachineLearning/comments/xyz789/
https://example.com/article
""")
```

Output

Content is saved as .md files with structured YAML frontmatter:


```yaml
---
title: "Tweet by @user"
source_url: "https://x.com/user/status/123456"
domain: "x.com"
parser: "twitter"
ingested_at: "2026-02-16T12:00:00Z"
content_hash: "sha256:..."
word_count: 350
---
```
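Every field above except the title can be derived mechanically from the fetched text. A hedged sketch of how such frontmatter might be assembled (the function name and exact schema are assumptions based on the example):

```python
import hashlib
from datetime import datetime, timezone

def build_frontmatter(title: str, source_url: str, domain: str,
                      parser: str, text: str) -> dict:
    # content_hash is a SHA-256 of the Markdown body; word_count is a
    # whitespace token count, matching the fields shown above.
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return {
        "title": title,
        "source_url": source_url,
        "domain": domain,
        "parser": parser,
        "ingested_at": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "content_hash": f"sha256:{digest}",
        "word_count": len(text.split()),
    }
```

Hashing the body lets an agent skip re-ingesting a page whose content has not changed.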

Configuration

| Variable | Default | Description |
| --- | --- | --- |
| DEEPREEDER_MEMORY_PATH | ../../memory/inbox/ | Where to save ingested content (absolute path, or relative to the repo root) |
| DEEPREEDER_LOG_LEVEL | INFO | Logging verbosity (DEBUG, INFO, WARNING, ERROR) |

How it works

1. URL detected in a message
2. Twitter/X → FxTwitter API, with Nitter as fallback
3. Reddit → `.json` suffix API
4. YouTube → youtube-transcript-api
5. Anything else → Trafilatura (generic extractor)

Triggers automatically when any message contains https:// or http://.
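The routing above can be sketched as URL extraction followed by a hostname dispatch; the parser labels are hypothetical names chosen to match the table of supported sources:

```python
import re
from urllib.parse import urlparse

# The automatic trigger: any http:// or https:// substring in a message.
URL_RE = re.compile(r"https?://\S+")

def extract_urls(message: str) -> list:
    return URL_RE.findall(message)

def pick_parser(url: str) -> str:
    host = urlparse(url).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    if host in ("x.com", "twitter.com"):
        return "twitter"   # FxTwitter API, Nitter fallback
    if host.endswith("reddit.com"):
        return "reddit"    # .json suffix API
    if host in ("youtube.com", "youtu.be"):
        return "youtube"   # youtube-transcript-api
    return "generic"       # Trafilatura + BeautifulSoup
```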

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
