crypto-news-http-api

Use when calling the Crypto News Analyzer HTTP API for async analysis jobs, semantic search, datasource management, or health checks from OpenClaw.


Crypto News HTTP API Skill

Use this skill to call the Crypto News Analyzer HTTP API from OpenClaw.

When to Use

Use this skill when you need to call https://news.tradao.xyz or a compatible private deployment.

Typical triggers:

  • Run asynchronous crypto news analysis over a time window
  • Run asynchronous semantic search for a freeform topic query
  • Poll an API job until it finishes and then fetch the final result
  • Create, list, or delete datasources through the HTTP API
  • Query hidden-channel intelligence entries (canonical knowledge, semantic search, raw evidence)
  • Check service health before or after an API workflow

Quick Reference

Authentication is Bearer token style: send Authorization: Bearer <API_KEY> with every request.

POST /analyze creates a job and returns immediately. It does not return the final report. Poll status, then fetch the result.

Workflow: POST /analyze -> GET /analyze/{job_id} -> GET /analyze/{job_id}/result

Jobs move through these states: queued, running, completed, failed.

POST /semantic-search creates a job, returns 202 Accepted, and includes status_url, result_url, plus a Retry-After header.

Semantic workflow: POST /semantic-search -> GET /semantic-search/{job_id} -> GET /semantic-search/{job_id}/result

Semantic search requires PostgreSQL with pgvector. SQLite runtime is unsupported.

For detailed guides, see the AI Analyze API Guide (docs/AI_ANALYZE_API_GUIDE.md) and the Semantic Search API Guide (docs/SEMANTIC_SEARCH_API_GUIDE.md).

OpenClaw Runtime

This skill declares metadata.openclaw.primaryEnv: API_KEY. In OpenClaw, inject the bearer token through ~/.openclaw/openclaw.json:

{
  "skills": {
    "entries": {
      "crypto-news-http-api": {
        "enabled": true,
        "apiKey": "YOUR_API_KEY"
      }
    }
  }
}

If apiKey is unavailable, do not send unauthenticated requests. Ask the operator to configure the token first.

If you are using a non-production deployment, replace https://news.tradao.xyz with the correct base URL before issuing requests.

Analyze Workflow

Create an analysis job by posting to /analyze with hours and user_id. The server responds with 202 Accepted, a job_id, status_url, and result_url.

Poll the status endpoint until the job reaches completed or failed. Do not expect the analysis report in the initial POST response. Once completed, fetch the result URL.
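The create-then-poll loop above can be sketched as follows. This is a minimal illustration, not the service's client library: the HTTP transport (a GET on the returned status_url with the Bearer header) is left to a caller-supplied callable, and the function name poll_until_done is hypothetical. Only the job states and terminal conditions come from this document.

```python
import time

# Terminal job states per the lifecycle: queued, running, completed, failed.
TERMINAL_STATES = {"completed", "failed"}

def poll_until_done(get_status, interval=5.0, timeout=600.0, sleep=time.sleep):
    """Poll a job-status callable until the job reaches a terminal state.

    get_status: zero-arg callable returning the job's current status string
    (e.g. it might GET status_url with the Bearer token and read "status").
    Returns the terminal status, or raises TimeoutError.
    """
    waited = 0.0
    while True:
        status = get_status()
        if status in TERMINAL_STATES:
            return status
        if waited >= timeout:
            raise TimeoutError(f"job still {status!r} after {timeout}s")
        sleep(interval)
        waited += interval
```

Only after poll_until_done returns "completed" should the client fetch result_url; a "failed" return means there is no result to fetch.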

Semantic Search

Create a semantic search job by posting to /semantic-search with hours, query, and user_id. The server responds with 202 Accepted, a job_id, status_url, and result_url. Semantic search job IDs start with semantic_search_job_.

Poll the status endpoint until the job reaches completed or failed, then fetch the report from the result URL. Use the status field as the source of truth for lifecycle state; success becomes true only when the job is completed successfully.

Request rules:

  • hours must be a positive integer
  • query is required, trimmed, and capped at 300 characters
  • query cannot be blank or whitespace-only
  • user_id must match ^[A-Za-z0-9_-]{1,128}$
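The request rules above can be checked client-side before posting. A sketch, with one assumption flagged: the document says query is "capped at 300 characters" without saying whether the server truncates or rejects over-long input, so this version rejects.

```python
import re

USER_ID_RE = re.compile(r"^[A-Za-z0-9_-]{1,128}$")
MAX_QUERY_LEN = 300

def validate_semantic_search_request(hours, query, user_id):
    """Validate a /semantic-search payload against the documented rules.

    Returns the normalized payload dict, or raises ValueError.
    """
    # bool is an int subclass in Python, so exclude it explicitly.
    if not isinstance(hours, int) or isinstance(hours, bool) or hours <= 0:
        raise ValueError("hours must be a positive integer")
    query = query.strip()
    if not query:
        raise ValueError("query cannot be blank or whitespace-only")
    if len(query) > MAX_QUERY_LEN:
        raise ValueError(f"query is capped at {MAX_QUERY_LEN} characters")
    if not USER_ID_RE.fullmatch(user_id):
        raise ValueError("user_id must match ^[A-Za-z0-9_-]{1,128}$")
    return {"hours": hours, "query": query, "user_id": user_id}
```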

Operational constraints:

  • Semantic search is PostgreSQL-only and returns 503 when the backend does not support pgvector
  • The API uses vector similarity over stored content embeddings and may combine that with keyword retrieval
  • Query decomposition is capped at 4 subqueries
  • Final retained results are capped at 200 unique items
  • Embedding generation requires OPENAI_API_KEY; query planning and report synthesis require KIMI_API_KEY or GROK_API_KEY
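The 200-item retention cap can be pictured as a rank-preserving dedup-then-truncate step. The server's internal retrieval mechanics are not documented; this sketch only illustrates the cap, and the function name and "id" key are assumptions.

```python
MAX_RETAINED = 200  # documented cap on final retained unique items

def cap_results(items, key="id"):
    """Deduplicate retrieved items by key, preserving rank order,
    then cap the retained set at MAX_RETAINED unique items."""
    seen, retained = set(), []
    for item in items:
        k = item[key]
        if k in seen:
            continue
        seen.add(k)
        retained.append(item)
        if len(retained) == MAX_RETAINED:
            break
    return retained
```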

The result body contains the fields query, normalized_intent, matched_count, retained_count, time_window_hours, and report, where report holds the Markdown report itself.

Datasource Management

Configure news sources through the datasource API. Create sources with POST /datasources, list them with GET /datasources, and remove them with DELETE /datasources/{id}. All datasource routes require Bearer auth.

Tags help organize sources. Each datasource accepts up to 16 unique tags. Each tag is capped at 32 characters. Tags are normalized to lowercase and deduplicated automatically.
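The tag rules above amount to a normalize step a client can mirror before posting. A sketch under one assumption: the document does not say whether the server rejects or silently drops tags beyond the caps, so this version rejects.

```python
MAX_TAGS = 16     # documented cap per datasource
MAX_TAG_LEN = 32  # documented cap per tag

def normalize_tags(tags):
    """Lowercase, trim, and deduplicate datasource tags,
    enforcing the per-tag and per-datasource caps."""
    seen, out = set(), []
    for tag in tags:
        tag = tag.strip().lower()
        if not tag or tag in seen:
            continue
        if len(tag) > MAX_TAG_LEN:
            raise ValueError(f"tag exceeds {MAX_TAG_LEN} characters: {tag!r}")
        seen.add(tag)
        out.append(tag)
    if len(out) > MAX_TAGS:
        raise ValueError(f"at most {MAX_TAGS} tags per datasource")
    return out
```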

List responses include only safe summaries. For rest_api type datasources, secrets are redacted and counts replace raw credential fields. This prevents accidental credential exposure when reviewing configurations.
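The redaction behavior can be sketched as a summarization pass. The actual field names holding credentials are not documented, so SECRET_FIELDS below is entirely hypothetical; the point is the shape: secrets are dropped and a count takes their place.

```python
# Hypothetical credential field names -- the real schema is not documented here.
SECRET_FIELDS = ("api_key", "token", "password", "headers")

def summarize_datasource(ds):
    """Produce a list-safe summary: secret-bearing fields are dropped,
    and for rest_api sources a count replaces the raw credential fields."""
    summary = {k: v for k, v in ds.items() if k not in SECRET_FIELDS}
    if ds.get("type") == "rest_api":
        summary["secret_field_count"] = sum(1 for k in SECRET_FIELDS if k in ds)
    return summary
```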

Intelligence Query

Query the hidden-channel intelligence knowledge base built by the private ingestion pipeline. All intelligence routes require Bearer auth.

Five synchronous endpoints provide access to canonical knowledge entries, searchable labels, and raw evidence:

  • GET /intelligence/entries — List canonical entries with time window, type, and label filters, paginated
  • GET /intelligence/labels — List searchable primary labels
  • GET /intelligence/entries/{entry_id} — Get a single entry with optional raw evidence text (include_raw=true)
  • GET /intelligence/search — Semantic search using q parameter, ranked by vector similarity + recency + confidence, paginated
  • GET /intelligence/raw/{raw_item_id} — Get original raw text for a collected item (30-day TTL)

These endpoints are synchronous — no async job/poll flow. Results return immediately.

Raw text is preserved byte-for-byte within its 30-day TTL. After the TTL expires, raw_text is null and is_expired is true. Canonical structured knowledge remains queryable indefinitely.

The window parameter accepts <N>h (hours) or <N>d (days), e.g. 7d, 24h. Entry types are channel and slang. Primary labels: AI, crypto, plus several Chinese-category labels (see reference for full enum).
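The window grammar above is small enough to parse directly. A sketch (function name assumed; only the <N>h / <N>d forms come from this document):

```python
import re

WINDOW_RE = re.compile(r"^(\d+)([hd])$")

def parse_window(window):
    """Parse a window string like '24h' or '7d' into a number of hours."""
    m = WINDOW_RE.fullmatch(window)
    if not m:
        raise ValueError("window must be <N>h or <N>d, e.g. 24h or 7d")
    n, unit = int(m.group(1)), m.group(2)
    return n * 24 if unit == "d" else n
```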

For full parameter details, response schemas, and examples, see the Intelligence Query Reference.

Telegram Webhook

The webhook endpoint exists for maintainer-level Telegram integration. It is not the primary path for day-to-day operators. Regular users should interact through the API routes or Telegram slash commands instead.

When processing webhook updates, validate the X-Telegram-Bot-Api-Secret-Token header to confirm the request originates from Telegram.
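That header check can be sketched with a constant-time comparison, which avoids leaking secret length or prefix information through timing. The function name and headers dict shape are assumptions; hmac.compare_digest is the standard-library primitive for this.

```python
import hmac

def verify_telegram_secret(headers, expected_secret):
    """Check X-Telegram-Bot-Api-Secret-Token against the configured secret
    using a constant-time comparison; False when missing or wrong."""
    supplied = headers.get("X-Telegram-Bot-Api-Secret-Token", "")
    return hmac.compare_digest(supplied, expected_secret)
```

Reject the update outright when this returns False; do not process the payload first and validate later.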

Endpoint Index

Supported HTTP routes:

  • GET /health - Service health check
  • POST /analyze - Create an analysis job (async, returns 202)
  • GET /analyze/{job_id} - Check job status
  • GET /analyze/{job_id}/result - Retrieve completed job results
  • POST /semantic-search - Create a semantic search job (async, returns 202)
  • GET /semantic-search/{job_id} - Check semantic search job status
  • GET /semantic-search/{job_id}/result - Retrieve completed semantic search results
  • POST /datasources - Create a datasource
  • GET /datasources - List all datasources
  • DELETE /datasources/{id} - Delete a datasource
  • POST /telegram/webhook - Telegram webhook receiver
  • GET /intelligence/entries - List intelligence entries (synchronous, Bearer-protected)
  • GET /intelligence/labels - List searchable intelligence primary labels
  • GET /intelligence/entries/{entry_id} - Get intelligence entry detail with optional raw evidence
  • GET /intelligence/search - Semantic search across intelligence entries (paginated with page/page_size)
  • GET /intelligence/raw/{raw_item_id} - Get raw intelligence item by ID

Non-Goals

This skill does not cover:

  • Telegram slash commands (use the Telegram bot directly)
  • Autogenerated documentation routes (/docs, /redoc, /openapi.json)
  • Deprecated compatibility aliases (api-server, crypto-news-api)
  • Direct embedding backfill operations beyond pointing you to the documented command

These surfaces exist but are intentionally excluded from this API-focused skill.

Updating

Keep this skill aligned with the live HTTP routes in api_server.py, the AI Analyze API Guide at docs/AI_ANALYZE_API_GUIDE.md, and the semantic search guide at docs/SEMANTIC_SEARCH_API_GUIDE.md.

When documentation disagrees with implementation, trust the code and tests over prose docs. Source precedence: code and tests first, then reference files, then guides.

