# Crypto News HTTP API Skill
Use this skill to call the Crypto News Analyzer HTTP API from OpenClaw.
## When to Use
Use this skill when you need to call https://news.tradao.xyz or a compatible private deployment.
Typical triggers:
- Run asynchronous crypto news analysis over a time window
- Run asynchronous semantic search for a freeform topic query
- Poll an API job until it finishes and then fetch the final result
- Create, list, or delete datasources through the HTTP API
- Query hidden-channel intelligence entries (canonical knowledge, semantic search, raw evidence)
- Check service health before or after an API workflow
## Quick Reference
Authentication is Bearer token style: send `Authorization: Bearer <API_KEY>` with every request.
`POST /analyze` creates a job and returns immediately. It does not return the final report. Poll status, then fetch the result.
Workflow: `POST /analyze` -> `GET /analyze/{job_id}` -> `GET /analyze/{job_id}/result`
Jobs move through these states: `queued`, `running`, `completed`, `failed`.
`POST /semantic-search` creates a job, returns `202 Accepted`, and includes `status_url` and `result_url`, plus a `Retry-After` header.
Semantic workflow: `POST /semantic-search` -> `GET /semantic-search/{job_id}` -> `GET /semantic-search/{job_id}/result`
Semantic search requires PostgreSQL with pgvector. SQLite runtime is unsupported.
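Both workflows share the same create, poll, fetch lifecycle. A minimal polling helper might look like the sketch below; `poll_job` and its parameters are illustrative, and only the four job states come from the API description above.

```python
import time

def poll_job(get_status, interval=2.0, timeout=300.0):
    """Poll a status callable until the job reaches a terminal state.

    get_status() should GET the job's status_url and return the decoded
    JSON body, which carries a "status" field: queued, running,
    completed, or failed.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = get_status()
        if job["status"] in ("completed", "failed"):
            return job  # terminal state: caller can now fetch result_url
        time.sleep(interval)  # honor a Retry-After header here if one was sent
    raise TimeoutError("job did not reach a terminal state in time")
```

Only after `poll_job` returns a `completed` job should the result URL be fetched; a `failed` job has no result to retrieve.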
For detailed guides, see:
- Analyze Workflow Reference
- Semantic Search Reference
- Datasource Management Reference
- Intelligence Query Reference
- Operations and Maintenance Reference
## OpenClaw Runtime
This skill declares `metadata.openclaw.primaryEnv: API_KEY`. In OpenClaw, inject the bearer token through `~/.openclaw/openclaw.json`:
```json
{
  "skills": {
    "entries": {
      "crypto-news-http-api": {
        "enabled": true,
        "apiKey": "YOUR_API_KEY"
      }
    }
  }
}
```
If `apiKey` is unavailable, do not send unauthenticated requests. Ask the operator to configure the token first.
If you are using a non-production deployment, replace https://news.tradao.xyz with the correct base URL before issuing requests.
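For a script running outside OpenClaw, the same token can be read programmatically. This is a sketch assuming the config file is plain JSON; the path and key names follow the snippet above, and `load_api_key` is an illustrative helper, not part of OpenClaw.

```python
import json
import pathlib

def load_api_key(path="~/.openclaw/openclaw.json"):
    """Read this skill's apiKey from the OpenClaw config file."""
    cfg = json.loads(pathlib.Path(path).expanduser().read_text())
    entry = cfg["skills"]["entries"]["crypto-news-http-api"]
    return entry.get("apiKey")  # None if the operator has not set it yet
```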
## Analyze Workflow
Create an analysis job by posting to `/analyze` with `hours` and `user_id`. The server responds with `202 Accepted`, a `job_id`, `status_url`, and `result_url`.
Poll the status endpoint until the job reaches completed or failed. Do not expect the analysis report in the initial POST response. Once completed, fetch the result URL.
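The creation request can be sketched with the standard library as follows. The `hours` and `user_id` fields come from the description above; the helper name and the `agent_1` user ID are illustrative, and the request is built but not sent.

```python
import json
import urllib.request

API_BASE = "https://news.tradao.xyz"  # swap for your deployment's base URL

def build_analyze_request(api_key, hours, user_id):
    """Build (but do not send) the POST /analyze request."""
    body = json.dumps({"hours": hours, "user_id": user_id}).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/analyze",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_analyze_request("YOUR_API_KEY", 24, "agent_1")
# urllib.request.urlopen(req) would start the job; the 202 response body
# carries job_id, status_url, and result_url for the polling phase.
```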
## Semantic Search
Create a semantic search job by posting to `/semantic-search` with `hours`, `query`, and `user_id`. The server responds with `202 Accepted`, a `job_id`, `status_url`, and `result_url`. Semantic search job IDs start with `semantic_search_job_`.
Poll the status endpoint until the job reaches `completed` or `failed`, then fetch the report from the result URL. Use the `status` field as the source of truth for lifecycle state; `success` becomes `true` only when the job completes successfully.
Request rules:
- `hours` must be a positive integer
- `query` is required, trimmed, and capped at 300 characters
- `query` cannot be blank or whitespace-only
- `user_id` must match `^[A-Za-z0-9_-]{1,128}$`
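Those request rules can be pre-checked client-side before issuing the POST. This is a sketch mirroring the stated constraints, not the server's actual validation code; `validate_search_params` is an illustrative name.

```python
import re

USER_ID_RE = re.compile(r"^[A-Za-z0-9_-]{1,128}$")
MAX_QUERY_LEN = 300

def validate_search_params(hours, query, user_id):
    """Reject obviously invalid /semantic-search parameters before sending."""
    if not isinstance(hours, int) or hours <= 0:
        raise ValueError("hours must be a positive integer")
    query = query.strip()  # the server trims, so mirror that here
    if not query:
        raise ValueError("query cannot be blank or whitespace-only")
    if len(query) > MAX_QUERY_LEN:
        raise ValueError(f"query is capped at {MAX_QUERY_LEN} characters")
    if not USER_ID_RE.fullmatch(user_id):
        raise ValueError("user_id must match ^[A-Za-z0-9_-]{1,128}$")
    return {"hours": hours, "query": query, "user_id": user_id}
```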
Operational constraints:
- Semantic search is PostgreSQL-only and returns `503` when the backend does not support pgvector
- The API uses vector similarity over stored content embeddings and may combine that with keyword retrieval
- Query decomposition is capped at 4 subqueries
- Final retained results are capped at 200 unique items
- Embedding generation requires `OPENAI_API_KEY`; query planning and report synthesis require `KIMI_API_KEY` or `GROK_API_KEY`
The result body includes `query`, `normalized_intent`, `matched_count`, `retained_count`, `time_window_hours`, and `report`, where `report` is the Markdown report itself.
## Datasource Management
Configure news sources through the datasource API. Create sources with `POST /datasources`, list them with `GET /datasources`, and remove them with `DELETE /datasources/{id}`. All datasource routes require Bearer auth.
Tags help organize sources. Each datasource accepts up to 16 unique tags, each capped at 32 characters; tags are normalized to lowercase and deduplicated automatically.
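A client-side sketch of those tag rules is below. The server's actual normalization may differ in details (for example, whether an oversized tag is rejected or truncated), so treat `normalize_tags` as an illustrative pre-check only.

```python
def normalize_tags(tags, max_tags=16, max_len=32):
    """Lowercase, dedupe, and cap a tag list per the rules above."""
    seen, out = set(), []
    for tag in tags:
        tag = tag.strip().lower()  # tags are normalized to lowercase
        if not tag or tag in seen:
            continue  # duplicates are dropped, not rejected
        if len(tag) > max_len:
            raise ValueError(f"tag exceeds {max_len} characters: {tag!r}")
        seen.add(tag)
        out.append(tag)
    if len(out) > max_tags:
        raise ValueError(f"at most {max_tags} unique tags allowed")
    return out
```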
List responses include only safe summaries. For `rest_api` type datasources, secrets are redacted and counts replace raw credential fields. This prevents accidental credential exposure when reviewing configurations.
## Intelligence Query
Query the hidden-channel intelligence knowledge base built by the private ingestion pipeline. All intelligence routes require Bearer auth.
Five synchronous endpoints provide access to canonical knowledge entries, searchable labels, and raw evidence:
- `GET /intelligence/entries` - List canonical entries with time window, type, and label filters, paginated
- `GET /intelligence/labels` - List searchable primary labels
- `GET /intelligence/entries/{entry_id}` - Get a single entry with optional raw evidence text (`include_raw=true`)
- `GET /intelligence/search` - Semantic search using the `q` parameter, ranked by vector similarity + recency + confidence, paginated
- `GET /intelligence/raw/{raw_item_id}` - Get original raw text for a collected item (30-day TTL)
These endpoints are synchronous: there is no async job/poll flow, and results return immediately.
Raw text is preserved byte-for-byte during the 30-day TTL. After the TTL expires, `raw_text` is null and `is_expired` is true. Canonical structured knowledge remains queryable indefinitely.
The `window` parameter accepts `<N>h` (hours) or `<N>d` (days), e.g. `7d`, `24h`. Entry types are `channel` and `slang`. Primary labels: `AI`, `crypto`, plus several Chinese-category labels (see reference for full enum).
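The `<N>h` / `<N>d` syntax is small enough to parse in a few lines. This sketch assumes the obvious semantics (days multiply to hours); `parse_window` is an illustrative client-side helper, not the server's parser.

```python
import re

def parse_window(window):
    """Convert a <N>h / <N>d window string into a number of hours."""
    m = re.fullmatch(r"(\d+)([hd])", window)
    if not m:
        raise ValueError("window must look like 24h or 7d")
    n, unit = int(m.group(1)), m.group(2)
    return n * 24 if unit == "d" else n
```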
For full parameter details, response schemas, and examples, see the Intelligence Query Reference.
## Telegram Webhook
The webhook endpoint exists for maintainer-level Telegram integration. It is not the primary path for day-to-day operators. Regular users should interact through the API routes or Telegram slash commands instead.
When processing webhook updates, validate the `X-Telegram-Bot-Api-Secret-Token` header to confirm the request originates from Telegram.
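That header check can be done in constant time to avoid leaking the secret through timing. A minimal sketch, assuming `headers` is a plain dict of incoming header names to values; the function name is illustrative.

```python
import hmac

def is_telegram_request(headers, expected_secret):
    """Constant-time comparison of the webhook secret token header."""
    supplied = headers.get("X-Telegram-Bot-Api-Secret-Token", "")
    return hmac.compare_digest(supplied, expected_secret)
```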
## Endpoint Index
Supported HTTP routes:
- `GET /health` - Service health check
- `POST /analyze` - Create an analysis job (async, returns 202)
- `GET /analyze/{job_id}` - Check job status
- `GET /analyze/{job_id}/result` - Retrieve completed job results
- `POST /semantic-search` - Create a semantic search job (async, returns 202)
- `GET /semantic-search/{job_id}` - Check semantic search job status
- `GET /semantic-search/{job_id}/result` - Retrieve completed semantic search results
- `POST /datasources` - Create a datasource
- `GET /datasources` - List all datasources
- `DELETE /datasources/{id}` - Delete a datasource
- `POST /telegram/webhook` - Telegram webhook receiver
- `GET /intelligence/entries` - List intelligence entries (synchronous, Bearer-protected)
- `GET /intelligence/labels` - List searchable intelligence primary labels
- `GET /intelligence/entries/{entry_id}` - Get intelligence entry detail with optional raw evidence
- `GET /intelligence/search` - Semantic search across intelligence entries (paginated with `page`/`page_size`)
- `GET /intelligence/raw/{raw_item_id}` - Get raw intelligence item by ID
## Non-Goals
This skill does not cover:
- Telegram slash commands (use the Telegram bot directly)
- Autogenerated documentation routes (`/docs`, `/redoc`, `/openapi.json`)
- Deprecated compatibility aliases (`api-server`, `crypto-news-api`)
- Direct embedding backfill operations beyond pointing you to the documented command
These surfaces exist but are intentionally excluded from this API-focused skill.
## Updating
Keep this skill aligned with the live HTTP routes in `api_server.py`, the AI Analyze API Guide at `docs/AI_ANALYZE_API_GUIDE.md`, and the semantic search guide at `docs/SEMANTIC_SEARCH_API_GUIDE.md`.
When documentation disagrees with implementation, trust the code and tests over prose docs. Source precedence: code and tests first, then reference files, then guides.