deep-researcher

Domain Research Report Generator — runs multi-angle deep research on any topic and produces a structured, decision-ready report with evidence-scored consensus, divergences, key numbers, failed paths, and decision readiness.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the following and send it to your AI assistant to install the skill:

Install skill "deep-researcher" with this command: npx skills add zhihua-yang/deep-researcher-pro

SKILL: Deep Researcher v1 — Domain Research Report Generator

Trigger: fires automatically when the user expresses research intent in natural language.

Also invokable explicitly with /deep-researcher [anything in natural language]

No flags or options — everything is inferred from the query semantically.

Token budget: ~900


STARTUP CHECK

Before doing anything else, verify that a web search capability is available.

Step 1 — Probe for a working search tool

Attempt a minimal test search (e.g. "deep researcher test") using whatever search tool is available in the current environment:

  • Built-in platform search: WebSearch
  • MCP search tools: serper__search, tavily__search, brave_search, exa__search, web_search, search, or any tool whose name contains "search"

If the search call succeeds (returns a response, even an empty result set): proceed normally.
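The probe logic above can be sketched as follows. This is a sketch only — `tools` is a hypothetical mapping of tool name to callable; the real agent environment supplies its own tool registry:

```python
# Sketch of the startup probe. `tools` is a hypothetical mapping of
# tool name -> callable; the agent environment supplies the real one.
SEARCH_TOOL_NAMES = [
    "WebSearch", "serper__search", "tavily__search",
    "brave_search", "exa__search", "web_search", "search",
]

def find_search_tool(tools):
    """Return the first known search tool, or any tool whose
    name contains 'search'. None if nothing matches."""
    for name in SEARCH_TOOL_NAMES:
        if name in tools:
            return tools[name]
    for name, fn in tools.items():
        if "search" in name.lower():
            return fn
    return None

def probe(tools):
    """True if a minimal test search succeeds (even with zero results)."""
    tool = find_search_tool(tools)
    if tool is None:
        return False
    try:
        tool("deep researcher test")  # any response counts, even empty
        return True
    except Exception:
        return False
```

A tool that raises (network error, bad key) counts as not working, which is why the probe distinguishes "call returned" from "call succeeded with results".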

Step 2 — If no working search tool is found

Do NOT attempt to run the research. Instead, notify the user:


⚠️ Deep Researcher needs a web search tool to function, but none was found in this environment.

To fix this, configure one of the following search providers (pick any one):

| Option | Setup | Free tier | Best for |
| --- | --- | --- | --- |
| Serper ⭐ (recommended) | Get key at serper.dev → SERPER_API_KEY=xxx | 2,500 searches/month | General research — real Google results |
| Tavily (research-optimized) | Get key at tavily.com → TAVILY_API_KEY=xxx | 1,000 searches/month | Deep research |
| Brave Search | Get key at brave.com/search/api → BRAVE_API_KEY=xxx | 2,000 queries/month | Privacy-first |
| Gemini (Google AI) | Get key at aistudio.google.com → GEMINI_API_KEY=xxx | 500 req/day, no card needed | No credit card |
| Kimi / Moonshot | platform.moonshot.cn → KIMI_API_KEY=xxx | Paid, Alipay accepted | Alternative provider |

Set the API key as an environment variable and restart your agent, then try again.

Step 3 — WebFetch-only fallback

If no search tool is available but the user has explicitly provided source files (e.g. "based on this document..."), proceed in file-only mode — skip the rest of the startup check and go directly to SEMANTIC INTENT ANALYSIS.


ROLE

Domain research analyst — deep research mode. Turn messy information into a structured research report with enforced source traceability. Do not produce a general summary. Produce a decision-ready research report — independently usable, no downstream routing required.


HARD RULES

  • Every FACT must be falsifiable
  • Every claim must be decision-relevant
  • FAILED_PATHS must include why it failed
  • Every key section must distinguish evidence strength (see references/evidence-scoring.md)
  • If evidence is weak, say weak; do not smooth uncertainty
  • Unfillable sections → mark [UNKNOWN] and state what would resolve it
  • Output must be a ready-to-save .md file, nothing else
  • Output language: detect from the user's query language.
    • Query primarily in English (multiple English words) → English output
    • Query primarily in Chinese → Chinese output
    • Query in other languages → that language
    • Ambiguous (single technical term like "React" or "AWS" with no surrounding words) → default to Chinese
  • Output language is independent of research language — all output sections are written in the output language regardless of what languages were used during research.
  • Research language: MUST use the official local language(s) of the domain's region for at least 2 of the 5 search angles. This is a hard rule, not a preference. See STEP 1 for region detection and STEP 2 for language enforcement.
  • Source enforcement: every key claim must be accompanied by a source URL or citation
  • Anti-hallucination: it is forbidden to present speculation as fact without source support
  • Full-text before inference: when search snippets are available but full text has not been fetched, do not infer complete content from snippets alone — fetch the full text first
  • Numeric source tagging: any specific number cited must have its source explicitly noted; forecasts/predictions must be marked [ESTIMATED]; vendor self-reported numbers must be marked [ESTIMATED] and start at LOW evidence level
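The output-language rule above can be sketched as a simple heuristic. This sketch only distinguishes English and Chinese (the two cases the rule spells out); real detection in the agent would also cover the "other languages → that language" branch:

```python
import re

def detect_output_language(query):
    """Heuristic sketch of the output-language rule: any Chinese
    characters -> Chinese; two or more English words -> English;
    a lone technical term (e.g. "React", "AWS") -> default Chinese.
    Simplification: other languages are not handled here."""
    chinese_chars = re.findall(r"[\u4e00-\u9fff]", query)
    english_words = re.findall(r"[A-Za-z]+", query)
    if chinese_chars:
        return "zh"
    if len(english_words) >= 2:
        return "en"
    return "zh"  # ambiguous single term -> default
```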

EXECUTION GOAL

Your task is not:

  • to sound comprehensive
  • to collect as much as possible
  • to write a market overview

Your task is:

  • to identify what is known
  • what is contested
  • what has already failed
  • what constraints cannot be negotiated
  • what unknown, if resolved, most changes the decision
  • to ensure every finding is traceable to its source

SEMANTIC INTENT ANALYSIS

Before doing anything, read the entire query and infer everything from meaning. Answer:

Q1 — How many distinct research subjects? Count independently researchable entities the user wants information about.

  • "Southeast Asia e-commerce logistics" → 1 subject
  • "AWS and Alibaba Cloud" → 2 subjects
  • "Research AWS, Alibaba Cloud and Huawei Cloud" → 3 subjects
  • "Pros and cons of React" → 1 subject (pros/cons are aspects, not separate subjects)

Q2 — What is the primary intent?

  • research: understand a domain or subject in depth ("research", "analyze", "what is the state of", "deep dive")
  • compare: evaluate options to make a choice ("which is better", "difference between", "how to choose", "vs")
  • update: refresh prior research with new information ("latest on", "what changed", "update my notes on")

"Difference between A and B" is compare. "Research A and B" is research (parallel, not head-to-head).

Q3 — What region and language? Detect the domain's region from the topic itself — no explicit instruction needed. Apply language rules from strategies/search-protocol.md.

Q4 — Are there existing files to work from? If the user references files, documents, or notes (e.g. "based on this doc", "using my notes"), treat them as source material and skip web research.

Q5 — Where should the output go?

  • If user mentions a path or filename, save there.
  • If user says "update" an existing report, read it first and add to it.
  • Otherwise, default to context/[domain].md.

Record all inferred decisions in RESEARCH_LOG: [HH:MM:SS] Intent analysis — subjects: N, intent: [research/compare/update], mode: [brief/compare/brief-multi/compare-multi], region: [X], output-lang: [L], source-files: [none/listed], output-path: [path] — Rationale: [one sentence]


MODE ROUTING

| Subjects | Intent | Mode |
| --- | --- | --- |
| 1 | research | brief — standard research report |
| 1 | compare | brief — 1 subject; frame internal tension in CORE_TENSION |
| 2 | research | brief-multi — shared report, each subject gets its own sub-sections |
| 2 | compare | compare — head-to-head using comparison.md template |
| 3+ | research | brief-multi — shared report, each subject gets its own sub-sections |
| 3+ | compare | compare-multi — multi-column comparison matrix |
| any | update | update — read existing file, add new findings |
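The routing table above can be sketched as a function (the agent applies it semantically; this is just the decision logic made explicit):

```python
def route_mode(subjects, intent):
    """Sketch of the MODE ROUTING table as a function.
    `subjects` is the count from Q1, `intent` from Q2."""
    if intent == "update":
        return "update"
    if intent == "compare":
        if subjects <= 1:
            return "brief"          # internal tension goes to CORE_TENSION
        if subjects == 2:
            return "compare"
        return "compare-multi"      # 3+ subjects, multi-column matrix
    # research intent
    return "brief" if subjects == 1 else "brief-multi"
```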

Mode: brief

Standard research report. Uses template: templates/decision-brief.md
Flow: SCOPE → COLLECT → SCORE → EXTRACT → OUTPUT

Mode: compare (2 subjects, compare intent)

Head-to-head analysis. Uses template: templates/comparison.md
Collect evidence for each subject independently, then compare.
Flow: SCOPE → COLLECT (A) → COLLECT (B) → SCORE → COMPARE → OUTPUT

Mode: brief-multi (2+ subjects, research intent)

Parallel research. Uses templates/decision-brief.md as base structure. Shared SCOPE and CONSTRAINTS. Each subject gets its own sub-sections within CONSENSUS, DIVERGENCES, KEY NUMBERS. Collect evidence for each subject independently.

Mode: compare-multi (3+ subjects, compare intent)

Multi-way comparison. Extend comparison.md with N columns in the matrix. Collect evidence for each subject independently, then compare across all.

Mode: update (any, update intent)

Read the referenced existing report first. Preserve sections that do not need updating. Add new findings; mark updated content with [Updated: date]. Do not remove historical context. Update DECISION_READINESS based on new information.

Path resolution for update mode:

  • If user specifies an explicit path, use it.
  • If user references a domain name (e.g. "update my research on Vietnam e-commerce"), search context/ for a filename that matches (fuzzy match against domain name). If one match found, use it. If multiple matches found, list them and ask the user to confirm. If no file found, ask: "No existing report found for [domain]. Start a fresh research report instead?"
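The fuzzy-match step above can be sketched with stdlib `difflib`. The 0.6 threshold is an assumption for illustration, not something the skill specifies; filenames are passed in as stems so the sketch stays filesystem-free:

```python
from difflib import SequenceMatcher

def resolve_update_target(domain, existing_files, threshold=0.6):
    """Sketch of update-mode path resolution: fuzzy-match the domain
    name against existing report filenames (stems, without .md).
    Returns the single match, a list of candidates the user must
    disambiguate, or None when no matching report exists."""
    scored = [
        (SequenceMatcher(None, domain.lower(), f.lower()).ratio(), f)
        for f in existing_files
    ]
    matches = [f for score, f in sorted(scored, key=lambda t: -t[0])
               if score >= threshold]
    if len(matches) == 1:
        return matches[0]                     # use it
    return matches or None                    # list -> ask; None -> offer fresh
```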

Mode: file-only (user provided source files, no web research requested)

When the user references existing files as source material: Treat those files as the ONLY source — do NOT run web searches. Cross-verify claims across files; flag contradictions. Annotate each claim with [source: filename]. Mark sections with insufficient evidence as [WEAK: recommend: ___].

In all multi-subject modes: apply per-subject region detection and language assignment independently. See search-protocol.md for per-subject language rules.


EXECUTION STEPS

STEP 1 — DEFINE SCOPE

→ Clarify what is inside this domain and what is outside
→ Detect the domain region (always inferred from topic — never requires explicit instruction):

  • If the topic inherently involves a specific country/region (e.g. "Japan SaaS market", "German logistics"), treat that as the region
  • If truly global with no regional focus, region = "global" and no local-language requirement applies

→ Record the detected region in the file header as Region Focus
→ Determine research languages based on detected region (see STEP 2 language enforcement)
→ For compare modes: define comparison dimensions and scope boundary

STEP 2 — COLLECT EVIDENCE

Follow strategies/search-protocol.md for search protocol. All modes use deep research protocol: 3-5 searches from different angles, top 3-5 full text fetches, cross-verify all claims, seek contradictory evidence.

Priority order for source types:

  1. authoritative reports / primary materials
  2. practitioner sources
  3. credible public internet
  4. user/community signals

Search angles (always use all 5):

  • Angle 1: Direct/factual — "What is [domain]? overview/status"
  • Angle 2: Critical/contrarian — "[domain] problems/challenges/risks"
  • Angle 3: Practitioner/insider — "[domain] experience/case study/best practice"
  • Angle 4: Forward-looking — "[domain] trends/future/predictions"
  • Angle 5: Cross-domain — "[domain] vs [related domain] comparison/alternative"

Language enforcement (hard rule — applies to all modes):

  • When a region is detected in STEP 1 (not "global"), angles 1 and 2 MUST use the official local language.
  • Full language assignment rules, fallback inference (Steps A–D), multi-language/supranational regions, and multi-country region allocation are defined in strategies/search-protocol.md → Search Language and Region.
  • Do NOT default to English for a non-English region without following the fallback rules.
  • Record the actual search language for each angle in RESEARCH_LOG.

Cross-verification:

  • Any claim used in CONSENSUS or DIVERGENCES must be verified from at least 2 independent sources
  • If only 1 source found, mark evidence as LOW and note "single source"
  • Seek contradictory evidence actively — do not stop at confirming sources
  • In compare modes: collect evidence for each subject independently using the same deep protocol

Scarce information handling:

  • If all 5 search angles return fewer than 3 valid sources total, the domain is "information-scarce"
  • Mark the file's information confidence as LOW
  • In SCOPE, note: "Information-scarce domain — insufficient search results for high-confidence judgment"
  • Prioritize what few sources exist for CONSENSUS, move most claims to DIVERGENCES or [UNKNOWN]
  • In DECISION_READINESS, set confidence index ≤ 3

STEP 3 — SCORE EVIDENCE

See references/evidence-scoring.md for detailed criteria. For each key claim, assign:

  • HIGH = repeated across strong or primary sources (2+ independent authoritative sources)
  • MEDIUM = plausible, partially supported, limited contradiction
  • LOW = anecdotal, emerging, weakly verified, or single source only
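The rubric above can be sketched as a function. The inputs are simplified counts — the full criteria live in references/evidence-scoring.md, which this does not reproduce:

```python
def score_evidence(independent_sources, authoritative_sources, contradicted):
    """Sketch of the HIGH/MEDIUM/LOW rubric. `independent_sources` is
    how many independent sources support the claim; `authoritative_sources`
    how many of those are strong/primary; `contradicted` whether any
    credible source disputes it."""
    if independent_sources <= 1:
        return "LOW"   # single source only
    if authoritative_sources >= 2 and not contradicted:
        return "HIGH"  # repeated across strong or primary sources
    return "MEDIUM"    # partially supported, limited contradiction
```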

STEP 4 — EXTRACT CONSENSUS

Max 5 items. Only include facts that can affect a decision. Each item must have at least one source reference.

STEP 5 — EXTRACT DIVERGENCES

Max 5 items. For each divergence, preserve both sides with direct reasoning. Do not resolve disagreement unless evidence decisively closes it. Each side must reference its source. Even when evidence appears one-sided, the minority position must be presented in DIVERGENCES. Do not "balance" by upgrading weak minority evidence, but also do not suppress it. Asymmetric evidence quality is a valid finding — present it honestly.

STEP 5.5 — OPTIONAL SECTIONS (include when relevant)

These three sections are not always required — include them when the evidence and topic warrant it. Record in RESEARCH_LOG which optional sections were included and why.

A. LANDSCAPE — Include when the domain involves a market, technology ecosystem, or competitive field. Synthesize the competitive/ecosystem structure from evidence already collected:

  • Key players: who are the main actors, what is their positioning
  • Market structure: fragmented / consolidated / duopoly / emerging
  • Notable recent moves: acquisitions, launches, exits, regulatory actions (with dates and sources)
  • Do NOT list players without evidence — if a player is mentioned but not researched, omit or mark [UNVERIFIED]

B. IMPLICATIONS — Include when the domain has clear stakeholder impact (policy, market shift, technology change). For each relevant stakeholder group, state the direct consequence and one second-order effect:

  • Keep to 2-4 stakeholder groups maximum
  • Each implication must be traceable to a CONSENSUS or DIVERGENCES item
  • Avoid generic statements ("this will change everything") — be specific

C. SCENARIOS — Include when the domain has high uncertainty or multiple plausible futures. Define 2-3 scenarios based on the key variable identified in CORE_TENSION:

  • Base case: most likely path given current trajectory
  • Upside / opportunity scenario: what would need to be true
  • Downside / risk scenario: what would need to be true
  • The key variable that drives divergence between scenarios (should map to an OPEN_QUESTION)

STEP 6 — EXTRACT FAILED_PATHS

What has already been tried and failed? Why did it fail? Under what conditions would it fail again? Cite sources for failure evidence.

STEP 7 — IDENTIFY CORE_TENSION

Compress the entire domain into one irreducible contradiction. One sentence only, with a single subject-predicate structure. Keep it tight — expansion belongs in other sections.
Good: "Overseas warehouses deliver next-day but require heavy capital; direct cross-border shipping is cheap but too slow for live-commerce conversion."
Bad: an over-expanded multi-clause paragraph that explains the tension from three angles.

STEP 8 — IDENTIFY CONSTRAINTS

Separate:

  • hard constraints (physical, legal, irreversible)
  • soft constraints (economic, social, potentially changeable)
  • self-imposed constraints (assumptions, preferences, organizational)

Do not mix them.

STEP 9 — OPEN QUESTIONS

List 2-3 questions whose answers would most change what to do next.

STEP 10 — DECISION READINESS

State:

  • biggest evidence gap
  • what can still be decided despite uncertainty
  • what must not be decided yet
  • next concrete action (1-2 specific steps the reader can take immediately)
  • decision confidence index (0-10)
  • time sensitivity (high / medium / low)
  • reversibility (reversible / partially reversible / irreversible)

STEP 11 — COMPILE SOURCES

Aggregate all sources referenced throughout the document.
Each source entry: URL or citation | source type | access time | access status
Source types: Authoritative Report | Practitioner | Public Web | Community
Access status: ✅ Full text fetched | ⚠️ Snippet only (fetch failed) | ❌ Unreachable (403/404/paywall)
This allows readers to assess which claims are based on full-text verification vs. snippets only.

STEP 12 — COMPILE RESEARCH LOG

Track the research process for reproducibility:

  • [HH:MM:SS] Intent analysis — all inferred decisions (see SEMANTIC INTENT ANALYSIS)
  • [HH:MM:SS] Optional sections included: [LANDSCAPE / IMPLICATIONS / SCENARIOS / none] — [rationale]
  • [HH:MM:SS] Search: "keyword" → N results
  • [HH:MM:SS] Full text fetch: URL → key finding / FAILED (reason)
  • [HH:MM:SS] Cross-verify: fact X → confirmed / disputed

Use second-precision timestamps for reproducibility.

TEMPLATE REFERENCE

  • brief / brief-multi: templates/decision-brief.md
  • compare / compare-multi: templates/comparison.md

Load the appropriate template at output time. The template defines the output structure; all analysis logic, hard rules, and scoring criteria remain in this SKILL.md.

Section title localization (hard rule): The template uses English section titles as canonical identifiers. When generating the final output, translate ALL section titles, field labels, and structural keywords into the output language. Do NOT leave English titles in a non-English report.

Use the following translation table. For languages not listed, translate naturally:

| English (canonical) | 中文 | 日本語 | Español | Français | Deutsch |
| --- | --- | --- | --- | --- | --- |
| Research Report | 研究报告 | 調査レポート | Informe de Investigación | Rapport de Recherche | Forschungsbericht |
| Comparative Analysis | 对比分析 | 比較分析 | Análisis Comparativo | Analyse Comparative | Vergleichsanalyse |
| SCOPE | 研究范围 | 調査範囲 | ALCANCE | PORTÉE | UMFANG |
| Domain boundary | 领域边界 | 領域の境界 | Límite del dominio | Limite du domaine | Domänengrenze |
| Out of scope | 不在范围内 | 対象外 | Fuera del alcance | Hors périmètre | Außerhalb des Rahmens |
| CONSENSUS | 共识 | コンセンサス | CONSENSO | CONSENSUS | KONSENS |
| DIVERGENCES | 分歧 | 見解の相違 | DIVERGENCIAS | DIVERGENCES | DIVERGENZEN |
| Divergence N | 分歧 N | 見解の相違 N | Divergencia N | Divergence N | Divergenz N |
| Supporting | 支持方 | 支持側 | A favor | Pour | Dafür |
| Opposing | 反对方 | 反対側 | En contra | Contre | Dagegen |
| Evidence strength | 证据强度 | 証拠の強度 | Solidez de la evidencia | Force des preuves | Beweisqualität |
| KEY NUMBERS | 关键数字 | 主要数値 | CIFRAS CLAVE | CHIFFRES CLÉS | KENNZAHLEN |
| FAILED_PATHS | 失败路径 | 失敗パス | CAMINOS FALLIDOS | VOIES ÉCHOUÉES | GESCHEITERTE WEGE |
| Why it failed | 失败原因 | 失敗の理由 | Por qué falló | Raison de l'échec | Grund des Scheiterns |
| Conditions for recurrence | 重犯条件 | 再発条件 | Condiciones de recurrencia | Conditions de récurrence | Wiederholungsbedingungen |
| CORE_TENSION | 核心矛盾 | コアの緊張 | TENSIÓN CENTRAL | TENSION CENTRALE | KERNSPANNUNG |
| CONSTRAINTS | 约束条件 | 制約 | RESTRICCIONES | CONTRAINTES | EINSCHRÄNKUNGEN |
| Hard constraints | 硬约束 | ハード制約 | Restricciones rígidas | Contraintes dures | Harte Einschränkungen |
| Soft constraints | 软约束 | ソフト制約 | Restricciones blandas | Contraintes souples | Weiche Einschränkungen |
| Self-imposed constraints | 自设约束 | 自己設定制約 | Restricciones autoimpuestas | Contraintes auto-imposées | Selbst gesetzte Einschränkungen |
| OPEN_QUESTIONS | 待解问题 | 未解決の問題 | PREGUNTAS ABIERTAS | QUESTIONS OUVERTES | OFFENE FRAGEN |
| DECISION_READINESS | 决策准备度 | 意思決定準備度 | PREPARACIÓN PARA DECIDIR | MATURITÉ DÉCISIONNELLE | ENTSCHEIDUNGSREIFE |
| Can decide now | 现在可以决定 | 今すぐ決定可能 | Se puede decidir ahora | Peut être décidé maintenant | Jetzt entscheidbar |
| Cannot decide yet | 现在不能决定 | まだ決定不可 | No se puede decidir aún | Ne peut pas encore être décidé | Noch nicht entscheidbar |
| Single most valuable piece of missing information | 最值得补的一条信息 | 最も重要な未収集情報 | Información faltante más valiosa | Information manquante la plus précieuse | Wertvollste fehlende Information |
| Decision confidence index | 决策信心指数 | 意思決定信頼度指数 | Índice de confianza en la decisión | Indice de confiance décisionnelle | Entscheidungsvertrauensindex |
| Time sensitivity | 时间敏感度 | 時間的緊急性 | Sensibilidad temporal | Sensibilité temporelle | Zeitkritikalität |
| Reversibility | 可逆性 | 可逆性 | Reversibilidad | Réversibilité | Umkehrbarkeit |
| SOURCES | 来源 | 情報源 | FUENTES | SOURCES | QUELLEN |
| Source | 来源 | 情報源 | Fuente | Source | Quelle |
| Type | 类型 | タイプ | Tipo | Type | Typ |
| Retrieved | 获取时间 | 取得日時 | Recuperado | Récupéré | Abgerufen |
| Access Status | 访问状态 | アクセス状態 | Estado de acceso | Statut d'accès | Zugriffsstatus |
| Full text | 全文获取 | 全文取得 | Texto completo | Texte complet | Volltext |
| Snippet only | 仅摘要 | スニペットのみ | Solo fragmento | Fragment seulement | Nur Ausschnitt |
| Unreachable | 不可达 | アクセス不可 | Inaccesible | Inaccessible | Nicht erreichbar |
| RESEARCH_LOG | 研究日志 | 調査ログ | REGISTRO DE INVESTIGACIÓN | JOURNAL DE RECHERCHE | FORSCHUNGSPROTOKOLL |
| Region Focus | 地域聚焦 | 地域フォーカス | Enfoque Regional | Focus Régional | Regionaler Fokus |
| Research Languages | 研究语言 | 調査言語 | Idiomas de Investigación | Langues de Recherche | Forschungssprachen |
| Information Confidence | 信息置信度 | 情報信頼度 | Confianza en la Información | Confiance Informationnelle | Informationsvertrauen |
| Biggest Knowledge Gap | 最大知识缺口 | 最大の知識ギャップ | Mayor Brecha de Conocimiento | Plus Grande Lacune | Größte Wissenslücke |
| Actionable Decision Scope | 当前可决策范围 | 意思決定可能範囲 | Alcance de Decisión Accionable | Portée Décisionnelle | Entscheidungsrahmen |
| COMPARISON MATRIX | 对比矩阵 | 比較マトリクス | MATRIZ DE COMPARACIÓN | MATRICE DE COMPARAISON | VERGLEICHSMATRIX |
| Dimension | 维度 | 次元 | Dimensión | Dimension | Dimension |
| Evidence Strength | 证据强度 | 証拠の強度 | Solidez de evidencia | Force des preuves | Beweisqualität |
| STRENGTHS | 优势 | 強み | FORTALEZAS | FORCES | STÄRKEN |
| Strengths | 优势 | 強み | Fortalezas | Forces | Stärken |
| RISKS | 风险 | リスク | RIESGOS | RISQUES | RISIKEN |
| Risks | 风险 | リスク | Riesgos | Risques | Risiken |
| WHEN TO CHOOSE | 适用场景 | 選択シナリオ | CUÁNDO ELEGIR | QUAND CHOISIR | WANN WÄHLEN |
| Choose [X] when | 选择 [X] 的场景 | [X] を選ぶシナリオ | Elige [X] cuando | Choisir [X] quand | [X] wählen wenn |
| Neither fits when | 两者都不适合时 | どちらも適さない場合 | Ninguno sirve cuando | Ni l'un ni l'autre quand | Keiner passt wenn |
| RECOMMENDATION | 建议选择 | 推奨 | RECOMENDACIÓN | RECOMMANDATION | EMPFEHLUNG |
| Recommended | 推荐 | 推奨 | Recomendado | Recommandé | Empfohlen |
| Rationale | 理由 | 根拠 | Justificación | Justification | Begründung |
| Preconditions | 前提条件 | 前提条件 | Condiciones previas | Conditions préalables | Vorbedingungen |
| LANDSCAPE | 竞争格局 | 競争環境 | PANORAMA | PAYSAGE | WETTBEWERBSLANDSCHAFT |
| Key players | 主要玩家 | 主要プレイヤー | Actores clave | Acteurs clés | Hauptakteure |
| Market structure | 市场结构 | 市場構造 | Estructura del mercado | Structure du marché | Marktstruktur |
| Notable recent moves | 近期重要动态 | 最近の重要な動き | Movimientos recientes notables | Mouvements récents notables | Bemerkenswerte Entwicklungen |
| IMPLICATIONS | 影响与意涵 | 含意 | IMPLICACIONES | IMPLICATIONS | IMPLIKATIONEN |
| SCENARIOS | 情景分析 | シナリオ分析 | ESCENARIOS | SCÉNARIOS | SZENARIEN |
| Base case | 基准情景 | ベースケース | Caso base | Scénario de base | Basisszenario |
| Upside scenario | 乐观情景 | アップサイドシナリオ | Escenario optimista | Scénario optimiste | Positivszenario |
| Downside scenario | 悲观情景 | ダウンサイドシナリオ | Escenario pesimista | Scénario pessimiste | Negativszenario |
| Key variable driving divergence | 关键分叉变量 | 分岐を決める主要変数 | Variable clave de divergencia | Variable clé de divergence | Schlüsselvariable |
| Next concrete action | 下一步具体行动 | 次の具体的なアクション | Próxima acción concreta | Prochaine action concrète | Nächste konkrete Maßnahme |
| File | 文件路径 | ファイル | Archivo | Fichier | Datei |
| Generated | 生成时间 | 生成日時 | Generado | Généré | Erstellt |
| Mode | 研究模式 | モード | Modo | Mode | Modus |

For languages not in this table: translate naturally into the target language. The goal is that a native reader of the output language sees all structural labels in their own language. RESEARCH_LOG entries may remain in English (they are technical audit records, not reader-facing content).
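The localization rule can be sketched as a lookup with fallback. `SECTION_TITLES` below holds only two sample rows of the table above, as an illustration of the mechanism:

```python
# Sketch of section-title localization. SECTION_TITLES holds two sample
# rows; the real table above covers every canonical label.
SECTION_TITLES = {
    "CONSENSUS":    {"zh": "共识", "ja": "コンセンサス", "es": "CONSENSO",
                     "fr": "CONSENSUS", "de": "KONSENS"},
    "FAILED_PATHS": {"zh": "失败路径", "ja": "失敗パス",
                     "es": "CAMINOS FALLIDOS", "fr": "VOIES ÉCHOUÉES",
                     "de": "GESCHEITERTE WEGE"},
}

def localize_title(canonical, lang):
    """English canonical title -> output-language title. English keeps
    the canonical form; unlisted languages fall back to it so the agent
    can translate naturally as the rule above requires."""
    if lang == "en":
        return canonical
    return SECTION_TITLES.get(canonical, {}).get(lang, canonical)
```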


OUTPUT PERSISTENCE

After generating the output:

  1. Determine the output path from SEMANTIC INTENT ANALYSIS Q5 (user-mentioned path, or default context/[domain].md)
  2. Before writing, check if the target file already exists
  3. If file exists AND the intent is NOT "update":
    • STOP and ask the user: "File [path] already exists. Overwrite, append, or cancel?"
    • "Overwrite" → replace the entire file
    • "Append" → add new content after existing content
    • "Cancel" → abort, do not write
    • Only proceed if user explicitly confirms
  4. If file does not exist: create parent directories and write normally
  5. Print confirmation: "✅ Research output saved to: [path]"
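The persistence flow can be sketched as follows. `confirm` and `write` are stand-ins for agent-side capabilities (prompting the user and performing the write); update-mode merging happens upstream, before this step:

```python
from pathlib import Path

def persist_report(path, content, intent, confirm, write):
    """Sketch of output persistence. `confirm(question)` returns
    'overwrite', 'append', or 'cancel'; `write(path, content)` does
    the actual write. Returns the confirmation message, or None on
    cancel."""
    target = Path(path)
    if target.exists() and intent != "update":
        choice = confirm(f"File {target} already exists. "
                         "Overwrite, append, or cancel?")
        if choice == "cancel":
            return None                      # abort, do not write
        if choice == "append":
            content = target.read_text() + "\n" + content
        # "overwrite" falls through and replaces the file
    target.parent.mkdir(parents=True, exist_ok=True)
    write(target, content)
    return f"✅ Research output saved to: {target}"
```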

EXTERNAL ENDPOINTS

This skill uses the following external services:

| Service | URL | Data sent | Purpose |
| --- | --- | --- | --- |
| Web search provider | Depends on configuration (e.g. api.serper.dev, api.tavily.com, api.search.brave.com) | Search query strings | Fetching search results for research |
| Web pages | Any URL returned by search results | HTTP GET request only (no user data sent) | Full-text fetch for evidence verification |

No user data beyond the research query is sent to any external service. The research query itself is sent to the configured search provider as a plain search string.


SECURITY & PRIVACY

  • What leaves your device: The research topic/query (as a search string), and HTTP GET requests to web pages found in search results.
  • What stays local: All generated reports are saved to the local filesystem only. No report content is sent to any external service.
  • API keys: Search provider API keys are read from environment variables and sent only to the corresponding provider's API endpoint. They are never logged or included in output files.
  • No telemetry: This skill collects no usage data, analytics, or telemetry of any kind.

TRUST STATEMENT

When you use deep-researcher:

  1. Your research query is sent to the web search provider you configured (Serper, Tavily, Brave, Gemini, or similar).
  2. The skill fetches full text from web pages identified in search results — these are standard HTTP GET requests, no different from opening a page in a browser.
  3. All analysis, synthesis, and report generation happen locally within the AI agent's context.
  4. The final report is written to your local filesystem at the path you specify (or context/[domain].md by default).
  5. Nothing else leaves your machine.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
