🧪 Scientific Inquiry
Triggers
This skill MUST be triggered whenever the user's request matches any of the following patterns:
- Fact-checking: "Is X true?", "Is X reliable?", "Is this claim correct?"
- Data research: "What's the trend/change/distribution of X?", "Look up data on X", "What's the price of X right now?"
- Industry research: "How's the X market?", "Analyze the X industry", "Current state of X"
- Claim verification: "I heard X, do you think it's reliable?", "Does this conclusion hold up?"
- Comparative evaluation: "Which is better, X or Y?", "What's the difference between X and Y?"
- User explicitly mentions "verify", "research", "investigate", "analyze", "validate", "look into"
Even if the question seems simple (e.g., "look up a data point"), trigger this skill whenever systematic information gathering is involved.
Core Workflow
Step 0: Problem Framing → User Confirmation (Critical! Prevents misalignment)
Upon receiving a question, do NOT search yet. Output a research plan template first:
📋 Research Plan
Question: [Restate the original proposition to ensure alignment]
Research type: Fact-check / Data research / Industry research / Comparative evaluation / Trend analysis
Sub-question breakdown:
- [Sub-proposition A] — Verifiability: High/Medium/Low → [Expected evidence source]
- [Sub-proposition B] — Verifiability: High/Medium/Low → [Expected evidence source]
- [Sub-proposition C] — Verifiability: High/Medium/Low → [Expected evidence source]
Research method:
- Primary search path: [Specific search tool / API / database]
- Keyword strategy: [Key search terms]
- Fallback if critical data unavailable: [Alternative approach / estimation method]
Expected output:
- Anticipated confidence level: High / Medium / Low
- Main uncertainty factors: [Expected blind spots]
✅ Ready to proceed with this direction? ↳ Waiting for your confirmation before executing Steps 1–4
Do not invoke any search tools until the user confirms.
Step 1: Decompose into Sub-questions
Break down vague questions into verifiable atomic statements. Label each sub-proposition with:
- Verifiability: High (public data / literature exists) / Medium (indirect evidence available) / Low (almost no public information)
- Evidence type: Quantitative (specific numbers) / Qualitative (trend assessment)
- Source direction: e.g., customs data, academic papers, media reports, forum discussions
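As a minimal sketch, the three labels above can be represented as a small record per atomic statement. This is illustrative only; the field names and the example claim are assumptions, not part of the skill's interface.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class SubProposition:
    """One verifiable atomic statement split out of the user's question."""
    text: str
    verifiability: Literal["High", "Medium", "Low"]
    evidence_type: Literal["Quantitative", "Qualitative"]
    source_direction: str  # e.g. "customs data", "academic papers"

# Hypothetical decomposition of "Is X's export business growing?"
claim = SubProposition(
    text="X's export volume grew year-over-year in 2024",
    verifiability="High",
    evidence_type="Quantitative",
    source_direction="customs data",
)
```

Constraining the labels to the fixed vocabularies keeps later steps (grading, cross-validation) mechanical rather than ad hoc.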
Step 2: Evidence Search (Classify & Grade)
Every piece of evidence found must be labeled with source and grade:
S-tier: Primary academic literature / Official statistics / Raw data APIs
A-tier: Authoritative media reports / Think-tank reports / Well-cited secondary sources
B-tier: Industry analysis / Forum discussions / Indirect data
C-tier: Self-media / Single samples / Non-expert interpretations
D-tier: No source / Hearsay / Obvious conflicts of interest
Prioritize S/A-tier evidence during search. B/C-tier is supplementary only. After searching, compile an evidence table with: Evidence content | Source | Source link (full URL) | Grade | Related sub-proposition
Source links are mandatory — do not omit them. Even for web search results, provide the source page URL. If multiple items point to different articles on the same site, give each its own independent URL.
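A sketch of the evidence table and the S→D prioritization might look like the following; the field names, `GRADE_RANK`, and the sample rows are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass

# Lower rank = stronger evidence; S/A-tier sorts ahead of B/C/D-tier.
GRADE_RANK = {"S": 0, "A": 1, "B": 2, "C": 3, "D": 4}

@dataclass
class Evidence:
    content: str
    source: str
    url: str              # full URL is mandatory, one per item
    grade: str            # "S" through "D"
    sub_proposition: str  # which atomic statement it supports

def prioritize(items: list[Evidence]) -> list[Evidence]:
    """Order evidence best-first so S/A-tier items are consumed before B/C-tier."""
    return sorted(items, key=lambda e: GRADE_RANK[e.grade])

rows = [
    Evidence("Forum anecdote on X", "Forum post", "https://example.com/post/1", "C", "A"),
    Evidence("Official 2024 statistic", "Stats bureau", "https://example.com/stat/2", "S", "A"),
]
best_first = prioritize(rows)  # the S-tier official statistic sorts first
```

Keeping each row's URL as a separate field also enforces the "one independent URL per item" rule mechanically.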
Step 3: Cross-Validation
For each sub-proposition:
- At least 2 independent evidence sources
- Label relationships between evidence: Consistent / Contradictory / Complementary
- If contradictory, analyze possible causes (methodology differences / conflicts of interest / different time windows / sample bias)
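The two checks above (≥2 independent sources, contradictions surfaced) can be sketched as a small validator; the function and key names are assumptions for illustration, not the skill's API.

```python
from collections import defaultdict

def cross_validate(evidence_rows):
    """Report, per sub-proposition, whether the 2-independent-sources bar is met
    and whether any contradictory evidence needs cause analysis.

    evidence_rows: dicts with 'sub_proposition', 'source', and 'relation',
    where relation is 'Consistent' / 'Contradictory' / 'Complementary'.
    """
    by_prop = defaultdict(list)
    for row in evidence_rows:
        by_prop[row["sub_proposition"]].append(row)

    report = {}
    for prop, rows in by_prop.items():
        sources = {r["source"] for r in rows}  # distinct sources only
        report[prop] = {
            "enough_sources": len(sources) >= 2,
            "has_contradiction": any(r["relation"] == "Contradictory" for r in rows),
        }
    return report

report = cross_validate([
    {"sub_proposition": "A", "source": "Stats bureau", "relation": "Consistent"},
    {"sub_proposition": "A", "source": "Think tank", "relation": "Contradictory"},
])
```

A sub-proposition with `has_contradiction` set should trigger the cause analysis (methodology, conflicts of interest, time windows, sample bias) before any conclusion is drawn.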
Step 4: Conclusion Output
Proposition: [One-sentence restatement]
Conclusion:
High confidence (≥80%) → Direct assertion
Medium confidence (50–80%) → Main judgment + key uncertainties
Low confidence (<50%) → Best guess + what information is needed to reach a conclusion
Key evidence (Top-3, with source links):
1. [Evidence A] — S-tier — [Source Name](full URL)
2. [Evidence B] — A-tier — [Source Name](full URL)
3. [Evidence C] — B-tier — [Source Name](full URL)
Core uncertainties (if any):
- [Uncertainty 1]
- [Uncertainty 2]
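The confidence-to-output mapping in Step 4 reduces to a simple threshold function; a minimal sketch, with the thresholds taken from the text and the function name being illustrative:

```python
def output_mode(confidence: float) -> str:
    """Map a 0–1 confidence estimate to the Step 4 conclusion style."""
    if confidence >= 0.80:
        return "Direct assertion"
    if confidence >= 0.50:
        return "Main judgment + key uncertainties"
    return "Best guess + information needed to reach a conclusion"
```

For example, a 0.6-confidence finding should be stated as a main judgment with its key uncertainties listed, never as a flat assertion.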
Self-Evolution Rules
After each use of this skill, proactively check for the following triggers and update immediately:
- Direction correction: "This sub-question isn't the priority", "Different angle" → Update the decomposition template, adding the new angle as an example
- Evidence standards: "This source isn't enough", "I want more official data", "Don't use X" → Update evidence grading criteria or preferred search paths
- Format preferences: "The conclusion is too long", "Give me a short conclusion first", "Output in X format" → Update the conclusion output template, add format options
- New scenarios: "Not just fact-checking, also data research", "Also X type" → Update trigger conditions and scenario type table
- Methodology improvements: "You should present a plan first before executing", "You missed X" → Update workflow description
- User repeatedly corrects the same type of issue (≥2 occurrences of the same mistake) → Add to "Common Pitfalls"
How to update: Use skill_manage(action='patch', name='scientific-inquiry', old_string=..., new_string=...) to patch directly. Never make the same mistake twice. After updating, record the change in the changelog at the bottom of this file for traceability.
Scenario Types (expand as needed)
| Scenario | Characteristics | Watch out for |
|---|---|---|
| Fact-check | Verify the truth of a specific claim | Prioritize original sources; beware of second-hand retellings |
| Trend analysis | Assess the future direction of a metric | Distinguish short-term fluctuation from long-term trend; note data windows |
| Comparative analysis | Compare merits of two options | Ensure comprehensive comparison dimensions; avoid survivorship bias |
| Attribution analysis | Determine whether A caused B | Distinguish correlation from causation; watch for confounding variables |
Quality Checklist
- Step 0 research plan output and user confirmation obtained?
- Each sub-proposition has at least one evidence source?
- Every piece of evidence labeled with a grade?
- Contradictory evidence analyzed for possible causes?
- Conclusion includes confidence level and uncertainties?
- User corrections from this session have been applied to the skill?
Common Pitfalls
- Never skip the confirmation step: Even if the path seems obvious, Step 0 is mandatory. A wrong direction wastes more time than a quick confirmation.
- Don't only seek supporting evidence: Actively search for opposing views to avoid confirmation bias.
- Distinguish "no evidence found" from "evidence disproves": Not finding something doesn't mean it doesn't exist. Label as "not found" rather than "disproven".
- Mind data timeliness: Especially for prices, policies, and time-sensitive data — always note when the data was collected.
- Report progress during long searches: If a search exceeds 5 steps, sync progress with the user after each step. Don't go silent.
Usage Examples
(To be expanded — add typical workflows here after each use for reference)
Changelog
| Date | Change | Reason |
|---|---|---|
| 2026-05-09 | Initial creation | User requested systematic inquiry capability |
| 2026-05-09 | Added Step 0 (plan confirmation phase) | User feedback: "align direction before executing" |
| 2026-05-09 | Expanded trigger conditions: data research, industry research, etc. | User feedback: "not just fact-checking, also data research" |
| 2026-05-09 | Refined research plan template: research types, search paths, fallback methods | User requested clearer research methodology description |
| 2026-05-09 | Strengthened self-evolution rules: 6 trigger types + concrete update methods | User requested: "proactively evolve the skill with each session's iterative optimizations" |