Reasoning Techniques

Reasoning techniques (like Chain-of-Thought, Tree-of-Thought) force the LLM to show its work. Large Language Models are statistical, not logical. By making them output a step-by-step reasoning path before the final answer, you allow the model to provide context to itself, significantly reducing logic errors and "hallucinations of calculation".

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy the command below and send it to your AI assistant to install this skill:

```shell
npx skills add lauraflorentin/skills-marketplace/lauraflorentin-skills-marketplace-reasoning-techniques
```

When to Use

  • Math & Logic: Word problems, puzzles, navigation tasks.

  • Complex Planning: "How do I move this couch through this door?"

  • Legal/Medical reasoning: Deriving a conclusion from a set of rules and facts.

  • Debugging: Asking the model to explain why code is failing before fixing it.

Use Cases

  • Zero-Shot CoT: Simply adding "Let's think step by step" to the prompt.

  • Few-Shot CoT: Providing examples of [Question -> Reasoning -> Answer] to guide the model.

  • Self-Consistency: Generating 5 different Chain-of-Thought paths and picking the answer that appears most frequently (Majority Voting).

  • Tree of Thoughts: Exploring multiple possible reasoning branches and backtracking if one leads to a dead end.
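The Self-Consistency idea above can be sketched in a few lines. This is a minimal illustration, not a specific library API: `generate` stands in for any LLM call that samples with temperature > 0, and the answer extraction is simplified to whatever follows the last "Final Answer:" marker.

```python
from collections import Counter

def self_consistency(question, generate, n_paths=5):
    """Sample several independent chain-of-thought paths and
    return the most frequent final answer (majority voting)."""
    prompt = f"{question}\nLet's think step by step.\nFinal Answer:"
    answers = []
    for _ in range(n_paths):
        # `generate` is an assumed callable: prompt in, completion out.
        # With sampling enabled, each call can take a different reasoning path.
        completion = generate(prompt)
        # Naive extraction: keep only the text after the last "Final Answer:".
        answers.append(completion.rsplit("Final Answer:", 1)[-1].strip())
    answer, _count = Counter(answers).most_common(1)[0]
    return answer
```

Majority voting works because independent reasoning paths tend to converge on the correct answer while errors scatter across different wrong answers.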

Implementation Pattern

```python
def chain_of_thought_prompt(question):
    # `llm` is assumed to be an already-configured model client.
    prompt = f"""Question: {question}

Instruction: Answer the question by reasoning step-by-step.
Format your answer as:

Reasoning:
1. [First Step]
2. [Second Step]
...

Final Answer: [Answer]
"""
    return llm.generate(prompt)
```
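Few-Shot CoT extends the same pattern by prepending worked examples. The sketch below is illustrative: the `(question, reasoning, answer)` triple format and the helper name are assumptions, and the returned string would be passed to whatever model client you use.

```python
def few_shot_cot_prompt(question, examples):
    """Build a Few-Shot CoT prompt from (question, reasoning, answer) triples."""
    parts = []
    for ex_question, ex_reasoning, ex_answer in examples:
        parts.append(
            f"Question: {ex_question}\n"
            f"Reasoning: {ex_reasoning}\n"
            f"Final Answer: {ex_answer}\n"
        )
    # End with the new question and an open "Reasoning:" cue so the
    # model continues in the same [Question -> Reasoning -> Answer] shape.
    parts.append(f"Question: {question}\nReasoning:")
    return "\n".join(parts)
```

Because the exemplars demonstrate the exact output format, the model is more likely to produce a parseable reasoning trace than with the zero-shot instruction alone.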

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals:

  • prompt chaining

  • human-in-the-loop

  • planning

  • reflection