Advanced Elicitation
Overview
Meta-cognitive reasoning applied to AI outputs. Makes the AI reconsider its own work through 15 systematic methods.
Core Principle: First-pass responses are often good but not great. Elicitation forces deeper thinking.
When to Use
Use when:
- Making important decisions (architecture, security, major features)
- Solving complex problems (multiple stakeholders, unclear requirements)
- Producing critical outputs (specs, plans, designs)
- Quality matters more than speed
Don't use when:
- Simple queries ("What is X?")
- Routine tasks (formatting, simple refactoring)
- Time-sensitive work (emergency fixes)
- Budget-constrained work (2x cost)
How It Works
- Generate Initial Response: Agent produces a first-pass answer
- Apply Elicitation Method: Pick 1-3 methods based on context
- Reconsider: Agent re-evaluates its answer using each method
- Synthesize: Combine insights, produce improved output
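As a sketch, the four-step loop could look like the following. `generate` is a hypothetical stand-in for whatever agent/LLM call the skill actually wraps; the real API is not specified here.

```typescript
// Hypothetical sketch of the four-step elicitation loop. `generate` stands
// in for the real agent/LLM call, which this document does not specify.
function elicit(
  prompt: string,
  methods: string[],
  generate: (p: string) => string,
): string {
  // 1. Generate initial response
  const initial = generate(prompt);

  // 2-3. Apply each method (at most 3) and reconsider the initial answer
  const insights = methods
    .slice(0, 3)
    .map((m) => generate(`You are applying ${m} to:\n${initial}`));

  // 4. Synthesize insights into an improved output
  return generate(`Synthesize an improved answer from:\n${insights.join("\n---\n")}`);
}
```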
Elicitation Methods
- First Principles Thinking
Description: Break down to fundamental truths, rebuild reasoning from ground up
When to Use:
- Complex system design
- Architecture decisions
- Innovation challenges
Prompt Template:
You are applying First Principles Thinking to:
{content}
Steps:
- List all underlying assumptions
- Question each assumption: "Is this fundamentally true?"
- Identify fundamental truths (cannot be broken down further)
- Rebuild solution from fundamentals only
- Compare rebuilt solution to original - what changed?
Output:
First Principles Analysis
Fundamental Truths:
- [Truth 1]
- [Truth 2]
Assumptions Challenged:
- [Assumption] - [Why it might be wrong]
Improvements:
- [Improvement based on fundamentals]
Confidence Level: [HIGH/MEDIUM/LOW]
- Pre-Mortem Analysis
Description: Imagine the solution failed. Work backward to identify causes.
When to Use:
- Planning major changes
- Risk mitigation
- Launch preparations
Prompt Template:
You are applying Pre-Mortem Analysis to:
{content}
Steps:
- Fast-forward 6 months: the solution has failed spectacularly
- List 5 reasons why it failed
- For each reason, assess likelihood (Low/Medium/High)
- For each high-likelihood failure, propose mitigation
- Revise original solution with mitigations
Output:
Pre-Mortem Analysis
Failure Scenarios:
- [Scenario] - Likelihood: [L/M/H]
- [Scenario] - Likelihood: [L/M/H]
Mitigations:
- [Mitigation for high-likelihood failures]
Revised Solution:
- [Changes to prevent failures]
Confidence Level: [HIGH/MEDIUM/LOW]
- Socratic Questioning
Description: Challenge every assumption with "why?" until reaching bedrock.
When to Use:
- Requirements analysis
- Specification review
- Clarifying ambiguity
Prompt Template:
You are applying Socratic Questioning to:
{content}
Steps:
- Identify 5 key claims in the content
- For each claim, ask "Why is this true?"
- For the answer, ask "Why?" again
- Repeat until you hit a contradiction or fundamental truth
- Revise claims that don't survive questioning
Output:
Socratic Analysis
Claim 1: [Claim]
- Why? [Answer]
- Why? [Answer]
- Why? [Answer]
- Verdict: [Survives/Needs revision]
Improvements:
- [Changes after questioning]
Confidence Level: [HIGH/MEDIUM/LOW]
- Red Team vs Blue Team
Description: Attack the solution (Red Team), defend it (Blue Team), synthesize improvements.
When to Use:
- Security reviews
- Risk assessment
- Adversarial testing
Prompt Template:
You are applying Red Team vs Blue Team to:
{content}
Steps:
- Red Team: List 5 ways to attack/break this solution
- Blue Team: For each attack, propose a defense
- Red Team: For each defense, find the weakness
- Blue Team: Strengthen defenses
- Synthesize: What changes make the solution more robust?
Output:
Red Team vs Blue Team
Attack 1: [How to break it]
- Defense: [Blue team response]
- Counter-attack: [Red team finds weakness]
- Final defense: [Blue team strengthens]
Improvements:
- [Robust changes from adversarial testing]
Confidence Level: [HIGH/MEDIUM/LOW]
- Inversion
Description: Instead of "How to succeed?", ask "How to fail?" and avoid those.
When to Use:
- Risk identification
- Avoiding common pitfalls
- Negative space analysis
Prompt Template:
You are applying Inversion to:
{content}
Steps:
- Invert the goal: "How could we make this FAIL?"
- List 5 ways to guarantee failure
- For each failure mode, identify the opposite (success mode)
- Check if original solution addresses success modes
- Revise to explicitly avoid failure modes
Output:
Inversion Analysis
How to Fail:
- [Failure mode]
- [Failure mode]
How to Succeed (inverses):
- [Success mode]
Improvements:
- [Changes to avoid failures]
Confidence Level: [HIGH/MEDIUM/LOW]
- Second-Order Thinking
Description: Consider consequences of consequences. Long-term effects.
When to Use:
- Strategic decisions
- Long-term planning
- Trade-off analysis
Prompt Template:
You are applying Second-Order Thinking to:
{content}
Steps:
- Identify immediate consequences (1st order)
- For each consequence, identify follow-on effects (2nd order)
- For each 2nd order effect, identify further effects (3rd order)
- Assess whether long-term effects align with goals
- Revise solution to optimize for 2nd/3rd order effects
Output:
Second-Order Analysis
1st Order: [Immediate effect]
- 2nd Order: [Consequence of consequence]
- 3rd Order: [Further consequence]
Long-Term Implications:
- [Good/Bad long-term effects]
Improvements:
- [Changes optimizing for long-term]
Confidence Level: [HIGH/MEDIUM/LOW]
- SWOT Analysis
Description: Strengths, Weaknesses, Opportunities, Threats.
When to Use:
- Strategic planning
- Competitive analysis
- Decision-making
Prompt Template:
You are applying SWOT Analysis to:
{content}
Steps:
- Strengths: What are the advantages?
- Weaknesses: What are the disadvantages?
- Opportunities: What external factors could help?
- Threats: What external factors could harm?
- Synthesize: How to leverage S+O, mitigate W+T?
Output:
SWOT Analysis
Strengths:
- [Internal advantage]
Weaknesses:
- [Internal disadvantage]
Opportunities:
- [External positive factor]
Threats:
- [External negative factor]
Strategy:
- [Leverage strengths/opportunities, mitigate weaknesses/threats]
Confidence Level: [HIGH/MEDIUM/LOW]
- Opportunity Cost Analysis
Description: What are we NOT doing? What are we giving up?
When to Use:
- Prioritization
- Resource allocation
- Trade-off decisions
Prompt Template:
You are applying Opportunity Cost to:
{content}
Steps:
- List what this solution requires (time, money, people)
- List 3 alternative uses for those resources
- For each alternative, estimate value
- Compare: Is this solution the highest-value use?
- If not, propose reallocation
Output:
Opportunity Cost Analysis
Resources Required:
- [Time/Money/People]
Alternatives:
- [Alternative use] - Estimated value: [X]
- [Alternative use] - Estimated value: [Y]
Verdict:
- [Is this the best use? Why/why not?]
Improvements:
- [Reallocations or justifications]
Confidence Level: [HIGH/MEDIUM/LOW]
- Analogical Reasoning
Description: How have others solved similar problems? Learn from analogies.
When to Use:
- Innovation
- Learning from history
- Cross-domain insights
Prompt Template:
You are applying Analogical Reasoning to:
{content}
Steps:
- Identify the core problem (abstract it)
- Find 3 analogous situations (other domains/times)
- How was the analogous problem solved?
- What lessons transfer to this situation?
- Adapt the solution based on analogies
Output:
Analogical Analysis
Core Problem: [Abstract problem statement]
Analogy 1: [Domain/situation]
- How they solved it: [Solution]
- Lesson: [What transfers]
Improvements:
- [Adapted solution from analogies]
Confidence Level: [HIGH/MEDIUM/LOW]
- Constraint Relaxation
Description: What if constraint X didn't exist? How would that change the solution?
When to Use:
- Innovation
- Breaking assumptions
- Finding creative solutions
Prompt Template:
You are applying Constraint Relaxation to:
{content}
Steps:
- List all constraints (explicit and implicit)
- For each constraint, ask: "What if this wasn't true?"
- Design solution without that constraint
- Assess: Can we actually relax this constraint?
- If yes, propose new solution. If no, learn from the thought experiment.
Output:
Constraint Relaxation
Constraint: [Constraint]
- If removed: [Solution without constraint]
- Can we actually relax it? [Yes/No + reasoning]
Improvements:
- [Creative solutions from relaxation]
Confidence Level: [HIGH/MEDIUM/LOW]
- Failure Modes and Effects Analysis (FMEA)
Description: What could go wrong? How likely? How bad? Prioritize fixes.
When to Use:
- Engineering design
- Risk assessment
- Safety-critical systems
Prompt Template:
You are applying FMEA to:
{content}
Steps:
- List all components/steps in the solution
- For each, identify potential failure modes
- Rate each: Severity (1-10), Likelihood (1-10)
- Calculate Risk Priority Number (RPN = Severity × Likelihood)
- Address high-RPN failures first
Output:
FMEA
Failure Mode 1: [What fails]
- Severity: [1-10]
- Likelihood: [1-10]
- RPN: [Product]
- Mitigation: [How to prevent/detect/recover]
Improvements:
- [Prioritized mitigations for high-RPN failures]
Confidence Level: [HIGH/MEDIUM/LOW]
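The RPN arithmetic above is mechanical, so it can be computed rather than estimated by the model. A minimal helper might look like this; the `FailureMode` shape is an assumption for illustration, not part of the skill's API.

```typescript
// Minimal FMEA helper (assumed shape, not the skill's real API):
// rank failure modes by Risk Priority Number = severity x likelihood.
interface FailureMode {
  name: string;
  severity: number;   // 1-10
  likelihood: number; // 1-10
}

function rankByRpn(modes: FailureMode[]): Array<FailureMode & { rpn: number }> {
  return modes
    .map((m) => ({ ...m, rpn: m.severity * m.likelihood }))
    .sort((a, b) => b.rpn - a.rpn); // highest-risk failures first
}
```

Addressing the top of the ranked list first implements step 5 ("address high-RPN failures first").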
- Bias Check
Description: What cognitive biases might affect this? Correct for them.
When to Use:
- Decision-making
- Review processes
- Self-critique
Prompt Template:
You are applying Bias Check to:
{content}
Steps:
- Review common cognitive biases (confirmation, anchoring, sunk cost, availability, etc.)
- For each bias, ask: "Is this affecting my reasoning?"
- Find evidence of bias in the original content
- Correct for identified biases
- Re-evaluate the solution bias-free
Output:
Bias Check
Bias Detected: [Bias name]
- Evidence: [Where it appears]
- Correction: [Adjusted reasoning]
Improvements:
- [Bias-free solution]
Confidence Level: [HIGH/MEDIUM/LOW]
- Base Rate Thinking
Description: What usually happens in similar situations? Are we being overconfident?
When to Use:
- Estimation
- Risk assessment
- Reality-checking optimism
Prompt Template:
You are applying Base Rate Thinking to:
{content}
Steps:
- Identify the reference class (similar past situations)
- What's the base rate (average outcome for reference class)?
- Why might this case be different?
- Adjust estimates toward base rate (Bayesian update)
- Revise solution with realistic expectations
Output:
Base Rate Analysis
Reference Class: [Similar situations]
- Base Rate: [Typical outcome]
- Our Estimate: [Original estimate]
- Adjusted Estimate: [Reality-checked estimate]
Improvements:
- [More realistic solution]
Confidence Level: [HIGH/MEDIUM/LOW]
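Step 4's adjustment toward the base rate can be sketched as simple shrinkage: a weighted average of the inside-view estimate and the reference-class base rate. The weighting scheme here is an illustrative assumption, not a prescribed formula.

```typescript
// Illustrative shrinkage toward the base rate: a weighted average of our
// case-specific estimate and the reference-class average. evidenceWeight
// expresses how much we trust the inside view (0 = pure base rate,
// 1 = pure inside view). The weighting scheme is an assumption.
function adjustTowardBaseRate(
  ourEstimate: number,
  baseRate: number,
  evidenceWeight: number,
): number {
  return evidenceWeight * ourEstimate + (1 - evidenceWeight) * baseRate;
}
```

For example, a 90% success estimate against a 30% base rate, with modest trust in the inside view (weight 0.25), shrinks to 45%.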
- Steelmanning
Description: What's the strongest version of an opposing view? Address that, not a strawman.
When to Use:
- Proposal review
- Debate preparation
- Intellectual honesty
Prompt Template:
You are applying Steelmanning to:
{content}
Steps:
- Identify the opposing view (or alternative approach)
- Strengthen it: What's the BEST argument against your solution?
- Address the strong version (not a weak strawman)
- If the steelman wins, adopt that approach
- If your solution survives, it's stronger
Output:
Steelman Analysis
Opposing View: [Alternative]
- Strongest Argument: [Best case for alternative]
- Response: [Addressing the strong version]
- Verdict: [Which approach is better?]
Improvements:
- [Refined solution after facing steelman]
Confidence Level: [HIGH/MEDIUM/LOW]
- Time Horizon Shift
Description: How does this look in 1 hour? 1 day? 1 month? 1 year? 5 years?
When to Use:
- Long-term planning
- Trade-off analysis
- Strategy evaluation
Prompt Template:
You are applying Time Horizon Shift to:
{content}
Steps:
- Evaluate solution at 1 hour: [Impact]
- Evaluate at 1 day: [Impact]
- Evaluate at 1 month: [Impact]
- Evaluate at 1 year: [Impact]
- Evaluate at 5 years: [Impact]
- Identify time-horizon-dependent trade-offs
- Optimize for the right time horizon
Output:
Time Horizon Analysis
1 Hour: [Short-term effect]
1 Day: [Effect]
1 Month: [Effect]
1 Year: [Effect]
5 Years: [Long-term effect]
Trade-Offs:
- [Short vs long-term conflicts]
Improvements:
- [Optimized for appropriate horizon]
Confidence Level: [HIGH/MEDIUM/LOW]
Usage Patterns
Pattern 1: Single Method (Quick)
Skill({ skill: 'advanced-elicitation', args: 'first-principles' });
Pattern 2: Multiple Methods (Thorough)
Skill({ skill: 'advanced-elicitation', args: 'first-principles,pre-mortem,red-team-blue-team' });
Pattern 3: Auto-Select (Recommended)
Skill({ skill: 'advanced-elicitation', args: 'auto' }); // Automatically picks 2-3 methods based on content analysis
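One plausible way 'auto' mode could pick methods is keyword matching against the content; the rules below are purely illustrative assumptions, since the real selection logic is not documented here.

```typescript
// Hypothetical sketch of 'auto' method selection via keyword rules.
// The rule set is an illustrative assumption, not the skill's real logic.
function autoSelectMethods(content: string): string[] {
  const rules: Array<[RegExp, string]> = [
    [/security|auth|token/i, "red-team-blue-team"],
    [/architecture|design/i, "first-principles"],
    [/launch|deploy|migration/i, "pre-mortem"],
    [/estimate|timeline/i, "base-rate"],
  ];
  return rules
    .filter(([re]) => re.test(content))
    .map(([, method]) => method)
    .slice(0, 3); // cap at 2-3 methods, per the pattern above
}
```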
Integration with spec-critique
Advanced Elicitation can enhance spec-critique:
// After generating spec
Skill({ skill: 'spec-critique', args: 'with-elicitation' }); // Applies elicitation to critique process
Cost Control (per ADR-053)
- Opt-in only: Never applied automatically
- Budget limit: Configurable via ELICITATION_BUDGET_LIMIT
- Cost tracking: Integrates with cost-tracking hook
Config:
features:
  advancedElicitation:
    enabled: true
    costBudget: 10.0              # USD per session
    minConfidence: 0.7            # Skip if confidence high
    maxMethodsPerInvocation: 5    # SEC-AE-001
    maxInvocationsPerSession: 10  # SEC-AE-003
Security Controls
SEC-AE-001: Input Validation
- Method names must match /^[a-z][a-z0-9-]*$/
- Max 5 methods per invocation
- Invalid methods rejected with error
SEC-AE-002: Cost Budget Enforcement
- Check session budget before elicitation
- Track cumulative cost
- Fail gracefully if budget exceeded
SEC-AE-003: Rate Limiting
- Max 10 elicitations per session
- Prevent runaway elicitation loops
- Clear error message on limit
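Taken together, the three controls amount to a pre-invocation guard. A sketch under the limits stated above (the function shape and return type are assumptions, not the skill's real API):

```typescript
// Illustrative guard combining SEC-AE-001..003. Limits mirror the controls
// above; the function shape is an assumption for illustration.
const METHOD_NAME = /^[a-z][a-z0-9-]*$/;

function validateInvocation(
  methods: string[],
  sessionInvocations: number,
  sessionCost: number,
  budgetLimit: number,
): { ok: boolean; reason?: string } {
  // SEC-AE-001: input validation
  if (methods.length > 5) return { ok: false, reason: "SEC-AE-001: max 5 methods per invocation" };
  for (const m of methods) {
    if (!METHOD_NAME.test(m)) return { ok: false, reason: `SEC-AE-001: invalid method '${m}'` };
  }
  // SEC-AE-002: cost budget enforcement
  if (sessionCost >= budgetLimit) return { ok: false, reason: "SEC-AE-002: budget exceeded" };
  // SEC-AE-003: rate limiting
  if (sessionInvocations >= 10) return { ok: false, reason: "SEC-AE-003: session rate limit reached" };
  return { ok: true };
}
```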
Examples
Example 1: Architecture Decision
Before Elicitation:
We should use microservices with 12 services communicating via REST.
After First Principles:
Fundamental truths: Services must communicate, data must be consistent. Challenged assumption: "12 services" - is this the right granularity? Could 6 bounded contexts suffice?
Improvement: Consolidate to 6-8 services by bounded context. Use gRPC internally (40% latency reduction vs REST).
Example 2: Security Review
Before Elicitation:
JWT tokens for authentication across services.
After Red Team/Blue Team:
Red Team Attack: Token theft via XSS, JWT validation on every call (latency). Blue Team Defense: HttpOnly cookies, service mesh mTLS instead of JWT propagation.
Improvement: Use service mesh (Istio) for security instead of JWT propagation.
Example 3: Spec Validation
Before Elicitation:
Feature: User can delete their account.
After Pre-Mortem:
Failure Scenario: 6 months later, GDPR compliance audit fails. Cause: Deletion didn't cascade to all systems (analytics, backups).
Improvement: Add "Data Retention Audit" requirement. Specify cascade delete to all systems within 30 days.
Performance
- Quality Improvement: +30% (measured on critical decisions)
- Cost: 2x LLM usage
- Time: +50% (worth it for important work)
Memory Protocol (MANDATORY)
Before starting:
cat .claude/context/memory/learnings.md
After completing:
- New pattern → .claude/context/memory/learnings.md
- Issue found → .claude/context/memory/issues.md
- Decision made → .claude/context/memory/decisions.md
ASSUME INTERRUPTION: If it's not in memory, it didn't happen.
Iron Laws
- NEVER apply elicitation automatically — it is always opt-in. Never invoke without explicit user request or clear agent intent signal.
- ALWAYS emit a confidence level for every method output — HIGH / MEDIUM / LOW is mandatory. Outputs without calibration are not actionable.
- NEVER exceed 5 methods per invocation (SEC-AE-001) — over-elicitation produces noise, not signal. Select 1-3 most relevant methods.
- ALWAYS check session budget before invoking — fail gracefully with a clear message when ELICITATION_BUDGET_LIMIT is exceeded (SEC-AE-002).
- NEVER treat elicitation as a substitute for evidence — it refines reasoning; it does not produce facts. Always ground conclusions in codebase evidence.
Anti-Patterns
| Anti-Pattern | Why It Fails | Correct Approach |
| --- | --- | --- |
| Auto-applying to every response | 2× cost with no benefit for simple tasks | Opt-in only for important/complex decisions |
| Running all 15 methods at once | Diminishing returns, token explosion | Select 1-3 most relevant methods |
| Skipping confidence rating | Evaluation without calibration is useless | Always emit Confidence Level: HIGH/MEDIUM/LOW |
| Elicitation replaces evidence | Reasoning without facts is speculation | Pair with grounded codebase evidence before eliciting |
| No budget check | Session cost spirals undetected | Always verify ELICITATION_BUDGET_LIMIT before invoking |
| Running after deadline/emergency | High cost with no time to act on improvements | Skip for time-critical fixes; use for strategic work |
Related Skills
- spec-critique: Specification validation (can invoke elicitation)
- security-architect: Security reviews (can use elicitation methods)
- verification-before-completion: Pre-completion checks
Assigned Agents
This skill can be used by:
- planner: For strategic decisions
- architect: For architecture review
- security-architect: For threat modeling
- developer: For complex technical decisions
- pm: For product strategy
Version: 1.0.0
Status: Production
Author: developer agent (Task #6)
Date: 2026-01-28