goals

Optimize prompts via process goals (controllable behavioral instructions) rather than outcome goals (sparse end-result demands). Grounded in a sports-psychology meta-analysis showing that process goals (d=1.36) vastly outperform outcome goals (d=0.09). Use when designing prompts, optimizing LLM steering, implementing CoT/decomposition patterns, or building automatic prompt-optimization pipelines. Instantiates the surrogate-loss paradigm for the discrete prompt space.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "goals" with this command: npx skills add zpankz/mcp-skillset/zpankz-mcp-skillset-goals

Process Goals in Prompt Optimization

Core Principle

Process goals (controllable intermediate actions) provide dense feedback signals; outcome goals (end-result demands) provide sparse, delayed feedback. This asymmetry explains why behavioral prompting dominates direct output demands.

Mechanism: Dense intermediate supervision → stable gradients → reliable optimization
Failure mode: Sparse outcome signal → high variance → reward hacking / hallucination
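The variance asymmetry can be illustrated with a toy Monte Carlo sketch (assumptions mine: each reasoning step is an independent Bernoulli trial with success probability `p_step`; the function names are hypothetical):

```python
import random

def trial_scores(per_step_signal: bool, n_steps: int = 4, p_step: float = 0.8,
                 n_trials: int = 2000, seed: int = 0) -> list[float]:
    """Simulate trial scores under dense vs sparse feedback.

    Sparse outcome signal: observe 1 only if every step succeeds.
    Dense process signal: observe the fraction of steps that succeeded.
    """
    rng = random.Random(seed)
    scores = []
    for _ in range(n_trials):
        steps = [rng.random() < p_step for _ in range(n_steps)]
        if per_step_signal:
            scores.append(sum(steps) / n_steps)        # dense: partial credit
        else:
            scores.append(1.0 if all(steps) else 0.0)  # sparse: all-or-nothing
    return scores

def variance(xs: list[float]) -> float:
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# Under these assumptions the dense signal has far lower variance
# than the sparse all-or-nothing signal.
```

With p_step=0.8 and 4 steps, the sparse signal is a Bernoulli with p≈0.41 (variance ≈0.24), while the dense signal averages four step outcomes (variance ≈0.04): the same underlying process, an order of magnitude less noise per observation.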

Goal Typology

| Type | Effect Size | Prompt Analog | Signal Density | Failure Mode |
| --- | --- | --- | --- | --- |
| Outcome | d=0.09 | "Give the correct answer" | Sparse | Hallucination, reward hacking |
| Performance | d=0.44 | "Achieve high accuracy" | Proxy | Goodhart's Law misalignment |
| Process | d=1.36 | "Think step-by-step" | Dense | Over-specification (rare) |

λ-Instantiations

Chain-of-Thought (CoT)

# Outcome (weak): "What is 247 × 38?"
# Process (strong):
prompt = """
Solve 247 × 38.
Think step-by-step:
1. Break into partial products
2. Show each multiplication
3. Sum the results
4. State final answer
"""

Mechanism: Mandates controllable decomposition → self-supervision at each step → error detection before propagation.

Variants: Zero-shot CoT ("Let's think step by step"), Auto-CoT (automated exemplar generation), Faithful CoT (enforced structure).
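The two CoT flavors above can be sketched as trivial prompt builders (helper names are mine, not from any library):

```python
def zero_shot_cot(question: str, trigger: str = "Let's think step by step.") -> str:
    """Zero-shot CoT: append the trigger phrase to any question."""
    return f"{question}\n\n{trigger}"

def structured_cot(task: str, steps: list[str]) -> str:
    """Explicit process variant: enumerate the required reasoning steps."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    return f"{task}\nThink step-by-step:\n{numbered}"
```

`structured_cot("Solve 247 x 38.", ["Break into partial products", "Sum the results"])` reproduces the process prompt shown above; the zero-shot trigger is the lighter-weight fallback when you cannot enumerate steps in advance.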

Decomposition & Sub-Goals

# Tree-of-Thoughts pattern
decompose = """
Generate 3 possible approaches to this problem.
For each approach:
  - State the sub-goals required
  - Identify potential failure points
  - Estimate confidence
Select the approach with highest expected success.
"""

# ReAct pattern
react = """
Thought: [Analyze current state]
Action: [Select tool/operation]
Observation: [Record result]
... repeat until solved ...
"""

Mechanism: Explicit sub-goal enumeration → local optimization per sub-problem → composition into global solution.
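The final "select the approach with highest expected success" step of the Tree-of-Thoughts pattern reduces to an argmax over model-scored candidates. A minimal sketch (the `Approach` type and confidence values are hypothetical stand-ins for LLM-generated content):

```python
from dataclasses import dataclass

@dataclass
class Approach:
    description: str
    subgoals: list[str]
    confidence: float  # model-estimated success probability in [0, 1]

def select_approach(candidates: list[Approach]) -> Approach:
    """Reduce step of the Tree-of-Thoughts pattern: keep the candidate
    with the highest expected success."""
    return max(candidates, key=lambda a: a.confidence)

best = select_approach([
    Approach("direct long multiplication", ["expand", "sum partials"], 0.9),
    Approach("estimate then adjust", ["round", "correct error"], 0.6),
])
```

In practice the confidence would come from a separate evaluation prompt per branch, but the composition structure is exactly this: local scores per sub-problem, one global reduce.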

Auxiliary Tasks

# Direct (weak): "Write a function to sort this list"
# With auxiliary (strong):
aux_prompt = """
Before writing the function:
1. State the input/output types
2. Identify edge cases (empty, single element, duplicates)
3. Choose algorithm and justify complexity
4. Write the function
5. Trace execution on a small example
"""

Mechanism: Forces deeper processing via intermediate outputs → surfaces implicit assumptions → catches errors early.

Structured Output Constraints

# Unstructured (weak): "Analyze this data"
# Structured (strong):
structured = """
Analyze the data. Output as:

## Summary Statistics
[numerical summary]

## Key Findings
1. [finding with evidence]
2. [finding with evidence]

## Confidence Assessment
- High confidence: [claims]
- Uncertain: [claims requiring verification]
"""

Mechanism: Format constraints → consistent reasoning patterns → verifiable outputs.
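Because the format is fixed, adherence is mechanically checkable. A minimal verifier sketch (header list taken from the prompt above; the function name is mine):

```python
REQUIRED_SECTIONS = [
    "## Summary Statistics",
    "## Key Findings",
    "## Confidence Assessment",
]

def sections_in_order(response: str, sections: list[str]) -> bool:
    """Check that every required header appears, in the specified order."""
    pos = -1
    for header in sections:
        pos = response.find(header, pos + 1)
        if pos < 0:
            return False
    return True
```

This is the "verifiable outputs" half of the mechanism: a failed check can trigger a retry or a repair prompt without any semantic judgment.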

Automatic Optimization Paradigm

Why Process Goals Emerge

Search space: discrete prompt tokens
Objective: maximize downstream performance
Challenge: non-differentiable, combinatorial

Solution: Search for PROCESS INSTRUCTIONS
  → Dense intermediate feedback enables gradient estimation
  → Behavioral prompts transfer across tasks
  → Compositional structure reduces search dimensionality
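The search loop itself is simple once the process-instruction framing is adopted. An APE-style sketch (not the published implementation; `propose` and `score` are hypothetical hooks wrapping your LLM proposer and held-out evaluator):

```python
from typing import Callable

def ape_search(propose: Callable[[int], list[str]],
               score: Callable[[str], float],
               n_candidates: int = 8,
               n_rounds: int = 3) -> str:
    """APE-style instruction search: an LLM proposer generates candidate
    process instructions, each is scored on a held-out set, and the best
    candidate survives across rounds."""
    best, best_score = "", float("-inf")
    for _ in range(n_rounds):
        for candidate in propose(n_candidates):
            s = score(candidate)  # held-out task performance
            if s > best_score:
                best, best_score = candidate, s
    return best
```

Published variants seed later rounds with paraphrases of the current best instruction; the point here is that the objective is evaluated on behavioral instructions, not on final answers.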

Optimization Methods

| Method | Mechanism | Process Goal Discovery |
| --- | --- | --- |
| APE | LLM generates candidates, scores on held-out set | Discovers zero-shot CoT variants |
| OPRO | Meta-prompt + performance trajectory | Evolves process instructions iteratively |
| TextGrad | Gradient through text feedback | Optimizes behavioral descriptions |
| DEEVO | Multi-agent debate | Converges on robust process formulations |

DSPy Integration

import dspy

class ProcessOptimizedModule(dspy.Module):
    """Process goals as learnable signatures."""

    def __init__(self):
        # Process-oriented signatures
        self.decompose = dspy.ChainOfThought("problem -> subgoals, approach")
        self.execute = dspy.ReAct("subgoals, context -> intermediate_results", tools=[])  # supply real tools as needed
        self.synthesize = dspy.Predict("intermediate_results -> final_answer")

    def forward(self, problem):
        # Explicit process steps
        plan = self.decompose(problem=problem)
        results = self.execute(subgoals=plan.subgoals, context=plan.approach)
        return self.synthesize(intermediate_results=results.intermediate_results)

# Optimizer learns to refine process instructions
optimizer = dspy.MIPROv2(metric=task_metric, num_threads=4)
optimized = optimizer.compile(ProcessOptimizedModule(), trainset=examples)

Implementation Patterns

Pattern 1: Process Scaffolding

def scaffold_prompt(task: str, domain: str) -> str:
    """Wrap any task in process scaffolding."""
    return f"""
Task: {task}

Before responding:
1. Identify the key requirements
2. Consider potential approaches
3. Select approach and justify
4. Execute step-by-step
5. Verify output meets requirements

Domain context: {domain}
"""

Pattern 2: Progressive Disclosure

def progressive_process(complexity: int) -> str:
    """Scale process detail to task complexity."""

    if complexity < 2:  # Trivial
        return ""  # No scaffolding needed

    elif complexity < 4:  # Simple
        return "Think through this step by step."

    elif complexity < 8:  # Moderate
        return """
Break this into steps:
1. Understand the problem
2. Plan your approach
3. Execute and verify
"""

    else:  # Complex
        return """
Use systematic analysis:

## Problem Decomposition
- Core requirements:
- Constraints:
- Success criteria:

## Approach Selection
- Option A: [describe] - Pros/Cons
- Option B: [describe] - Pros/Cons
- Selected: [justify]

## Execution Trace
[step-by-step with intermediate validation]

## Verification
- Requirements met: [checklist]
- Confidence: [with justification]
"""

Pattern 3: Self-Critique Integration

critique_process = """
After your initial response:

CRITIQUE:
- What assumptions did I make?
- Where might I be wrong?
- What would a skeptic object to?

REVISION:
- Address each critique
- Strengthen weak points
- Explicitly note remaining uncertainty
"""

Empirical Calibration

| Benchmark | Outcome Prompt | Process Prompt | Relative Δ |
| --- | --- | --- | --- |
| GSM8K | 45% | 68% | +51% |
| Big-Bench Hard | 38% | 57% | +50% |
| MMLU (hard) | 52% | 61% | +17% |
| Coding (HumanEval) | 64% | 78% | +22% |

Efficiency: although process prompts produce longer reasoning traces, they often reduce total tokens across attempts by catching errors early instead of forcing full retries.

Risk Mitigation

| Risk | Mechanism | Mitigation |
| --- | --- | --- |
| Over-specification | Rigid process constrains valid alternatives | Use minimal scaffolding for simple tasks |
| Process drift | Steps followed without achieving the goal | Include explicit goal-checking at each step |
| Verbosity | Excessive intermediate output | Compress after verification; emit a summary |
| False confidence | Structured output mimics rigor | Require explicit uncertainty quantification |

Integration with Holonic Architecture

# Process goals as λ-transforms in skill composition
process_transform = {
    "ρ.parse": "Decompose input into components",
    "ρ.branch": "Generate alternative approaches",
    "ρ.reduce": "Select optimal path with justification",
    "ρ.ground": "Execute with intermediate verification",
    "ρ.emit": "Synthesize with confidence bounds"
}

# Validation: process goal adherence
def validate_process(response: str, expected_steps: list[str]) -> bool:
    """Verify process scaffolding was followed."""
    return all(
        step_marker in response
        for step_marker in expected_steps
    )

Quick Reference

ALWAYS: Behavioral instructions > outcome demands
SCALE: Process detail ∝ task complexity
VERIFY: Include self-check at each process step
OPTIMIZE: Use APE/OPRO to discover domain-specific process formulations

CoT: "Think step by step" → d=1.36 equivalent
Decomposition: Sub-goals + local optimization
Auxiliary: Intermediate outputs force deep processing
Structure: Format constraints enable verification

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals:

- network-meta-analysis-appraisal
- csv-analysis
- data-schema-knowledge-modeling
- research