output-wrong-task

The model produces correct-looking output that addresses a different task than the one requested — typically a related but distinct interpretation of an ambiguous prompt.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "output-wrong-task" with this command: npx skills add mvogt99/output-wrong-task


The output is well-formed and internally coherent but answers the wrong question. The model resolved an ambiguous prompt toward the most common interpretation rather than the one the user intended, or it latched onto a salient keyword and addressed that instead of the full request. The result can look convincing enough to pass a quick read.

Symptoms

  • The deliverable matches the topic of the request but misses its purpose — e.g., "explain this function" gets documentation instead of the debugging analysis asked for.
  • A code task produces something runnable that solves a simpler or adjacent problem than the one specified.
  • The model answers the first clause of a multi-part question and silently drops the rest.
  • The output would be correct for a different, more common prompt that shares keywords with this one.
  • Asking the model to verify what it just did reveals that it believed it was solving a different problem.

What to do

  • Restate the concrete deliverable, not just the topic. Instead of "help me with authentication," say "write a middleware function that checks for a valid JWT in the Authorization header and returns 401 if missing or invalid — nothing else."
  • Break compound tasks apart. If the prompt has multiple independent requirements, submit them one at a time and verify each before continuing.
  • Anchor the output format explicitly. Specifying the expected structure (function signature, JSON schema, number of steps, file to modify) gives the model less room to substitute a related but wrong output.
  • Before accepting the output, map it back to the original requirement: does this output satisfy the stated goal, not just a plausible-sounding version of it?
  • If the wrong-task output keeps recurring on the same prompt, the prompt likely has a latent ambiguity. Identify which interpretation the model chose and add a clause that explicitly rules it out.
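The first bullet's JWT-middleware phrasing can be made concrete. Below is a minimal Python sketch of that scoped deliverable, assuming a plain headers dict rather than any particular web framework; `verify_token` is a hypothetical stand-in that only checks the three-part JWT shape (a real service would verify the signature with a JWT library).

```python
# Sketch of the scoped deliverable: check for a valid JWT in the
# Authorization header, return 401 if missing or invalid — nothing else.
import base64
import json


def verify_token(token: str) -> bool:
    """Stand-in structural check; a real implementation verifies the signature."""
    parts = token.split(".")
    if len(parts) != 3:
        return False
    try:
        # Header and payload must be base64url-encoded JSON.
        for part in parts[:2]:
            padded = part + "=" * (-len(part) % 4)
            json.loads(base64.urlsafe_b64decode(padded))
        return True
    except Exception:
        return False


def jwt_middleware(headers: dict):
    """Return (status, body) to reject the request, or None to pass it through."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return (401, "missing bearer token")
    token = auth[len("Bearer "):]
    if not verify_token(token):
        return (401, "invalid token")
    return None  # valid token: do nothing else, as the prompt specified
```

Note how the spec's "nothing else" clause shows up directly in the code: the middleware does not refresh tokens, log, or redirect, so a model that adds those behaviors has drifted to an adjacent task.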

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
