Prompt
The Interface Between Human Intention and Machine Output
A prompt is the only point of contact between what you want and what an AI produces. Everything the model does — every word it writes, every analysis it generates, every decision it makes about what to include and what to omit — begins with the instruction you gave it.
This sounds obvious. Its implications are not.
If the output is not what you wanted, the instinct is to conclude that the model is limited. Sometimes this is correct. More often, the model was capable of producing what you wanted and the prompt did not successfully communicate what that was. The same model, given a different instruction for the same task, produces a substantially better result. Not because anything changed in the model, but because the instruction was clearer about what good actually means in this context.
Prompting is the skill that closes the gap between what AI can do and what you actually get from it. This skill teaches prompting systematically.
Why Most Prompts Underperform
The default way most people prompt an AI model is to ask it a question or give it a task in roughly the same way they would ask a knowledgeable colleague. This produces results that are roughly as good as asking a knowledgeable colleague who has no context about you, your situation, your standards, your audience, or what you have already tried.
Which is to say: often useful, rarely excellent, and frequently requiring significant additional work to be actually usable.
The problem is not that the model lacks the capability to do better. It is that the model cannot apply the capability it has to a situation it does not understand. Context that feels obvious to you — who this is for, what it needs to accomplish, what constraints it needs to respect, what a good result looks like — is not obvious to the model unless you provide it.
The skill teaches you to provide it efficiently.
The Anatomy of an Effective Prompt
An effective prompt contains more than a task. It contains the context that allows the model to perform the task well.
The role or persona that frames how the model should approach the task. The purpose that explains why the output matters and what it needs to accomplish. The audience that determines the appropriate level of expertise, tone, and assumed background knowledge. The constraints that define what the output must and must not include. The format that specifies how the output should be structured. The examples that demonstrate what good looks like more precisely than any description can.
Not every prompt needs every element. A simple factual question needs almost none of them. A complex creative or analytical task benefits from all of them. The skill helps you identify which elements matter for the specific task you are working on and how to include them efficiently.
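The elements above can be sketched as a simple assembly step. The helper below is illustrative only — the element names and the formatting conventions are assumptions, not a prescribed template — but it shows how optional elements combine into one instruction, with the task always stated and everything else included only when it matters.

```python
# A minimal sketch of combining prompt elements into a single instruction.
# The function name, element names, and layout are illustrative conventions.

def build_prompt(task, role=None, purpose=None, audience=None,
                 constraints=None, output_format=None, examples=None):
    """Combine whichever prompt elements apply; only the task is required."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if purpose:
        parts.append(f"Purpose: {purpose}")
    if audience:
        parts.append(f"Audience: {audience}")
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if output_format:
        parts.append(f"Format: {output_format}")
    if examples:
        parts.append("Examples of what good looks like:\n" + "\n".join(examples))
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Summarize the attached incident report.",
    role="a site reliability engineer writing for executives",
    audience="non-technical leadership",
    constraints=["under 150 words", "no internal jargon"],
    output_format="three short paragraphs",
)
print(prompt)
```

A simple factual question would call this with only the task; a complex analytical request would fill in most of the elements.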
Prompting for Different Tasks
The principles of effective prompting are consistent across tasks. The application of those principles looks different depending on what you are trying to accomplish.
Writing and editing tasks benefit from specific guidance about voice, audience, and the difference between what you want the output to say and what you want it to accomplish. Analysis tasks benefit from explicit framing of the question being answered and the criteria for a good answer. Research tasks benefit from scope constraints that prevent the model from producing a survey when you need a specific answer. Creative tasks benefit from examples that demonstrate the aesthetic you are going for rather than descriptions of it. Code tasks benefit from explicit specification of the environment, the constraints, and the edge cases that matter.
The skill builds prompting approaches for the specific tasks you do most often, calibrated to how you work and what you need from the output.
When the Output Is Not What You Wanted
A prompt that does not produce what you wanted is not a failure. It is a diagnostic. Something in the instruction was ambiguous, missing, or inconsistent with what the model needed to produce the result you were expecting.
The skill helps you diagnose what went wrong. The output that is technically correct but misses the point — usually a sign that the purpose was not made clear. The output that is in the right direction but at the wrong level — usually a sign that the audience or expertise level was not specified. The output that is good in isolation but does not fit the broader context — usually a sign that the context was not provided. The output that is confidently wrong — usually a sign that the task required knowledge the model does not have and the prompt did not account for this.
Each diagnosis leads to a specific revision. The skill makes this diagnostic process fast enough that iteration becomes a natural part of prompting rather than a frustrating detour.
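The symptom-to-fix pairings above amount to a small lookup table. The sketch below restates them in that form; the symptom labels and suggested revisions are shorthand for the diagnoses in this section, not an exhaustive taxonomy.

```python
# The diagnostic pairings from this section, as a simple lookup.
# Symptom labels and fixes are illustrative shorthand, not a complete list.

DIAGNOSTICS = {
    "correct but misses the point": "state the purpose: why the output matters",
    "right direction, wrong level": "specify the audience and assumed expertise",
    "good in isolation, does not fit": "supply the surrounding context it must fit",
    "confidently wrong": "provide the knowledge the model lacks, or narrow the task",
}

def suggest_revision(symptom):
    """Map an observed failure mode to the prompt element most likely missing."""
    return DIAGNOSTICS.get(symptom, "re-read the prompt for ambiguity or gaps")

print(suggest_revision("right direction, wrong level"))
```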
Building a Prompt Library
The prompts that work are worth keeping. A prompt that reliably produces excellent output for a task you do regularly is an asset — a piece of intellectual infrastructure that makes every future instance of that task faster and more consistent.
The skill helps you build and maintain a personal prompt library. The structure that makes prompts findable when you need them. The documentation that captures not just the prompt but the context in which it works and the variations that handle different versions of the task. The regular review that keeps the library current as models evolve and your needs change.
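One way to make prompts findable and documented is a small file-backed store. The sketch below assumes a JSON file and a schema of prompt, context, and variations — all illustrative choices, not a required format — but it captures the idea that each entry records not just the prompt text but when it works.

```python
# A minimal sketch of a personal prompt library stored as JSON.
# The file location and the entry schema are assumed conventions.

import json
from pathlib import Path

LIBRARY_PATH = Path("prompt_library.json")  # hypothetical location

def save_prompt(name, prompt, context="", variations=None):
    """Store a prompt along with the context it works in and known variations."""
    library = json.loads(LIBRARY_PATH.read_text()) if LIBRARY_PATH.exists() else {}
    library[name] = {
        "prompt": prompt,
        "context": context,          # when and for whom this prompt works
        "variations": variations or [],
    }
    LIBRARY_PATH.write_text(json.dumps(library, indent=2))

def find_prompt(name):
    """Retrieve a saved entry by name, or None if it does not exist."""
    if not LIBRARY_PATH.exists():
        return None
    return json.loads(LIBRARY_PATH.read_text()).get(name)

save_prompt(
    "weekly-status-summary",
    "Summarize this week's updates for leadership in three bullet points.",
    context="Works best when the raw updates are pasted below the prompt.",
)
entry = find_prompt("weekly-status-summary")
print(entry["prompt"])
```

The regular review described above maps naturally onto such a store: walk the entries, retest each prompt against current models, and update the context notes as behavior changes.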
A prompt library built over months of careful work compounds in value the way any well-maintained system does. The investment in building it pays dividends every time you reach for a prompt that would have taken twenty minutes to write from scratch and takes twenty seconds to find and use.