Lesson 7: Build In Constraints That Reduce Drift
Lesson overview
Open-ended requests give AI too much latitude — and AI uses that latitude to produce the most statistically probable response, which is usually the most average, most padded, most generically structured version of what you asked for. This lesson teaches how to use constraints deliberately to narrow that latitude and produce more focused, more relevant output.
What this means
A constraint is anything that limits what AI should or should not do with your request.
Length limits, format rules, topic exclusions, priority rankings, vocabulary restrictions, structural requirements — these are all constraints. They are not just stylistic preferences. They are guardrails that push the output toward something more specific and more useful.
The instinct to leave things open is understandable. It feels like more options mean better results. In practice, fewer well-placed constraints usually produce better output than unlimited latitude.
Why it matters
When a request is too open-ended, AI fills the space with whatever is most common, most expected, and most generically appropriate. That produces output that is safe, complete, and dull.
Constraints are what force AI to make choices — to prioritize some things over others, to cut rather than pad, to be specific rather than broad. The output reads less like a template and more like the product of real decisions.
Constraints also prevent drift: the tendency for AI output to wander off-topic, over-explain things you already know, or include sections that serve no one in your situation.
What most people do wrong
Leaving scope wide open
"Write about the benefits of remote work" could become a 100-word summary, a 1500-word guide, a persuasive essay, or a comparison table. Without scope, AI picks something in the middle that serves no one in particular.
Not specifying what to leave out
What to exclude is as important as what to include. "Do not cover implementation details — focus only on business outcomes" is a useful constraint that changes the content substantially.
Treating length as optional
Length is one of the easiest constraints to set and one of the most effective. "Under 250 words" and "exactly three bullet points" force AI to make decisions about what matters most, which improves output quality.
Ignoring priority
"Cover the three most important risks" is more useful than "cover all risks" if you want focused output. Telling AI to prioritize changes what gets emphasis and what gets left out.
Types of useful constraints
Scope constraints
Define the edges of the topic. What is in, what is out.
- "Focus only on the onboarding experience, not the product features."
- "Do not discuss pricing or competitors."
Length constraints
Set hard or soft limits on how much AI should write.
- "Under 300 words."
- "Maximum three paragraphs."
- "One sentence per section."
Format constraints
Define how the output should be structured.
- "No bullet points. Prose only."
- "Three sections with headers."
- "A single clear recommendation followed by supporting rationale."
Priority constraints
Tell AI what matters most.
- "Prioritize clarity over completeness."
- "Focus on the most practical options, not the most comprehensive list."
- "The most important point should come first."
Exclusion constraints
Name what to avoid.
- "Do not include generic advice."
- "Avoid technical jargon — this audience is non-technical."
- "Do not recommend any tools by name."
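If you build prompts programmatically, the five constraint types above map naturally onto a small template helper. The sketch below is illustrative only — the function name, parameter names, and label wording are assumptions, not a standard API; the point is that each constraint type becomes an explicit, named slot rather than something you remember to mention.

```python
# Sketch: assembling a prompt from the five constraint types in this lesson.
# All names and label wording here are illustrative choices, not a standard.

def build_prompt(task, scope=None, length=None, fmt=None,
                 priority=None, exclude=None):
    """Compose a task plus any stated constraints into one prompt string."""
    parts = [task]
    if scope:
        parts.append(f"Scope: {scope}")
    if length:
        parts.append(f"Length: {length}")
    if fmt:
        parts.append(f"Format: {fmt}")
    if priority:
        parts.append(f"Priority: {priority}")
    if exclude:
        parts.append(f"Exclude: {exclude}")
    return "\n".join(parts)

prompt = build_prompt(
    "Summarize the operational risks of moving to microservices.",
    scope="Risks that affect timelines and team capacity only.",
    length="Under 250 words.",
    fmt="Three risks, each with a one-sentence description and mitigation.",
    priority="Clarity over completeness.",
    exclude="Algorithmic performance; tool recommendations by name.",
)
print(prompt)
```

A helper like this also makes omissions visible: if you call it with only `task`, the empty slots are a reminder of latitude you are leaving open.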
Weak example
Write about the risks of moving to a microservices architecture.
What happens: AI produces a thorough, balanced, padded article covering every possible risk from every possible angle at a moderate level of depth. It is fine. It is also probably not what you needed.
Strong example
Write a brief internal summary of the three biggest operational risks of moving to a microservices architecture for a team that is currently running a monolith. Audience: engineering managers making a go/no-go decision. Focus only on risks that affect timelines and team capacity — not algorithmic performance. Format: three risks, each with a one-sentence description and a one-sentence mitigation. Under 250 words.
What is better: Scope is limited to operational risks. Audience is named. Decision context is given. Format is specific. Length is capped. Topic exclusions are set. The output is shaped by all of these.
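Some of these constraints are checkable by machine, which means you can verify a draft instead of eyeballing it. This is a minimal sketch under assumptions: the 250-word cap and three-risk format come from the strong example above, but the checker itself and its heading heuristic (numbered lines like "1.") are illustrative, not a standard technique.

```python
# Sketch: mechanically checking a draft against two constraints from the
# strong example above (word cap, three numbered risks). The heuristic of
# counting lines that start with "1."-"9." is an illustrative assumption.

def violates_constraints(text, max_words=250, required_sections=3):
    """Return a list of constraint violations; empty means the draft complies."""
    problems = []
    if len(text.split()) > max_words:
        problems.append(f"over {max_words} words")
    numbered = {f"{i}." for i in range(1, 10)}
    sections = sum(1 for line in text.splitlines()
                   if line.strip()[:2] in numbered)
    if sections != required_sections:
        problems.append(f"expected {required_sections} risks, found {sections}")
    return problems

draft = "1. Risk one...\n2. Risk two...\n3. Risk three..."
print(violates_constraints(draft))  # → []
```

Even a crude check like this closes the loop: a constraint you can state precisely enough to verify is also a constraint precise enough to shape the output.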
Practical exercise
Take this over-broad prompt and rewrite it using three to five constraints from the list above. Be specific about what each constraint is meant to do.
Over-broad prompt:
Write something about how our team can use AI to improve our workflow.
Constraints to add:
1. A scope constraint — what specific part of the workflow?
2. A length constraint — how long should it be?
3. A format constraint — how should it be organized?
4. An exclusion constraint — what should it avoid covering?
5. A priority constraint — what should it emphasize most?
Write the constrained version and compare the output to what the original would produce.
Reflection prompt
- For your most common AI requests, what scope or topic limits would immediately make the output more useful?
- Are there things AI commonly includes in your output that you always delete? What constraint would prevent that?
- How often do you set length limits? What happens to the output when you do vs. when you do not?
Key takeaway
Constraints are not restrictions — they are clarifications. They tell AI what matters, what does not, and what the output is actually supposed to be. The right constraints do not limit the output. They focus it.