The Prompt Psychologist

The 4 Types of Bad Prompts (And How to Fix Them)


Not all bad prompts are created equal. Some are just lazy. Others are... ambitious disasters.

Here's a taxonomy of terrible prompting, so you can stop doing it.

1. The Vague Void

Example: "Write something about marketing."

What happens: AI gives you 500 words of nothing. It's grammatically correct. It says absolutely zilch.

Why it fails: No context, no audience, no goal. The AI's just guessing.

The fix: "Write a 300-word LinkedIn post about email marketing for small business owners. Tone: practical, not preachy. Include one counterintuitive tip."

Notice what changed? You gave it:

- A format (LinkedIn post)
- A length (300 words)
- A topic (email marketing)
- An audience (small business owners)
- A tone (practical, not preachy)
- A specific requirement (one counterintuitive tip)

That's six pieces of information the AI can actually use. The original prompt had zero.

2. The Kitchen Sink

Example: "Write a blog post about productivity and time management and goal-setting and also make it funny but also professional and include statistics but keep it under 500 words and add some quotes and make sure it's SEO-optimized and..."

What happens: AI has a stroke. Output is unfocused mush.

Why it fails: Too many competing instructions. The AI doesn't know what to prioritize.

The fix: Pick ONE goal. If you want funny, commit to funny. If you want data-heavy, lean into that. Stop trying to be everything.

Here's the thing about constraints: they're supposed to narrow focus, not expand it. When you pile on ten different requirements, you're not being thorough — you're being confused.

Better approach: "Write a 400-word blog post about one productivity technique that actually works. Tone: skeptical but helpful, like you're talking to someone who's tried everything and failed. Include one specific example of how to implement it."

One topic. One tone. One requirement. That's a prompt the AI can execute.

3. The Assumed Mind-Reader

Example: "Make it better."

What happens: Better how? More formal? Shorter? Funnier? AI takes a wild guess. Usually wrong.

Why it fails: "Better" is subjective. You know what you want. The AI doesn't.

The fix: "Make this more conversational — replace jargon with simple language and add a personal anecdote."

Or: "Cut this by 30%, keeping only the most surprising points."

Or: "Rewrite this for someone who's skeptical and has heard it all before."

See the pattern? You're telling the AI what "better" actually means in this context.

4. The Zero-Context Wonder

Example: "Rewrite this." [pastes 3 paragraphs]

What happens: AI rewrites it... generically. You wanted punchy and got verbose.

Why it fails: No instructions = AI defaults to "safe."

The fix: "Rewrite this for a skeptical audience. Cut the fluff, add one surprising stat, keep it under 150 words."

Context matters. The same content rewritten for a CEO vs. a college student vs. a skeptical journalist should look completely different. But the AI can't know which one you want unless you tell it.

Pattern Recognition

Notice the theme? Specificity wins.

Every fix involves telling the AI:

- What to write (topic and format)
- Who it's for (audience)
- How it should sound (tone)
- How long it should be (length)
- What it must include (one specific requirement)
Do this, and your prompts go from "meh" to "oh damn, that's actually good."
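If you prompt through code rather than a chat box, the same principle applies: make every specific an explicit field. Here's a minimal sketch of that idea — the helper function and field names are my own invention, not a real library:

```python
# Hypothetical helper: assemble a specific prompt from the pieces
# a vague prompt usually leaves out.

def build_prompt(task, topic, audience, tone, length, requirement):
    """Turn the specificity checklist into one concrete prompt string."""
    return (
        f"{task} about {topic} for {audience}. "
        f"Length: {length}. Tone: {tone}. "
        f"Requirement: {requirement}."
    )

# The "Vague Void" fix, expressed as named fields — nothing left implicit.
prompt = build_prompt(
    task="Write a LinkedIn post",
    topic="email marketing",
    audience="small business owners",
    tone="practical, not preachy",
    length="300 words",
    requirement="include one counterintuitive tip",
)
print(prompt)
```

The point isn't the function — it's that a template with required fields makes it impossible to send "write something about marketing" by accident.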

The uncomfortable truth? Most bad AI output isn't the AI's fault. It's yours. You gave it vague instructions and got vague results. Garbage in, garbage out.

But here's the good news: fixing it is simple. Just pretend you're briefing a very literal intern who will do exactly what you say, nothing more, nothing less.

Because that's basically what AI is.