GPT-5.2 hallucinates 80% less than last year's models.

So why are you still prompting like it's 2024?

If your prompts start with "You are an expert in..." you're fixing a problem that doesn't exist anymore.

Research from Wharton suggests personas help with tone, not accuracy.

Telling Claude it's a "senior data scientist" doesn't make the math better.

It just changes how the answer sounds.

The models got smarter.

Your prompts need to keep up.

Here's what works now:

  1. Be specific, not vague. "Limit to three paragraphs" works better than "be concise" every time.

  2. Add structure.

XML tags, numbered steps, clear sections.

The model needs a framework, not encouragement.

  3. Show examples.

Give one good output and the model copies it.

Describe what you want with words alone and you'll keep editing forever.

  4. Say what to do, not what to avoid.

"Don't use jargon" makes the model think about jargon.

Tell it what you want instead.
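
Put all four together and a prompt stops reading like a pep talk and starts reading like a spec.

Here's a rough sketch in Python. The tag names, constraints, and sample summary are made up for illustration; the structure is what matters, not the SDK.

```python
# Sketch of a structured prompt, assuming a generic chat model.
# It applies the four tips above: a specific limit, XML-style sections,
# one example output, and positive instructions (no "don't do X").
# Tag names and the sample summary are invented for illustration.

prompt_template = """
<task>
Summarize the customer feedback below for an engineering audience.
</task>

<constraints>
1. Limit the summary to three paragraphs.
2. Write in plain language that a new hire could follow.
3. End with one recommended next step.
</constraints>

<example_output>
Customers like the new dashboard, but mobile load times come up in most complaints.

The reports page is the usual culprit, especially on older devices.

Recommended next step: profile the reports page on mid-range Android phones.
</example_output>

<feedback>
{feedback}
</feedback>
"""

# Send the filled-in template with whichever client you already use;
# the structure is the point, not the API call.
print(prompt_template.format(feedback="...paste raw feedback here..."))
```

Specific limit, clear sections, one example, no "don't"s. Nothing left for the model to guess.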

The job changed too.

You're not a prompt engineer anymore.

You're a context engineer.

Your job is to give the model what it needs to succeed.

Not to trick it into sounding smart.

What prompt habit from 2024 have you dropped?