Essay · AI Stack

When the Model Stops Impressing

The honeymoon phase with AI ends. What matters is what you build after the wow fades.

Every team I work with goes through the same arc. Week one: astonishment. The model writes code, drafts strategy docs, explains legacy systems. Week six: disappointment. The same model hallucinates, misses context, produces confident nonsense. The question isn't whether this happens. It's what you do when it does.


The disillusionment curve

Gartner calls it the trough of disillusionment. I call it Tuesday. The pattern is predictable:

  1. Peak of inflated expectations: "This will replace half our team."
  2. Trough of disillusionment: "This thing can't even remember what we discussed."
  3. Slope of enlightenment: "Oh, it's a tool. We need to learn how to use it."
  4. Plateau of productivity: "Here's where it actually helps, and here's where it doesn't."

Most teams quit somewhere in the trough. The ones who break through treat the disappointment as data, not defeat.

What the wow phase hides

Early demos succeed because they're low-stakes. You ask the model to explain recursion or write a haiku. No one gets fired if the haiku is mediocre.

Real work is different. Real work has:

  • Context the model doesn't have (org politics, historical decisions, undocumented constraints)
  • Stakes the model doesn't feel (production outages, customer trust, career risk)
  • Feedback loops the model doesn't see (what happened after you shipped its suggestion)

The wow phase trains you to be a passive consumer. The productive phase requires you to become an active collaborator.

Signs you're in the trough

  • You spend more time correcting output than creating it yourself.
  • You've stopped using the tool for anything important.
  • You catch yourself saying "AI can't do X" based on one bad experience.
  • The team jokes about the model like it's a liability.

None of these are terminal. All of them are signals that you haven't found the right wedge yet.

Finding the right wedge

The teams that escape the trough find one narrow use case where AI consistently delivers value. Not "write all our code" but "generate test stubs for this specific pattern." Not "draft all our docs" but "summarize this meeting transcript into action items."

The wedge has three properties:

  1. Bounded scope: Clear inputs, clear outputs, limited ambiguity.
  2. Fast feedback: You know within minutes if the output is good.
  3. Low blast radius: A bad output costs time, not trust.

Once you have one wedge working, you've built the muscle to find the next.

From consumer to collaborator

The shift happens when you stop asking "What can the model do?" and start asking "What can I do with the model?"

Consumer mindset → Collaborator mindset:

  • Waits for perfect output → Iterates toward good enough
  • Blames the model for failures → Adjusts inputs and context
  • Uses default prompts → Builds reusable context systems
  • Abandons after bad results → Debugs and documents patterns
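A "reusable context system" can start as nothing more than a shared prompt template that carries the constraints the model can't see. A hedged sketch; the project details and function name are illustrative, not from any real codebase:

```python
# Illustrative: the team maintains this block once, instead of each
# engineer retyping (or forgetting) constraints in every prompt.
PROJECT_CONTEXT = """\
Codebase: payments service (Python 3.11, no external HTTP in unit tests).
Convention: test files mirror module paths; use pytest, not unittest.
Known constraint: the legacy ledger module must not be modified.
"""

def build_prompt(task: str) -> str:
    """Prepend the shared context so every request starts from the
    same ground truth, not whatever one engineer remembers."""
    return f"{PROJECT_CONTEXT}\nTask: {task}\n"

print(build_prompt("Generate a test stub for refund_order()."))
```

The design choice is the collaborator mindset in miniature: when output is bad, you edit the context block and everyone's next prompt improves, instead of blaming the model and starting over.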

What to do this week

  1. Audit your last 10 AI interactions. Which ones delivered value? What did they have in common?
  2. Pick one task you've given up on. Reframe it with tighter constraints.
  3. Document one failure mode. What context was missing? How would you provide it next time?
  4. Share the pattern with your team. Disillusionment is easier to escape together.

The plateau is worth reaching

Teams on the plateau don't talk about AI being amazing or terrible. They talk about which workflows it accelerates and which it doesn't. They've stopped being impressed and started being productive.

The wow fades. The work remains. The question is whether you build the systems to make the work better.