AI Hallucinates a Case – Humans Hallucinate a History

The real danger isn’t AI’s fabrications. It’s the ones we’ve institutionalized.

Two articles caught my attention this week—not because of what they said about AI, but because of what they revealed about us.

The first was the familiar panic: AI is hallucinating and we can’t stop it. Lawyers are citing phantom precedent, researchers are footnoting fiction, and the future we are prompt engineering blurs the line between fact and fantasy. The message was clear:

“Beware the Jabberwock, my son!
The jaws that bite, the claws that catch!
Beware the Jubjub bird, and shun
The frumious Bandersnatch!”

The second article, loftier in tone, appeared in Psychology Today and declared that Knowledge is Dead. It riffed on Nietzsche, swapped out theology for epistemology, and announced a shift from “retrieval” to “resonance”—a move from fixed truths to contextual meaning, supposedly heralded by the rise of AI.

Both articles, for all their hand-wringing and philosophizing about AI, are better examples of the shortcomings and dangers of HI (human intelligence).

Let’s first be clear about one thing:

All AI output is a hallucination.

The term describes the perception of something that isn’t there. And AI, for now, is the proverbial brain in a vat—generating associations without perception, context without contact. It cannot verify. It cannot know. It can only simulate coherence.

Let’s be clear about another thing.

AI’s hallucinations are, for now, nothing more than curiosities—momentary slips of language rather than structural threats.

The real danger isn’t that a chatbot fabricates a citation. It’s that AI agents, trained on the same probabilistic models, are being deployed to act—not just speak.

And that risk isn’t theoretical. AI-powered agents are already being developed and sold as business automation.

The greater the stakes at the end of the causal chain an agent can activate, the more dangerous the consequences. A hallucinated court case is embarrassing. A hallucinated missile strike is catastrophic.

LLM-powered chat is not the enemy.

Recently, I used AI to review my battery of daily medications for contraindications that might have been missed. What I discovered amounted to a breach of the Hippocratic oath.

A decade-long metabolic collapse, quietly engineered by a rotating cast of specialists—each trained, each competent, each utterly siloed.

My cardiologist treated my cholesterol based on the thresholds of the time. Those statins lowered my LDL—but also degraded my insulin sensitivity. That, combined with cortisol-induced mitochondrial dysfunction, sent me to an endocrinologist, who layered on another battery of treatments. Back and forth I went, in a closed circuit of escalation, until I found myself in full-blown metabolic syndrome—more pills, more doses, more unintended sabotage.

Western medicine, for all its achievements, often treats symptoms in isolation. Its knowledge is domain-bound, its logic procedural. AI was able to look across the silos—and spot interactions hidden in plain sight. Not from insight, but from the data buried in those folded-up medication disclosures: the ones tucked inside every box, written in fine print, and largely forgotten by the very people trained to prescribe them. I had to fight for the adjustments it suggested; HI wasn’t convinced. But once implemented, my blood sugar levels normalized for the first time in years.

Could AI have harmed me? Certainly.

Had it been trained on biased data or corporate guidelines, it could have prescribed with lethal fluency. Were this AI powering an android nurse making the rounds, then yes: “Danger, Will Robinson!” But HI hasn’t fared that much better.

We have been living through HI hallucinations in slow motion.

A few to jog your memory:

  • The Food Pyramid, shaped by agricultural lobbies, told us to eat 6–11 servings of bread and pasta daily.
  • Eggs were dangerous—until they were essential.
  • Fats were to be feared—until the ketogenic reversal.
  • Red meat would kill you—until processed carbs took the blame.
  • Margarine was modern medicine—until trans fats were banned.

This isn’t a new pattern. It’s the old one, sped up.

Which brings us back to Knowledge is Dead.

That piece argued that we’ve moved from retrieval to resonance—as if knowledge used to be “found” and is now “manufactured”.

Knowledge was never retrieved like fruit from a tree.

Before it was codified and printed and cited, it was argued, observed, debated. It passed through coherence and consensus before being published. And even then, what we “retrieved” was not truth—it was what had survived scrutiny, suppression, and selection.

Today’s LLMs are built on those very books.

They are trained on human output.

And that output is growing daily—online, in journals, in subreddits, in blogs, in notes like this one.

We are still retrieving from the corpus.

We’ve just changed the interface.

You used to find it in a library.

Now you find it in a prompt box.

Must we shun the Frumious Bandersnatch?

Only if it stops listening.

LLMs are only as persuasive as their coverage is complete. If they exclude dissent, they lose trust. If they hallucinate, they get caught. And if they claim omniscience, they must acknowledge contradiction—something human institutions have long refused to do.

AI has yet to hallucinate on the scale of colonialism, which exterminated and enslaved across three continents under the guise of manifest destiny.

Nor has it matched the Zionist hallucination—“a land without a people for a people without a land”—used to justify the dispossession of millions.

And when you call AI on its hallucination?

It admits it.

Try getting your doctor to reverse your prescription. Try getting settlers to leave the land. Try convincing a civilization to undo its founding myth.

The danger isn’t AI’s errors.

It’s our refusal to interrogate the slow, sacred, consensus-bound hallucinations that shaped our past.

The emperor was never clothed. He just moved too slowly for anyone to notice.

A hallucination becomes dangerous the moment the dreamer can act. Be it neuron or neural net, flesh or firmware, soft tissue or rare earth metal—the risk lies not in the illusion, but in how far it travels once believed. The task ahead is not just to correct the error, but to contain its reach, and to train ourselves to see the illusion for what it is before we mistake it for the Promised Land.