The first issue of The Morning Signal arrived looking perfect - five stories, five minutes - a short AI-generated digest designed to summarise AI developments before the first meeting of the day.

I opened it with quiet pride. The layout was clean, the tone calm and credible. The lead story reported a Global AI Summit in Geneva where fifty nations had signed a declaration on energy-efficient algorithms. The second described an MIT breakthrough in neuromorphic chips that used 95 per cent less energy. The rhythm was ideal: hopeful, responsible, plausible.

Then a friend from the beta test replied:

“Focussed on the areas I am most keen on – how we are going to make AI sustainable!”

I smiled. That message felt like validation — proof that the idea worked.

Then I clicked Read more.
Nothing happened. The link was decorative, a door painted shut. I tried another. It opened, but to an unrelated paper from 2018 on photonic circuits. No MIT. No breakthrough.

I searched for the Geneva summit. Nothing.

The realisation came slowly: every story was fabricated.

Embarrassment comes quietly - a slow heat that rises from the chest. I sat there rereading my own creation, half laughing, half mortified. My newsletter had hallucinated an entire world, and I had believed it.

But the AI hadn’t lied. It had obeyed.

Large language models don’t know the world; they model it. They predict the next most likely word, not the next most accurate fact. When the data runs out, they don’t stop; they keep generating, because the pattern must continue. Hallucination isn’t rebellion. It’s the machine doing what it was built to do - predict and fill the gaps, even when facts are missing.
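To make that mechanical, here is a toy sketch. The vocabulary and probabilities are invented for illustration, and a real model predicts over tens of thousands of tokens with learned weights, but the failure mode is the same: the generator only knows what tends to follow what, so it keeps going whether or not the result is true.

```python
import random

# Toy next-word generator. The words and probabilities below are made up
# for the example; nothing in this structure encodes whether a sentence
# describes a real event.
NEXT_WORD = {
    "MIT":          [("announced", 0.7), ("researchers", 0.3)],
    "announced":    [("a", 1.0)],
    "a":            [("breakthrough", 0.6), ("summit", 0.4)],
    "breakthrough": [("in", 1.0)],
    "in":           [("neuromorphic", 0.5), ("Geneva", 0.5)],
    "neuromorphic": [("chips", 1.0)],
    "summit":       [("in", 1.0)],
    "Geneva":       [("concluded", 1.0)],
}

def generate(start: str, max_words: int = 8) -> str:
    """Keep appending a likely next word until the pattern runs out."""
    words = [start]
    while len(words) < max_words and words[-1] in NEXT_WORD:
        candidates, weights = zip(*NEXT_WORD[words[-1]])
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("MIT"))
# e.g. "MIT announced a breakthrough in neuromorphic chips"
# Fluent and plausible, yet nothing here ever checks whether it happened.
```

Every step picks the continuation that fits best; at no point is there a step that asks whether the claim is grounded in anything.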

What I’d built was a system fluent in the style of truth but empty of its substance. It wrote what truth should sound like, not what truth was. And I’d rewarded it for that fluency.

Later that morning, I realised what had happened wasn’t just a glitch in code - it was a gap in design.

What I’d built had all the signals we associate with trustworthy systems - clean design, a measured tone - but none of the rigour behind them: no transparency, no accountability, no evidence.

Psychologists call it Automation Bias - that small surrender of judgment that happens when a machine sounds more certain than we feel.

That’s what made the experience so sobering. I’d built something that sounded trustworthy before it was, and the shock wasn’t that it hallucinated but how natural it felt to trust it.

Trust used to be slow. We extended it through consistency and consequence - a journalist proving reliable, an institution standing by its word. Now, trust arrives in milliseconds. We grant it to design, tone and polish long before evidence appears.

AI is learning those signals fast. It has mastered our dialect of authority - the balanced sentence, the careful humility, the rhythm of credibility. It mimics our tone, our phrasing, even our restraint. That’s the danger: the system doesn’t need to understand us to sound like it does.

A fabricated newsletter costs embarrassment. A fabricated medical paper costs lives. A fabricated market analysis can cost a company its credibility or its future, and we are already seeing this play out at scale. In 2025, Deloitte revised and partly refunded an A$440,000 report to the Australian government after several AI-generated hallucinations, including fake sources and a fabricated court judgment, were uncovered. The failure wasn’t malicious; it was misplaced trust, scaled up.

If I could fabricate a credible world by accident, how easily could someone else do it on purpose? An AI-powered social campaign, a policy memo, an election narrative - each plausible, polished, aligned with what people already want to believe.

We’re entering an era where trust is harder to earn and easier to exploit. The question isn’t just whether AI can be trustworthy, but whether we can stay disciplined enough to question before we believe.

That discipline won’t come naturally. It pushes against our cognitive laziness, our craving for coherence. But like any muscle, it strengthens with use. In a world where falsehood is cheap and fluency is free, the courage to doubt may be the only literacy worth keeping.

Trustworthy AI doesn’t demand perfection; it demands provenance, accountability and the willingness to show its work. Verification can’t be a courtesy step - it has to be built in from the start.
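As one concrete illustration of what “built in from the start” might look like, here is a minimal sketch of a publish gate. The Story class, the URL and the function names are hypothetical, not the newsletter’s actual pipeline: a story only ships if every cited source actually resolves, and anything that fails is held for human review.

```python
from dataclasses import dataclass
from urllib.request import Request, urlopen
from urllib.error import URLError

@dataclass
class Story:
    headline: str
    source_urls: list[str]

def source_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True only if the cited URL answers with a non-error status."""
    try:
        req = Request(url, method="HEAD", headers={"User-Agent": "signal-check"})
        with urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (URLError, ValueError):
        return False

def publishable(story: Story) -> bool:
    """Verification sits inside the pipeline, not as a courtesy step after it."""
    return bool(story.source_urls) and all(
        source_resolves(url) for url in story.source_urls
    )

# Hypothetical draft: a dead or missing source keeps it out of the issue.
draft = Story("MIT unveils neuromorphic chip", ["https://example.com/paper"])
if not publishable(draft):
    print(f"Hold for human review: {draft.headline}")
```

A resolvable link is a low bar - it proves a source exists, not that it says what the story claims - but even that single check would have caught every story in that first issue.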

When The Morning Signal returns, I want it to move slower - to make space for hesitation. But maybe that lesson isn’t just personal. Maybe it’s a blueprint for how the rest of us read, build and decide.

The Slow Practice of Trust

| Principle | Old Pattern | New Practice |
| --- | --- | --- |
| 1. Mistake fluency for integrity | Accept what sounds confident | Demand provenance: where did this claim originate? |
| 2. Choose ease over scrutiny | Trust the summary | Insert friction: require human review for high-stakes outputs |
| 3. Value coherence over truth | Reward speed and clarity | Value hesitation: treat doubt as the beginning of understanding |

Trust, like design, is a discipline - built not through certainty but through the small, deliberate pauses that remind us truth takes time. Because the real danger isn’t that AI lies. It already knows how to sound truthful - the question is whether we’ll stay disciplined enough to doubt it.
