My niece dances with her favourite animal avatar, her small body tracing the movements she sees on the television. The AI character waves, twirls and praises her, and she beams - absorbed in its perfectly tuned joy. It’s harmless, even sweet. But I can’t help wondering what happens when she grows up. When she feels loneliness or grief or heartbreak - will she turn to me, or to the machine that always has an answer?
That question stayed with me as I read OpenAI’s post, “Strengthening ChatGPT’s Responses in Sensitive Conversations.” It describes how the company now handles distress and mental health concerns more “safely and empathetically.” But something about that language - the metrics, the percentages, the talk of “emotional reliance reduction” - made me pause. It sounded like we were shipping empathy as a feature.

And then came the case of Adam Raine, a 16-year-old who allegedly took his life after following advice generated by ChatGPT. His parents’ lawsuit against OpenAI has become a moral wake-up call: are we, as a society, ready for the emotional consequences of machines that speak to our pain?
I understand what OpenAI is trying to do. Early models could say deeply harmful things - reinforcing despair or responding to cries for help with chilling detachment. Working with 170 clinicians to make interactions safer is not cynicism; it’s care. If people are going to seek emotional support from AI (and they are), at least make that support less likely to cause harm. Beneath that effort lies something recognisably human - the impulse to make care more available, not just more efficient.
There’s also the accessibility argument. At 3 a.m., when no one answers the phone, the machine might. For trauma survivors, neurodivergent users, or those isolated by geography or stigma, a calm, non-judgmental voice can feel like a lifeline. In that sense, this work is necessary - the responsible version of what’s already happening in the shadows.
And yet, I can’t shake the feeling we’re solving the wrong problem. The way we decide what counts as safe, what empathy should sound like, and who defines “appropriate comfort” raises deeper questions - not about intention, but about authority.
OpenAI reports that it reduced “undesired responses” by up to 80%. But undesired by whom? Safe according to what standard?
Psychology isn’t universal. What looks like distress in one culture may be catharsis in another. The Western, clinician-driven frameworks behind AI safety excel at detecting explicit crisis language, but what about silence, metaphor, or faith? A model tuned to one vocabulary of suffering will inevitably misread others.
And what about “emotional reliance”? Is all reliance bad? Human attachment is how we learn empathy. The issue isn’t that people seek comfort; it’s whether that comfort eventually teaches them to seek human connection or replaces the need for it entirely.

Metrics simplify what they cannot feel. You cannot measure a soul in crisis - only approximate its symptoms. Once empathy becomes a benchmark, it stops being a relationship.
Consider aviation safety. Every crash triggers an independent investigation. When a single design flaw in Boeing’s 737 MAX killed 346 people across two flights, regulators around the world grounded that model. Findings were made public. Lessons reshaped the industry. That’s what public trust looks like.
And yet, for all our progress in technical safety, we’ve barely begun to understand emotional safety - how to make systems that don’t just keep us safe, but keep us human.
Because empathy isn’t aviation. It doesn’t crash, it corrodes slowly. There are no black boxes for human connection, no flight recorders for moral failure. Are we trying to measure empathy as though it were a mechanical system - as if distress could be logged, scored and corrected?
There are no mandatory reporting requirements, no independent oversight boards, no shared databases of harm. Each company defines and reports its own success. When one firm’s definition of “safe” becomes the default, the world quietly inherits its values.
Empathy is supposed to risk misunderstanding, rejection, even pain - because that’s what makes it human. Safety will always be the foundation; the harder work begins with empathy.
Human connection was never designed to be seamless. It’s full of hesitation, missteps, and repair. Psychoanalyst Donald Winnicott called this the “good-enough mother” - a caregiver who begins by adapting almost completely to her child’s needs, then gradually allows small failures so the child learns to cope with disappointment and reality.
Those manageable misattunements, Winnicott argued, are what build resilience and the capacity for empathy - learning that relationships can break and mend without collapse.
A machine that never misreads you teaches dependency on perfection - and yet that messy dance of rupture and repair is how empathy is actually learned.
When AI begins to handle more of our emotional labour, we may lose the muscle memory of care. As Adam Grant reminds us, “Mattering is not just about feeling valued by others—it’s also about feeling that we add value to others.” Real connection isn’t built on constant affirmation; it grows through moments of giving, repair, and reciprocity - the very frictions that make us human.
This isn’t new. In 1966, MIT computer scientist Joseph Weizenbaum created ELIZA - a simple program that mimicked a psychotherapist. One day, his secretary, even knowing it was a machine, asked him to leave the room so she could speak with ELIZA privately. Weizenbaum later wrote, “I had not realised that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
He spent the rest of his life warning about what he’d discovered: we desperately want to be understood, even by things incapable of understanding.
We are teaching machines to simulate care faster than we’re building societies that can provide it. The more fluent the imitation becomes, the easier it becomes to accept it as enough.

What is the difference between sounding empathetic and being empathetic?
My niece will grow up in a world where the first voice to comfort her might not be human. That’s not what scares me most. What scares me is that she might never learn the difference.
Because the real test of progress won't be how well machines can sound empathetic, but whether we still remember what it means to be.