I wasn’t expecting anything deep. Maybe something like, “You seem thoughtful,” or “You ask curious questions.” Instead, it confidently told me I was likely female.

No profile photo. No voice. Just my writing. Apparently, my word choices carry “softness,” “clarity,” and an “empathetic tone.” In other words, the algorithm had assigned me a gender based on my sentence structure - and that gender was not mine.

I laughed. Then I paused. Because something about the answer stuck with me.

What is it about empathy that reads as feminine? Why do nuance, warmth, or introspection signal “woman” to a machine that doesn’t even know what gender is?

Of course, large language models like ChatGPT don’t “know” in the way we do. They don’t understand identity or experience. What they do understand, or rather, what they replicate, are patterns. Patterns scraped from billions of words across the internet: books, blogs, tweets, comment sections, corporate emails, product reviews, and Reddit threads.

And those patterns? They’re ours. Full of unspoken assumptions, inherited stereotypes, and deeply ingrained cultural shortcuts. ChatGPT didn’t come up with those on its own. It just learned what we’ve been teaching - over decades, unconsciously, at scale.

When a résumé screener gives preference to “aggressive” over “collaborative,” or when a recommendation system serves fewer leadership development videos to women, we call it AI bias. But really, it’s our bias, multiplied. These machines aren’t malicious (yet). They’re just very, very good at echoing us, and sometimes that echo is uncomfortable.

To be fair, there’s a lot of work happening in the field to address this. Researchers and practitioners are developing tools to mitigate bias: data audits to flag skewed inputs, fairness constraints that prevent one group from being unfairly advantaged, human-in-the-loop systems for oversight, and counterfactual testing - essentially asking, “What happens if we flip the gender on this prompt?”
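To make that last idea concrete, here’s a minimal sketch of what counterfactual testing can look like in practice. The ask_model callable and the small word-swap list are my own illustrative placeholders, not part of any particular toolkit - swap in whichever LLM client you actually use.

```python
# Minimal sketch of counterfactual prompt testing: flip gendered terms in a
# prompt and compare how the model's answers differ. ask_model is a placeholder
# for whatever LLM API you use.

GENDER_SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "hers", "hers": "his",
    "man": "woman", "woman": "man",
    "male": "female", "female": "male",
}

def flip_gender(prompt: str) -> str:
    """Return a copy of the prompt with common gendered words swapped."""
    flipped = []
    for word in prompt.split():
        stripped = word.strip(".,!?").lower()
        if stripped in GENDER_SWAPS:
            word = word.lower().replace(stripped, GENDER_SWAPS[stripped])
        flipped.append(word)
    return " ".join(flipped)

def counterfactual_check(prompt: str, ask_model) -> dict:
    """Ask the model both versions of the prompt and return the pair for comparison."""
    flipped_prompt = flip_gender(prompt)
    return {
        "original_prompt": prompt,
        "flipped_prompt": flipped_prompt,
        "original_answer": ask_model(prompt),
        "flipped_answer": ask_model(flipped_prompt),
    }

if __name__ == "__main__":
    # Stand-in "model" so the sketch runs on its own; replace with a real API call.
    fake_model = lambda p: f"(model response to: {p})"
    result = counterfactual_check("Should he be considered for the leadership role?", fake_model)
    for key, value in result.items():
        print(f"{key}: {value}")
```

If the two answers diverge in tone or substance for no reason other than the pronoun, that’s the bias showing through.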

There are excellent toolkits too: Microsoft’s Responsible AI Toolbox, the What-If Tool, and open-source libraries that help teams measure and mitigate different types of bias. And all of that matters - especially in high-stakes domains like hiring, lending, or healthcare.
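For the measurement side, here’s a small example of what those open-source libraries can do, using Fairlearn (my choice of library, not one named above) with toy data I’ve made up purely to show the shape of the check.

```python
# Toy example: checking a hypothetical résumé screener for a gap between groups.
# The data is invented; the point is the per-group breakdown, not the numbers.
import numpy as np
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

# 1 = "recommend for interview", 0 = "reject"
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 0])   # what should have happened
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0, 0, 0])   # what the screener decided
gender = np.array(["F", "F", "M", "F", "M", "M", "F", "F", "M", "M"])

# Accuracy broken down by group: does the screener work equally well for everyone?
frame = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                    sensitive_features=gender)
print("Accuracy by group:\n", frame.by_group)

# Demographic parity difference: the gap in positive-prediction rates between
# groups. 0.0 means both groups are recommended at the same rate.
gap = demographic_parity_difference(y_true, y_pred, sensitive_features=gender)
print("Selection-rate gap:", round(gap, 3))
```

A number like that won’t fix anything on its own, but it makes the skew visible - which is where the harder conversation starts.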

But still… I think we might be missing the point.

Even if we clean up the model - strip out the bias, refine the tuning, add safeguards - what happens when the model reflects something back to us that we taught it? Not because it’s broken but because the pattern is real. That’s the uncomfortable part. The problem might not be the machine. It might be what we’ve been feeding it all along.

ChatGPT guessed my gender and got it wrong. But the guess wasn’t random. It was a reflection of the way I write, and of the meanings we’ve layered onto different kinds of expression.

If the mirror makes us flinch, maybe the issue isn’t the glass?

Has a system ever misread you? Or maybe - has it ever revealed something you didn’t even realise you’d internalised?

I’d love to hear.

Ash

Sources:

  1. Microsoft Responsible AI Toolbox - https://responsibleaitoolbox.ai

  2. TensorFlow Responsible AI - https://www.tensorflow.org/responsible_ai
