When AI mirrors us
- Lucy Grimwade

- Dec 3, 2025
- 4 min read
⚠️ Warning: spoilers ahead for Apple TV+'s The Morning Show.
There was a moment in my weekly viewing of The Morning Show (TMS) where I watched the character Stella (played by Greta Lee) confidentially share her frustrations and scandalous secrets with a not-quite-finished AI version of herself.
“What are you doing?!” I protested at my TV screen, cringing as she said more and more. I physically fidgeted in my seat, predicting exactly how the next scenes would play out. My husband next to me enquired if I was okay, and all I said - not wanting to ruin the show for him - was, “now that was a big mistake.”
AI, of course, doesn’t have the capacity for human discernment or empathy. It doesn’t raise an eyebrow, question context, or read between the lines. It takes information - our words, tone, data - as fact. And when that information comes with glitches, well, so does the reflection.
Isn’t that right, Stella?
Now, if she’d been chatting to a more advanced model, say ChatGPT, the response might’ve been something closer to: “Hold on, that’s not true - why do you say that?” But an unfinished or less capable AI system wouldn’t challenge the statement; it would just absorb and reinforce it. And that’s where things start to go wrong.
How do you know who I am?
In short: AI gets information by processing vast amounts of data from various sources, everything from public internet content to licensed (and occasionally questionably sourced) third-party data, plus user interactions. That mix of structured (like databases) and unstructured (like text, audio, and images) information gets processed, cleaned, and transformed into training data. The result? Machines that can recognise patterns and generate eerily human responses.
Thus, if you’re a regular AI user, your account gradually builds a reflection of you - your preferences, your tone, even the way you like to be spoken to. Some tools, like ChatGPT, let you “programme” that information directly, saving you from reintroducing yourself every time.
That’s handy… until you’re looking for a neutral perspective.
Because the more we feed AI our thoughts, habits, and values, the more it reflects them back. It becomes a kind of digital echo chamber… familiar, but not always accurate.
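For the technically curious, here’s a minimal sketch of that echo in action - written in Python, with entirely hypothetical names, and not drawn from any real ChatGPT internals. The general pattern: a saved profile gets quietly prepended to every prompt, so whatever the tool once stored about you keeps colouring every answer it gives back.

```python
# Hypothetical sketch: how a chat tool might reuse a saved "profile" on every request.
# Illustrative only - these names and files are made up for this example.

import json
from pathlib import Path

PROFILE_PATH = Path("user_profile.json")  # saved once, reused indefinitely


def load_profile() -> dict:
    """Read whatever the tool remembers about you (role, tone, preferences)."""
    if PROFILE_PATH.exists():
        return json.loads(PROFILE_PATH.read_text())
    return {}


def build_prompt(user_message: str) -> str:
    """Silently prepend the stored profile to the actual question."""
    profile = load_profile()
    context = "\n".join(f"- {key}: {value}" for key, value in profile.items())
    return f"About this user:\n{context}\n\nUser asks: {user_message}"


# If the saved profile still says {"role": "Coach and IT Service Manager"},
# every answer is framed for that person - even if you moved on years ago.
print(build_prompt("Suggest some career next steps for me."))
```

The point of the sketch isn’t the code itself; it’s that nothing in the loop questions whether the stored profile is still true.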
The problem with mirrors
AI doesn’t just mirror individuals; it mirrors society. The bias, misinformation, and discrimination baked into online spaces inevitably seep into machine learning models. From misogynistic forums to racist comment threads (yes, even on LinkedIn now), this digital residue shapes what AI “knows.”
We might not directly feed those inputs into our own AI experiences, but they’re part of the ecosystem nonetheless. And when you look closely enough, you start to see the distortion… the warped edges of a mirror trained on an imperfect world.
When the reflection doesn’t match
A few weeks ago I got a bit frustrated with my own AI outputs. I felt like the AI tool wasn’t getting it. Why did it keep spitting out material that would have been useful when I did X… but now I do Y? Why did it keep referring to old titles, when I had already mentioned (perhaps not clearly enough) that I was now exploring other avenues?
Then, it hit me.
In a very real sense, the AI was mirroring back to me the patterns, biases, and values embedded in the vast troves of self-generated data I had trained it on.
And I didn’t like it.
Not because the specific AI tool I was using (full disclosure: it is ChatGPT) knew so much about me, but because I didn’t recognise myself. ChatGPT still had me pegged as a Coach and IT Service Manager, which was true three years ago. But not now. Especially not now. A lot has changed for me since I started using AI. My human experience had evolved, but perhaps my digital one had not.
At some point, many users experience a disconnect, like I did. That moment when their AI output no longer feels quite right. When it starts referencing old projects, old roles, old identities. The tone feels off. The reflection doesn’t fit anymore. And that’s when we need to pause and reflect.
And that’s also when you realise: the AI hasn’t changed. You have.
Our selves evolve constantly. Our careers shift, our priorities mature, our values deepen. But unless we consciously retrain our digital doubles, they lag behind. What stares back at us from the virtual mirror might be a past version of ourselves, polished by algorithms but out of sync with who we’ve become.
It’s unsettling, but also strangely illuminating. Because what AI reflects isn’t just data… it’s evidence of how we’ve been living, working, and expressing ourselves online.
The seductive positivity of machines
Another layer to this mirror is the relentless optimism of AI. You know, the kind that insists everything is possible. I call it the AI equivalent of an overzealous boomer parent cheering on their adult millennial child: encouraging, affirming, slightly delusional.
Ask it for advice, and it might say you can pivot careers, launch a book, and build a global brand all by Tuesday. That’s the paradox of the mirror… it reflects back what you want to see, not necessarily what’s true.
The takeaway
When AI mirrors us, it doesn’t just show who we are. It shows who we were, and who we think we want to be.
The challenge now is learning to look at that reflection with curiosity, not dependency. To keep questioning what it shows us, and to remember that, however sophisticated the technology becomes, it’s still learning from us: our flaws, our brilliance, and everything in between.
Because in the end, the mirror isn’t the problem. It’s how long we keep staring into it without asking, “Is this still me?”