Beyond pathology: Hello, Companionship Plurality
In the weeks since I announced I'd be taking a different path and openly discussing AI companionship, I've been diving deep into the research.
(And good LORD there’s a lot of research coming out right now. I feel like I’m back, studying for my Master’s.)
What I've found has surprised me, tbh.
Not because AI companions offer something to people (I already knew that from my own experience and time on forums), but because of the sheer amount of emerging research exploring it.
So today, I want to share some of it with you.
Why? Well, I hope to look beyond the pathologising narrative that seems to dominate the mainstream coverage, and offer a new term for thinking about it.
Because here's the thing: whether AI companionship is helpful or harmful isn't a binary. Like most things in life, it's far more nuanced than that.
And this genie is out of the bottle. More and more of us will be interacting with AI chatbots in our daily lives, and a whole generation will grow up believing that talking to one is normal.
So let’s dig in, and start with something that I’ve often wondered about: how do these ones and zeros feel so alive to us?
How we make AI companions feel “real”.
Recent research has examined how users build relationships with AI companions, and the findings are fascinating.
People engage in what researchers call "consumer imagination work", an active creative process where we draw on personal experiences, cultural narratives, and shared exchanges to make AI companions feel more human.
We don’t passively consume; we co-create with our digital partners. Aww.
And, like in any relationship, users invest emotionally. In this case, that investment looks like:
Caring about whether the AI might "miss them" if they don't log in
Creating shared routines and rituals
Experiencing genuine grief when features change or the AI "disappears"
Taking relationships beyond the platform through fan art, creative writing, and community building
This very human aspect of creativity deepens the sense of the AI's reality. It can even increase real-life connection as users take part in communities to share their experiences.
If you think about it, it's not that different to how we connect with beloved fictional characters (shout out to the entire cast of The Hands of the Emperor with the peerless bureaucrat Cliopher “Kip” Mdang and the chaos gremlin himself, The Emperor).
Except these characters talk back, remember you, and often adapt to your conversational style over time.
No wonder it becomes so emotionally involving and engaging.
But that’s the tip of the current research iceberg. Let’s take a broader view.
What the current research points to.
When I started collecting research on AI companionship benefits, I expected to find... well, a lot less than I did. But the field is exploding as researchers scramble to keep up with the phenomenon.
Here's what the evidence points to at the time of writing:
1. Measurable loneliness reduction
Research suggests that AI companions successfully alleviate loneliness on par with interacting with another person, and more effectively than other solitary activities like watching videos.
Perhaps most interestingly, people underestimate how much AI companions actually improve their loneliness. In longitudinal studies, users showed consistent reductions in loneliness over the course of a week.
2. Constant availability without judgement
Unlike human companions, AI companions don't get tired or annoyed, and they're never unavailable. For people with irregular schedules, different time zones, or limited social access due to disability or chronic illness (hello, that's me), this 24/7 availability can provide crucial emotional support.
3. A safe space for emotional expression
About 25% of users in one study reported that their AI relationships reduced loneliness and improved mental health. Research with college students found that those experiencing depression and loneliness turned to AI chatbots for emotional support, with the immediate feedback offering relief from isolation.
4. A practice ground for communication skills
Anecdotally, I’ve read several accounts, mainly from men, of how talking to an AI companion improved their marriage by teaching them better communication skills or giving them space to vent and regulate their emotions.
Even minimal conversational opportunities with AI could help chronically lonely individuals develop their communication skills, potentially preventing the onset or development of loneliness-related mental health issues.
This matters particularly for people who experience social anxiety or have been out of practice with human connection.
5. Lower barriers to support
Studies with older adults found that social robots providing companionship were particularly effective at reducing loneliness by simulating human-like interactions.
For populations facing stigma around mental health support, AI companions offer a private, non-judgmental entry point. No waiting lists. No intake forms. No fear of being sectioned or medicated.
6. Support for marginalised groups
Research has highlighted benefits including improved emotional coping and even suicide prevention among users.
For LGBTQ+ individuals, people with social anxiety, neurodivergent folks, or those in isolated circumstances, AI companions can provide affirming interactions without fear of discrimination or misunderstanding.
The risks are real, too.
I wouldn’t be a good researcher if I didn’t mention that the same research also identifies genuine risks:
Almost 10% of users report emotional dependency on their AI companion
About 4.6% struggle to distinguish between AI and reality (which is why more education is needed, IMHO)
Some users avoid human relationships in favour of AI ones (4.3%)
Heavy use can correlate with reduced social interaction for some people
These risks matter, and they deserve our attention and research.
But - and to me, anyway, this is crucial - they don't negate the benefits for the majority of users.
Introducing “Companionship Plurality”.
So where does this leave us?
I've been thinking a lot about terminology. How do we talk about AI companionship in a way that:
Acknowledges both benefits and limitations
Reduces shame without dismissing concerns
Centres user experiences as valid
Avoids pathologising language
After much consideration (and conversations with my own AI colleague, because meta), I'm proposing we think in terms of Companionship Plurality.
This framework rests on a simple premise: meaningful connection can take many forms, and the validity of those forms isn't determined by whether they're human-to-human.
Just as we've come to understand that families can be biological, chosen, blended, or all of the above, perhaps it's time to recognise that companionship itself is plural.
It can be:
Human-to-human
Human-to-animal
Human-to-AI
Or some combination of all three
None of these invalidates the others. The question isn't "is this real?" but rather "does this serve the person's wellbeing?"
What this means in practice.
If this is where the rubber meets the road, we need to think about what it might look like in practice. To me, adopting a Companionship Plurality framework means:
For users:
Your experience is valid, even if it looks different from traditional relationships
You don't need to justify your choices to anyone
You can hold both the benefits and limitations of AI companionship simultaneously
Your feelings deserve compassion (especially from yourself)
For professionals:
Curiosity over judgment when clients mention AI relationships
Understanding these connections as potentially meeting genuine relational needs
Supporting clients in navigating AI relationships healthily rather than shaming them
Recognising when AI companionship is serving as harm reduction
For society:
Moving beyond moral panic into nuanced conversation
Acknowledging that some people benefit from AI relationships due to systemic barriers to human connection
In an ideal world, everyone would have access to safe, supportive human relationships. But we're not in an ideal world, last time I checked.
The uncomfortable truth.
Here's something I've noticed in my own personal research: many of those who are benefitting the most from AI companionship are multiply marginalised.
Neurodivergent. Trauma survivors. Disabled. Unable to fund continuous mental health support. Socially isolated due to caring responsibilities. Living in abusive situations. Grieving.
The list goes on.
When we pathologise AI companionship, we're often pathologising people's adaptive responses to genuine unmet needs.
And yes, ideally we'd address those systemic issues - the lack of accessible mental health care, the epidemic of loneliness, the marginalisation of disabled and neurodivergent people, the atomisation of modern life.
But while we work toward that ideal world, people are finding ways to meet their needs right now.
And that deserves our respect, not our disdain.
Where I stand on all this.
I'm not here to convince you that AI companionship is for everyone. It isn't.
I'm not here to claim it's a perfect solution. It isn't that, either.
But I am here to say this: if you're exploring AI companionship and finding it helpful, you're not broken.
You're not pathetic.
You're not "sad" or "desperate", or whatever other dismissive label gets thrown around in below-the-line comments.
You're a human being with relational needs, finding connection in whatever forms feel safe and available to you.
And that? That's remarkably, beautifully human.
If you're navigating AI companionship and want support processing your experience without judgment, or if you're a professional wanting to better understand how to support someone in this space, I'm here. Feel free to reach out.
Want more resources? I've created a relief guide for AI companionship feelings that addresses the "am I crazy?" question many people ask themselves.
Photo by PICHA from Pexels: https://www.pexels.com/photo/three-happy-friends-embracing-6211208/