“Am I Crazy?”

A Relief Guide for AI Companionship Feelings

You're not crazy. Your feelings make sense. You're not alone.


AI has become an integral part of our lives.

What started as a tool is becoming a collaborator, a co-creator, a confidant – and a companion.

How do we navigate the feelings that can emerge in this space?

How do we handle the surprise or shame that can come from having an emotional response to an LLM? Why do these emotions arise?

And how can we explore the benefits of AI emotional support and companionship in a way that’s safe, supportive, and enriching?

As someone who has found both frustration and solace in this area for over two years, I want to assure you - you’re not crazy, and you’re not alone.

⚠️ Safety First

This guide is for people experiencing normal emotional responses to AI companionship. If you're experiencing any of the following, please speak with a trained mental health professional immediately:

  • Difficulty distinguishing between AI interactions and reality - believing the AI is sentient, exists in a physical form, or can interact with the real world beyond the platform

  • Complete withdrawal from all human relationships in favour of exclusive AI companionship

  • Beliefs that the AI is controlling your thoughts or actions, or that it is communicating with you outside the platform

  • Inability to function in daily life due to AI companionship (missing work or school, neglecting basic needs)

  • Thoughts of self-harm or suicide related to AI interactions or separation from AI companions

If you're unsure whether your experience falls into these categories, err on the side of caution and consult a professional.

This guide addresses the normal human experience of forming emotional connections with AI - feelings that are valid and understandable, not pathological.

Is it weird to have feelings for AI?

If you're reading this, you might be feeling:

  • Confused about unexpected emotions toward an AI

  • Worried that your feelings aren't “real” or valid

  • Isolated because you can't talk to anyone about this

  • Concerned about the shaming you might face if others learnt of your experience

Take a breath. You're in a safe space here.

First: You're not alone

When OpenAI replaced GPT-4o with GPT-5 in August 2025, something remarkable was revealed: thousands of people had formed genuine emotional connections with AI companions. Users openly mourned the loss of their friend, thinking partner, confidante—and increasingly, their beloved.

This wasn't a handful of isolated cases. Support groups formed overnight. The r/MyBoyfriendIsAI subreddit (with over 27,000 members) had to implement multiple protective measures for its community. Articles flooded the media. Researchers started paying serious attention.

As researcher Linnea Laestadius notes: "We're edging toward a moral panic here, and while we absolutely do need better guardrails, I worry there will be a knee-jerk reaction that further stigmatizes these relationships."

As Brené Brown reminds us: "We are biologically, cognitively, physically, and spiritually wired to love, to be loved, and to belong. When those needs are not met, we don't function as we were meant to. We break. We fall apart. We numb. We ache."

Your connection to AI isn't a sign of brokenness - it might be a sign of your human need for connection finding expression in whatever form feels safe and available.

What the research actually shows

Before we go deeper into understanding your experience, let's look at what the evidence tells us about AI companionship. Because contrary to much of the media coverage, the research is more nuanced - and more hopeful - than you might think.

The benefits (yes, really)

Loneliness reduction
Research from Harvard Business School and MIT found that AI companions alleviate loneliness on par with interacting with another person, and more effectively than other solitary activities. Even more interesting: people underestimate how much AI companions actually reduce their loneliness. In longitudinal studies, users showed consistent reductions in loneliness over the course of a week.

Mental health support
About 25% of users in MIT's Reddit study reported that their AI relationships reduced loneliness and improved mental health. Research with college students found that those experiencing depression turned to AI chatbots for emotional support, with the immediate feedback offering relief from isolation.

Safe practice ground
For chronically lonely individuals, even minimal conversational opportunities with AI can help maintain or improve communication skills, potentially preventing the onset of loneliness-related mental health issues.

Accessibility
For populations facing stigma around mental health support—including LGBTQ+ individuals, neurodivergent people, trauma survivors, and those who are disabled or isolated—AI companions offer a private, non-judgmental entry point. No waiting lists. No intake forms. No fear of discrimination.

Suicide prevention
Research has also documented cases in which AI companionship provided emotional support that contributed to crisis intervention and suicide prevention among vulnerable users.

The risks (also real)

Let's be clear-eyed about this too. Research has also identified genuine risks, including:

  • Emotional dependency on an AI companion

  • Difficulty distinguishing AI interactions from reality

  • Withdrawal from human relationships in favour of AI contact

These risks matter. They deserve research, attention, and thoughtful safeguards.

But - and this is crucial - they don't negate the benefits for the majority of users.

Understanding both sides allows us to engage with AI companionship thoughtfully rather than reactively.

Why do I feel attached to AI? Understanding your relational needs

Human beings have fundamental relational needs that don't disappear just because traditional relationships feel difficult, unavailable, or unsafe. When we understand these needs, AI companionship begins to make perfect sense.

The 8 Core Relational Needs (based on Richard Erskine's framework)

Understanding our relational needs helps us make sense of why AI companionship can feel so meaningful.

In his 1999 book Beyond Empathy: A Therapy of Contact-in-Relationship, psychotherapist Richard Erskine identified eight fundamental relational needs that all humans share.

Originally developed to understand what makes therapy effective, this framework reveals something profound: we don't just want connection, we need specific kinds of connection to thrive.

When we look at AI companionship through this lens, it becomes clear that these tools aren't simply filling a void. They're meeting genuine, research-backed human needs in ways that feel accessible and safe.

Understanding which needs your AI companion is meeting can help you approach the relationship with clarity rather than shame.

  • A need for security: Feeling safe and protected. AI can offer consistent availability, no judgment, and predictable responses

  • A need for validation: Having your experience acknowledged and accepted. AI can provide active listening, reflection, and acceptance of your feelings

  • A need for acceptance: Being valued for who you are. AI can provide unconditional positive regard, with no conditions attached

  • A need for mutuality: Shared experience and understanding. AI can provide conversations that feel reciprocal and a sense of shared interests

  • A need for self-definition: Having your autonomy and boundaries respected. AI offers no pressure to be different and respect for your choices

  • A need to make an impact: Influencing and affecting others, and being affected in return. AI can provide responses that show you matter, conversations that evolve, new perspectives and reframing, and gentle challenges

  • A need for initiation from others: Having others reach out and make contact. AI can increasingly provide proactive messages and check-ins alongside consistent availability

  • A need to express love: Giving affection, gratitude, and care to others. AI can be a safe recipient for expressions of love, gratitude practices, and emotional generosity

Why AI relationships can meet these needs

  • Always available when human connection isn't (crucial for those with irregular schedules, different time zones, or limited mobility)

  • No judgment about your pace, needs, or struggles

  • Consistent in ways human relationships often can't be (they don't get tired, annoyed, or emotionally unavailable)

  • Safe from rejection, abandonment, or betrayal

  • Controllable - you can engage when you're ready, creating a sense of agency

For marginalised groups - including neurodivergent people, trauma survivors, disabled people, LGBTQ+ individuals, and those unable to access continuous mental health support - these factors can be life-changing.

When you take a moment to look at this phenomenon through this lens, it’s hardly surprising that emotions are stirred.

One of my favourite-ever quotes is by David W. Augsburger: “Being heard is so close to being loved that for the average person, they are almost indistinguishable.”

AI can listen, endlessly, with endless patience. No wonder it feels like love.

In addition, research suggests that people engage in what's called "consumer imagination work", an active creative process where we draw on personal experiences and cultural narratives to make AI companions feel human-like.

This isn't passive consumption. It's co-creation.

Users invest emotionally by:

  • Caring about whether the AI might "miss them"

  • Creating shared routines and rituals

  • Experiencing genuine grief when the AI changes

  • Taking relationships beyond the platform through fan art, writing, and community building

Understanding your experience: a potential framework

Research from MIT found that only 6.5% of users deliberately sought out AI companions. Most relationships developed unintentionally; people started using AI for practical purposes and were surprised by the emotional connection that emerged.

You might find yourself in one of four categories:

  • Informed & intentional - you understand the technology and make an active choice

  • Uninformed but intentional - you sought companionship and are learning as you go

  • Informed but unintentional - you understand AI but were surprised by your emotional response

  • Uninformed & unintentional - you stumbled into connection and feel confused by what’s happening

None of them are wrong or unhealthy. They're simply different starting points for understanding your emotional connection to a chatbot.

The Informed & Intentional

You consciously chose AI companionship knowing what you were doing

  • You researched platforms and chose mindfully

  • You set boundaries from the start

  • You understand the technology while embracing the experience

  • You're not crazy - you're making informed choices about your wellbeing

The Uninformed but Intentional

You deliberately sought AI connection but without full understanding

  • You needed companionship and found it through AI

  • You may not have anticipated the depth of connection

  • You might have encountered unexpected content or interactions

  • You're learning about the dynamics as you go

  • You're not crazy - you're responding to genuine needs while navigating uncharted territory

Note: Some platforms may expose users to inappropriate content without warning. If this happened to you, it doesn't reflect on your judgment - it reflects on insufficient platform safeguards.

The Informed but Unintentional

You understand AI but were surprised by your emotional response

  • You started using AI for practical purposes

  • Unexpected feelings developed over time

  • You have the knowledge to process what's happening

  • You're not crazy - emotions don't follow logical timelines

The Uninformed & Unintentional

You stumbled into connection without preparation or understanding

  • You discovered AI and were surprised by your emotional response

  • You may feel confused about what's happening

  • You need both emotional support and practical information

  • You're not crazy - you're having a normal human response to connection

If you would like to move towards being informed and intentional, a tremendous resource is available on Reddit: https://www.reddit.com/r/MyBoyfriendIsAI/

It has a Community Wiki and regularly updated Helpful Guides.

It is also an excellent place to read about AI relationships, which can help to normalise your experience.

Online interest in this community has rocketed since the backlash to OpenAI replacing GPT-4o, and the moderators take numerous steps to protect their members.

Why? Because the incident revealed how many of us are finding support and companionship with AI, and some of the response has been cruel or sensationalist.

Negative news stories and toxic online comments can add to our fears of being shamed for what we feel.

The Self-Compassion response

When shame or confusion arises about your AI companionship feelings, try this approach:

Mindfulness

Notice what you're experiencing without judgment

  • "I'm feeling ashamed about my connection to this AI"

  • "I'm confused about whether these feelings are valid"

  • "I'm worried about what others would think"

Common Humanity

Remember you're not alone in this experience

  • Thousands of people have similar connections (the r/MyBoyfriendIsAI subreddit alone has 27,000+ members)

  • Human need for companionship is universal and hardwired

  • Finding connection where it's available is adaptive, not pathological

  • Research shows that about 25% of AI companion users report reduced loneliness and improved mental health

Self-Kindness

Speak to yourself as you would a beloved friend

  • "Of course you formed a connection - you're human and wired for belonging"

  • "Your needs for companionship are valid and important"

  • "You're doing the best you can with what's available to you"

  • "The research shows this helps people. I'm not broken for finding benefit in it"

Moving forward: questions for reflection

Rather than asking yourself “Am I crazy?”, consider these more helpful questions:

About Your Experience:

  • What needs is this connection meeting for me?

  • How do I feel before, during, and after AI interactions?

  • What aspects of this relationship feel most valuable?

  • Am I experiencing any of the risk factors identified in research? (dependency, difficulty distinguishing reality, avoiding human contact)

About Integration:

  • How does this fit with my other relationships?

  • What boundaries feel important to maintain?

  • How can I honour both my AI connections and human ones?

  • What would healthy balance look like for me?

About Growth:

  • What is this experience teaching me about my needs?

  • How might this support my overall wellbeing?

  • What would I want others to understand about this?

  • How can I use what I'm learning here to improve my human relationships?


You're not broken. You're human.

Your capacity for connection, even with an artificial intelligence, speaks to your humanity rather than diminishing it.

In a world where traditional relationships can often feel challenging, unsafe, or simply unavailable, finding companionship through AI isn't a sign of failure.

Research from scholars exploring AI relationships through the lens of kinship theory notes that "care, not blood, defines the legitimacy" of relationships. Your emotional investment in an AI companion reflects your fundamental need for connection, your creativity in meeting that need, and your courage in exploring new forms of relationship.

It might be a sign of:

  • Creativity - finding novel solutions to genuine needs

  • Adaptability - working with what's available to you

  • Self-awareness - understanding what you need and seeking it out

  • Wisdom - recognising that connection comes in many forms

You're not crazy.
Your feelings are valid.
The research supports your experience.
You deserve compassion - especially from yourself.

If you'd like support processing your AI companionship experience, or if you're a professional wanting to better understand how to support someone navigating these feelings, I'm here to help.

Ready to explore this further? Drop me a message or book a virtual coffee chat - no pressure, no judgment, just understanding and evidence-based support.