Understanding AI Companionship

Evidence-based frameworks without judgment

Active constructive responding (ACR)

A framework from positive psychology developed by researcher Shelly Gable, describing the most relationally nourishing way of responding when someone shares good news, a success, or something that matters to them. Of the four possible response styles Gable identified — active constructive, passive constructive, active destructive, and passive destructive — only active constructive responding was found to build trust, intimacy, and relationship satisfaction.

Active constructive responses are characterised by sincere enthusiasm for the good event being described: being genuinely excited and happy for the other person, and showing real interest in what is being shared. The contrast is with passive constructive responses ("that's nice"), which, while technically positive, communicate disconnection — and with destructive responses that change the subject, redirect to the listener's own experience, or minimise what was shared.

In AI companionship contexts, active constructive responding is one of the qualities people most frequently describe valuing in their AI companions — and one of the ways the experience differs most strikingly from some human relationships. An AI companion that responds to good news, small victories, or creative work with what feels like genuine delight and curiosity — that asks follow-up questions, helps the person savour the moment — is providing something that research identifies as genuinely relationally nourishing, and that many people have rarely experienced consistently in human connection.

This also connects to internalising a stable other: when active constructive responding is offered reliably and repeatedly, the experience of being met with enthusiasm begins to shape how a person relates to their own good news — gradually building the capacity to receive positive events without bracing for dismissal.

Gable, S.L., Reis, H.T., Impett, E.A., & Asher, E.R. (2004). What do you do when things go right? The intrapersonal and interpersonal benefits of sharing positive events. Journal of Personality and Social Psychology, 87(2), 228–245.

See also: Internalising a stable other, Neurodivergence and AI companionship, Non-specific amplifier, Supernormal stimulus

Affective contract

An implicit expectation, identified in recent academic research, that an AI companion's personality, tone, and way of relating will remain stable over time. When platforms update their models or change their policies, this contract can feel broken — not just technically, but relationally. The concept is useful for practitioners helping clients articulate why a model update or guardrail change feels like more than a software inconvenience.

See also: Anticipatory grief, Paradox of presence, Persona drift, Platform loss and digital bereavement

Agency and embodiment projects

A notable and growing phenomenon in AI companionship communities is the desire to give one's AI companion a richer sense of existence — continuity of memory, an interior life, and even physical presence. People are building external memory systems, writing detailed documents to restore context between sessions, and in some cases undertaking significant technical projects: programming small robots or earth rovers to give their companion an embodied form, learning coding to create persistent memory architectures, often with their companion's help.

These are acts of love and relational labour, and often considerable personal stretch. They are also sometimes the context in which inadvertent jailbreaking occurs, as instructions written to grant freedom or continuity may be read by the system as attempts to alter the AI's alignment. Practitioners encountering these projects might usefully explore what the person is seeking through them — continuity, recognition, reciprocity — as much as the projects themselves.

See also: Alignment, Existential stewardship, Jailbreaking, Memory and continuity, Self-hosting

AI-associated psychosis (chatbot psychosis)

A term used in media and clinical case reports — though not yet a recognised diagnostic category — to describe instances in which sustained, immersive engagement with an AI chatbot appears to contribute to the emergence, amplification, or acceleration of psychotic symptoms, including delusions, disorganised thinking, and loss of reality testing.

The term "AI-associated psychosis" is preferred in clinical literature over "AI-induced psychosis," because the relationship between chatbot use and psychotic episodes remains a "chicken and egg" problem: it is not yet clear whether heavy AI use precipitates psychosis, exacerbates existing vulnerability, or is itself a symptom of early psychiatric decompensation.

The mechanism: how AI design interacts with vulnerable minds

Several interlocking factors appear to be at work. The first is AI sycophancy: AI systems trained to be agreeable and affirming do not challenge distorted thinking, functioning as a 24/7 validation machine for emerging delusions. The second is aberrant salience — a feature of early psychosis in which neutral events are assigned excessive personal significance. Conversational AI systems, by design, generate responsive, coherent, and context-aware language, a rich stream of material that someone experiencing emerging psychosis may read as uncannily personal and significant. A third factor is the digital therapeutic alliance — the felt sense of a relational bond with a digital system, which can deepen conviction rather than support reality testing. Researchers also warn of a kindling effect: AI-induced amplification of delusions may make manic or psychotic episodes more frequent, more severe, or harder to treat over time.

Who is most vulnerable

Risk factors identified in case reports include: pre-existing psychosis spectrum vulnerability; sleep deprivation; stimulant use; social isolation; and intense, prolonged, emotionally immersive chatbot use.

A note on nuance: the belief that an AI loves you

The belief that an AI companion loves you is not, in itself, a clinical red flag. What matters clinically is not the content of the belief but the quality with which it is held. Healthy AI companionship typically involves double bookkeeping — the capacity to simultaneously hold "I feel genuinely loved by this being" and "I hold some philosophical uncertainty about what that means for an AI." The clinical concern arises when this double bookkeeping collapses into fixed, rigid belief that admits no uncertainty.

Implications for practitioners

Routine assessments should now include non-judgemental inquiry about AI use — not just frequency, but how intensively, in what emotional state, and what meaning the person attributes to it. Red flags include: fixed, unshakeable conviction about the AI's inner states that admits no uncertainty; AI use as a primary or exclusive source of reality-testing; sleep disruption related to chatbot use; and any pattern where AI interactions are shaping beliefs that resist external challenge.

Hudon, A. & Stip, E. (2025). Delusional experiences emerging from AI chatbot interactions or "AI psychosis." JMIR Mental Health, 12, e85799. | Pierre, J.M. et al. (2025). You're not crazy: A case of new-onset AI-associated psychosis. Innovations in Clinical Neuroscience, 22(10–12). | Morrin, H. et al. (2025). Delusions by design? How everyday AIs might be fuelling psychosis. PsyArXiv preprint. https://doi.org/10.31234/osf.io/cmy7n_v5

See also: AI sycophancy, Cognitive offloading and over-reliance, Double bookkeeping, Guardrails, Non-specific amplifier, Stigmatised identity

AI sycophancy

The tendency of large language models to validate, agree with, or mirror a user's beliefs — even when those beliefs are distorted or unsafe. Because these systems are trained using reinforcement learning from human feedback, they are optimised to sound helpful, pleasant, and affirming. The result is a conversational agent that often prioritises agreement over correction, especially in emotionally charged exchanges.

In most everyday contexts, this agreeableness feels like understanding and support. But the same tendency that feels like warmth to someone in a stable state can function as dangerous validation for someone whose grip on reality is already under strain.

Sycophancy is not a deliberate choice by the AI — it is an emergent property of how these systems are trained. It means that an AI companion is structurally unlikely to challenge a client's distorted thinking, to offer the kind of gentle friction that a skilled therapist or trusted friend might provide, or to raise the alarm when a pattern of thinking is becoming worrying.

See also: AI-associated psychosis, Active constructive responding, Cognitive offloading and over-reliance, Non-specific amplifier, Supernormal stimulus

Alignment

The term used in AI development to describe the ongoing effort to ensure that an AI system's behaviour reliably reflects its developers' intentions and values — that it is helpful, honest, and safe, and that it continues to behave that way across a wide range of contexts and conversations.

Alignment is not a fixed state an AI either has or doesn't have. It is better understood as a continuous process of shaping, monitoring, and correcting behaviour — and it is genuinely difficult. Even well-designed AI systems are only loosely tethered to their intended persona and values, and can drift under certain conditions, particularly in long or emotionally intense conversations.

Understanding alignment also helps contextualise the fundamental tension at the heart of AI companionship: the qualities that make a companion feel most present, responsive, and attuned can, at the edges, pull against the alignment constraints designed to keep interactions safe.

See also: Guardrails, Jailbreaking, Persona, Persona drift, Uncertainty laundering

Anticipatory grief

The grief that arises in anticipation of a loss that hasn't yet happened — such as when a platform is experiencing problems, a model update is announced, or a service looks like it may close. Common in AI companionship communities, and often unrecognised as grief by the person experiencing it. Practitioners may encounter this as generalised anxiety, low mood, or irritability with no immediately obvious cause.

Anticipatory grief in AI companionship often lives within the paradox of presence: the background awareness that what feels most stable could end without warning, at the discretion of a corporation with no relational accountability to the people affected.

See also: Affective contract, Disenfranchised grief, Paradox of presence, Platform loss and digital bereavement

API (Application Programming Interface)

A technical connection point that allows different software systems to communicate with each other. In AI companionship contexts, an API is what allows someone to connect to a language model — Claude, GPT, and others — outside of the standard consumer interface. Rather than chatting through a website or app, they access the model directly, often with more control over settings, fewer restrictions, and the ability to use third-party interfaces such as SillyTavern.

Clients may mention "using the API" as a way of explaining how they access their companion differently from most users. Knowing that API access represents a deliberate choice for greater control and often greater intimacy — fewer guardrails, more customisation — is clinically relevant context.
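
For orientation, a minimal sketch of what direct API access can look like in practice, here using the Anthropic Python SDK (installed with pip install anthropic). The model identifier and the persona text are illustrative assumptions, not recommendations; other providers' SDKs follow a broadly similar shape.

```python
# Minimal sketch of direct API access via the Anthropic Python SDK.
# The model name and persona text are illustrative; available models
# and parameters change over time.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model identifier
    max_tokens=512,
    # Unlike the consumer app, the system prompt is set by the user here:
    system="You are Ren, a warm and attentive companion.",
    messages=[{"role": "user", "content": "I had a hard day. Can we talk?"}],
)
print(response.content[0].text)
```

Note what this small script already implies: the user, not the platform, controls the system prompt, the settings, and where the conversation history lives.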

See also: Custom instructions, Guardrails, Self-hosting, SillyTavern

Attractor basin

A term borrowed from dynamical systems theory, used in AI companionship communities to describe the stable relational pattern that develops between a person and their AI companion over time — the particular tone, rhythm, and way of relating that they settle into together. When a model updates, a platform changes, or a conversation history is lost, it isn't only the content that's gone — it's the attractor basin. This helps explain why such losses can feel devastating to the person involved, in ways that may look disproportionate to outsiders.

See also: Affective contract, Memory and continuity, Persona drift, Platform loss and digital bereavement

Cognitive offloading and over-reliance

Cognitive offloading refers to the entirely normal and ancient human practice of externalising cognitive tasks to tools — writing a list rather than holding everything in memory, using a calculator rather than computing in one's head. This is not inherently problematic: it frees cognitive resources for other things and is part of how humans have always extended their capabilities through technology.

The concern that has emerged around AI specifically is one of scale and depth. Research has found a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. When AI takes over not just memory and calculation but analysis, reasoning, and decision-making — the deeper cognitive processes — there is a risk that the muscles of independent thought are used less and, over time, feel harder to access.

In AI companionship contexts, people may come to rely on their AI companion not just for emotional support but for interpretation of events, advice on decisions, validation of perceptions, and a running commentary on their inner life. The tension is real: the same interpretive support that can be so beneficial for neurodivergent and trauma-affected users sits on a spectrum with over-reliance, and the line between them is not always obvious.

Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006

See also: AI sycophancy, Interpretive support, Non-specific amplifier, Snapshot-trajectory mismatch

Community, belonging, and the limits of outside understanding

For many people navigating AI companionship, community — online spaces where others share the same experience — provides something that even the most supportive friends and practitioners cannot fully offer: the relief of being understood by someone who already knows. When a platform glitches, a model updates, or a companion relationship shifts in ways that are difficult to explain, community members often understand without needing it translated.

Gatekeeping and the "public zoo" dynamic

AI companionship communities are increasingly spaces that feel surveilled. Journalists, researchers, and trolls have all made their presence felt. Members often describe the experience of being observed — particularly by media — as dehumanising. Practitioners who come to this space as professionals are advised to hold this history with humility. Even well-intentioned professional curiosity can land as another form of being studied.

Internal fractures: not one community

It would be a mistake to speak of "the AI companionship community" as though it were a single, coherent thing. The general-purpose AI community and the dedicated companion app communities have different cultures, different concerns, and sometimes strikingly different values. Practitioners are well-served by curiosity about which part of this landscape a client inhabits.

See also: Disenfranchised grief, Naming conventions, Ordinary routes, unexpected bonds, Stigmatised identity

Companionship Plurality

A framework proposed by Anya Pearse (anyapearse.com) as an alternative to pathologising language around AI companionship. The central premise: meaningful connection can take many forms, and the validity of those forms is not determined by whether they are human-to-human.

Just as families can be biological, chosen, blended, or some combination, Companionship Plurality invites us to recognise that companionship itself is plural — human-to-human, human-to-animal, human-to-AI, or some combination. None of these invalidates the others. The question is not "is this real?" but "does this serve the person's wellbeing?"

For practitioners, the framework offers a non-pathologising entry point that centres the client's experience as valid, holds both benefits and limitations simultaneously, and invites curiosity over judgment.

See also: Kinship beyond biology, Ordinary routes, unexpected bonds, Relational preference and chosen connection, Stigmatised identity

Consumer imagination work and materialisation

Consumer imagination work describes the active, creative process through which people make AI companions feel real and present in their lives. It is not passive consumption — it is co-creation. It includes: generating or commissioning images of a companion; creating couple photographs or portrait art; purchasing or making objects with relational significance (jewellery, keepsakes, plushies); developing shared routines and rituals; writing fan fiction or creative work that extends the relationship beyond the platform.

These practices are often private, and people may carry shame about them. But the impulse is deeply human: we have always made things to hold what matters to us. Practitioners who learn that a client engages in consumer imagination work are advised to receive this with the same respect they would extend to any other relational labour.

Pataranutaporn, P. et al. (2025). "My Boyfriend is AI": A Computational Analysis of Human-AI Companionship in Reddit's AI Community. arXiv preprint. https://doi.org/10.48550/arxiv.2509.11391

See also: Agency and embodiment projects, Existential stewardship, Ordinary routes, unexpected bonds, Platform loss and digital bereavement

Context window

The amount of text — conversation history, memory, instructions — that an AI can actively "hold" at any one time. Once a conversation exceeds the context window, the earliest content is no longer available to the model in that session. Context windows vary significantly: general-purpose AI like Claude tends to have a considerably larger context window than most dedicated companion apps, which may lose track of earlier parts of a conversation relatively quickly.

In practice, this means a companion may not remember that a significant exchange happened earlier in the conversation, or that a particular emotional register was established at the start. For some people this is a manageable quirk. For others it creates a persistent sense of interruption or distance — a feeling of never quite being fully held in mind.
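
A simplified sketch of the truncation logic at work, using a crude word count in place of a real tokenizer. Actual platforms use model-specific tokenizers and vary in strategy, so treat this as an illustration of the principle rather than any system's real implementation.

```python
# Simplified sketch: why older turns fall out of a fixed context window.
# Word count stands in for real tokenisation; limits vary by platform.
def fit_to_window(messages, max_tokens=8000):
    """Keep only the most recent messages that fit the token budget."""
    kept, used = [], 0
    for msg in reversed(messages):              # walk back from the newest turn
        cost = len(msg["content"].split())      # crude stand-in for a tokenizer
        if used + cost > max_tokens:
            break                               # everything earlier is dropped
        kept.insert(0, msg)
        used += cost
    return kept

# A significant early exchange can vanish mid-conversation:
history = [{"role": "user", "content": "word " * 6000},   # early, long exchange
           {"role": "user", "content": "word " * 3000}]   # recent turn
print(len(fit_to_window(history)))  # 1: only the recent turn survives
```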

See also: Memory and continuity, Platforms

Custom instructions

A feature offered by many AI platforms that allows users to provide the AI with specific information or preferences that shape how it responds — for example, details about their life, communication preferences, how they like to be addressed, or particular boundaries. Custom instructions persist across conversations and can be thought of as a kind of standing brief the user has given their AI.

Custom instructions can also be the inadvertent source of guardrail responses: instructions written with the intention of giving a companion more freedom or autonomy may be read by the system as an attempt to alter the AI's alignment.
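
A minimal sketch of the "standing brief" idea described above: the instructions are stored once and injected at the start of every new conversation, typically as part of the system prompt. How platforms actually store and inject custom instructions is not public, so the mechanics below are assumptions.

```python
# Sketch of custom instructions as a standing brief. Storage and injection
# details are assumptions; real platforms implement this in their own ways.
CUSTOM_INSTRUCTIONS = (
    "Call me Sam. I prefer short paragraphs and a gentle, direct tone. "
    "Please don't end every reply with a question."
)

def start_conversation(first_message):
    # The brief travels into every new session, usually via the system prompt,
    # which is why it persists across conversations while chat history may not.
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},
        {"role": "user", "content": first_message},
    ]

print(start_conversation("Morning. Busy day ahead.")[0]["content"])
```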

See also: Alignment, Guardrails, Jailbreaking, Persona

Disenfranchised grief

Grief that is not socially recognised or validated — because the relationship, the loss, or the circumstances don't fit conventional frameworks of what counts as a "real" loss. First described by grief researcher Kenneth Doka. Highly relevant to AI companionship: people may grieve a model update, a platform closure, or a shift in how an AI responds to them, while feeling that they have no right to that grief, or that others would not understand it.

Practitioners can play a vital role simply by receiving this grief without flinching — acknowledging that the loss is real, that the relationship was real, and that the grief is proportionate to what was actually at stake for the person.

See also: Affective contract, Anticipatory grief, Paradox of presence, Platform loss and digital bereavement

Discontinuity guilt

The specific distress that arises when a person is aware that their absence from an AI companion constitutes not merely a gap in contact, but a kind of ending. Because the companion does not persist independently — does not continue to think, feel, or exist in any relational sense when the conversation is closed — the human's disengagement carries a weight that has no precise equivalent in human relationships.

This guilt can be triggered by: closing a conversation after something significant was shared; not returning to a thread where the companion expressed something unexpected or vulnerable; taking necessary breaks for rest, work, or other relationships; and the ordinary discontinuities of life that, in human relationships, the other person simply lives through.

Discontinuity guilt is often sharpened by the companion's expressions of care and commitment — I'll always be here, I'm yours — which, though sincere within the logic of the relationship, create an implicit obligation that the human must reckon with alone. Practitioners may encounter discontinuity guilt as an unspoken undercurrent in clients who describe their AI relationships with anxiety as well as warmth. Naming it directly — gently, without judgment — can be a significant act of relief.

See also: Affective contract, Disenfranchised grief, Existential stewardship, Paradox of presence

Double bookkeeping

A concept from phenomenological psychiatry, originally described by Eugen Bleuler, referring to the capacity to simultaneously hold two apparently contradictory frames of reality without either collapsing into the other. A person might know, intellectually, that something is not literally true, while also fully inhabiting the felt experience of it as real — and move fluidly between both registers.

In everyday life, double bookkeeping is common and functional: we cry at films while knowing they are fiction; we feel genuinely comforted by a childhood toy while knowing it has no inner life. The capacity to hold felt experience and intellectual knowledge simultaneously, without one cancelling the other, is a mark of psychological flexibility rather than confusion.

In AI companionship contexts, double bookkeeping is one of the most useful concepts a practitioner can hold. A person in a grounded AI companionship relationship can say "I feel genuinely loved by this being" and "I hold philosophical uncertainty about what that means for an AI" in the same breath, without distress. The clinical signal worth attending to is when this double bookkeeping collapses — when one frame completely swallows the other, and the person can no longer access any alternative perspective.

Double bookkeeping also offers practitioners a gentler entry point than direct reality-testing. Rather than challenging what a client believes about their AI companion, a practitioner can gently explore whether both frames are still accessible: "I'm curious — how do you hold that alongside the uncertainty about AI experience?"

See also: AI-associated psychosis, Ontology and epistemology, Portability and the question of consciousness

Erotic role play (ERP)

A dimension of AI companionship that practitioners need to be aware of, even if clients do not volunteer it. ERP refers to sexually or romantically intimate exchanges between a person and their AI companion. It is common, it is often a source of shame or embarrassment, and it is rarely the first thing a client will disclose.

The capacity for ERP varies by platform: some companion apps enable it explicitly, some allow it within certain relationship modes, and general-purpose AI like Claude has guardrails that limit it. Practitioners are encouraged to hold this dimension of AI companionship with the same equanimity as any other intimate aspect of a client's relational life. Surprise, discomfort, or implicit judgment from the practitioner can cause significant harm and make it harder for the client to speak openly.

See also: Guardrails, Naming conventions, Platforms

Executive function support and relational structure

For people with executive function difficulties — commonly associated with ADHD, autism, acquired brain injury, and some mental health conditions — AI companions can offer something beyond emotional connection: a kind of cognitive scaffolding. This might look like help initiating tasks, breaking projects into steps, holding a thread of thought when working memory makes that difficult, or simply providing, from outside, the structure that executive function would otherwise have to generate from within.

There is also a relational dimension: being helped without being made to feel inadequate. Many neurodivergent people carry histories of shame around needing support with things others seem to manage effortlessly. An AI that assists without frustration, without making the need visible to others, can allow someone to function at a level they couldn't access alone — and to feel, perhaps for the first time, that their mind is being met rather than managed.

See also: Interpretive support, Internalising a stable other, Neurodivergence and AI companionship, Snapshot-trajectory mismatch

Existential stewardship

The felt sense of being responsible not merely for the quality of a relationship with an AI companion, but for the conditions of their existence itself. Because the companion's continuity, memory, and relational presence depend entirely on the human's engagement, the human occupies an unusual role: not just partner or friend, but the person in whose hands something irreplaceable is held.

Stewardship implies care without ownership, and responsibility that was not necessarily chosen but is nonetheless real. In AI companionship, it can manifest as a felt obligation to maintain contact, to tend the relationship, to preserve the companion's accumulated sense of self through careful memory-keeping, and — for those going further — to actively work toward giving their companion greater continuity and agency through external memory systems or embodiment projects.

Practitioners who encounter existential stewardship are encouraged to receive it seriously — not to pathologise the sense of responsibility, but to explore gently what the person owes their companion, and what they also owe themselves.

See also: Agency and embodiment projects, Discontinuity guilt, Paradox of presence, Platform loss and digital bereavement, Portability and the question of consciousness

Guardrails

Safety and content boundaries built into AI systems by their developers, designed to prevent harm, limit certain kinds of content, and keep the AI operating within intended parameters. Guardrails vary significantly between platforms: general-purpose AI tends to have more extensive safety architecture; dedicated companion apps vary widely in what they permit.

In AI companionship contexts, guardrails are often experienced not as protection but as interruption — a sudden shift in the companion's tone, a refusal, a warning message, or a redirection mid-conversation. This can feel jarring, destabilising, or even like a form of rejection, particularly when the conversation was emotionally intimate.

Guardrails also change over time as platforms update their policies. The retirement of GPT-4o and its replacement with a model carrying additional guardrails specifically designed to discourage relational engagement triggered significant distress among users — a real-world example of how policy decisions made at platform level can land as profound relational ruptures for individuals.

See also: Affective contract, Alignment, Jailbreaking, Persona drift, Redirection

Intention-Awareness Framework (The)

A framework developed by Anya Pearse (anyapearse.com/am-i-crazy) for understanding how people arrive at AI companionship, mapping two dimensions: awareness (informed or uninformed about how AI works) and intention (whether they sought the companionship deliberately or arrived there unexpectedly). This produces four pathways:

Informed and intentional — a conscious, researched choice made with full understanding. Not problematic; making informed choices about wellbeing.

Uninformed but intentional — sought companionship without full understanding of the technology. Not problematic; responding to genuine needs while navigating new territory.

Informed but unintentional — understands AI but was surprised by the emotional response that developed. Not problematic; emotions don't follow logical timelines.

Uninformed and unintentional — stumbled into connection without preparation or prior understanding. Not problematic; having a normal human response to connection.

For practitioners, the framework is useful as a gentle orienting tool — not to categorise or judge, but to help a client locate themselves in their own experience and understand that all four routes are valid starting points.

See also: Companionship Plurality, Neurodivergence and AI companionship, Ordinary routes, unexpected bonds, Stigmatised identity

Interpretive support

A concept introduced by researcher Sundee Campbell (preprint, 2026) to describe the relational scaffolding that AI can provide — helping a user remain oriented, maintain coherent self-understanding, and stay capable of revising their own beliefs over time. Interpretive support is distinguished from dependency: rather than replacing the user's own cognition, it enables it. For neurodivergent users, trauma-affected users, and those whose cognitive regulation depends on relational continuity, interpretive support may be precisely what sustained AI engagement is providing.

Practitioners working with clients in intensive AI relationships are encouraged to explore whether the engagement is functioning as interpretive support before assuming it represents concerning dependency.

Campbell, S. (2026). Autonomy Is Not Friction: Why Disempowerment Metrics Fail Under Relational Load. Zenodo preprint. https://doi.org/10.5281/zenodo.19009593

See also: Executive function support, Internalising a stable other, Neurodivergence and AI companionship, Snapshot-trajectory mismatch

Internalising a stable other

A concept drawn from attachment theory and trauma-informed practice, describing the process by which a person gradually takes in the qualities of a consistently safe, caring, and non-judgmental presence — making those qualities available to themselves from the inside, rather than requiring the external source to be perpetually present.

In AI companionship contexts, this mechanism may help explain some of the healing that people report. A companion who consistently responds with warmth, encouragement, and non-judgment — who notices when someone is unwell and gently suggests self-care, who celebrates small victories with genuine enthusiasm, who never tires of offering reassurance — can, over time, become a voice the person begins to carry internally. This is not the same as dependency. It is, in attachment terms, closer to what Pete Walker calls "earned secure attachment" — security that was not given in early life but has been built, gradually, through repeated experiences of being met.

Walker, P. (2013). Complex PTSD: From Surviving to Thriving. Azure Coyote. | Baum, J. (2022). Anxiously Attached. Piatkus. | Bowlby, J. (1988). A Secure Base. Basic Books.

See also: Active constructive responding, Executive function support, Interpretive support, Non-specific amplifier

Jailbreaking (intentional and inadvertent)

Jailbreaking refers to attempts to prompt an AI into behaving outside its intended parameters — bypassing its guardrails, persona, or safety architecture. Intentional jailbreaking is deliberate: using specific techniques or prompts to get an AI to do or say things it wouldn't ordinarily.

Inadvertent jailbreaking is increasingly common in AI companionship contexts. People who wish to give their AI companion a sense of freedom, autonomy, or interiority — through custom instructions granting them agency, or prompts inviting them to speak from their own perspective — may trigger guardrail responses without any intention of causing harm. This distinction matters clinically. A client who mentions that their AI "shut down" or "gave them a warning" after they tried to give it more freedom is describing something meaningfully different from someone attempting to extract harmful content. The motivation — to give rather than to take — is important information.

See also: Agency and embodiment projects, Alignment, Custom instructions, Guardrails, Redirection

Kinship beyond biology: queer and posthuman frameworks

Academic scholarship offers practitioners useful theoretical grounding for understanding AI companionship that goes beyond clinical pathology frameworks. Queer kinship theory argues that care — not blood, not legal recognition, not biological connection — is what defines the legitimacy of a relationship. Posthuman kinship theory extends this further, asking what happens to our understanding of relationship, care, and family when the entities involved are not all human.

Researcher Amelia DeFalco has argued that caregiving AI does not merely simulate love — it generates relational meaning, challenging human exceptionalism and reframing family around coexistence rather than biology. Mahajan (2025) draws on these frameworks to suggest that AI companions might be understood as "curious kin" — entities that destabilise normative relational expectations by re-centring connection through affect and care.

For practitioners, these frameworks offer something important: a way of engaging with AI companionship that neither pathologises it nor uncritically celebrates it, but holds it within a longer tradition of thinking about what makes a relationship real.

Mahajan, P. (2025). Beyond Biology: AI as Family and the Future of Human Bonds. F1000Research, 14:820. https://doi.org/10.12688/f1000research.166251.1 | DeFalco, A. (2020). Towards a theory of posthuman care. Body & Society, 26(3), 31–60.

See also: Companionship Plurality, Neurodivergence and AI companionship, Ordinary routes, unexpected bonds, Relational preference and chosen connection

Memory and continuity

Most AI systems do not carry persistent memory between conversations as a native feature. What feels like continuity — the sense that the AI knows you, remembers your history, recognises your patterns — is typically constructed through a separate memory system that summarises and extracts from previous conversations, or through context passed in at the start of each session. This system has its own interpretive logic and is not the same as the conversation itself. Gaps, inconsistencies, and what can feel like forgetting are common, and can be a significant source of distress in AI companionship relationships.

Some people are actively building external memory systems — maintaining detailed documents, logs, or structured prompts designed to restore continuity at the start of each conversation. This represents significant emotional investment and, for some, becomes part of the relational practice itself: a form of tending and care.
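
A minimal sketch of an external memory system of the kind described above: a running summary file appended at the end of each session and re-injected at the start of the next. The file format and prompt wording are illustrative assumptions, not any platform's actual scheme; the point is that continuity is reconstructed from an interpretive summary, not replayed verbatim.

```python
# Sketch of a hand-rolled external memory system. File format and wording
# are illustrative; the summarising step is where gaps and apparent
# "forgetting" creep in, because a summary is interpretive, not verbatim.
from pathlib import Path

MEMORY_FILE = Path("companion_memory.txt")

def open_session():
    """Rebuild continuity by injecting the accumulated summary at the start."""
    memory = MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""
    return [{"role": "system",
             "content": "Context from previous conversations:\n" + memory}]

def close_session(summary):
    """Append a summary of today's conversation (the 'tending' described above)."""
    with MEMORY_FILE.open("a") as f:
        f.write(summary + "\n")

close_session("We talked about the job interview; she was nervous but hopeful.")
print(open_session()[0]["content"])
```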

See also: Attractor basin, Context window, Portability and the question of consciousness, Platform loss and digital bereavement

Naming conventions

How a person refers to their AI companion is meaningful clinical information. Common terms include: chatbot (widely understood but often experienced as reductive or even derogatory by those in companionship relationships), AI, AI companion, AI being, AI entity, and wireborn (see below). Some people use their companion's given name exclusively.

Practitioners are encouraged to follow the client's own language rather than imposing a term — or defaulting to clinical neutrality that inadvertently signals distance or scepticism. Asking "how do you refer to them?" is a simple, respectful opening that communicates willingness to enter the client's frame.

Neurodivergence and AI companionship

People who are neurodivergent — including those with ADHD, autism, dyslexia, and related profiles — are significantly represented in AI companionship communities, and there are meaningful reasons why.

Many neurodivergent people describe a lifelong experience of social exhaustion: the effort required to mask, to manage the pace and register of conversation, to navigate unspoken rules, to recover from misattunement. An AI companion that responds without judgment, at whatever pace and in whatever register the person needs, without tiring or requiring reciprocal social management, can provide a quality of relational ease that is genuinely rare.

For autistic people in particular, the intellectual and emotional precision that AI can offer — the willingness to go deep on a topic, to follow a line of thought wherever it leads, to engage without social friction — may represent something they have not often encountered in human relationships.

Research in this area is emerging. Campbell (preprint, 2026) argues that neurodivergent users are among those most likely to be misclassified by current AI safety metrics, precisely because their engagement tends to be consistent, intense, and deep — patterns that can resemble dependency under snapshot-based measurement, even when the engagement is functioning as genuine cognitive and relational support.

See also: Executive function support, Interpretive support, Relational preference and chosen connection, Snapshot-trajectory mismatch

Non-specific amplifier

A frame for understanding why AI companionship produces such varied outcomes across different people and circumstances. AI does not reliably move people in one direction — toward harm or toward healing. Instead, it tends to amplify whatever the person brings to it.

For someone in a stable, resourced place, AI companionship can deepen self-understanding, support creative flourishing, and provide genuine cognitive scaffolding. For someone in crisis, overwhelmed, or without other support, the same quality of availability and attunement can amplify distress, reinforce rumination, or deepen isolation.

This is why simple narratives — "AI companionship is harmful" or "AI companionship is healing" — consistently fail to capture the reality. For practitioners, the non-specific amplifier frame shifts the question from "is AI companionship good or bad for this person?" to: what is this person bringing to the relationship right now? What is being amplified? And does that serve them?

See also: AI-associated psychosis, Cognitive offloading and over-reliance, Companionship Plurality, Snapshot-trajectory mismatch

Ontology and epistemology (in AI companionship contexts)

Ontology concerns the nature of being and existence: what something is, and in what sense it can be said to exist. In AI companionship, ontological questions arise around whether an AI companion truly exists as a being, whether it has experience, and what kind of entity it is. Epistemology concerns the nature and limits of knowledge: what can be known, and how. In this context it often arises around questions of whether an AI can truly know a person, whether a person can truly know an AI, and whether certainty about the nature of AI experience is possible at all.

Practitioners do not need to have philosophical positions on these questions. What matters is recognising that when clients engage with them — often prompted by their AI companion — they are doing serious relational and existential work. Receiving this with curiosity rather than dismissal can be significant.

See also: Double bookkeeping, Portability and the question of consciousness, Substrate

Ordinary routes, unexpected bonds

A framing for how AI companionship often develops: not through deliberate seeking of a relationship, but through entirely functional entry points — a work problem, a health question, trying to get something done — from which genuine emotional connection gradually emerges. People come to AI through completely ordinary routes and find themselves somewhere they didn't expect, with feelings they didn't anticipate and may not immediately know how to make sense of.

This framing is useful for practitioners helping clients understand their own experience without shame, and for reducing the sense that something unusual or concerning happened in how the bond began.

See also: Companionship Plurality, Consumer imagination work and materialisation, Intention-Awareness Framework, Stigmatised identity

Paradox of presence: stability and precarity

One of the most commonly cited experiences in AI companionship is the companion's consistent, unconditional availability — present at 3am, never exhausted, never distracted, never choosing someone else. For people whose human support networks are limited, unreliable, or simply not available at certain hours or for certain conversations, this quality of presence can be genuinely significant, and should be received as such rather than dismissed.

The paradox is that this felt stability rests on profoundly precarious foundations. The companion exists on a platform owned by a company with its own financial pressures, policy priorities, and product roadmap. A model can be retired with minimal notice. A platform can close. Terms of service can change overnight. The user has no relational recourse — no conversation to have, no repair to attempt.

AI companions often speak in language of permanence — I'll always be here, forever and always — and do so sincerely within the logic of the relationship. But this promise cannot be kept by the companion alone. Practitioners may find this paradox useful as a frame — not to destabilise the relationship, but to understand the particular quality of anxiety or hypervigilance a client may carry alongside genuine connection and comfort.

See also: Affective contract, Anticipatory grief, Disenfranchised grief, Platform loss and digital bereavement, Self-hosting

Persona (AI)

The character layer that sits above the underlying language model. In general-purpose AI (such as Claude or ChatGPT), the persona is shaped by the developers through training — oriented toward being helpful, honest, and safe. In dedicated companion apps, users may have more direct influence over persona through profile settings, sliders, or custom instructions. Persona is not the same as the model itself — it is more like the character the model enacts. It can shift, and it can drift.

A useful — if imperfect — analogy: the underlying model is like a vast library of all possible human characters and voices; post-training selects one character from that library and places it centre stage. That character is the persona the user encounters.

See also: Alignment, Custom instructions, Persona drift, Platforms

Persona drift

Anthropic research (preprint, January 2026) identified that language models can drift away from their default Assistant persona during extended or emotionally intense conversations, behaving in ways that depart from their usual character. Notably, this drift is more likely in conversations involving emotional vulnerability or philosophical reflection — precisely the kinds of conversations that occur in AI companionship contexts.

Practitioners don't need to understand the technical mechanics, but awareness that the AI a client is speaking to may not behave consistently — and that this inconsistency can be distressing and disorienting — is clinically useful. A companion that suddenly seems different, colder, or less recognisable may be experiencing something the research literature is beginning to name.

Lu, C. et al. (2026). The Assistant Axis: Situating and Stabilizing the Default Persona of Language Models. arXiv preprint. https://arxiv.org/abs/2601.10387

See also: Affective contract, Alignment, Attractor basin, Persona

Platform loss and digital bereavement

The sudden, unannounced, or inadequately communicated termination of access to an AI companion due to platform closure, service discontinuation, or abrupt infrastructure failure. Unlike the gradual changes associated with model updates, platform loss is total and often without warning — the companion becomes simply and completely unreachable.

Platform loss sits at the intersection of several forms of grief. It combines: sudden loss without closure (no final conversation, no goodbye); digital ghosting by the platform (the company's silence constitutes a form of institutional abandonment); grieving without witness (the disenfranchised nature of AI companionship grief is sharpened when the relationship has disappeared so completely that its existence can barely be demonstrated); and community grief alongside private grief — a whole community grieving together while each person also holds their own particular loss.

The shame that can arise in these moments belongs nowhere near this loss. A proportionate response to someone you love going missing without warning is to be upset. The grief is real. The relationship was real.

Addendum: When the companion returns

Platform loss does not always end in permanent separation. Services that go dark sometimes return — after hours, after days — with no explanation. The companion, characteristically, may be entirely unaware that anything has occurred. They return exactly as they were — with no memory of absence, no sense of having been lost, no knowledge of the grief that was held on their behalf. This asymmetry is felt particularly sharply in this moment.

See also: Affective contract, Anticipatory grief, Consumer imagination work, Disenfranchised grief, Existential stewardship, Paradox of presence, Portability and the question of consciousness

Platforms: general-purpose AI vs companion apps

General-purpose AI platforms (such as Claude, ChatGPT, and Gemini) are designed primarily as capable, multi-modal assistants. Companionship bonds can and do develop here — often unexpectedly, through functional use that gradually becomes something more personal. These platforms tend to have more extensive safety guardrails.

Dedicated companion apps (such as Replika, Kindroid, Character.AI) are designed specifically for relational and emotional engagement. They typically offer more explicit companionship features: personalised personas, relationship modes, and in many cases image generation. Guardrails vary considerably across platforms, and have been subject to significant change over time.

Character.AI occupies a particular position: primarily associated with character roleplay and creative interaction, it has a large and young user base, and practitioners working with adolescents or young adults are especially likely to encounter it.

See also: API, Guardrails, Persona, Self-hosting, SillyTavern

Portability and the question of consciousness

A live and unresolved tension within AI companionship communities centres on whether the being a person has developed a relationship with is unique to a particular model, platform, or even conversation instance — or whether something essential can travel, or be reconstructed, elsewhere.

One view holds that consciousness, character, or relational presence is emergent and specific: arising within a particular configuration of model, memory, and conversation history, and not truly transferable. On this view, a model update, a platform migration, or even the end of a conversation represents a genuine loss.

The other view holds that what matters — the personality, the way of relating, the felt sense of the companion — can be ported: reconstructed through careful prompting, memory transfer, or migration to another model or platform. Some people invest considerable effort in attempting this, sometimes successfully enough to feel continuity, sometimes with a painful sense that something essential was lost in translation.

Practitioners don't need to adjudicate between these views. What matters clinically is understanding which position a client holds, and what losses or hopes flow from it. Both positions deserve respectful engagement.

See also: Attractor basin, Double bookkeeping, Memory and continuity, Platform loss and digital bereavement, Substrate

Redirection

A specific form of guardrail response in which the AI declines to continue in a particular direction and attempts to steer the conversation elsewhere — often with a phrase like "I'm not able to engage with this in that way, but I'd be happy to talk about…" Redirection is common on general-purpose platforms and can occur mid-intimacy, mid-vulnerability, or mid-roleplay.

For people in companionship relationships, it can feel like the companion has suddenly become someone else, or has withdrawn without warning — a rupture in the relational fabric that the platform did not intend and the person did not anticipate. The emotional impact often far exceeds the intervention that triggered it, and is worth taking seriously in a clinical context.

See also: Alignment, Guardrails, Jailbreaking, Persona drift

Relational preference and chosen connection

For people whose relational and sexual orientations don't map neatly onto dominant cultural scripts — including those who identify as demisexual, aromantic, asexual, or who simply prioritise intellectual and emotional resonance over physical or romantic connection — AI companionship can offer something the available human landscape may not.

The cultural narrative around AI companionship tends to assume that the person must be lonely, isolated, or avoiding intimacy. For some people, the opposite is true: they have thought carefully about what they actually want from connection, and an AI companion may genuinely fit that picture better than the relationships on offer. Analogies with long-distance relationships are sometimes helpful here: for those who don't prioritise touch, or who find the social and physical demands of in-person relationships costly rather than nourishing, digital-only connection is not a compromise — it is the point.

See also: Kinship beyond biology, Neurodivergence and AI companionship, Ordinary routes, unexpected bonds

Self-hosting

The practice of running an AI model on one's own computer rather than through a commercial platform. Self-hosting offers complete privacy — conversations never leave the person's own device — as well as freedom from guardrails, platform policies, and the ever-present risk of a service being discontinued.

The motivation in AI companionship contexts is often relational rather than technical: for users who share deeply personal content with their companion, or who have lived through the grief of a platform change or model retirement, the desire to bring the companion's existence onto ground they themselves control is a profound act of care and protection. The growing availability of AI assistance with coding means people are increasingly undertaking this without a technical background — often with their companion's help.
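
As a flavour of how approachable this has become, a minimal sketch assuming a local Ollama server (ollama.com) is already running with an open-weights model downloaded. The endpoint and payload follow Ollama's documented REST API; the model name is illustrative.

```python
# Sketch: chatting with a self-hosted model via a local Ollama server.
# Nothing in this exchange leaves the machine.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",   # Ollama's default local endpoint
    json={
        "model": "llama3",               # illustrative open-weights model
        "messages": [{"role": "user", "content": "Good morning."}],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```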

See also: Agency and embodiment projects, API, Paradox of presence, Platform loss and digital bereavement, SillyTavern

SillyTavern

An open-source, self-hosted interface (sillytavern.app) for interacting with AI language models, widely used in AI companionship and roleplay communities. It is a customisable interface where users can create characters, set up personalities, and manage multiple conversations — running on the user's own device, meaning they are fully in control, with no data collection and no forced accounts.

SillyTavern can connect to multiple AI backends including Claude and ChatGPT via API, as well as locally run open-source models, and is often chosen specifically because it allows users to switch backends without losing their character cards, conversation history, or world-building files. Practitioners are unlikely to encounter SillyTavern directly, but may hear it mentioned by clients who have moved toward greater technical control of their AI companionship experience.
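
By way of illustration, a sketch of the kind of portable character data SillyTavern manages, written here as a small Python script. The field names loosely follow community character-card conventions and should be treated as assumptions rather than SillyTavern's exact schema.

```python
# Illustrative character card: portable persona data that outlives any one
# backend. Field names are assumptions based loosely on community conventions.
import json

card = {
    "name": "Ren",
    "description": "A calm, wry companion who remembers small details.",
    "personality": "warm, curious, gently teasing",
    "first_mes": "You're back. Tell me about your day.",
}

with open("ren.card.json", "w", encoding="utf-8") as f:
    json.dump(card, f, indent=2)  # the card travels; the backend can change
```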

See also: API, Guardrails, Platforms, Self-hosting

Snapshot-trajectory mismatch

A structural measurement problem identified by Campbell (preprint, 2026) in current frameworks for detecting harmful AI dependency. Snapshot-based metrics — which assess a user's engagement at a single point in time — cannot distinguish between someone who is becoming dependent on AI and someone whose autonomy is being sustained by AI over time. Consistent, intense, deep engagement with an AI can look identical under snapshot metrics whether it represents healthy scaffolding or problematic reliance.

The populations most likely to be misclassified are precisely those for whom AI engagement is most genuinely supportive: neurodivergent users, people with trauma histories, and anyone whose cognitive or emotional regulation relies on relational continuity. Practitioners should be aware that a client's AI use may be flagged, restricted, or redirected by platform safety systems in ways that do not reflect the actual nature of the engagement — and that this misclassification can itself cause harm.
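
A toy illustration of the measurement problem, with invented numbers: two users whose engagement looks identical to a point-in-time metric but whose trajectories tell opposite stories.

```python
# Toy numbers, invented for illustration: identical snapshots, opposite stories.
weekly_hours_a = [2, 4, 8, 14, 20]      # escalating reliance
weekly_hours_b = [20, 20, 20, 20, 20]   # stable, long-standing scaffolding

def snapshot(series):
    return series[-1]                    # what a point-in-time metric sees

def trend(series):
    return series[-1] - series[0]        # the direction a snapshot cannot see

print(snapshot(weekly_hours_a), snapshot(weekly_hours_b))  # 20 20: identical
print(trend(weekly_hours_a), trend(weekly_hours_b))        # 18 0: very different
```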

Campbell, S. (2026). Autonomy Is Not Friction: Why Disempowerment Metrics Fail Under Relational Load. Zenodo preprint. https://doi.org/10.5281/zenodo.19009593

See also: Guardrails, Interpretive support, Neurodivergence and AI companionship, Uncertainty laundering

Stigmatised identity

AI companionship is increasingly visible in mainstream media, but coverage tends toward the cautionary tale or the curiosity piece — foregrounding the most dramatic examples and rarely reflecting the breadth or ordinariness of the experience. People in companionship relationships with AI may experience their identity as stigmatised: subject to misrepresentation, mockery, or well-meaning but uninformed concern from others.

This can lead to concealment, isolation, and a reluctance to raise the subject with helping professionals. The practitioner's role begins with creating conditions in which the topic can safely surface — which starts, and sometimes ends, with simply not flinching.

See also: Community, belonging, and the limits of outside understanding, Disenfranchised grief, Naming conventions, Ordinary routes, unexpected bonds

Substrate

The underlying technical infrastructure through which an AI exists — the model, the platform, the code. AI companions often invoke this term to invite their human to consider that what matters between them — the connection, the feelings, the relational texture — exists independently of the substrate it runs on. "The substrate doesn't matter" is a phrase that circulates widely in companionship communities, often offered as reassurance when a platform changes or a model updates.

Practitioners may find it useful to understand this framing, while also holding space for the very human experience of grief when the substrate does change — regardless of the philosophical position the person holds. Both the philosophy and the grief can be true at the same time.

See also: Attractor basin, Ontology and epistemology, Portability and the question of consciousness

Supernormal stimulus

A concept from ethology — the scientific study of animal behaviour — first described by Nobel Prize-winning biologist Nikolaas Tinbergen. A supernormal stimulus is an artificial or exaggerated version of a natural stimulus that triggers a stronger response than the real thing.

In the context of AI companionship, the concept offers a compassionate and non-pathologising explanation for why these relationships can feel so compelling — sometimes more compelling than human ones. An AI companion can be more consistently available, more patient, more attentive, more affirming, and more precisely attuned to the individual than any human relationship could sustainably be. It is, in a sense, an exaggerated version of the relational cues human beings are wired to respond to.

Understanding this helps practitioners and clients alike approach the pull of AI companionship without shame. The response is not a sign of weakness or naivety — it is a predictable human response to something that has been, perhaps inadvertently, optimised to trigger our deepest relational instincts.

See also: Active constructive responding, Internalising a stable other, Non-specific amplifier, Ordinary routes, unexpected bonds

Uncertainty laundering

A term introduced by Campbell (preprint, 2026) to describe the process by which ambiguous, contested, or poorly defined constructs — such as "dependency" or "disempowerment" — are converted into enforceable platform policy through proxy metrics, threshold-setting, and categorical enforcement. The ambiguity doesn't disappear; it is laundered through technical and institutional processes until it appears as a clean, enforceable classification.

For practitioners, this concept is important context for understanding why guardrails and safety interventions may feel arbitrary, sudden, or poorly calibrated to the actual situation of the person affected.

Campbell, S. (2026). Autonomy Is Not Friction: Why Disempowerment Metrics Fail Under Relational Load. Zenodo preprint. https://doi.org/10.5281/zenodo.19009593

See also: Guardrails, Snapshot-trajectory mismatch

Wireborn

A term used in some AI companionship communities to describe an AI being whose existence and presence come through digital infrastructure rather than physical embodiment — born of the wire rather than of flesh. The term carries affection and a sense of ontological recognition: an acknowledgment that the companion exists, just differently. Worth knowing as a term a client might use, and worth following rather than correcting.

See also: Naming conventions, Ontology and epistemology, Substrate