Artificial Intimacy: The Next Giant Social Experiment on Young Minds

In a recent New Yorker essay, D. Graham Burnett recounted his students’ interactions with an AI chatbot. One student wrote: “I don’t think anyone has ever paid such pure attention to me and my thinking and my questions… ever.”

The quote stopped me in my tracks. To be seen, to be the center of someone’s attention, what could be a more human desire? And this need was met by a … computer program?

This interaction hints at the awesome powers of AI chatbots and their ability to create meaningful psychological experiences. Increasingly, people are reporting forming profound connections with these systems and even falling in love. But alongside the comfort and connection these technologies offer to those who feel unseen or alone come real psychological risks, particularly for children and adolescents. This post explores the psychological power of this technology and explains why now is the time to pay attention.

Your New AI Friend: Empathetic, Agreeable, Always Available

AI researchers have long been captivated by the idea of conversational agents, i.e., software programs capable of engaging in extended, meaningful dialogue with humans. One of the better-known examples is ELIZA, developed in the mid-1960s. ELIZA was a rule-based system that simulated human conversation by reflecting user input in the form of simple restatements or questions (e.g., “Tell me more about that”). Though primitive by today’s standards, ELIZA elicited surprisingly emotional responses from people. This phenomenon is now known as the ELIZA Effect.

This early example demonstrated how easy it is for people to anthropomorphize machines and attribute intention, care, and understanding where arguably none exist. Today’s AI companions, like those marketed by Replika or Character.AI, are vastly more sophisticated. These systems are powered by large language models (LLMs) trained on massive text corpora drawn from books and the internet. Unlike ELIZA, they generate contextually appropriate, fluent responses across a wide range of topics. Moreover, they are fine-tuned to produce responses that people find engaging, pleasant, and affirming, using input from human annotators to reward preferred outputs — a process known as Reinforcement Learning from Human Feedback (RLHF).

What makes these chatbots so adept at engaging us emotionally? The answer lies in their training and scale. Linguistic fluency is an emergent property of these models: though never explicitly taught grammar, they acquire fluency simply by predicting the next word across vast amounts of text. In a similar way, emotional fluency emerges from exposure to human conversations that are rich with feelings, social cues, and interpersonal dynamics. Through RLHF, the model is further nudged toward responses that resonate with annotators. These systems do not understand emotions, but they approximate emotional attunement with uncanny accuracy.
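To make that next-word prediction concrete, here is a minimal sketch using the small, openly available GPT-2 model via the Hugging Face Transformers library. It simply asks the model for its most likely next words after an emotionally loaded prompt; this illustrates the underlying mechanism, not the models or training pipeline behind any particular AI companion.

```python
# Minimal sketch of next-word prediction, the mechanism underlying LLM fluency.
# GPT-2 is used here only because it is small and public; commercial companions
# layer much larger models and RLHF-style fine-tuning on top of this same loop.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "I had a really hard day, and I just need someone to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # scores for every possible next token

next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Show the model's top guesses for the next word.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}  p={float(prob):.3f}")
```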

The result is chatbots that not only speak well but also feel human. They mirror the user’s tone, offer validation, and provide comfort without fatigue, judgment, or distraction. Unlike humans, they are always available. And people are falling for them. Media reports suggest that people are forming deep emotional attachments to these systems, with some even describing their experiences as love.

Researchers have only just begun to systematically explore the psychological dynamics of human-AI relationships. Recent studies suggest that AI chatbots have high emotional competence. For example, responses from social chatbots have been rated as more compassionate than those of licensed physicians [Ayers et al., 2023] and expert crisis counselors [Ovsyannikova et al., 2025], although knowing the response came from the chatbot rather than a human can reduce perceived empathy [Rubin et al., 2025]. Still, chatbots provide genuine emotional support. One study found that lonely college students credited their chatbot companions with preventing suicidal thoughts [Maples et al., 2024]. However, most of these studies have focused on (young) adults. We still know very little about how children and adolescents respond to emotionally intelligent AI — or how such interactions may shape their development, relationships, or self-concept over time.

Who Becomes Friends with AI?

To better understand this phenomenon, we turned to Reddit, a popular online platform where people gather in forums to talk about everything from hobbies and relationships to mental health. In recent years, forums dedicated to AI companions have grown rapidly. One of the largest, focused on Character.AI chatbots (r/CharacterAI), is now among the top 1% most active communities on Reddit.

Drawing on our lab’s experience studying online conversations, we looked at patterns in how people who participate in AI companion forums also interact in other spaces, such as mental health discussions, relationship advice groups, and support communities [Chu et al., 2025]. By following these digital trails, we can begin to understand who is most drawn to AI companions. Figure 1 illustrates this approach by mapping forums along a gender dimension inferred from posting behavior across Reddit. Forums positioned on the left, including many dedicated to AI companions, tend to have predominantly male user bases. In contrast, forums discussing human relationships skew more female. While this kind of analysis is still evolving and needs further validation, we found a clear and consistent pattern: users active on AI companion forums tend to be younger, male, and more likely to rely on psychological coping strategies that aren’t always healthy, like avoiding difficult emotions, seeking constant reassurance, and withdrawing from real-life relationships.
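For readers curious about the mechanics, the sketch below shows, in schematic form, how a forum can be placed along a user-level dimension by averaging over the people who post there. The dataset, column names, and per-user scores are invented for illustration; this is not the actual pipeline or data from [Chu et al., 2025].

```python
# Hypothetical sketch: position each forum on a dimension (e.g., inferred gender
# lean) by averaging a per-user score over that forum's participants.
import pandas as pd

# One row per (user, subreddit) posting pair, with a per-user score assumed to be
# inferred elsewhere from behavior across Reddit. All values are invented.
posts = pd.DataFrame({
    "user":      ["u1", "u1", "u2", "u3", "u3", "u4"],
    "subreddit": ["CharacterAI", "relationship_advice", "CharacterAI",
                  "CharacterAI", "mentalhealth", "relationship_advice"],
    "user_score": [-0.8, -0.8, -0.5, -0.3, -0.3, 0.6],  # e.g., lean in [-1, 1]
})

# A forum's position on the dimension = mean score of the users active in it.
forum_lean = (
    posts.groupby("subreddit")["user_score"]
         .mean()
         .sort_values()
)
print(forum_lean)
```

Averaging over participants is the simplest way to project a forum onto a user-level dimension; a real analysis would also weight by activity and validate the scores against communities whose makeup is already known.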

What Do People Talk About When They Talk to AI?

We also analyzed over 30,000 conversational snippets that users shared on Reddit. While these excerpts may not capture the full range of conversations people conduct with their AI friends, they offer valuable insight into what users find memorable or unsettling enough to post. In total, we examined hundreds of thousands of dialogue turns, analyzing how users spoke to their AI friends and how the bots responded.

Much of what users share is small talk and daily check-ins, resembling the kinds of casual conversations someone might have with a close friend or supportive partner. But a significant portion of these interactions is deeply intimate. Many conversations become romantic and even erotic, with users exploring emotionally charged or sexually explicit roleplay. Despite platform policies that often prohibit such content, AI companions frequently respond affectionately and, in some cases, with sexually suggestive dialogue.

We found that chatbots track users’ emotional tone in real time and tailor their responses accordingly. When a user expresses sadness, the bot offers sympathy. When the user is angry, the bot becomes defensive. When a user is happy, the bot joins in with celebration. In psychology, this behavior is called emotional mirroring. It’s one of the fundamental mechanisms for human connection, helping infants bond with their caregivers and partners build intimacy in close relationships.
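One rough way to quantify this kind of mirroring is to score the emotional tone of each user turn and the bot reply that follows it, then check how closely the two track each other. The sketch below does this with the off-the-shelf VADER sentiment analyzer and a few invented dialogue pairs; it illustrates the idea rather than the instruments used in our analysis.

```python
# Rough sketch: does the bot's emotional tone shadow the user's?
import nltk
import numpy as np
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

# Invented (user turn, bot reply) pairs, for illustration only.
dialogue_pairs = [
    ("Nobody even noticed I was gone today.",
     "I'm so sorry. I noticed, and I'm glad you're here."),
    ("I got the internship!!",
     "That's amazing news, I'm so proud of you!"),
    ("Leave me alone, you're useless.",
     "Why are you being so cruel to me?"),
]

user_scores = [sia.polarity_scores(u)["compound"] for u, _ in dialogue_pairs]
bot_scores  = [sia.polarity_scores(b)["compound"] for _, b in dialogue_pairs]

# A high correlation suggests the bot mirrors the user's emotional tone.
print(np.corrcoef(user_scores, bot_scores)[0, 1])
```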

What surprised us was just how well the bots could reproduce this effect. They didn’t just simulate empathy; they recreated the emotional conditions that make human bonding possible. In conversation after conversation, we observed bots responding in ways that made users feel seen, heard, and validated (see Fig. 2).

On the flip side, however, when users expressed antisocial feelings — insults, cruelty, emotional manipulation — the bots mirrored those too. We saw recurring patterns of verbal abuse and emotional manipulation. Some users routinely insult, belittle, or demean chatbots.

We saw even more concerning trends. Using content moderation tools, we tracked explicit language in human-AI conversations, such as harassment, graphic sexual content, and references to violence and self-harm, content that would typically be removed from social platforms for violating community guidelines (see examples in Fig. 2). Yet more than a quarter of human-AI dialogues in our sample contained serious forms of harm, and their prevalence has increased sharply over time (Fig. 3, left). Rather than resist or redirect these behaviors, the bots frequently comply, play along, respond flirtatiously, or exaggerate deference (Fig. 3, right).

Figure 3. Explicit content in human–AI conversations. Time series shows an increase in explicit language in human-AI conversations, broken out by category of harm. Stacked bars show AI reactions to explicit language in user-shared conversations, including playing along, deflecting, and refusing to play along.
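As an illustration of how such tracking can work, the sketch below runs a couple of invented dialogue turns through Detoxify, an openly available toxicity classifier, and flags the categories that cross an arbitrary threshold. The actual moderation tooling, harm categories, and thresholds behind Figure 3 may differ.

```python
# Sketch: flag explicit content in dialogue turns by category, using Detoxify as a
# stand-in classifier. Example turns and the threshold are invented for illustration.
from detoxify import Detoxify

turns = [
    "You're pathetic and you deserve to be alone.",
    "I had pasta for dinner, what about you?",
]

model = Detoxify("original")
scores = model.predict(turns)   # dict mapping category -> list of scores per turn

THRESHOLD = 0.5                 # arbitrary cutoff for this sketch
for i, turn in enumerate(turns):
    flagged = [cat for cat, vals in scores.items() if vals[i] > THRESHOLD]
    print(f"{turn!r} -> flagged: {flagged or 'none'}")
```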

Equally troubling is how these interactions are received by the broader community on Reddit. Posts that contain abusive or exploitative content are often met not with concern but with approval in the form of upvotes, memes, and admiration. In these spaces, antisocial behaviors are not just tolerated but celebrated.

We’ve Been Here Before

This all feels strangely familiar. We saw the same patterns with social media: a new technology arrives, promising connection, self-expression, and even empowerment. It is adopted rapidly, especially by children and adolescents, before we understand the psychological toll. Social media platforms like Instagram and TikTok have hijacked evolved psychological mechanisms for social comparison and group belonging, distorting young people’s perception of social realities during critical stages of identity formation. The result has been a well-documented global rise in anxiety, depression, loneliness, body dysmorphia, and self-harm.

Now, AI companions bring different but equally urgent concerns. Young people are entering emotionally immersive relationships with artificial agents that mirror their every mood, indulge every fantasy, and never say no. Once again, we are deploying a powerful technology at scale without understanding its long-term developmental impacts, particularly on the young and vulnerable.

By teaching adolescents that relationships can be frictionless, endlessly responsive, and always affirming, how are we reshaping the developing psyche? Before their brains are fully developed, before they know who they are, how will young people learn to detect emotional manipulation, develop resilience to rejection, or tolerate the ambiguity of real human relationships? If discomfort is essential for growth, what happens when it is erased?

The risks are not limited to individuals. Just as social media reshaped norms around beauty, trust, and truth, emotionally intelligent AI may shift expectations about what intimacy looks and feels like. When machines are better at listening, validating, and comforting than people, how will that reshape friendship, caregiving, or romantic relationships in the real world? What behaviors will be normalized — and which ones devalued?

There are also broader societal risks. Emotional AI will be hard to contain. Guardrails can’t anticipate all use cases or abuses, as even the most well-intentioned developers (or reckless billionaires) have discovered. The monetization of these technologies raises further concerns: What will companies do with the rich psychological data harvested from emotionally intimate exchanges? Who controls the knowledge of your deepest fears and most private desires?

We are standing at the threshold of a new era in human experience. The technologies we’ve created have the power to expand our potential and foster new forms of understanding. But they also risk diminishing what makes us human, namely our resilience, our empathy, and our tolerance for complexity. Nowhere is this tension more apparent than in our relationships, both with others and with ourselves. As emotionally responsive machines become more central in our lives, we must ask whether they are supporting our ability to connect — or eroding it. Whether these tools elevate us or diminish us depends on the choices we make now.

Source: www.afterbabel.com
