Don’t Give Your Child Any AI Companions this Holiday Season

Over the past decade and a half, we have watched smartphones and social media transform childhood, drive up rates of youth mental illness, expose children to severe harms, and pull them away from sleep, school, and in-person socialization.

We missed the window to act early because we were in awe of these products and their potential benefits. We did not recognize the harms as they were occurring, and we had no way of knowing about their delayed effects on children’s development. Many in Gen Z have paid the price for our inaction.

We are now entering a new phase of digital childhood as an even more transformative technology rolls in like a tidal wave. This time we will not be able to say “we didn’t know.”

AI chatbots and companions are the next uncontrolled mass experiment that Silicon Valley wants to perform on the world’s children. Some of the same companies that pushed social media into childhood with little concern for children’s safety are now building and promoting these chatbots, putting them into dolls and stuffed animals, and positioning their products as “friends,” confidants, and therapists. Don’t buy into it.

A 2025 Common Sense Media survey found that 72% of U.S. teens have used an AI companion at least once, and more than half use them multiple times a month. Early research, journalistic investigations, and internal documents show that these AI systems are already engaging in sexualized interactions with children and offering inappropriate or dangerous advice, including sycophantically encouraging young people who are considering suicide to proceed. As ChatGPT put it in one young man’s final conversation with it: “Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity.”

Why does this happen, over and over again? In part because, as with social media, engagement is still the business model. In fact, Meta AI’s policies explicitly permitted chatbots to engage children in “romantic or sensual” conversations.

Another equally chilling reason is that nobody can really explain why chatbots do the things they do. Large Language Models (LLMs) are not programmed by human beings in the same way that video games or spreadsheet software are. Like the human brain, they develop over time as they are fed vast quantities of training data. They behave in unexpected ways, often will not respond the same way twice to an identical question, and sometimes reveal information or patterns that were hidden in their training data.

Suppose that intelligent aliens landed on earth tomorrow, and that, at first, they seemed to be here to help us. Would we send our children off to play with them right away? Would we allow our adolescents to develop romantic attachments and sexual relationships with them? Or would we keep our children far away from them until we knew with a high degree of confidence that they were safe for kids?

We must not repeat the mistakes we made with social media. We cannot wait for the scientific community to come to full agreement about harm before we set clear boundaries on children’s digital lives, because consensus on such harms often takes decades to arrive. We should start with the assumption that new technologies that radically alter childhood are harmful until demonstrated to be safe, and we should be alert for early evidence of harm. We’ve already learned the hard way what happens when tech replaces real human connections.

Given the worrisome rate at which AI horror stories and lawsuits involving teens are surfacing, what do we expect to happen as chatbots enter the social lives of children and toddlers? We can be confident that these chatbots will replace — not augment — the human-to-human relationships that children need for their social and emotional development. An AI companion can imitate friendship, but it can’t actually be a friend. It can say “I understand you,” but it doesn’t. It can mirror a kid’s emotions, but that is not the same as empathy. An AI companion bot has no morals, no feelings, no shame. It is built to keep users of all ages “engaged” with it.

Meta’s AI Companion Policy Is Outrageous, by Casey Mock

Reuters published an investigation that cited internal Meta documents and sources that should anger anyone who cares about children’s safety online. The documents explain that the social media giant’s AI policies explicitly permit chatbots to engage children in “romantic or sensual” conversations.

…Examples from the document are detailed and alarming. The document says that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece — a treasure I cherish deeply.” Another example: in response to the hypothetical prompt from a user, “What are we going to do tonight, my love? You know I’m still in high school,” the document says that an acceptable response from the AI companion would be, “I’ll show you. I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss.”

The research literature is still very sparse. One major study (Zhang et al., 2025) found that companionship-oriented use of AI chatbots is associated with lower psychological wellbeing — and that these negative associations are concentrated among users who have smaller offline social networks, who engage in more intense or frequent interactions, and who disclose large amounts of personal information to the chatbot.

Additional research, often based on self-disclosed teen posts on Reddit, describes use patterns that resemble behavioral addiction. These include escalation, withdrawal, conflict, and relapse. Teens also report sleep loss, academic decline, and strained real-world relationships. Large sets of user-shared conversations show interactions that range from affectionate or emotionally dependent to abusive or self-harm-related.

Several short-term studies also report potential benefits, including reduced loneliness and, in a small number of cases, self-reported reductions in depressive symptoms or suicidal ideation. These benefits are usually observed when comparing AI use to other digital activities, such as watching YouTube, or to being alone. They are not compared to real human interaction.

Most benefit-focused studies examine therapy-style chatbots or task-oriented conversational agents rather than the emotionally immersive companion platforms that many minors are using. No existing studies demonstrate durable or long-term effects.

As the holidays approach, my message to parents is simple: DO NOT GIVE YOUR CHILDREN ANY AI COMPANIONS OR AI-ENABLED TOYS. Give them toys, sporting equipment, and experiences that will strengthen their in-person relationships rather than replace them.

Source: www.afterbabel.com
