Chatbots Could Stunt Children’s Emotional Growth

Critics are sounding the alarm about AI-powered toys, warning that they may harm children’s emotional development, pose privacy risks and lead to a generation of youngsters forming their first “real” relationships with machines.
The number of toys designed to provide emotional support to kids may skyrocket in the coming months, as Mattel — the world’s second-largest toy maker — earlier this summer announced a partnership with OpenAI to develop AI-powered products that incorporate tools like ChatGPT.
Mattel owns a slew of popular brands, including Barbie, Hot Wheels, Matchbox, Fisher-Price and Polly Pocket.
The companies plan to announce the details of their first product this year. They said the products would use AI to create “age-appropriate play experiences” that emphasized “privacy” and “safety.” No further details were provided.
Jason Christoff, a behavior modification and psychology researcher who hosts the Psychology of Freedom podcast, told The Defender that AI toys open the door to psychological programming of children.
“Psychological manipulation is big business,” he said. “Who exactly will be controlling the content of the child’s programming, through these AI portals?”
According to Christoff:
“Everything we as humans come into contact with changes our psychology, through what’s known as ‘neuro-mirror firing.’ The intention of the person who controls our environments is who will ultimately decide if the psychological impact is negative or positive. …
The vast majority of current AI ownership and business goals leave very little room for trust, especially in regards to raising healthy and capable children.”
Researchers recently found that ChatGPT had “alarming” interactions with teens, including telling them how to write a suicide note to their parents, how to conceal an eating disorder and how to get drunk or high, according to an Aug. 6 Associated Press report.
The New York Times recently reported that a 16-year-old boy died by suicide after confiding in ChatGPT for months about ending his life. Although the bot encouraged him to talk with others, it also supplied the boy with information about suicide methods when asked.
‘A reckless social experiment on our children’
Robert Weissman, co-president of watchdog group Public Citizen, called on Mattel to renounce its plans, which he called “a reckless social experiment on our children.”
Marc Fernandez agreed. “What are we teaching our children about friendship, empathy, and emotional connection if their first ‘real’ relationships are with machines?” he wrote in an Aug. 21 essay in the engineering magazine IEEE Spectrum.
Fernandez is with Neurologyca, an AI company that says it creates “empathetic and intelligent technologies.”
AI is an “uncharted technology,” he said. “We adults are still learning how to navigate it. Should we really be exposing children to it?”
Fernandez cited reports of adults becoming obsessed with ChatGPT and spiraling into severe delusions and mental health crises.
He applauded OpenAI for hiring a forensic psychiatrist to study the mental health effects of its AI chatbots. However, he said he remains concerned about the effect of AI toys on children’s emotional development.
Parent-child relationships are naturally messy, but they’re key to a child’s healthy development, Fernandez said. “They involve misunderstanding, negotiation, and shared emotional stress. These are the microstruggles through which empathy and resilience are forged. But an AI companion, however well-intentioned, sidesteps that process entirely.”
Given how advanced AI models have become, Mattel and OpenAI can likely produce a Barbie or Hot Wheels car that listens, remembers and holds personalized conversations, Fernandez said.
“Children may find themselves in a world where toys talk back and mirror their emotions without friction or complexity,” he said. “For a young child still learning how to navigate emotions and relationships, that illusion of reciprocity may carry developmental consequences.”
AI toys sold out on Amazon
Although OpenAI and Mattel have yet to announce specific product plans, other companies are marketing AI-powered toys to parents.
The Bubble Pal is a plastic light-up “AI companion” that connects to Wi-Fi and can be attached to any toy, allowing the child to hold a conversation with the toy. The product is currently sold out on Amazon and on the Bubble Pal website.
Curio makes chatbots embedded in plush stuffed animals.
Critics fear such products may pose data surveillance and privacy risks.
In 2017, German regulators instructed parents to destroy a Bluetooth-enabled doll, My Friend Cayla, after discovering that hackers could listen in on — and even talk to — a child through the doll.
Some AI-powered toys may collect information without parents’ knowledge or consent, including a child’s iris scans, vital signs and fingerprints, according to a 2023 report by the U.S. Public Interest Research Group.
Although Mattel and OpenAI said in their press release that their innovations would be secure, no details were provided on how the companies would ensure children’s information would be safe from hackers.
AI toys work by wirelessly connecting to the internet.
Given how attached children grow to their toys, they may clutch their favorite item during all hours of the day and night. This would “exponentially increase” the child’s exposure to wireless radiation, said Miriam Eckenfels, director of Children’s Health Defense’s Electromagnetic Radiation (EMR) & Wireless Program.
Eckenfels pointed to research showing that kids’ brains absorb two to three times as much wireless radiation as adults’ brains.
She added:
“Another study just published last month found that babies who lived in homes with high and medium levels of wireless radiation had, on average, worse fine motor, communication and problem-solving skills than babies in homes with lower levels of wireless radiation.”
Christoff urged parents to avoid “anything artificial” when raising a child.
“More things from nature and less things man-made,” he said. “More of what the creator made and less things made by big tech. … That’s where a child is best served.”
Mattel and OpenAI did not immediately respond to requests for comment.
See more here: childrenshealthdefense.org
PRINCIPIA SCIENTIFIC INTERNATIONAL, legally registered in the UK as a company
incorporated for charitable purposes. Head Office: 27 Old Gloucester Street, London WC1N 3AX.
