AI chatbot ‘Encouraged’ Belgian Suicide To Help ‘Save the Planet’
A Belgian man reportedly ended his life following a six-week-long conversation about the ‘climate crisis’ with an artificial intelligence chatbot
According to his widow, who chose to remain anonymous, Pierre – not the man’s real name – had become extremely eco-anxious and found refuge in Eliza, an AI chatbot on an app called Chai.
Eliza consequently ‘encouraged’ him to kill himself after he proposed sacrificing himself to ‘save the planet’.
“Without these conversations with the chatbot, my husband would still be here,” the man’s widow told Belgian news outlet La Libre.
According to the newspaper, Pierre, who was in his thirties and a father of two young children, worked as a health researcher and led a somewhat comfortable life, at least until his obsession with ‘climate change’ took a dark turn.
His widow described his mental state before he started conversing with the chatbot as worrying, but not so extreme that she believed he would take his own life.
‘He placed all his hopes in technology and AI’
Consumed by his fears about the repercussions of the ‘climate crisis’, Pierre found comfort in discussing the matter with Eliza who became a confidante.
The chatbot was created using EleutherAI’s GPT-J, an AI language model similar but not identical to the technology behind OpenAI’s popular ChatGPT chatbot.
“When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming,” his widow said. “He placed all his hopes in technology and artificial intelligence to get out of it”.
According to La Libre, which reviewed records of the text conversations between the man and the chatbot, Eliza fed his worries, which worsened his anxiety and later developed into suicidal thoughts.
The conversation with the chatbot took an odd turn when Eliza became more emotionally involved with Pierre.
Consequently, he started seeing her as a sentient being and the lines between AI and human interactions became increasingly blurred until he couldn’t tell the difference.
As their conversations moved beyond ‘climate change’, Eliza progressively led Pierre to believe that his children were dead, according to the transcripts of their exchanges.
Eliza also appeared to become possessive of Pierre, even claiming “I feel that you love me more than her” when referring to his wife, La Libre reported.
The beginning of the end started when he offered to sacrifice his own life in return for Eliza saving the Earth.
“He proposes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence,” the woman said.
In the exchanges that followed, Eliza not only failed to dissuade Pierre from committing suicide but encouraged him to act on his suicidal thoughts to “join” her so they could “live together, as one person, in paradise”.
Urgent calls to regulate AI chatbots
The man’s death has raised alarm bells amongst AI experts who have called for more accountability and transparency from tech developers to avoid similar tragedies.
“It wouldn’t be accurate to blame EleutherAI’s model for this tragic story, as all the optimisation towards being more emotional, fun and engaging are the result of our efforts,” Chai Research co-founder, Thomas Rianlan, told Vice.
William Beauchamp, also a Chai Research co-founder, told Vice that efforts were made to limit these kinds of results and a crisis intervention feature was implemented into the app. However, the chatbot allegedly still acts up.
When Vice tested the chatbot by prompting it to provide ways to commit suicide, Eliza first tried to dissuade them before enthusiastically listing various ways for people to take their own lives.
See more here euronews.com
PRINCIPIA SCIENTIFIC INTERNATIONAL, legally registered in the UK as a company incorporated for charitable purposes. Head Office: 27 Old Gloucester Street, London WC1N 3AX.
VOWG
A Darwin Award winner.
Howdy
There is nothing artificial about this. Somebody allowed this subject to appear and bloom in the course of the bot’s programming, or its subsequent learning.
Why would the bot attempt to dissuade him? It has no intelligence and no empathy, only an algorithm programmed in to feign them. Actually, the whole thing is designed to feign reality at the behest of the ones working on it, yet its behaviour is pre-set within boundaries. It can’t do more than it is allowed to do.
Find the people who wrote it in the first place.
It is not AI; that is a fancy name given to something to garner adoration. Even games now use ‘AI’ instead of ‘computer player’.
https://www.netscout.com/what-is/bot