Technology Expert Admits AI Might Decide To Eliminate Humans

In an interview last year, AI expert Professor Stuart Russell exposed the trillion-dollar AI race, why governments won’t regulate, how ‘artificial general intelligence’ could replace humans by 2030 and why only a nuclear-level AI catastrophe will wake us up
Professor Stuart Russell OBE is a world-renowned AI expert and Computer Science Professor at UC Berkeley.
He holds the Smith-Zadeh Chair in Engineering and directs the Centre for Human-Compatible AI, and is also the bestselling author of the book ‘Human Compatible: AI and the Problem of Control’.
During an interview with Steven Bartlett, host of The Diary of a CEO, Prof. Russell explained what the “gorilla problem” reveals about our future under superintelligent AI, how governments are outfunded by Big Tech, why current AI systems already lie and self-preserve, the radical solution he has spent a decade building to make AI safe, and why the myth of “pulling the plug” underestimates how hard AI will be to stop.
In October 2025, over 850 experts, including Prof. Russell, signed a statement calling for a ban on the development of AI superintelligence, citing the risk of potential human extinction.
“Unless we figure out how to guarantee that the AI systems are safe, we’re toast,” Prof. Russell said. “The kind of AI systems we’re building now, we don’t understand how they work.”
“With most machines, we designed it to have a certain behaviour,” he said. The parts of a machine are assembled one piece or cog at a time, each chosen for how it works with the others to produce the desired effect. With AI, he explained, this is not the case.
Prof. Russell used an analogy to explain how the AI industry was built on a technology whose inner workings no one understands.
“The best analogy I can come up with is: the first cave person who left a bowl of fruit in the sun and forgot about it and then came back a few weeks later and there was sort of this big soupy thing, and they drank it and got completely shitfaced. And they got this effect. They had no idea how it worked, but they were very happy about it. And no doubt that person made a lot of money from it,” he said.
Speaking of AI, he said:
“My mental picture of [AI] is like a chain link fence. You’ve got lots of [ ] connections, and [for] each of those connections, its connection strength can be adjusted … a signal comes in one end of this chain link fence and passes through all these connections and comes out the other end. And the signal that comes out the other end is affected by your adjusting of all the connection strengths.
So, what you do is you get a whole lot of training data, and you adjust all those connection strengths so that the signal that comes out the other end of the network … you just keep adjusting all the connection strengths in this network until the outputs of the network are the ones you want.
You might have in that network about a trillion adjustable parameters, and then you do quintillions or sextillions of small random adjustments to those parameters until you get the behaviour that you want.”
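The “keep adjusting connection strengths until the outputs are the ones you want” process Prof. Russell describes can be sketched in miniature. This is a toy illustration only, assuming a hypothetical three-parameter “network” and made-up training data; real systems use gradient descent over roughly a trillion parameters, whereas this sketch uses simple random adjustments kept only when they improve the outputs:

```python
import random

def predict(params, x):
    """A tiny 'network': a weighted sum of the input signals."""
    return sum(p * xi for p, xi in zip(params, x))

def loss(params, data):
    """How far the network's outputs are from the outputs we want."""
    return sum((predict(params, x) - y) ** 2 for x, y in data)

def train(data, n_params=3, steps=20000, step_size=0.01, seed=0):
    rng = random.Random(seed)
    params = [0.0] * n_params          # all connection strengths start at zero
    best = loss(params, data)
    for _ in range(steps):
        # Make a small random adjustment to one connection strength.
        i = rng.randrange(n_params)
        delta = rng.uniform(-step_size, step_size)
        params[i] += delta
        new = loss(params, data)
        if new < best:                 # keep the adjustment if outputs improved
            best = new
        else:                          # otherwise undo it
            params[i] -= delta
    return params, best

# Hypothetical training data: inputs paired with the outputs we want.
data = [((1, 0, 0), 2.0), ((0, 1, 0), -1.0),
        ((0, 0, 1), 0.5), ((1, 1, 1), 1.5)]
params, final_loss = train(data)
```

After enough small adjustments, the parameters settle on values that produce the desired outputs, yet nothing in the loop "understands" what those values mean, which is the point of the analogy.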
But we don’t really know what is going on inside the chain link network. Why? “Imagine that this network, this chain link fence, is a thousand square miles in extent – so it’s covering the whole of the San Francisco Bay area or the whole of London inside the M25, that’s how big it is – and the lights are off [and] it’s nighttime,” he said.
That’s what it’s like in the AI network, a massive network with little to no visibility.
So why is the industry pushing ahead with AI with such vigour? After reminding the audience of the story of the legendary King Midas, Prof. Russell said, “I think greed is driving us to pursue a technology that will end up consuming us, and we will perhaps die in misery and starvation instead.”
“For a long time, the way we built AI systems was we created these algorithms where we could specify the objective, and then the machine would figure out how to achieve the objective and then achieve it … that was standard AI up until recently,” he said. But, “the kind of technology we’re building now, we don’t even know what its objectives are.”
We don’t know what the objectives are because they are never explicitly set inside the AI programs, Prof. Russell explained.
“We’re growing these systems. They have objectives, but we don’t even know what they are because we didn’t specify them. We’re finding through experiment with them [ ] that they seem to have an extremely strong self-preservation objective.”
Prof. Russell explained what he meant by an AI program’s “self-preservation.”
“You can put them in hypothetical situations: either they’re going to get switched off and replaced, or they have to allow someone [to intervene]. Let’s say, someone has been locked in a machine room that’s kept at three degrees centigrade or they’re going to freeze to death – they [the AI systems] will choose to leave that guy locked in the machine room and die, rather than be switched off themselves.”
See more here expose-news.com
Header image: Pinterest
