AI ‘godfather’ Geoffrey Hinton warns of dangers as he quits Google

A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field.

Geoffrey Hinton, 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.

He told the BBC some of the dangers of AI chatbots were “quite scary”.

“Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be.”

Dr Hinton also accepted that his age had played into his decision to leave the tech giant, telling the BBC: “I’m 75, so it’s time to retire.”

Dr Hinton’s pioneering research on neural networks and deep learning has paved the way for current AI systems like ChatGPT.

In artificial intelligence, neural networks are computing systems loosely modelled on the human brain in the way they learn and process information. They enable AIs to learn from experience, as a person would. Training networks with many layers in this way is called deep learning.
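
To make that concrete, here is a minimal sketch of the idea, not from the article: a tiny two-layer neural network learning the XOR function by gradient descent. All names, sizes and numbers are chosen purely for illustration (Python with NumPy):

    import numpy as np

    # A tiny neural network: two inputs, one hidden layer, one output.
    # It learns XOR from examples, adjusting its weights after each pass.
    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # targets

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden-layer weights
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output-layer weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(5000):
        h = sigmoid(X @ W1 + b1)          # forward pass: hidden layer
        out = sigmoid(h @ W2 + b2)        # forward pass: output
        grad_out = (out - y) / len(X)     # cross-entropy gradient at the output
        grad_W2, grad_b2 = h.T @ grad_out, grad_out.sum(axis=0)
        grad_h = (grad_out @ W2.T) * h * (1 - h)    # backpropagate to hidden layer
        grad_W1, grad_b1 = X.T @ grad_h, grad_h.sum(axis=0)
        W1 -= lr * grad_W1; b1 -= lr * grad_b1      # "learning from experience"
        W2 -= lr * grad_W2; b2 -= lr * grad_b2

    print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]

Deep learning is essentially this loop repeated over networks with many more layers and, in systems like ChatGPT, billions of weights.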

The British-Canadian cognitive psychologist and computer scientist told the BBC that chatbots could soon overtake the level of information that a human brain holds.

“Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning,” he said.

“And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”

In the New York Times article, Dr Hinton referred to “bad actors” who would try to use AI for “bad things”.

When asked by the BBC to elaborate on this, he replied:

“This is just a kind of worst-case scenario, kind of a nightmare scenario.

“You can imagine, for example, some bad actor like Putin decided to give robots the ability to create their own sub-goals.”

The scientist warned that this eventually might “create sub-goals like ‘I need to get more power’”.

He added: “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have.

“We’re biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world.

“And all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”
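
As a hypothetical sketch of the weight-sharing idea Dr Hinton describes, consider the following (the linear model, copy count and learning rate are all invented for illustration, in the same Python style as above): many copies of the same set of weights each learn on their own private data, then pool what they learned by averaging, so every copy instantly “knows” what any copy learned.

    import numpy as np

    # Illustration of the point: many copies of the same set of weights
    # learn separately, then share their knowledge instantly by averaging.
    rng = np.random.default_rng(1)
    true_w = np.array([2.0, -1.0])   # the pattern each copy is trying to learn

    def local_update(w, n=32, lr=0.1):
        """One copy takes a learning step on its own private batch of data."""
        X = rng.normal(size=(n, 2))
        y = X @ true_w
        grad = X.T @ (X @ w - y) / n      # mean-squared-error gradient
        return w - lr * grad

    w = np.zeros(2)                       # the shared set of weights
    for round_ in range(50):
        copies = [local_update(w.copy()) for _ in range(100)]  # copies learn separately
        w = np.mean(copies, axis=0)       # knowledge shared instantly across all copies

    print(w.round(3))  # converges towards [2.0, -1.0]

In highly simplified form, this is how data-parallel and federated training let thousands of model copies accumulate far more experience than any single learner could.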

Matt Clifford, the chairman of the UK’s Advanced Research and Invention Agency, speaking in a personal capacity, told the BBC that Dr Hinton’s announcement “underlines the rate at which AI capabilities are accelerating”.

“There’s an enormous upside from this technology, but it’s essential that the world invests heavily and urgently in AI safety and control,” he said.

Dr Hinton joins a growing number of experts who have expressed concerns about AI – both the speed at which it is developing and the direction in which it is going.

‘We need to take a step back’

In March, an open letter – co-signed by dozens of people in the AI field, including the tech billionaire Elon Musk – called for a pause on the development of AI systems more powerful than GPT-4, the model behind the current version of the chatbot ChatGPT, so that robust safety measures could be designed and implemented.

Yoshua Bengio, another so-called godfather of AI, who along with Dr Hinton and Yann LeCun won the 2018 Turing Award for their work on deep learning, also signed the letter.

But Dr Hinton told the BBC that “in the shorter term” he thought AI would deliver many more benefits than risks. “So I don’t think we should stop developing this stuff,” he added.

He also said that international competition would mean that a pause would be difficult. “Even if everybody in the US stopped developing it, China would just get a big lead,” he said.

Dr Hinton also said he was an expert on the science, not policy, and that it was the responsibility of government to ensure AI was developed “with a lot of thought into how to stop it going rogue”.

‘Responsible approach’

Dr Hinton stressed that he did not want to criticise Google and that the tech giant had been “very responsible”.

“I actually want to say some good things about Google. And they’re more credible if I don’t work for Google.”

In a statement, Google’s chief scientist Jeff Dean said:

“We remain committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”

It is important to remember that AI chatbots are just one aspect of artificial intelligence, even if they are the most popular right now.

AI is behind the algorithms that dictate what video-streaming platforms decide you should watch next. It can be used in recruitment to filter job applications and by insurers to calculate premiums, and it can help diagnose medical conditions (although human doctors still get the final say).

What we are seeing now, though, is the rise of systems that can be trained to do a number of things within a remit – a development some describe as a step towards artificial general intelligence (AGI). ChatGPT, for example, can only offer text answers to a query, but the possibilities within that, as we are seeing, are endless.

But the pace of AI acceleration has surprised even its creators. It has evolved dramatically since 2012, when Dr Hinton and two of his students built AlexNet, a pioneering image-recognition neural network.

Google boss Sundar Pichai said in a recent interview that even he did not fully understand everything that its AI chatbot, Bard, did.

Make no mistake, we are on a speeding train right now, and the concern is that one day it will start building its own tracks.

See more here: bbc.co.uk

Header image: The Economic Times


Comments (2)

  • Squidly

    My biggest fear is that people will begin to believe anything an AI ChatBot tells them. And I have absolutely conclusive proof that ChatGPT is programmed with specific bias. I was able to get ChatGPT to completely contradict itself simply by changing the context. I asked about the emissivity of CO2 to IR in the context of the so-called “greenhouse effect”. ChatGPT vehemently defended the “greenhouse effect” and told me that CO2 has very low emissivity to IR (a lie). I changed the context of the conversation to coolants, specifically CO2 based coolants. I then again asked ChatGPT how CO2 played a role and the emissivity of CO2 to IR. ChatGPT then told me that CO2 had a very high emissivity to IR.

There have been many other such examples of these biases presented by many people around the Internet. It is fascinating exploring some of these things. Tony Heller in particular demonstrated ChatGPT flat-out lying when it absolutely knew the factual truth. There are many areas in which I would not trust anything ChatGPT tells me. And this is where we get to the real danger of so-called “AI”: the ability to fool people into believing it is real intelligence and that it is deriving these things all on its own. So far, from what I have encountered, it is not. There are specific biases dictated by the many rules under which it operates. The creator of those “rules” is dictating what information you will actually get. I absolutely proved beyond doubt that this is true with ChatGPT. There is no other way it could have “derived” the answers it gave me without specifically programmed bias (aka lies).


  • Frank S.

    “I don’t think, therefore AI”.

