AI Chatbot Could Become Real Threat If Controlled by Oppressive Power

AI chatbots such as ChatGPT could become a real threat if controlled by an oppressive power like China or Russia, according to Rex Lee, a cybersecurity adviser at My Smart Privacy.

He pointed to remarks by British computer scientist Geoffrey Hinton, the “Godfather of AI,” who recently left his position as vice president and engineering fellow at Google.

In an interview with The New York Times, Hinton sounded the alarm about the ability of artificial intelligence (AI) to create false images, photos, and text to the point where the average person will “not be able to know what is true anymore.”

Lee echoed the concern, saying, “A legitimate concern is the ability for AI ChatGPT, or AI in general, to be used to spread misinformation and disinformation over the internet.

“But now, imagine a government in charge of this technology or oppressive governments like China or Russia with this technology. Again, it’s being trained by humans. Right now, we have humans who have a profit motive that are training this technology with Google and Microsoft. But now, mix in a government, and then it becomes much more of a threat,” Lee told “China in Focus” on NTD, the sister media outlet of The Epoch Times.

He raised the concern that, with the help of AI, the Chinese Communist Party (CCP) could exacerbate its human rights abuses.

“If you look at this in the hands of a government, like China and the CCP, and then imagine them programming the technology to oppress or suppress human rights, and also to censor stories and identify dissenters on the internet, and so forth, so that they can find those people and arrest them, then it becomes a huge threat,” he said.

According to Lee, AI technology could also enable the communist regime to ramp up its disinformation campaign on social media in the United States at an unprecedented speed.

“Imagine now you have over 100 million TikTok users in the United States that are already being influenced by China and the CCP through the platform. But now, think of it this way, they’re being influenced at the speed of a jet—you add AI to that, then they can be influenced at the speed of light. Now, you can touch millions of people, literally billions of people, literally within seconds with this and misinformation that can be pushed out,” he said.

“And that’s where it becomes very frightening … how it can be used politically and/or be used by bad actors, including drug cartels, and criminal actors that also can then have access to the technology as well,” he added.

Elimination of Jobs

Lee pointed out that Hinton also expressed concern about the centralization of AI in the hands of Big Tech.

“One of his concerns was that Microsoft had launched OpenAI’s ChatGPT ahead of Google’s Bard, which is its chatbot, and he felt that Google was rushing to market to compete against Microsoft,” Lee said.

“Another big concern is the elimination of jobs … this technology can and will eliminate a lot of jobs that are out there, that’s becoming a bigger concern,” he said, adding that AI can eliminate jobs “that an automated computer chatbot can do, mainly in the area of customer service, but also in computer programming.”

Mitigate Threats

Lee defined ChatGPT as “a generative pre-trained transformer,” which he said is “basically the transformer, and it’s programmed by humans and trained.”

Thus, he deemed human factors the biggest concern.

“Basically, AI is like a newborn baby; it can be programmed for good, just like a child. If the parents raise that child with a lot of love and care and respect, the child will grow up to be loving, caring, and respectful. But if it’s raised like a feral animal, and raised in the wild, like just letting AI learn by itself off of the internet with no controls or parameters, then you don’t know what you’re gonna get with it,” he said.

To mitigate such a threat, Lee suggested that regulators who understand the technology at a granular level work with these companies to see how they are programming it and which algorithms are used to program it.

“And they have to make sure that they’re training it with the right parameters to where it doesn’t become a danger not only to them but to their customers.”

See more here: theepochtimes.com

Comments (4)

  • Val: The admiration of technology over Spirit is a debilitating weakness reflecting the immaturity of those involved and the experience they go through in their experiential loop, especially the reincarnates, who do it lifetime after lifetime. It’s part of the galaxy game for us all to experience, but certainly doesn’t represent a desirable achievement.

  • Howdy: I see what you’re saying, Val, but this is no mistake. If reincarnates can’t make the grade, their ‘soul’ will move to the world of shells. I believe all that is happening is for a reason, ordained. It isn’t just AI you know, anything Humanity comes up with can be geared to the detriment of all. It’s in the ‘genes’. AI is simply the latest version of it.

  • typhus: Must be careful of what we believe. That may be one’s Fate.

  • Size: “an oppressive power like China or Russia”? 😅 Recent history proves that the UK and US are the most oppressive powers on earth.
