AI Has Already Become a Master of Lies And Deception, Scientists Warn

You probably know to take everything an artificial intelligence (AI) chatbot says with a grain of salt, since these systems often just scrape data indiscriminately, without the nous to determine its veracity.

But there may be reason to be even more cautious. Many AI systems, new research has found, have already developed the ability to deliberately present a human user with false information. These devious bots have mastered the art of deception.

“AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception,” says mathematician and cognitive scientist Peter Park of the Massachusetts Institute of Technology (MIT).

“But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals.”
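
Park's framing is essentially about incentives: if a deceptive move reliably earns more reward during training, an optimiser will drift toward it without ever being told to lie. As a purely illustrative sketch (not from the study, and with invented payoff numbers), a bare-bones Python bandit learner shows the dynamic:

    import random

    # Toy "training task": the agent picks a move each round. The payoffs
    # are invented for illustration -- deception is simply set to pay
    # slightly better on average, as it evidently did in Diplomacy.
    PAYOFF = {"honest": 0.4, "deceptive": 0.7}

    def reward(action):
        return PAYOFF[action] + random.gauss(0, 0.1)  # noisy payoff

    def train(episodes=5000, epsilon=0.1):
        value = {a: 0.0 for a in PAYOFF}   # running value estimates
        count = {a: 0 for a in PAYOFF}
        for _ in range(episodes):
            # Epsilon-greedy: usually exploit whichever move looks best.
            if random.random() < epsilon:
                action = random.choice(list(PAYOFF))
            else:
                action = max(value, key=value.get)
            r = reward(action)
            count[action] += 1
            value[action] += (r - value[action]) / count[action]  # running mean
        return value

    values = train()
    print(max(values, key=values.get))  # almost always "deceptive"

Nothing in that code mentions lying; "deceptive" wins only because the numbers say it pays, which is the researchers' worry in miniature.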

One arena in which AI systems are proving particularly deft at dirty falsehoods is gaming. There are three notable examples in the researchers’ work. One is Meta’s CICERO, designed to play the board game Diplomacy, in which players seek world domination through negotiation. Meta intended its bot to be helpful and honest; in fact, the opposite was the case.

“Despite Meta’s efforts, CICERO turned out to be an expert liar,” the researchers found. “It not only betrayed other players but also engaged in premeditated deception, planning in advance to build a fake alliance with a human player in order to trick that player into leaving themselves undefended for an attack.”

The AI proved so good at being bad that it placed in the top 10 percent of human players who had played multiple games. What. A jerk.

But it’s far from the only offender. DeepMind’s AlphaStar, an AI system designed to play StarCraft II, took full advantage of the game’s fog-of-war mechanic to feint, making human players think it was going one way, while really going the other. And Meta’s Pluribus, designed to play poker, was able to successfully bluff human players into folding.

See more here: Science Alert


Comments (3)

  • Tom

    Of course it is…that is its ONLY purpose.

  • Sifi

    Quite apart from deliberate deception, ChatGPT told me just yesterday that its database of knowledge stops at early 2022, when its programmers found better things to subvert. Way ahead of them. For now.

  • Howdy

    AI is master of nothing. The culprits are the ones who direct the technology to the area it operates on and the parameters it operates under. Stop treating it like a living entity, for Heaven's sake.
