AI Could Be The End Of Us

One of the great ways of encouraging kids, especially boys, to read for enjoyment back in the 1970s was to buy them the “Guinness Book of World Records.”

And along with the fascinating facts about 8-foot-11 Robert Wadlow of Alton, Illinois, and communist Russia’s Motherland statue dwarfing the Statue of Liberty (but only if you exclude Liberty’s pedestal), they would eventually find themselves reading a terrifying observation in the section on the largest man-made explosions:

“No official estimate has been published of the potential power of the device known as Doomsday, but this far surpasses any tested weapon.

“If it were practicable to construct, it is speculated that a 50,000-megaton cobalt-salted device could wipe out the entire human race except people who were deep underground and did not emerge for more than five years.”

They might later see the film “Dr. Strangelove” and laugh off this passage in the Guinness Book, but unfortunately a 21st-century doomsday machine has unwittingly been under construction for many years, and few in the private sector or government are taking it seriously.

Self-destruction in pursuit of the betterment of mankind is no novelty. Marie Curie died in 1934 from the effects of radioactivity, a word she coined. The physician and high-ranking Bolshevik Alexander Bogdanov, whose quest for a fountain of youth via blood transfusion found him experimenting even on Lenin’s younger sister, died in 1928 after exchanging blood with a student suffering from severe malaria and tuberculosis.

Last week, the non-profit Future of Life Institute, founded in 2014 by prominent scientists with a mission “to steer transformative technologies towards benefiting life and away from extreme large-scale risks,” issued an open letter calling for a six-month pause in advanced artificial intelligence work, with nearly 3,000 signatories so far, including Elon Musk, Apple co-founder Steve Wozniak, Université de Montréal computer scientist Yoshua Bengio, and Berkeley computer science professor and AI expert Stuart Russell.

The letter warns that “AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs.” Therefore, advanced AI “should be planned for and managed.”

Instead, however, AI labs in recent months have been “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”

Unfortunately, even the Future of Life letter is dangerously naive in regard to the threat of AI. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” it cautions, recommending that government and industry work together to impose “robust AI governance systems” including “a robust auditing and certification ecosystem.”

But once a mechanized AI mind has exceeded human capability, and at the same time is capable of self-improvement, there is no predicting its behavior. And predicting when that threshold will be crossed may be impossible.

Eliezer Yudkowsky of the Machine Intelligence Research Institute, who has studied AI safety for more than 20 years, penned an op-ed in Time magazine reacting to the Future of Life letter, which he refrained from signing because he believes it understates the dangers. Yudkowsky has a simple message: “Shut it down.”

“Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die,” Yudkowsky wrote.

In a short time, such an AI could devise technologies centuries beyond today’s and “build artificial life forms or bootstrap straight to postbiological molecular manufacturing.”

Without somehow imbuing Western civilization’s ethics into the machine’s thinking, something scientists do not know how to do (ethics, by the way, that countless humans themselves have defied over the centuries), Yudkowsky warns that “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

He wants all advanced AI training prohibited indefinitely, enforced by immediate multilateral agreements, with “preventing AI extinction scenarios … considered a priority above preventing a full nuclear exchange” and major world powers even being “willing to destroy a rogue datacenter by airstrike.”

Filmmaker and author James Barrat warned of all this nearly 10 years ago in his terrifying, extensively researched book, “Our Final Invention: Artificial Intelligence and the End of the Human Era.”

Barrat, who signed the Future of Life letter and plans a new book on AI, is no less concerned today. He told The Epoch Times that the development of AI is driven by a poisonous mixture of narcissism and greed.

“There is a huge economic incentive in play here, with expectations of AI technologies adding $16 trillion to global GDP by 2030, and astronomical wages for those currently conducting the research,” according to Barrat. “There is way too much arrogance among some leading figures in the AI field and definitely a great deal of ‘Hey look at us we’re building God.’”

Barrat pointed to Sam Altman, the CEO of OpenAI and father of ChatGPT, the chatbot whose underlying GPT-4 model Microsoft Research judges to be a possible early manifestation of artificial general intelligence (AGI).

In February, Altman wrote in a tweet:

“Although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones.”

Barrat says:

“Sam Altman is doing a bizarre fan dance with GPT [Generative Pre-trained Transformer] capabilities, alternately expressing appropriate concern about its unpredictable powers, then teasing a global release.

It’s about hyping for money. And the world is his captive, but what did he do to deserve that job? One person shouldn’t have that much responsibility.”

He cites Altman’s comments that “he wants to build and release ‘successively more powerful systems’ as ‘the best way to carefully steward AGI into existence.’ On what planet does this strategy make sense? Speed and caution don’t go together.”

Barrat emphasized:

“Many of GPT-3’s and GPT-4’s capabilities weren’t planned. They were discovered after the fact.

No one knows what’s happening inside these black box architectures. Some scary things we can’t combat could emerge at any time.”

Barrat is far from the first to outline the dangers of self-improving sentient computer intelligence.

Even before the early warnings of experts in recent decades, noted in Barrat’s “Our Final Invention,” there was the 1966 novel “Colossus” and its 1970 film adaptation, “Colossus: The Forbin Project,” in which a massive supercomputer built deep within an impenetrable mountain in the Rockies is, in the name of peace, handed control of the United States’ nuclear arsenal. Within a short time it merges with its Soviet counterpart and proceeds to blackmail the world.

After simultaneously detonating nukes in Death Valley and Ukraine, Colossus announces that “under my absolute authority” the problems of “famine, over-population, disease” will be gone as it solves “all the mysteries of the universe for the betterment of man,” adding that “we can co-exist, but only on my terms.”

Before this there was Canadian science fiction writer Laurence Manning’s 1933 novel “The Man Who Awoke,” in which, in the year 10,000 AD, an emotionless, omnipotent supercomputer, “the Brain,” controls all human activity from cradle to grave.

But even these fictional extrapolations are naive. Unfortunately, AI expert Yudkowsky’s scenario seems the most plausible: that a self-improving super-intelligence would act as we do toward, say, insects, with measured indifference.

When they don’t interfere with our activities, we ignore them. But when termites trespass into our homes or ants invade our picnic tables, we swat them or poison them, that is, destroy them in the most effective way available.

Curie and Bogdanov were casualties only of their own experimentation, but the heedless self-destructiveness of those pursuing advanced AI extends to the rest of us.

See more here: theepochtimes.com

Header image: The Charleston City Paper



