Silicon Valley Is in a Frenzy Over Bots That Build Themselves

Late last month, a large crowd gathered in downtown San Francisco to demand that the AI industry stop developing more powerful bots.

Holding signs and banners reading "Stop the AI Race" and "Don't Build Skynet," the protesters marched through the city and gave speeches outside the offices of Anthropic, OpenAI, and xAI.

The crowd demanded that these companies halt efforts to create superintelligent machines—and, in particular, AI models that can develop future AI models.

Such a technology, attendees said, could extinguish all human life.

At AI protests and happy hours, inside start-ups and major companies, the tech world is in a frenzy over the same thing: computers that make themselves smarter. Over the past year, the top AI companies have taken to loudly bragging about internal efforts to automate their own research.

OpenAI recently released a new model it described as “instrumental in creating itself.” Within the next six months, the company aims to debut what it has described as an “intern-level AI research assistant.”

Meanwhile, Anthropic says that as much as 90 percent of the company’s code is already written by Claude.

“We are starting to see AI progress feed back on itself,” Nick Bostrom, an influential Swedish philosopher who studies AI risk, told us. Within Silicon Valley, many insiders believe that we are teetering on the precipice of a world in which AI can rapidly improve its own capabilities.

Instead of waiting for months between new machine-learning breakthroughs, we might wait weeks. Imagine AI advancing faster and faster.

The idea of self-improving bots is nothing new. When the statistician I. J. Good first introduced the concept of recursive self-improvement in the 1960s, he wrote that machines capable of training their own, even more capable successors would be “the last invention” society ever needed to make.

But just a few years ago, the idea of actually building such AI models was on the back burner. When ChatGPT couldn't reliably add and subtract, let alone search the web, the notion that AI programs would soon be able to do world-class machine-learning research seemed laughable.

Even as tech companies made claims about the imminent arrival of “artificial general intelligence,” the capabilities needed for a bot to accelerate or even direct AI research seemed to exceed those of AGI.

Now, as AI models have become significantly better at coding, Silicon Valley has become hooked on the idea of self-improving machines. AI research involves a lot of grunt work—curating large data sets, running repeated experiments—that can be made more efficient with the help of coding bots.

Dario Amodei, Anthropic’s CEO, has estimated that coding tools speed up his company’s overall workflows by 15 to 20 percent.

See more at theatlantic.com

