After ChatGPT 5 Launch Fail: Is This As Far As A.I. Can Go?

Sam Altman and OpenAI finally released their long-anticipated version 5.0 of ChatGPT last week. So many users complained that the company quickly had to restore access to the previous 4.0 models (Source).

Almost everyone in the financial sector is now talking about the AI bubble, framing it either as something that will be no big deal or as something far worse.

But every once in a while someone says the quiet part out loud: “What if this is as good as it’s going to get for ‘generative AI’?”

Oh no, that simply could not be true, given how much money has been invested in it on the strength of claims about what it will allegedly do in the future.

That’s like saying “high cholesterol does not lead to heart disease.”

Oh no, that couldn’t be true, because pharmaceutical companies made billions of dollars telling people it was true and then selling them drugs to make it true.

There are just too many “truths” in our society that cannot be allowed to be false, because the consequences would be too costly and too deadly (like having to arrest mass murderers).

Generative AI is much the same. But the problem is that you can only fake it for so long before everyone else figures it out too.

The latest one to say the quiet part out loud on AI: Cal Newport of the New Yorker.

What If A.I. Doesn’t Get Much Better Than This?

GPT-5, a new release from OpenAI, is the latest product to suggest that progress on large language models has stalled.

Excerpts:

Much of the euphoria and dread swirling around today’s artificial-intelligence technologies can be traced back to January, 2020, when a team of researchers at OpenAI published a thirty-page report titled “Scaling Laws for Neural Language Models.”

The team was led by the A.I. researcher Jared Kaplan, and included Dario Amodei, who is now the C.E.O. of Anthropic. They investigated a fairly nerdy question: What happens to the performance of language models when you increase their size and the intensity of their training?

Back then, many machine-learning experts thought that, after they had reached a certain size, language models would effectively start memorizing the answers to their training questions, which would make them less useful once deployed.

But the OpenAI paper argued that these models would only get better as they grew, and indeed that such improvements might follow a power law…
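To make that “power law” concrete, here is a minimal sketch, in Python, of the kind of relation the Kaplan paper describes: test loss falls as a small negative power of model size. The constants below are rough stand-ins of roughly the same order as the paper’s reported fits, not exact values, and the parameter counts in the example loop are only approximate.

```python
# Minimal sketch of a parameter-count scaling law of the form L(N) = (N_c / N) ** alpha.
# The functional form follows Kaplan et al. (2020); the constants here are illustrative
# stand-ins of roughly the right magnitude, not the paper's exact fitted values.

def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Predicted test loss for a language model with n_params parameters."""
    return (n_c / n_params) ** alpha

# Roughly GPT-2-sized, GPT-3-sized, and a ten-times-larger model.
for n in (1.5e9, 1.75e11, 1.75e12):
    print(f"{n:.1e} parameters -> predicted loss {predicted_loss(n):.2f}")
```

The point is visible in the numbers: each large jump in size buys a real but steadily shrinking improvement in loss, which is why “just scale it up” looked like a plausible road to ever-better models.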

A few months after the paper, OpenAI seemed to validate the scaling law by releasing GPT-3, which was more than a hundred times larger—and leaps and bounds better—than its predecessor, GPT-2.

Suddenly, the theoretical idea of artificial general intelligence, which performs as well as or better than humans on a wide variety of tasks, seemed tantalizingly close. If the scaling law held, A.I. companies might achieve A.G.I. by pouring more money and computing power into language models.

Within a year, Sam Altman, the chief executive at OpenAI, published a blog post titled “Moore’s Law for Everything,” which argued that A.I. will take over “more and more of the work that people now do” and create unimaginable wealth for the owners of capital. “This technological revolution is unstoppable,” he wrote.

“The world will change so rapidly and drastically that an equally drastic change in policy will be needed to distribute this wealth and enable more people to pursue the life they want.”

It’s hard to overstate how completely the A.I. community came to believe that it would inevitably scale its way to A.G.I. In 2022, Gary Marcus, an A.I. entrepreneur and an emeritus professor of psychology and neural science at N.Y.U., pushed back on Kaplan’s paper, noting that “the so-called scaling laws aren’t universal laws like gravity but rather mere observations that might not hold forever.”

The negative response was fierce and swift.

“No other essay I have ever written has been ridiculed by as many people, or as many famous people, from Sam Altman and Greg Brockman to Yann LeCun and Elon Musk,”

Marcus later reflected.

Over the following year, venture-capital spending on A.I. jumped by eighty per cent.

After that, however, progress seemed to slow. OpenAI did not unveil a new blockbuster model for more than two years, instead focussing on specialized releases that became hard for the general public to follow.

Some voices within the industry began to wonder if the A.I. scaling law was starting to falter.

A contemporaneous TechCrunch article summarized the general mood:

“Everyone now seems to be admitting you can’t just use more compute and more data while pretraining large language models and expect them to turn into some sort of all-knowing digital god.”

But such observations were largely drowned out by the headline-generating rhetoric of other A.I. leaders. “A.I. is starting to get better than humans at almost all intellectual tasks,” Amodei recently told Anderson Cooper.

In an interview with Axios, he predicted that half of entry-level white-collar jobs might be “wiped out” in the next one to five years. This summer, both Altman and Mark Zuckerberg, of Meta, claimed that their companies were close to developing superintelligence.

Then, last week, OpenAI finally released GPT-5, which many had hoped would usher in the next significant leap in A.I. capabilities. Early reviewers found some features to like.

Within hours, users began expressing disappointment with the new model on the r/ChatGPT subreddit. One post called it the “biggest piece of garbage even as a paid user.”

In an Ask Me Anything (A.M.A.) session, Altman and other OpenAI engineers found themselves on the defensive, addressing complaints. Marcus summarized the release as “overdue, overhyped and underwhelming.”

In the aftermath of GPT-5’s launch, it has become more difficult to take bombastic predictions about A.I. at face value, and the views of critics like Marcus seem increasingly moderate.

Such voices argue that this technology is important, but not poised to drastically transform our lives. They challenge us to consider a different vision for the near-future—one in which A.I. might not get much better than this.

I recently asked Marcus and two other skeptics to predict the impact of generative A.I. on the economy in the coming years.

“This is a fifty-billion-dollar market, not a trillion-dollar market,”

Ed Zitron, a technology analyst who hosts the “Better Offline” podcast, told me. Marcus agreed:

“A fifty-billion-dollar market, maybe a hundred.”

The linguistics professor Emily Bender, who co-authored a well-known critique of early language models, told me that “the impacts will depend on how many in the management class fall for the hype from the people selling this tech, and retool their workplaces around it.” She added,

“The more this happens, the worse off everyone will be.”

Full article here. Archive here.

Source: vaccineimpact.com


Comments (2)

  • Tom

    As A.I. retards gobble up all the information in the world, they still cannot separate fact from fiction, absorbing the endless lies as if they were the truth. Then there is still the coin-tossing to determine which result might be the truth, since any number of possibilities are presented. Then, in certain fields of inquiry, there is the ignoring of anything that might question the agenda, dogma or established communism (or the distortion of facts). Grade F+.


  • JFK

    Well, our “AI” is nothing more than an elaborate parrot.
    It consumes endless data, adjusts itself to mimic that data, and then responds to semi-novel data.
    Just like a parrot on steroids would do.
    Or, worse than a parrot, since parrots have fewer “issues” than AI models.
    Humans do this as well, and that’s why we got so excited about it when we saw it work.
    It is the familiarity of the whole thing.
    But this is not true intelligence.
    In fact, language models are stupid. They know nothing apart from the rules of the language, and do not even understand the language itself. Just like a toddler.
    The possibilities increase with hybrid methods like RAG and “Chain-of-Thought prompting”, but there is still no true “artificial general intelligence” behind it. The only intelligence is in the non-model parts that are hardcoded into it by humans and in the language capabilities, which are nothing more than a parrot’s brain on steroids.
    Take the hardcoded parts out, and you get nothing more than a fancy (and often delusional) parrot.
    In the same way, a human speaking does not mean he is saying intelligent things.
    Or consider that a human who is speaking may not be fully aware of why he is using certain forms of expression, words or ways of presenting things. It just comes out naturally from long experience, the same way as singing a song. There is little real intelligence there, only experience from training. Just like in parrots.
    But the driving force behind it in humans is true intelligence. A human can speak no languages at all and still be intelligent. Or he can create languages from scratch, evolve them, create and express abstract notions, set goals, experiment with things for no reason, etc. He can transform data in ways an AI cannot. AI requires a “language”, even when non-textual data is to be processed (image, video, etc.). A human can form objectives from scratch and decide to do things that cannot be anticipated. An AI model can’t. It always needs an objective given from outside, and even that is not enough to keep it away from total screw-ups. A human can be isolated from all civilization and still create one from scratch, either from biological necessity or pure boredom. Language and civilization are the expression and carrier of intelligence, not intelligence in itself. And RAG and Chain-of-Thought try to obfuscate this fact by mixing some old-fashioned hardcoded logic into a pot of mimicry.
    AI systems have no self-awareness, no need to communicate anything with anyone, no way to identify or compare themselves with foreign entities, no curiosity, no feelings of mission/goal/purpose/meaning/importance, no way to analyse problems outside the mimicry of existing human languages. They are soulless, brainless parrots, with some man-made crutches and make-up, provided by us to make them more likeable to us.

    I am not saying that nothing interesting has come, or will come, out of this.
    Nor that we cannot evolve this technology into true AI.
    But we are far away, and the current AI is far inferior to human intelligence, no matter how powerful it seems.
    Although, with global human IQ in freefall, this may be debatable…
    And, most likely, AI will never fully conform to human intelligence and understanding (for good or for worse), since the two don’t work the same way.
    I would never trust an AI system of our times.
    Not only because I know how it works and find it insufficient, but also because I have no control over the data and the methods used to train it. Replacing old-fashioned web search with AI search is pretty fine most of the time, though (if you ignore all the censorship, the political-correctness BS, the frequent propaganda campaigns, etc.).
    Treat AI like an assistant that will stab you in the back when you are least expecting it. Because that’s exactly what big-tech AI systems are.

