Why Artificial Intelligence Will Remain Inferior to Humans

Earlier this week, the CEO of Nvidia, Jensen Huang, appeared on The Joe Rogan Experience. In a short clip that appeared online, he said, “In the future, in…maybe two or three years, 90% of the world’s knowledge will likely be generated by AI.”

It’s a striking statement, and there’s room for debate, but he’s probably right.

This kind of tech hubris is right up Heidi N. Moore’s street. She’s a writer for Yahoo, the WSJ, and The Guardian. Quoting the clip, she said, “as a reminder: AI cannot generate knowledge. It cannot create knowledge. It cannot find new information. It can only mix information that has already been found and written and input into computers by humans.”

42,000 likes, 8,100 retweets, and 1,100,000 views at the time of writing, on a statement that’s wrong. I guess I’m a little jealous of those numbers. Can we all perhaps endeavour to set the record straight here? AI can generate new knowledge, it has already done so several times, and it’s only going to accelerate.

I don’t mean to pick on Heidi (I’m sure there’s lots we agree on!). Instead, my intention is to explore this misconception and shed light on a more optimistic perspective, though we’ll first have to take a journey through blackmail and murder… Let’s get started, shall we?

Heidi’s statement is correct if you consider Large Language Models in total isolation from everything else. It’s a bit like saying that pumps can’t move water if you ignore pipes, and that cars can’t carry passengers if you ignore roads. When we plug LLMs into things, they can indeed produce new knowledge, and will eventually do so at a dizzying rate. To show that AI can in fact create new knowledge, here is a great demonstration from Microsoft: their system, “Microsoft Discovery”, autonomously invented a better CPU coolant that does not use forever chemicals. New knowledge.

In the video, you’ll see John Link explain that their system achieved this by “reasoning over knowledge, generating hypotheses, and running experiments”. Their suite of AI agents screened hundreds of thousands of potential molecular candidates against specific criteria. What would normally have taken years of human-led research was completed in 200 hours. Once the researching and experimenting were done, the system produced a few molecular candidates that could have the properties they wanted. The team then synthesized the molecule, and lo and behold, it worked. Humanity is now in possession of a new CPU coolant that does not rely on forever chemicals.

So how on earth is this possible? Do you just ask the AI, “Hey! Invent me some new weird liquid!”? To understand what’s going on here, we’ll first have to explore how “AI” works, and from there, all will become clear.

The biggest thing to clear up is this: your AI isn’t AI. Your favourite “AI tools” are actually just very powerful large language models with a few extra gadgets. Large Language Models (LLMs) are very different to AI, but they’re still very capable. What LLMs do is predict the next most likely word from all the previous words. They do this over and over again until you tell them to stop. Read and understand all the words, predict the most likely next word, repeat.
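Here’s that loop sketched in Python. The predict_next_word function below is a toy stand-in with made-up numbers; in a real LLM, that one step is a neural network scoring every word in its vocabulary:

```python
import random

# Toy stand-in for the model. In a real LLM this single step is a neural
# network scoring every word it knows; here it's a canned distribution.
def predict_next_word(words):
    options = {"mat": 0.6, "sofa": 0.25, "floor": 0.1, "<stop>": 0.05}
    return random.choices(list(options), weights=list(options.values()))[0]

def generate(prompt, max_words=20):
    words = prompt.split()
    for _ in range(max_words):
        next_word = predict_next_word(words)  # read all the words, predict the next
        if next_word == "<stop>":             # the model signals when it's finished
            break
        words.append(next_word)               # ...and repeat
    return " ".join(words)

print(generate("The cat sat on the"))
```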

To do this, they use a statistical distillation of a lot of writing. Imagine you converted all written language into probabilities instead of articles or books. An LLM is Large in that it’s built from a large amount of data; it’s Language in that it’s… words; and it’s a Model in that it’s no longer just words, it’s words and phrases modelled into probabilities.
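To make “words and phrases modelled into probabilities” concrete, here’s a toy distillation in Python: count which word follows which across a corpus, then normalise the counts. A real LLM weighs whole contexts rather than single words, but this is the spirit of it:

```python
from collections import Counter, defaultdict

# A toy "statistical distillation": count which word follows which,
# then turn the counts into probabilities.
corpus = "the cat sat on the mat . the dog sat on the floor ."
words = corpus.split()

counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    counts[current][following] += 1

# Normalise each word's follower counts into a probability distribution.
model = {
    word: {nxt: n / sum(followers.values()) for nxt, n in followers.items()}
    for word, followers in counts.items()
}

print(model["the"])  # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'floor': 0.25}
```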

Phew.

The ‘how’ of an LLM is very complex, but the ‘what’ is much easier to understand and reason about. It helps to have a mental model, so imagine LLMs as simulators running this task: “If the person we invoked were writing these words, what would most likely appear next?”

The cat sat on the ______
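Under the hood, filling in that blank is just a lookup over probabilities like the ones above (the numbers below are invented for illustration):

```python
# Invented probabilities for what follows "The cat sat on the".
next_word_probs = {"mat": 0.62, "sofa": 0.21, "floor": 0.09, "moon": 0.01}

best = max(next_word_probs, key=next_word_probs.get)
print("The cat sat on the", best)  # The cat sat on the mat
```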

As weird as it may be, when the statistical model powering this has read and categorised almost everything, and it’s able to work out the meaning of words in context, it produces barely believable results. That’s why our shorthand for this system is now “AI”: the things that emerge from it are so useful, so uncanny and intelligent, and so believable that it almost feels like magic. But your “AI” is actually best understood as a high-powered simulator. Whatever statement you put into it, it does a great job simulating more of it.

If you have a good theory of mind, you can use LLMs to ‘jump into’ lots of different perspectives and knowledge modes just by prompting the LLM to play those parts: “You are [some kind of character or archetype], please read this and tell me what you think.” You can steel-man positions you disagree with, red-team your own perspective, or find some of the holes in a particular idea.
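In practice, that looks something like this. I’m using the OpenAI Python client as an example, but any chat-style LLM API follows the same pattern; the persona and model name are just placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Prime the simulator with a persona, then hand it your draft to critique.
persona = (
    "You are a sceptical economist who disagrees with the essay below. "
    "Steel-man its argument first, then give your three strongest objections."
)
draft = "..."  # paste your own writing here

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": draft},
    ],
)
print(response.choices[0].message.content)
```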

So isn’t Heidi right? Isn’t the AI just simulating what it thinks some other person already believes? From the standpoint of “I write my question into this search bar and get the answer”, Heidi is correct, because in that situation the AI is calculating what someone answering that question is most likely to say. As all readers of The Digger now know, “what people would most likely say” is often dead wrong. So if this is how you use “AI”, you’re doing it wrong…


Succinctly, an LLM is simulating the response a particular archetype would give to your question. This is why you get frustrated at the institutional answers that come back from your AI queries, and why you can spend many hours battling the AI into accepting some morsel of knowledge. But wouldn’t many of the humans you talk to behave in the same way? Giving the same tired answers? Refusing to acknowledge things you bring up? Your “AI” is perfectly simulating those conversations. In those moments, pause and remember this: before you’ve even typed in your question, there’s a ‘system prompt’ which sits before it, and it massively influences the results you get.
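In rough terms, here’s what the model actually receives. The system prompt below is invented for illustration; real ones are long, detailed, and vendor-specific:

```python
# The model never sees your question alone: a system prompt is prepended first,
# and the LLM simulates a reply to the whole context, not just your words.
system_prompt = (
    "You are a helpful assistant. Be balanced, defer to mainstream sources, "
    "and avoid speculative or controversial claims."
)
user_question = "What does the evidence really say about X?"

full_context = f"{system_prompt}\n\nUser: {user_question}\nAssistant:"
print(full_context)
```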

Read the rest at philharper.substack.com
