AI neural network captures ‘critical aspect of human intelligence’
Scientists have demonstrated that an AI system called a neural network can be trained to show “systematic compositionality,” a key part of human intellect
Neural networks can now “think” more like humans than ever before, scientists show in a new study.
The research, published Wednesday (Oct. 25) in the journal Nature, signals a shift in a decades-long debate in cognitive science — a field that explores what kind of computer would best represent the human mind.
Since the 1980s, a subset of cognitive scientists has argued that neural networks, a type of artificial intelligence (AI), aren’t viable models of the mind because their architecture fails to capture a key feature of how humans think.
But with training, neural networks can now gain this human-like ability.
“Our work here suggests that this critical aspect of human intelligence … can be acquired through practice using a model that’s been dismissed for lacking those abilities,” study co-author Brenden Lake, an assistant professor of psychology and data science at New York University, told Live Science.
Neural networks somewhat mimic the human brain's structure because their information-processing nodes are linked to one another, and their data processing flows in hierarchical layers. But historically the AI systems haven’t behaved like the human mind because they lacked the ability to combine known concepts in new ways — a capacity called “systematic compositionality.”
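To picture that layered structure, here is a minimal Python sketch of nodes arranged in hierarchical layers. The layer sizes and the tanh nonlinearity are arbitrary illustrative choices, not details from the study.

import numpy as np

# A minimal sketch of "nodes linked in hierarchical layers": each layer
# transforms its input and passes the result on to the next layer.
# Sizes and the nonlinearity are arbitrary illustrative choices.

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 3]  # input -> hidden -> output
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x, weights):
    """Flow data upward through the layers."""
    for w in weights:
        x = np.tanh(x @ w)  # weighted combination of node outputs, then a nonlinearity
    return x

print(forward(rng.standard_normal(4), weights))  # activations of the 3 output nodes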
For example, Lake explained, if a standard neural network learns the words “hop,” “twice” and “in a circle,” it needs to be shown many examples of how those words can be combined into meaningful phrases, such as “hop twice” and “hop in a circle.”
But if the system is then fed a new word, such as “spin,” it would again need to see a bunch of examples to learn how to use it similarly.
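A toy example makes the contrast concrete. In a compositional system, once the rules for “twice” and “in a circle” are known, a newly taught word like “spin” can be combined with them immediately, whereas a standard network would need fresh examples of each combination. The words and actions in this sketch are hypothetical stand-ins, not the study's code.

# A purely illustrative sketch of systematic compositionality:
# known modifiers compose with a newly learned primitive for free.

PRIMITIVES = {"hop": ["HOP"], "jump": ["JUMP"]}

def interpret(phrase, primitives):
    """Expand a phrase like 'hop twice' into a sequence of actions."""
    words = phrase.split()
    actions = list(primitives[words[0]])
    modifier = " ".join(words[1:])
    if modifier == "twice":
        actions = actions * 2
    elif modifier == "in a circle":
        actions = actions + ["TURN"] * 4  # hypothetical rendering of "in a circle"
    return actions

print(interpret("hop twice", PRIMITIVES))         # ['HOP', 'HOP']

# Teach one new word, and every known combination works immediately:
PRIMITIVES["spin"] = ["SPIN"]
print(interpret("spin in a circle", PRIMITIVES))  # ['SPIN', 'TURN', 'TURN', 'TURN', 'TURN']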
In the new study, Lake and study co-author Marco Baroni of Pompeu Fabra University in Barcelona tested both AI models and human volunteers using a made-up language with words like “dax” and “wif.”
Each word corresponded either to a colored dot or to a function that manipulated the order of the dots in a sequence. Thus, a sequence of words determined the order in which the colored dots appeared.
So given a nonsensical phrase, the AI and humans had to figure out the underlying “grammar rules” that determined which dots went with the words.
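The sketch below gives a flavor of that kind of task. The specific words and rules here are hypothetical stand-ins, not the grammar used in the paper: some nonsense words name colored dots, while others act as functions over the dot sequence.

def evaluate(phrase):
    """Turn a phrase of made-up words into a sequence of colored dots."""
    dot_words = {"dax": "RED", "wif": "BLUE", "lug": "GREEN"}
    dots = []
    for word in phrase.split():
        if word in dot_words:
            dots.append(dot_words[word])
        elif word == "fep":   # hypothetical function word: repeat the previous dot
            dots.append(dots[-1])
        elif word == "kiki":  # hypothetical function word: reverse the sequence so far
            dots = dots[::-1]
    return dots

# Participants saw example phrases paired with dot sequences and had to
# infer rules like these, then apply them to phrases they had never seen.
print(evaluate("dax fep"))        # ['RED', 'RED']
print(evaluate("wif dax kiki"))   # ['RED', 'BLUE']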
The human participants produced the correct dot sequences about 80% of the time. When they failed, they made consistent types of errors, such as thinking a word represented a single dot rather than a function that shuffled the whole dot sequence.
After testing seven AI models, Lake and Baroni landed on a method, called meta-learning for compositionality (MLC), that lets a neural network practice applying different sets of rules to the newly learned words, while also giving feedback on whether it applied the rules correctly.
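Schematically, that training regime looks something like the episode loop below. The mini-grammars and the rule-following guesser are hypothetical simplifications, not the paper's actual network or grammar; the point is only to show how each episode pairs a few study examples with held-out query phrases that exercise the same rules.

import random

# A hedged sketch of the episode structure behind meta-learning for
# compositionality (MLC). Each episode draws a fresh mapping from nonsense
# words to colored dots; the learner studies the single-word examples and
# is then scored on held-out phrases that combine those words with a
# function word ("fep" = repeat the dot, a rule that stays fixed across
# episodes). In MLC, feedback on the query phrases updates a neural network
# so that, over many episodes, it learns how to infer and reapply such
# rules from a handful of study examples.

COLORS = ["RED", "BLUE", "GREEN", "YELLOW"]
WORDS = ["dax", "wif", "lug"]

def sample_episode():
    colors = random.sample(COLORS, len(WORDS))
    mapping = dict(zip(WORDS, colors))                             # this episode's lexicon
    study = [(w, [c]) for w, c in mapping.items()]                 # e.g. "dax" -> [RED]
    queries = [(f"{w} fep", [c, c]) for w, c in mapping.items()]   # "dax fep" -> [RED, RED]
    return study, queries

def rule_following_guess(phrase, studied):
    """Apply the (meta-learned) rule: 'fep' repeats the dot named by the first word."""
    first, *rest = phrase.split()
    dots = list(studied.get(first, []))
    if "fep" in rest:
        dots = dots * 2
    return dots

for episode in range(3):
    study, queries = sample_episode()
    studied = dict(study)
    correct = sum(rule_following_guess(p, studied) == target for p, target in queries)
    print(f"episode {episode}: {correct}/{len(queries)} query phrases correct")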
The MLC-trained neural network matched or exceeded the humans’ performance on these tests. And when the researchers added data on the humans’ common mistakes, the AI model then made the same types of mistakes as people did.
The authors also pitted MLC against two neural network-based models from OpenAI, the company behind ChatGPT, and found that both MLC and humans performed far better than the OpenAI models on the dots test.
MLC also aced additional tasks, which involved interpreting written instructions and the meanings of sentences.
“They got impressive success on that task, on computing the meaning of sentences,” said Paul Smolensky, a professor of cognitive science at Johns Hopkins and senior principal researcher at Microsoft Research, who was not involved in the new study.
But the model was still limited in its ability to generalize. “It could work on the types of sentences it was trained on, but it couldn’t generalize to new types of sentences,” Smolensky told Live Science.
Nevertheless, “until this paper, we really haven’t succeeded in training a network to be fully compositional,” he said. “That’s where I think their paper moves things forward,” despite its current limitations.
Boosting MLC’s ability to show compositional generalization is an important next step, Smolensky added.
“That is the central property that makes us intelligent, so we need to nail that,” he said. “This work takes us in that direction but doesn’t nail it.” (Yet.)
See more here: Live Science
aaron
computer coding is now called AI, a more exciting name to ensnare the unaware
baffle them with BS
Wisenox
Consciousness is an interface.
Howdy
“their architecture fails to capture a key feature of how humans think.”
Pray tell me, just what is this set-in-stone way that humans think? I never realized people all think exactly alike, and yet I hate some things others find attractive.
The end product of an emulated ‘thinking’ machine, if it is anything more than a drone, is that it is a clone of the programmer(s)’ way of doing things. Apply it to everyday life and see how many times the result complies with other people’s ideal outcome, or the way they think. Novelty, as usual, plays a part in the early days, yet anger ensues.
Does your boss annoy you at work because they have different ideas than you? Then I guess they think differently.
A ‘humanoid’ robot is not human, but so many accept it as such because it is familiar, and one was even given citizenship in at least one case. Looks are so deceiving, yet yearning is so intoxicating.
The path of Human robots, or their offshoots leads to oblivion.
Microsoft under fire over AI-generated poll about death of 21-year-old woman on its news platform
https://www.businessinsider.com/microsoft-under-fire-ai-generated-poll-death-woman-guardian-2023-11?r=US&IR=T
Hardly surprising from micro$oft, but AI is incredibly stupid. As stupid as the programmer in fact, who apparently dreams of Unicorns as they type.
schutzhund
THE AMERICAN SCHOOL: WHY JOHNNY CAN’T THINK
Leonard Peikoff Jan 1984
The essential cause of America’s educational failures is an anti-conceptual methodology that corrupts students’ minds. This lecture was delivered at Boston’s Ford Hall Forum in April 1984,
published in the October – December 1984 issues of The Objectivist Forum and anthologized in The Voice of Reason: Essays in Objectivist Thought…
https://courses.aynrand.org/works/the-american-school-why-johnny-cant-think/