DeepMind’s neural network teaches AI to reason about the world
The world is a confusing place, especially for an AI. But a neural network developed by UK artificial intelligence firm DeepMind that gives computers the ability to understand how different objects are related to each other could help bring it into focus.
Humans use this type of inference – called relational reasoning – all the time, whether we are choosing the best bunch of bananas at the supermarket or piecing together evidence from a crime scene. The ability to transfer abstract relations – such as whether one object is to the left of another, or bigger than it – from one domain to another gives us a powerful mental toolset for understanding the world. It is a fundamental part of our intelligence, says Sam Gershman, a computational neuroscientist at Harvard University.
What’s intuitive for humans is very difficult for machines to grasp, however. It is one thing for an AI to learn how to perform a specific task, such as recognising what is in an image. But transferring know-how learned via image recognition to textual analysis – or any other reasoning task – is a big challenge. Machines capable of such versatility will be one step closer to general intelligence, the kind of smarts that lets humans excel at many different activities.
DeepMind has built a neural network that specialises in this kind of abstract reasoning and can be plugged into other neural nets to give them a relational-reasoning power-up. The researchers trained the AI using images depicting three-dimensional shapes of different sizes and colours. It analysed pairs of objects in the images and tried to work out the relationship between them.
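That pairwise analysis is the heart of the approach: features for every pair of objects in a scene are scored by one small shared network, the scores are summed, and a second network turns the aggregate into an answer. The sketch below illustrates the idea in PyTorch. It is a minimal illustration, not DeepMind's code – the module name, layer sizes and dimensions are all assumptions, and it omits details such as conditioning each pair on an embedding of the question.

```python
import torch
import torch.nn as nn

class RelationModule(nn.Module):
    """Minimal sketch of a relation network: score every ordered pair
    of object feature vectors with a shared MLP g, sum the results,
    and map the aggregate through a second MLP f to answer scores."""

    def __init__(self, obj_dim, hidden=256, out_dim=10):
        super().__init__()
        self.g = nn.Sequential(  # relation function, shared across all pairs
            nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.f = nn.Sequential(  # reasons over the summed pair relations
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, objects):  # objects: (batch, n_objects, obj_dim)
        b, n, d = objects.shape
        # Build all ordered pairs (o_i, o_j) by broadcasting.
        oi = objects.unsqueeze(2).expand(b, n, n, d)
        oj = objects.unsqueeze(1).expand(b, n, n, d)
        pairs = torch.cat([oi, oj], dim=-1).reshape(b, n * n, 2 * d)
        relations = self.g(pairs).sum(dim=1)  # aggregate over all pairs
        return self.f(relations)              # e.g. answer logits

# Hypothetical usage: 4 scenes, 8 objects each, 32-dimensional features.
rn = RelationModule(obj_dim=32)
answers = rn(torch.randn(4, 8, 32))  # shape (4, 10)
```

Because the same small network scores every pair, the module can in principle be bolted onto any system that produces a set of object features – which is what lets it be "plugged into" other neural nets.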
Accurate answers
The team then asked it questions such as “What size is the cylinder that is left of the brown metal thing that is left of the big sphere?” The system answered these questions correctly 95.5 per cent of the time – slightly better than humans. To demonstrate its versatility, the relational-reasoning part of the AI then answered questions about a set of very short stories, getting 95 per cent of them right.
Still, any practical applications of the system are a long way off, says Adam Santoro at DeepMind, who led the study. It could initially be useful for computer vision, however. “You can imagine an application that automatically describes what is happening in a particular image, or even video for a visually impaired person,” he says.
Outperforming humans at a niche task is also not that surprising, says Gershman. We are still a very long way from machines that can make sense of the messiness of the real world. Santoro agrees. DeepMind’s AI has made a start by understanding differences in size, colour and shape, but there is more to relational reasoning than that. “There is a lot of work needed to solve richer real-world data sets,” says Santoro.