‘Machines Taking Control Doesn’t Have to be a Bad Thing’
A few years ago the cosmologist Max Tegmark found himself weeping outside the Science Museum in South Kensington. He’d just visited an exhibition that represented the growth in human knowledge, everything from Charles Babbage’s difference engine to a replica of Apollo 11. What moved him to tears wasn’t the spectacle of these iconic technologies but an epiphany they prompted.
“It hit me like a brick,” he recalls, “that every time we understood how something in nature worked, some aspect of ourselves, we made it obsolete. Once we understood how muscles worked we built much better muscles in the form of machines, and maybe when we understand how our brains work we’ll build much better brains and become utterly obsolete.”
Tegmark’s melancholy insight was not an idle hypothesis but an intellectual challenge he set himself at the dawn of the age of artificial intelligence. What will become of humanity, he was moved to ask, if we manage to create an intelligence that outstrips our own?
Of course, this is a question that has occurred repeatedly in science fiction. However, it takes on a different kind of meaning and urgency as AI becomes science fact. And Tegmark decided it was time to examine the issues surrounding AI, and in particular the possibility that it might lead to a so-called superintelligence.
With his friend the Skype co-founder Jaan Tallinn, and funding from the tech billionaire Elon Musk, he set up the Future of Life Institute, which researches the existential risks facing humanity. It’s located in Cambridge, Massachusetts, where Tegmark is a professor at MIT, and it’s not unlike the Future of Humanity Institute in Oxford, the body set up by his fellow Swede, the philosopher Nick Bostrom.
Tegmark also set about writing a book, which he has just published, entitled Life 3.0: Being Human in the Age of Artificial Intelligence. Having previously written about such abstruse and highly theoretical concepts as the multiverse, Tegmark is not a man daunted by the prospect of informed but imaginative speculation.
Tegmark sets out to examine these questions by first creating a defining context: a hierarchy of developmental stages. He starts by going back to the most primitive forms of life, such as bacteria, which he calls Life 1.0. This is the simple biological stage, in which life is really only about replication, and adaptation is possible only through evolution.
Life 2.0, or the cultural stage, is where humans are: able to learn, adapt to changing environments, and intentionally change those environments. However, we can’t yet change our physical selves, our biological inheritance. Tegmark describes this situation in terms of hardware and software. We design our own software – our ability to “walk, read, write, calculate, sing and tell jokes” – but our biological hardware (the nature of our brains and bodies) is subject to evolution and necessarily restricted.
The third stage, Life 3.0, is technological: post-humans can redesign not only their software but their hardware too. Life, in this form, Tegmark writes, is “master of its own destiny, finally fully free from its evolutionary shackles”.
This new intelligence would be immortal and able to fan out across the universe. In other words, it would be life, Jim, but not as we know it. But would it be life or something else?

It’s fair to say that Tegmark, a physicist by training, is not a biological sentimentalist. He is a materialist who views the world and the universe beyond as being made up of varying arrangements of particles that enable differing levels of activity. He draws no meaningful or moral distinction between a biological, mortal intelligence and that of a self-perpetuating machine.