Robots and Intelligence
“New robot shows off human-like qualities,” says the headline. The media think it’s the greatest thing since sliced bread. My dearest is excited too: finally, she will no longer have to remind me of my chores. The (yet to be acquired, super-duper) robot, with its well-programmed memory and a mind of its own, will perform them without being asked, and even without any snarky comments on the side.
And here comes the latest news: “Artificial intelligence, human brain to merge in the 2030s, says futurist Kurzweil.” I’m not sure how that’s supposed to work, but I’ll give it the benefit of the doubt. Presumably, it’s not just the physical work I can then palm off to the robots, but the thinking as well! What a relief: no work, no complaints, and nothing but bliss; what could possibly go wrong?
Indeed, what could possibly go wrong?
Computers are already running major parts of the communication networks, travel schedules, and other infrastructure controls of much of the world. Even many manufacturing processes, from the cutting of cloth to the assembly of whole cars, are already done by robots of a sort. Such specialized machines came of age with the era known as the Industrial Revolution (IR). At first, the IR made short work of the most laborious aspects of creating textiles.
The invention of “computer”-controlled weaving patterns started with the Jacquard loom in the early 1800s. Joseph Marie Jacquard’s (1752-1834) invention of mechanically controlled movements of a loom’s harness and shuttles allowed for large-scale production of intricate textiles like brocade. But that was just the beginning: the same technology gave rise to the “Hollerith” tabulating machines of the late 19th century and, in turn, to the modern-day IBM company. The same type of card-based control was still the prevalent way to “program” computers as late as the 1970s.
So, what could possibly go wrong with computers running our lives and robots doing all the chores? The answer lies in the proverbial “monkey wrench”: an unforeseen and unprogrammed-for event. Some people call it a “black swan” event, like a natural catastrophe; the label does not matter. The point is that the automated system is unable to cope with such an eventuality, simply for lack of a pre-programmed response or capability: the event had not been anticipated and, therefore, no response mechanism was available. So, now what?
Now What?
If past experience is any guide, that question, at best, results in a frenzied effort to prevent that type of problem from ever occurring again. If you have been around for a while, as I happen to have been, you may have had a personal computer (PC) for quite some time. Likely you will have acquired a number of such devices over the years, each coming with promises of being the best thing since sliced bread and other notable inventions, having solved the previous incarnation’s problems, security flaws, and anything else that could possibly have been a cause for concern.
Most importantly, the new hardware and associated software are claimed to make sure that no malicious attacks of your system could possibly take place, much less succeed—ever again. Does that sound familiar to you?
Does it surprise you when “vital” and seemingly never-ending security updates are necessary?
Interestingly, a new study just found that people trust robots, even against their own better judgment. The article describing these experiments at Georgia Tech begins with the following finding:
“Human beings will put too much trust in robots even when those machines are broken or make obvious mistakes. All we need to do is slap the words “emergency” on the robot’s side to make people surrender their logic.”
Clearly, our trust in computers and robots ought to have limits. They work with well-defined tasks for which the steps to be taken are equally clear and pre-programmed. As soon as either condition is no longer the case, the system breaks down.
Another case demonstrates this even better: On February 29, 2016, some 1,200 pieces of luggage were not forwarded at the airport in Düsseldorf, Germany. The reason for that failure is as funny (though not for the affected travelers) as it is surprising: that date occurs only in leap years, once every four years, and had been overlooked in programming the automated handling system.
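The exact code behind the Düsseldorf failure was never published; the following is only a hypothetical sketch of how such a leap-year bug typically arises, with a hard-coded month table that forgets February 29, alongside the straightforward fix of delegating date validation to the standard library:

```python
from datetime import date

# Hypothetical sketch (not the airport's actual code): days per month,
# hard-coded without accounting for leap years.
DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def is_valid_date(year: int, month: int, day: int) -> bool:
    """Naive validation: wrongly rejects Feb 29 even in leap years."""
    return 1 <= month <= 12 and 1 <= day <= DAYS_IN_MONTH[month - 1]

def is_valid_date_fixed(year: int, month: int, day: int) -> bool:
    """Correct validation: let the standard library handle leap years."""
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

# February 29, 2016 was a real day, but the naive check rejects it.
print(is_valid_date(2016, 2, 29))        # False
print(is_valid_date_fixed(2016, 2, 29))  # True
```

The point of the sketch is the same as the article's: the failure is not in the machine's diligence but in the programmer's anticipation.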
Still, the question appears to be justified:
Are Robots Coming of Age?
Apparently so. Not only are there some that pretend to sweep your abode by whizzing about the place, there are now real man-sized robot-creatures that can walk upright or multi-footed models that you can kick in the groin without them taking offense.
I guess it won’t be much longer until the Super Bowl or the soccer World Cup degenerates into a robotic “free-for-all.”
The real question though ought to be: is creating such robots really worth the effort? Will mankind really be better off with such devices? Will they truly become “intelligent,” possibly start wars by themselves, or will they save mankind from a self-inflicted apocalypse?
The philosophers are still debating the point. Stephen Hawking, for example, regards them as among the greatest menaces on earth. He may well be right, but probably for the wrong reason: in reality, all robots are nothing but dumb machines; it’s the human mind that gives them orders via computer code. I don’t think they will ever become “intelligent” in the sense of being capable of making inventions on the basis of their own observations. Therefore, it would also be inviting disaster to let such machines run our lives.
Yes, you can write computer programs to have a quadruped robot respond to your kick with a move to the side, or in some other preprogrammed way. But what if it is confronted with a totally unexpected and novel situation; what’s the response then?
Then, that “smart” critter may become “dead meat” in a hurry!
By Dr. Klaus L.E. Kaiser