If you believe Hollywood’s interpretation, the advent of intelligent robots rarely ends well for mankind. Movies like The Matrix, Terminator, and I, Robot all depict robots as intelligent, capable, and horrifyingly destructive machines that aim either to enslave us or, worse, to bring about the extinction of the human race. In these cinematic examples, robots are rarely helpful or friendly (with occasional exceptions like WALL-E and R2-D2).
Intelligent robots with the capacity to take over the world are fine as long as they remain in the realm of science fiction. But are highly capable robots really confined to the silver screen? When you read the news and see headlines like “Evolving robots learn to lie to each other” [1], “Tiny robots are ready to spy on us” [2], “Are we being watched by flying robot insects?” [3], or “A real-life robotic avatar turns you into a machine” [4], or consider that IBM’s Watson [5] can handily defeat humans on Jeopardy, you might think it’s time to panic. Or you may wonder what exactly all those reckless scientists are thinking. Haven’t they seen the movies? Why don’t they stop before it’s too late?
What do robots actually do?
Before you start stockpiling food and weapons, let’s slow down for a minute and discuss what robots can actually do. Engineers have designed plenty of real-world robots, most of which sit on assembly lines in factories doing mundane things. We’re actually very good at building robots that do one specific thing very well but are incapable of adapting to do anything else. For example, to manufacture a car, factories have robots that bolt doors onto cars (and only bolt doors onto cars) as the vehicles move across the factory floor. While there’s plenty of research on many different types of robots that can interact more intelligently with humans and do useful things in the “real world” outside the lab, very few of these robots have actually made it to market. You can buy robots to clean your pool or vacuum your floor (Figure 1) [6], but unfortunately, you can’t teach the floor-cleaning robot to fold your laundry or do the dishes; you’d have to build another robot specifically designed for those tasks. Bomb-disarming robots are also heavily used by the military and law enforcement communities [7]. They might look like awesome toys, but they’re probably a bit expensive for the average consumer (Figure 1).
Figure 1. Robots in use today. Clockwise from top left: Roomba, a robotic vacuum cleaner; PackBot, a military robot for bomb disarming and other dangerous situations; a Predator drone. (Photo credits: Wikimedia Commons user Larry D. Moore; US Army; NATO Special Operations Coordination Centre)
As another example, take IBM’s Watson and its predecessor Deep Blue [8]: they are excellent at playing Jeopardy and chess, respectively. But give either machine a new board game, or put it on another TV game show, and it won’t be able to do anything. Each machine was programmed to do one task very well but lacks the ability to learn a new game without being completely reprogrammed by humans. That’s the enormous gap between humans and robots: put a live human on any game show, and he or she can probably learn the rules in less than a minute. The likelihood that Watson will suddenly figure out how to play The Price Is Right, let alone morph into Skynet (the mastermind computer network from the Terminator movies) and start Armageddon, is next to zero. The same goes for those robots at the car factory; they aren’t going to suddenly decide to start throwing wrenches at their human operators instead of bolting doors onto cars.
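To make the “one task only” point concrete, here is a deliberately simplistic sketch in Python. The game and the “strategy” are invented for illustration; real systems like Deep Blue are vastly more sophisticated, but they are narrow in exactly this way:

```python
# Toy illustration of narrow programming: this "agent" knows exactly one
# game, hardcoded by its programmers. It has no mechanism for learning,
# so any unfamiliar game is simply an error.

KNOWN_GAME = "tic-tac-toe"

def choose_move(game, board):
    """Return the index of the square to play, for the one known game."""
    if game != KNOWN_GAME:
        raise ValueError(f"Only programmed for {KNOWN_GAME}, not {game}")
    # Hardcoded "strategy": take the first empty square.
    return next(i for i, cell in enumerate(board) if cell == " ")

print(choose_move("tic-tac-toe", ["X", " ", "O"]))  # plays square 1

try:
    choose_move("chess", [])
except ValueError as err:
    print(err)  # Only programmed for tic-tac-toe, not chess
```

A human contestant picks up new rules on the fly; nothing in this little program, and nothing in Watson at a much grander scale, has any mechanism for doing that.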
Making sense of news about robots
So why all the hype? Unfortunately, it’s a two-fold issue: journalists need to sell articles, and scientists need to get funding for their research. That leads to a system that’s prone to exaggeration. A flashy title draws more readers to an article even if it’s a bit of a stretch, and sometimes getting funding means you need to attract the attention of politicians without scientific backgrounds. Take, for example, the flying “spy” robots mentioned earlier [2, 3] (Figure 2). The headlines would have you believe that these robots could literally be in the room with you right now, snapping pictures and sending them off to the FBI. You have to dig deep into the article to find out that a power source for the tiny robots hasn’t been developed yet, so they can only fly while tethered to electrical wires. There’s also the little problem that we haven’t quite figured out how to steer them yet. The robots that “learned to lie” [1] did so in a very specific, programmed environment that allowed them to “lie” about a single thing: whether or not they broadcast the availability of “food” to other robots in the area. Their programs allowed them to figure out that they were better off hoarding the food for themselves (the toy simulation after Figure 2 gives a flavor of how such behavior can emerge), but this is a far cry from the (quite versatile) human ability to lie about almost anything.
Figure 2. Robobee, the tiny robot made at Harvard that is supposedly “ready to spy on us”. (Photo credit: Ben Finio / Eliza Grinnell)
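The actual experiments used physical robots with evolved controllers; as a loose, purely illustrative analogue (every number and rule below is invented), here is a minimal evolutionary simulation in Python in which “honest signaling” dies out simply because quieter individuals keep more food:

```python
import random

# Toy model with invented numbers (not the real experiment): each robot has
# one evolving trait, the probability that it "announces" food it has found.
# Announcing attracts competitors, so quiet robots keep more food, and
# selection comes to favor silence: the "lying" the headlines described.

FOOD = 10.0      # value of a food source
CROWDING = 6.0   # food lost to competitors drawn in by an announcement

def payoff(p_announce):
    """Expected food kept by a robot announcing with probability p_announce."""
    return FOOD - CROWDING * p_announce

def evolve(pop_size=100, generations=31):
    population = [0.9] * pop_size  # start with mostly honest robots
    for gen in range(generations):
        # The best-fed half (the quietest robots) each produce two
        # offspring, with a small random mutation to the announce rate.
        survivors = sorted(population, key=payoff, reverse=True)[:pop_size // 2]
        population = [
            min(1.0, max(0.0, p + random.gauss(0, 0.05)))
            for p in survivors
            for _ in range(2)
        ]
        if gen % 10 == 0:
            avg = sum(population) / len(population)
            print(f"generation {gen:2d}: average announce rate = {avg:.2f}")

evolve()  # the announce rate drifts toward zero: robots "learn" to stay quiet
```

The point of the sketch is how constrained the result is: the robots aren’t scheming, they’re optimizing a single number inside a single game that humans designed for them.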
This certainly isn’t to exclude the possibility that someone will eventually design robots intended to harm humans; coming up with new weapons is certainly something we’re good at as a species. One of the biggest concerns right now is the level of autonomy that should be given to military drones. The Predator drones currently used by the US military are actually remotely piloted and directly controlled by a human when firing weapons. In other words, the decision to fire is not made by computer software. There is, however, the potential that such drones could be allowed to autonomously make the decision to fire, and with that comes the possibility that they could make mistakes (but also a chance they could be better at it than humans).

The major issue is that computer vision is a tricky problem. For example, human vision and perception can easily tell the difference between an apple and a tomato. For a computer, this task is incredibly difficult: they’re both round and red, so what other information can allow the computer to quantify the differences between them? (The sketch below gives a feel for why color alone isn’t enough.) The same issue could apply to, say, a man holding a vacuum cleaner and a man holding an RPG launcher. Without the ability to tell the two apart, an autonomously acting drone could aim for the wrong person. Humans don’t have a perfect track record here either; look at the numerous incidents involving civilian casualties in Iraq. With continued improvement in technology, there is a fair chance that robots may one day actually be better at warfare than we are. If and when they are eventually allowed to act autonomously, there could be real ethical problems associated with responsibility: if an autonomous drone killed civilians by mistake, is it the fault of the manufacturer, the programmer, the military team in charge, or do we have to court-martial the drone [9]? What the Predator drones won’t and can’t do, however, is decide to collectively turn around and invade the United States.
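To get a feel for the apple-versus-tomato problem, here is a deliberately naive sketch in Python. The color values are made up and no real vision system works this crudely, but it shows why a classifier built on color statistics alone has nothing with which to separate the two:

```python
# Toy illustration (made-up numbers): summarize each photo by its average
# red/green/blue values, then classify by color. Apples and tomatoes produce
# nearly identical color statistics, so the rule cannot separate them.

apple_rgb  = (182, 42, 36)   # hypothetical mean color of an apple photo
tomato_rgb = (186, 45, 31)   # hypothetical mean color of a tomato photo

def classify_by_color(rgb):
    """Naive rule: anything mostly red is an 'apple'."""
    r, g, b = rgb
    return "apple" if r > 150 and g < 80 and b < 80 else "unknown"

print(classify_by_color(apple_rgb))   # "apple"
print(classify_by_color(tomato_rgb))  # "apple" again: the colors look the same
```

Telling the two apart takes richer cues, like surface texture, stem shape, and context, and extracting those reliably from a camera image is exactly the hard part. The vacuum cleaner versus RPG launcher case is the same problem with much higher stakes.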
So the next time you read an article touting the imminent rise of our robot overlords, you might want to read past the title and look for details about what the robots actually do. What did the research accomplish? Does it sound like the results are embellished a bit? You can even go so far as to find the original technical publication (many news stories link to these), where the presentation of information may be a bit less dramatic. This isn’t to say there aren’t some solid pieces of science journalism out there; just be careful when you’re reading. Of course, “never say never”, but for now there’s at least one roboticist who isn’t losing any sleep over a robot takeover.
Ben Finio is a PhD student in the Harvard School of Engineering and Applied Sciences.
References
[1] “Evolving Robots Learn to Lie to Each Other” http://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other
[2] “Tiny robots are ready to spy on us” http://www.guardian.co.uk/technology/blog/2007/jul/23/tinyrobotsare
[3] “Are we being watched by flying robot insects?” http://popsci.typepad.com/popsci/2007/10/are-we-being-wa.html
[4] “A real-life robotic avatar turns you into a machine”
[5] “IBM Watson” http://www-03.ibm.com/innovation/us/watson/index.html
[6] “iRobot Home Cleaning Robots” http://store.irobot.com/home/index.jsp
[7] “iRobot Ground Robots” http://www.irobot.com/gi/ground/
[8] “Deep Blue” http://www.research.ibm.com/deepblue/meet/html/d.3.html
[9] “Robotics: Morals and Machines” http://www.nature.com/nature/journal/v481/n7379/full/481026a.html