31 March 2008
Robot War Continues
By Sharon Weinberger
Last week’s post, in which I dismissed calls to ban ‘imaginary’ killing machines, sparked a healthy debate, including a drop-in from Noel Sharkey, a roboticist who has advocated banning killer robots and who responded in the comment section.
My point, which I’ll state here more clearly, is that rather than debating the real issue — what are the proper limits on the autonomous operations of weapons? — we are debating something a tad fantastical, an army of self-directing, lethal machines, deciding when they should kill or not. Terminators, in other words. I realize that debating Terminators is more fun, because it provides such sexy headlines as "Automated killer robots ‘threat to humanity,’" but I’m not sure it really addresses the main issue.
I have several questions for those who seem to be warning of the robot warrior onslaught.
My first question, and what prompted the original post, was: Where and when has the Pentagon advocated handing over actual weapons release decisions to an artificial life form? The Predator, SWORDS, and other robotic systems may have a few limited capabilities to operate autonomously. But the decision to shoot is currently made, quite pointedly, by a human operator. If there has been a sea change in Pentagon policy, I would like someone to point out a reliable source noting this change. (If there is another country that is taking the man completely out of the loop, again, I’d like to see evidence for this.)
My second question is: how do we define a robot? DANGER ROOM has written about "killer robots" a number of times, but these are not Terminators, since, again, there is a man in the loop. As several commenters pointed out, a Roomba is an autonomous system, so all it takes is a Roomba with a bomb to create a "killer robot." In other words, the capability exists for robots to kill without human intervention. That’s true, but that capability has existed for decades. As another commenter noted, a heat-seeking missile could, by some definitions, be regarded as a robot (particularly if, as the original post noted, we equate land mines with robots). Okay, if a land mine is a robot, then isn’t every guided missile and bomb a robot (and if so, should we ban them all)?
Some Stanford students created a good website that explores the fuzzy line between autonomous systems and robots. The students note that the World War II-era V-2 and the 1950s "fire and forget" weapons also ushered in the era of autonomous weapons.
In other words, yes, it makes sense to discuss the ethics of autonomous systems, but let’s not pretend the robot army is marching toward us, because that obscures the real issue: the autonomization of weapons.
The robot end-of-timers lament that the problem with robots is that they lack ethics.
That’s true, but that leads to my final question: If we ban "robots," are we also banning any weapon that involves some form of non-human guidance or targeting, like the familiar Joint Direct Attack Munition, which provides an "autonomous, conventional bombing capability" using satellite navigation?
Sadly, the GPS constellation, last time I checked, is devoid of any conscience.