31 March 2008
Robot War Continues
By Sharon Weinberger

Where is Arthur C. Clarke when you need him? Are the killer robots coming, or aren’t they?

Last week’s post, where I dismissed calls to ban ‘imaginary’ killing machines, sparked a healthy debate, including a drop-in from Noel Sharkey, a roboticist who has advocated banning killer robots. He writes in the comment section:

I approve of Sharon’s take on robot scare stories of the past from academics who have read too much science fiction. But in this case we are talking about near-future science fact.

My point, which I’ll state here more clearly, is that rather than debating the real issue — what are the proper limits on the autonomous operations of weapons? — we are debating something a tad fantastical, an army of self-directing, lethal machines, deciding when they should kill or not. Terminators, in other words. I realize that debating Terminators is more fun, because it provides such sexy headlines as "Automated killer robots ‘threat to humanity,’" but I’m not sure it really addresses the main issue.

I have several questions for those who seem to be warning of a robot warrior onslaught.

My first question, and what prompted the original post, was: Where and when has the Pentagon advocated handing over actual weapons release decisions to an artificial life form? The Predator, SWORDS, and other robotic systems may have a few limited capabilities to operate autonomously, but the decision to shoot is currently made, quite pointedly, by a human operator. If there has been a sea change in Pentagon policy, I would like someone to point out a reliable source noting this change. (If there is another country that is taking the man completely out of the loop, again, I'd like to see evidence of this.)

My second question is: how do we define a robot? DANGER ROOM has written about "killer robots" a number of times, but these are not Terminators, since, again, there is a man in the loop. As several commenters pointed out, a Roomba is an autonomous system, so all it takes is a Roomba with a bomb to create a "killer robot." In other words, the capability exists for robots to kill without human intervention. That’s true, but that capability has existed for decades. As another commenter noted: a heat-seeking missile could, by some definitions, be regarded as a robot (particularly if, as the original post noted, we equate land mines with robots). Okay, if a landmine is a robot, then isn’t every guided missile, weapon and bomb a robot (and if so, should we ban them all)?

Some Stanford students created a good website that explores the fuzzy line between autonomous systems and robots. The students note that the World War II V-2 and the 1950s "fire and forget" weapons also ushered in the era of autonomous weapons:

In today’s reality, we are far from achieving the mass production of autonomous killing machines constructed in the guise of a European body-builder. Indeed, we are still in the formative stages of perfecting experimental robots that can act independently to pick up a glass of water without spilling it. The United States military, largely through the Defense Advanced Research Projects Agency (DARPA) and various other Navy, Army, and Air Force research offices, is the single largest funder of work with robots and artificial intelligence. The benefits of being able to implement autonomous weapons on the battlefield have been made clear by election year politicians and fund-hungry military officers alike: less loss of human life and greater chances of more precise, coordinated, reliable, and successful warfare. But the negative possibilities have been largely ignored or poorly addressed. Can we afford to continue to research such advanced and potentially powerful technology without closely examining the related social issues?

In other words, yes, it makes sense to discuss the ethics of autonomous systems, but let's not pretend the robot army is marching toward us, because that obscures the real issue: the increasing autonomy of weapons.

The robot end-times crowd laments that the problem with robots is that they lack ethics.

That's true, but it leads to my final question: If we ban "robots," are we also banning any weapon that involves some form of non-human guidance or targeting, like the familiar Joint Direct Attack Munition, which provides an "autonomous, conventional bombing capability" using satellite navigation?

Sadly, the GPS constellation, last time I checked, is devoid of any conscience.


* ‘Turing’ Tests for Killer Robots
* 24 More Armed Robot Sentries for Base Patrol
* Killer Drone Panic…. Zzzzzzz…..
* Newbies Build Killer Robots; Fortune Fawns
* Army’s Robotic, Armed Combat Vehicle
* Inside the Robo-Cannon Rampage
* Robot Cannon Kills 9, Wounds 14
* Roomba-Maker unveils Kill-Bot
* New Armed Robot Groomed for War
* Armed Robots Pushed to Police
* Armed Robots Go Into Action
* Cops Demand Drones
* First Armed Robots on Patrol in Iraq
* Unmanned "Surge": 3000 More Robots for War
* Taser-Armed ‘Bot Ready to Zap Pathetic Humans
* Top War Tech #5: Talon Robots
* More Robot Grunts Ready for Duty
* Israel’s Killer ‘Bot: Safe Enough for War?
* Inside the Baghdad Bomb Squad
