The Ethics Of Killer Robots

The United Nations Human Rights Council is considering the topic of killer robots — preprogrammed killing machines that operate autonomously on the battlefield.

Although no such devices have been deployed to date, they are reportedly in development.  The UN Council is expected to call for a moratorium on their development so that the ethics of their use can be debated.  (Good luck with that one!  If dictators or “rebels” fighting for control of a country could get their hands on such a weapon, does anyone think for a moment that a moratorium imposed by some powerless UN Council in Geneva would stop them?  But, I digress.)  The argument is that killer robots raise “serious moral questions about how we wage war” and blur the “traditional approach” distinguishing a “warrior” from a “weapon.”

This kind of abstract, clinical analysis of where war-making technology has taken us makes me scratch my head.  Romantic notions of a “warrior” and a “weapon” locked in some kind of single combat don’t seem to have much to do with modern warfare.  Technological advances not only have made fighting more lethal — David and his slingshot wouldn’t stand much of a chance against a guy with a flamethrower — but also have increasingly divorced the immediacy of death and its consequences from the decision-maker.  Whether it is missiles, drones, or roadside bombs that kill indiscriminately, we’ve already moved far from the warrior/weapon model.

Killer robots are just the inevitable next step.  All we can hope for is that their developers and deployers have seen enough science fiction to worry about Skynet and giving birth to the Matrix, and know that they had better be sure the soulless robot killers they unleash aren’t capable of turning on their creators.