The concept of a killer robot is both fascinating and scary. For many, Arnold Schwarzenegger’s iconic Terminator immediately springs to mind: the robotic star of a post-apocalyptic science fiction film. Today, this character is more reality than fiction. But a contemporary killer robot does not quite look like the Terminator (not yet, anyway), and the man resisting a robotic takeover isn’t the foresighted John Connor; he’s a New School professor.
Peter Asaro, the director of graduate programs at The New School for Public Engagement’s School of Media Studies, is a killer robot expert and an active member of the Campaign to Stop Killer Robots, as well as the International Committee for Robot Arms Control. Calling these machines “autonomous lethal weapons systems,” Asaro explores the ethical issues raised by the technology and what it means for the future of warfare, the topic of this month’s Research Radio feature, Killer Robots.
What exactly is a killer robot? It’s a drone, but with enhanced capabilities. As Asaro explains, it’s “a weapons system that doesn’t have human control or oversight; it decides who gets killed, when, and why.” Depending on your definition of the technology, it can be argued that killer robots have been around for centuries. “You can in many ways label a land mine as a very stupid robot,” says Asaro. The Ottawa Treaty, which took effect in 1999, eventually banned antipersonnel land mines, and Asaro wants the same for their evolving cousins: drones equipped with self-governing lethal capabilities.
“In many ways, these technologies are redefining our traditional notion of warfare,” Asaro continues. He maintains that, as products of the “global war on terror,” recent technological developments in drones are further blurring what had been concrete concepts of legal warfare, including national boundaries and the identification of enemy combatants.
This only adds to what is already a fiercely debated topic. Americans are becoming accustomed to the idea of drones patrolling a distant foreign countryside, but what happens when suspected combatants enter population centers abroad, or even at home? Asaro raises many concerns about the ethical nature of drone warfare. He questions the meaning of “responsibility” when drones are tasked with missions historically undertaken by people. “You have to reflect on the implications of your actions,” says Asaro. “There are certain things you are responsible for that you can’t delegate to a machine.”
And that may be the crux of the ethical argument: Will robots ever be truly able to think and reflect and act ethically? The question is as philosophical as it is technological. Asaro says, “There are some similarities between how neurons work and how digital computations work, but there’s also a lot of other stuff going on that doesn’t easily translate.”
A common argument made in defense of using killer robots is the possibility that advances in such technologies could make war more humane. “That’s a reasonable argument, to be sure,” responds Asaro, “but you have to look at the probability and time it would take to get to that level of precision and sophistication.” Asaro’s bottom line: “There will be mistakes, and we’re better off without them.”
Don’t forget to subscribe to Research Radio on iTunes.
Research Radio is a New School podcast series that explores academic inquiry at the university. Our faculty and students have been researching pressing social and scientific issues, from sustainability to psychology to politics, for nearly a century, and now you can hear about their latest findings.