From capturing the action at the World Cup to repairing damaged wells in the Gulf of Mexico, robots have already shown themselves capable of improving our daily lives; those with the ability to assist and care for the elderly may even replace nursing homes in the future. But some robots currently in development pose ethical dilemmas, and a few even appear inclined towards malevolence.
Peter Asaro, an assistant professor in the Media Studies program at The New School for General Studies, ventured into this territory while filming his documentary Love Machine in 2000. Asaro wanted to examine how robots could become socially or even emotionally involved with humans, but that inquiry led him to a military robotics specialist, who explained the ethical problems surrounding robot-controlled weaponry.
Asaro discovered that despite having the ability and budget (which, as Asaro notes, is larger than the funding for the National Science Foundation) to create this technology, the military is hesitant to deploy machines that operate without human control. Without any legal or ethical precedent, society is fundamentally unprepared to deal with issues that could arise from using robot arms in combat.
“Who is responsible when technology is at fault?” asks Asaro.
One way to answer this difficult question is with more technology. Some scientists have suggested that programming robots with “ethical systems” or an artificial sense of judgment would result in combat that is safer and war that is perhaps more just. Asaro, on the other hand, thinks it is vital that humans remain in control of these machines or “killer robots,” as they are less charitably called. “If I’m a robot, there is no notion of self-defense or an awareness of humans and the surrounding area,” says Asaro. “I can only do what I’m programmed to do.”
Along with colleagues from Australia, England, and Germany, Asaro founded the International Committee for Robot Arms Control (ICRAC). Its mission, in part, is “the prohibition of the development, deployment, and use of armed autonomous unmanned systems,” or, more succinctly: machines should not be allowed to make the decision to kill people.
Last month, ICRAC invited philosophers, military experts, and peace coordinators to a workshop in Berlin to discuss issues surrounding robot arms control. They decided an international ban is the only way to ensure that, with their lethal possibilities and ethical quandaries, autonomous robot arms remain locked away from humanity in a mechanical Pandora’s box.
The military has seemingly limitless technological resources and is infrequently challenged on ethical and social grounds. In the case of autonomous robot arms, Asaro and ICRAC hope to halt progress on this front altogether, and to demand that society think a little before granting agency to machines that can’t.