As a society, we’ve greeted the growing prevalence of artificial intelligence with open arms, viewing it as a solution to myriad pressing issues facing our planet.
But there is a dark side to AI that philosophers like Peter Asaro, Associate Professor of Media Studies at The New School, continue to warn against: the potential for AI to power lethal weapons that could decide to kill entirely free of human control.
For years, Asaro, vice chair and co-founder of the International Committee for Robot Arms Control, has been working to stop the creation of what he terms “killer robots.” Recently, he joined the Stop Killer Robots campaign, which hopes to create a new United Nations protocol enshrining the requirement that humans, not robots, remain behind the kill switch.
Asaro was recently recognized for his work as a finalist in the 2017 World Technology Awards Ethics category. He was previously named a finalist in that category in 2011.
“We are hoping to advance the discussion at the Group of Governmental Experts of the High Contracting Parties to the Convention on Certain Conventional Weapons toward reaching a consensus that human control over all weapons, and every attack, is necessary,” Asaro told Canada’s National Post newspaper recently. “In practical terms, that means getting them to advance the discussion to a more formal treaty negotiation, which would create a new protocol to the existing Convention on Conventional Weapons. It is up to the states how they want to structure the treaty, but we are insisting that the central feature be a requirement to ensure human control.”
Asaro is one of the leading figures in the study of the ethics of AI. His current research focuses on the social, cultural, political, legal, and ethical dimensions of military robotics and unmanned drones from a perspective that combines media theory with science and technology studies. In 2015, he received a $116,000 grant from the Future of Life Institute, funded by billionaire business magnate and inventor Elon Musk, to study potential threats posed by robots.