Robot and Frank, a sci-fi comedy-drama starring Frank Langella, centers on the relationship between an aging jewel thief and the domestic robot that helps him burgle his neighbors.
While the story may seem far-fetched, Peter Asaro foresees a future in which humanity becomes increasingly reliant on “autonomous artificial agents,” including personal shopping assistants and self-driving cars. And while these agents have the potential to bring countless advantages (caring for the elderly, for example), there exists the very real possibility that, as autonomous beings, they may flout the law or otherwise cause harm to society.
Asaro, assistant professor of Media Studies at The New School, is tackling this issue through an evolving body of research that is receiving a boost through a grant from the Future of Life Institute, a volunteer-run research and outreach organization “working to mitigate existential risks facing humanity.” The prestigious $116,000 grant was funded by billionaire business magnate and inventor Elon Musk.
“If we want to allow AIs and robots to roam the Internet and the physical world and take actions that are unsupervised by humans, we must be able to manage the liability for the harms they might cause to individuals and property,” Asaro said. “Resolving this issue will require untangling a set of theoretical and philosophical issues surrounding causation, intention, agency, responsibility, culpability and compensation, and distinguishing different varieties of agency, such as causal, legal and moral.”
Titled “Regulating Autonomous Artificial Agents: A Systematic Approach to Developing AI & Robot Policy,” Asaro’s project aims to “provide a better foundation for developing policies which will enable society to utilize artificial agents as they become increasingly autonomous, and ensure that future artificial agents can be both robust and beneficial to society, without stifling innovation.”
To learn more about Asaro’s work, listen to his interview on Research Radio.