Speaking to a crowd of captivated futurists at the Speakeasy in Downtown Austin at #SXSW, Peter Asaro warned of a future in which “autonomous weapon systems are delegated with the authority to initiate the use of lethal force” — in other words, a world where killer robots get to decide who lives and dies.
Even if you haven’t seen Terminator or The Matrix, that reality isn’t hard to imagine: using existing knowledge and drone technology, states could create weaponry that would independently neutralize military or civilian targets in a way that’s completely faceless and inhuman.
Asaro is working to avert that reality by bringing ethics to bear in the development and implementation of military robotics.
“We have the power to transform norms — what is acceptable and not acceptable in society,” says Asaro, Professor of Media Studies at The New School and a philosopher of science, technology and media. “If we’re deliberate about the norms we want to build around technology, we’ll have a better society.”
Asaro shared his perspective at the XPRIZE Futurecasting Workshop at #SXSW co-hosted by The New School, XPRIZE, and ANA. New School faculty members — Asaro; Ed Keller, professor of Design Strategies at Parsons School of Design; and Eiko Ikegami, Walter A Eberstadt Professor and Professor of Sociology at The New School for Social Research — gave lightning talks and led a workshop on ways A.I. and other exponential technologies can positively impact society.
Asaro, whose current research focuses on the social, cultural, political, legal and ethical dimensions of military robotics and UAV drones, stressed the need for inclusion in design processes.
“In creating new technologies, we shouldn’t lock ourselves in a room and think we can solve the problem,” said Asaro, who is a leading voice in the Stop Killer Robots campaign, which hopes to create a new United Nations protocol enshrining the need for humans, not robots, to be behind the kill switches of military weaponry. “We need the participation of society.”
Next, Ikegami discussed her research on adults with autism spectrum disorder (ASD) in the digital world of Second Life — a study that revealed an “incredible richness of mental life” and a vibrant sense of community among a largely unseen segment of the population. Assuming the form of Kiremimi Tigerpaw, her Second Life avatar, Ikegami spent hundreds of hours identifying and interacting with adults with ASD. She found that Second Life is ideally suited to people with autism as it allows users to come and go as they please — a means of avoiding the real-world threat of sensory overload, a common affliction for people with the disorder — and overcoming difficulties with communicating in real life.
“Avatars aren’t just used for entertainment; they can also be used for health care and improving the lives of people with ASD,” she said.
Understanding adults with autism, and diverse human intelligence more generally, can “enrich knowledge about ourselves and the way we communicate,” Ikegami says.
“It was a very humbling experience,” she added of researching adults with ASD. “I came to understand my own cognitive process. Everyone is on the spectrum. Everyone lives in a bubble that limits our perception.”
Keller, who is also director of the Center for Transformative Media at The New School, discussed the “significant challenge humanity faces in our assumptions that artificial intelligence will behave in a way that we behave ourselves.” Movies and television shows such as Black Mirror and Blade Runner test these assumptions, offering visions of the dangers of A.I.
Keller presented an alternative approach — one based on “non-human and non-standard models of intelligence” such as trees, bacteria, and even slime mold — to the design of A.I.
“There are alternate models of sapience that are adaptable and empathic that can lead to a transformation in the way we think about and design A.I.,” Keller says.
Following the lightning talks, a curated group of 50 people representing the science, technology, and creative communities was asked to identify a problem and consider what the future would look like if it were solved. Maya Wiley, Senior Vice President for Social Justice at The New School, and her group imagined a future in which “artificial intelligence has eradicated social and economic stratification of the globe.”
Said Wiley, “This would allow data to be democratically developed by all populations and generate machine learning that ends stratification.”