School of Media Studies

Choice in the Age of AI: A Conversation with Peter Asaro

By Emma Minor

In the second installment of this series, we sat down with Peter Asaro, Associate Professor of Media Studies and a prominent voice at the intersection of media and technology.

This October, he will be honored with the prestigious Golden Dove (Colomba d’Oro) for Peace Award for his contributions to a campaign against autonomous weapons—lethal arms not controlled by humans. His forthcoming book will delve deeper into these themes, exploring the ethical and societal challenges posed by AI-generated deception and manipulation.

Throughout our conversation, Peter emphasized the importance of meaningful engagement with technology and resisting the trend of letting algorithms dictate our future. Drawing from his background in philosophy and computer science, he shared thoughts on how media professionals can navigate these complex dynamics.

You are part of the Stop Killer Robots movement, which fights against AI-operated weapons. Could you speak more about this area of focus?

I have a long history of working at the interface of AI, robotics, and social policy, particularly around autonomous weapons. Over the past decade, I’ve been focused on a campaign to stop “killer robots.” I was one of the founders of an international coalition of non-governmental organizations (NGOs); I serve on its steering committee and am currently its Vice Chair. The coalition has more than 250 member organizations representing over 70 countries, and we’re working to secure a treaty at the United Nations (UN). Right now, we’re preparing for the upcoming General Assembly in New York.

I’m also working on a new project. Last year, while on sabbatical at the University of Washington in Seattle—home to one of the country’s leading disinformation research centers—I was investigating AI-generated deception and manipulation. I developed a proposal and submitted it to the National Endowment for the Humanities. I’ve just received a grant for the next two years to write a book about AI disinformation and how we might regulate it. I’m pretty excited, and for our students, I’ll be teaching a seminar on this topic next semester.

Have views on autonomous weapons changed over time among the stakeholders you’re speaking with? Would you say that attention to the issues Stop Killer Robots addresses has grown, or is it only now coming to the forefront?

When we started back in 2012, we framed this as a preemptive prohibition. Autonomous weapons weren’t out there yet. There was a widespread belief that this was only an issue for wealthier nations with advanced technological capabilities. Most militaries weren’t thinking about it, and many people seemed to think we should just wait to see how the technology unfolded. There was also a lot of tech optimism, with some claiming these systems would be able to distinguish civilians from combatants more precisely, reducing civilian casualties. We’ve argued against those claims for many reasons.

The rise of ChatGPT kind of woke everyone up to AI. There’s this huge AI bubble now—whether you call it hype or enthusiasm—but it’s created broad awareness that AI is here, it’s changing things in radical and unpredictable ways, and it’s obviously going to be used by the military. This has prompted those involved in international discussions to make autonomous weapons a higher priority. In fact, last year the President of the International Committee of the Red Cross (ICRC) and the UN Secretary-General made a joint statement calling for negotiations on a treaty for autonomous weapons. That’s only happened two or three times in history. The ICRC, as the guardian of the Geneva Conventions, usually argues that international law already covers these issues. But in this case, they’ve said we need new legal frameworks.

Then, of course, the conflicts in Ukraine and Gaza really changed perspectives. We’ve seen the use of drones, automated weapons, and automated targeting systems, particularly in Gaza. Those conflicts have made people aware that military AI is a real issue: it’s here, and it’s not science fiction or off in the distant future.

Can you share the key points of the treaty and the protections it aims to establish against autonomous weapons?

The ICRC has proposed a two-tiered approach. The first tier would outright ban certain weapons, including systems that are unpredictable or that use AI to learn and adapt in the field, changing their behavior and programming once deployed. These systems can’t be properly tested or predicted and are inherently dangerous. The ban would also cover weapons explicitly designed to target humans. When a system is made to detect human forms, it must create a digital representation of a person. Given the potential for bias, the violations of the fundamental human rights to life and dignity, and the intrinsically reductive nature of any automated representation, creating such weapons is essentially dehumanizing and fundamentally wrong.

The second tier covers weapons that would be allowed but regulated—those with some autonomy in targeting and engagement but still requiring meaningful human control. These systems would target objects that are military by nature, such as tanks, aircraft, and bases. The treaty would also prohibit autonomously targeting civilian infrastructure or dual-use targets, where civilian buildings or vehicles might be used for military purposes. Determining that a civilian object is a military target, or making a proportionality decision about the risk to civilians posed by striking a nearby military target, requires human moral and legal judgment.

Ensuring human responsibility for these weapons is crucial, particularly in conflicts like Gaza, Lebanon, and Ukraine, where a great deal of civilian infrastructure is being targeted. This is sometimes permitted under international law, but these are moral and legal decisions that a human needs to make in the specific context and situation, and for which a human should always be responsible.

Turning to your work on AI manipulation, can you provide an example of what this manipulation looks like?

AI-driven deception builds on what Shoshana Zuboff discusses in The Age of Surveillance Capitalism—how traditional advertising and marketing evolved from broad targeting to highly personalized strategies. AI has taken this even further by using personal surveillance data to tailor messages to individuals, whether for ads or political campaigns, creating much more powerful tools for influence, as well as capturing more of our attention.

Looking ahead, I’m concerned about where this technology could go in the next generation or two of AI. Manipulation could be far more dangerous, using deep insights into people’s behavior and psychology to coerce or deceive them on a very personal level. A system could threaten the things you value most or impersonate friends or family, sending messages based on intimate knowledge of your life. We’ve already seen criminals using this technology to create fake ransom notes. We need to regulate this now, and we need much stronger privacy policies to keep personal data out of personalized AI models of our psychology.

With AI increasingly shaping social and cultural narratives, how do you prepare students for roles in the media industry?

Pedagogically, I focus on sharing these developments with my students and encouraging us to think together about how we can engage with these challenges. Media production is a socially created and shared form of communication. The danger, particularly with the rise of targeted marketing, is individualization: that we let algorithms define how we relate to each other.

The more we crave customization and personalization—whether it’s search results, music playlists, or news feeds—the more we’re divided into isolated compartments. This separation leads to filter bubbles, where everyone has a different media experience and engages with a world viewed through a lens created by algorithms. Basically, you’re separating people. It’s no surprise that divisive politics and extremism then follow. In the past, art and even mass media created shared narratives and stories and brought people together, even if imperfectly.

Now, I think a task for media producers and analysts is to consider how we use, share, and create media as communities. That will be crucial for turning the tide. The opportunity for media producers is to engage socially with communities and other creators in ways that ensure the algorithms serve those collective interests rather than the interests of the platforms and the entities that control them.

How have creative media practices evolved in recent years, and how are you incorporating that into your teaching?

Technology is a major factor. When I first joined The New School, only a few students knew how to edit videos on their computers before starting their master’s programs. Today, everyone’s done it—often on their phones. From an educational standpoint, it’s a big advantage because we don’t have to spend so much time on basic technical skills. We can dive more quickly into how to tell stories and create meaning through them. Of course, there’s also the rise of AI technologies, like ChatGPT and image generation tools. While these are just getting started in terms of capabilities, the systems are growing rapidly, and they will undoubtedly transform the entertainment and media industries.

For content creators—whether writers or filmmakers—the challenge will be making sure your work remains meaningful enough that a machine can’t replace you. And at the same time, we need to figure out how to use new technologies to advance our careers while still telling human and social stories that build community and society, because AIs aren’t going to do that. We already know that algorithms do the opposite: splitting society apart.

Who are some of the writers, creators, and researchers that have inspired you?

It’s a long list. Throughout my career, I’ve been influenced by many thinkers. In pursuing Ph.D.s in both philosophy and computer science, I was particularly influenced by the pragmatists, especially John Dewey (a co-founder of The New School). His work Art as Experience forms the basis for the section on aesthetics in my Media Theory course. If we look at his account of aesthetics, there’s a very clear critique of AI, especially in the context of how individuals express ideas and emotions that can’t always be put into words but are conveyed through media, be it painting or dance or writing or film. AI can mimic historical artistic patterns of expression, but it lacks the emotional life and reflection that are the foundation of true artistic expression. AI in its current form is doomed to imitation.

In my graduate studies, I was also influenced by science and technology thinkers, like Bruno Latour, who emphasized how society and technology are intertwined in complex socio-technical systems. More recently, Hannah Arendt has been a critical influence, especially in my work on autonomous weapons. Her book On Violence considers the relationship between power, the state, the police and military, and the student protests of the 1960s. In that book she also argues that autonomous weapons, or push-button assassination, could be even more dangerous than nuclear weapons, and she highlights the risk of that kind of technology leading to even more extreme forms of authoritarianism and totalitarianism than we witnessed in the 20th century. I take a lot of inspiration from her work. And she taught for many years at The New School, which is also pretty cool.

What advice would you give aspiring filmmakers and media professionals who want to specialize in your field? 

I believe it’s essential to think about your work in relation to society. Consider how you can use the opportunities available to move toward a world of shared meaning and social progress. There are so many ways to do this and so many paths you can take in academia, industry, civil society, or nonprofits; be creative with those possibilities. The kind of work you do can influence any of these areas. Always keep a clear vision of your values and of where you want to see yourself and the world heading.

If you are an SMS faculty member interested in being interviewed for this content series, please email us at: smscommsandevents@newschool.edu. We’d love to hear from you!
