
Beyond Moderation: Exploring Tech Strategies to De-Escalate Political Conflict

Conference attendees gathered around circular tables, with panelists at the front of the room, at the Beyond Moderation conference.

Photo by Benjamin Xie

In a time of intense political conflict, how can technology de-escalate rather than amplify turmoil? The Beyond Moderation conference, organized by Diana Acosta Navas* and Ting-an Lin**, gathered an interdisciplinary group of experts to discuss how digital technologies can enhance civic debate and de-escalate political conflict. Hosted by the Ethics, Society, and Technology Initiatives of the McCoy Family Center for Ethics in Society, the one-day conference aimed to look beyond content moderation as the principal means of regulating the proliferation of online hate speech and misinformation. It provided a much-needed space to explore and analyze alternative technological strategies for addressing the rise of political conflict online.

Below, the organizers reflect on their motivations for the Beyond Moderation conference, and we offer highlights from the discussions along with next steps.

Building Beyond Moderation

Born and raised in Taiwan, Ting-an Lin described how her experience of constant information warfare, including cyberattacks and disinformation campaigns, led to her concerns about technology’s disruptive effects, such as increasing polarization and fueling political conflict. “As a philosopher whose research considers how to address structural injustice, I am convinced that technology bears a crucial power in shaping the sociotechnical structure where we are situated, and there is always a hope that emerging technologies could shape it toward a more just form,” Lin said. As a result, her goal for the conference was to facilitate cross-disciplinary discussions on how to mitigate the negative consequences of technology and explore how technology may positively impact society.

Photo of Diana Acosta Navas (left) and Ting-an Lin (right). Photo by Benjamin Xie.

Diana Acosta Navas shared how her experience growing up in Colombia, a country at war throughout her childhood, has shaped her awareness of how people’s informational environments can shape their perception of conflict by normalizing, legitimizing, and enabling violence. “My research on content moderation has revealed to me that just as online spaces can channel political conflict toward democracy, justice, and equality, they also have the ability to channel it toward atrocity and violence, stir hatred, and further marginalize oppressed communities. It has also shown me that the potential impact of content moderation in mitigating conflict and preventing violence appears to be limited at best and counterproductive at worst, especially as it acts in tandem with the logic of engagement optimization,” said Acosta Navas.

For Lin and Acosta Navas, organizing Beyond Moderation was a meaningful way to open space for creative and thoughtful discussion about the ways technology can mitigate political conflict and channel it in constructive directions.

Setting the Stage for Ethical Perspectives

Featuring a keynote address from Colin Megill (Pol.is), the conference began with an exploration of the possibilities of applying machine learning toward deliberative democracy – to resist dichotomous discussions and foster a “high dimensionality” of public discourse. Then, the first panel, moderated by Johanna Rodehau-Noack (Stanford) and featuring Renée DiResta (Stanford), Thunghong Lin (Stanford, Academia Sinica), and Jonathan Stray (UC Berkeley), investigated the ethical and societal perspectives on using content moderation to de-escalate global tensions. 
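
To make the keynote’s framing more concrete: Pol.is is publicly described as grouping participants by their agree/disagree votes on short statements and projecting those votes into a low-dimensional space, so that multiple opinion clusters and points of common ground become visible instead of a single pro/con divide. The sketch below is a minimal illustration of that general approach on synthetic data, using off-the-shelf PCA and k-means; it is not Pol.is’s actual implementation, and the vote matrix, cluster count, and parameters here are all hypothetical.

```python
# Minimal sketch of Pol.is-style opinion clustering (illustrative only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical vote matrix: 90 participants x 12 statements,
# coded as +1 (agree), -1 (disagree), 0 (pass/skip).
votes = rng.choice([-1, 0, 1], size=(90, 12))

# Project participants into two dimensions to reveal opinion structure
# beyond a one-dimensional pro/con axis.
embedding = PCA(n_components=2).fit_transform(votes)

# Group participants into opinion clusters (k=3 is an arbitrary choice
# for this sketch; a real system would select it from the data).
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)

# Statements that score high across all clusters are candidate
# points of common ground rather than division.
for k in range(3):
    group_mean = votes[clusters == k].mean(axis=0)
    print(f"cluster {k}: mean vote per statement {np.round(group_mean, 2)}")
```

The point of the exercise is the “high dimensionality” the keynote described: rather than collapsing a debate into two camps, the clustering surfaces several distinct opinion groups and the statements on which they overlap.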

The Q&A for that panel probed the legitimacy and effectiveness of content moderation tools and their impact on human behavior. Panelists pointed to users migrating from strictly moderated platforms to more leniently moderated ones as one example of this impact. Widespread deplatforming is not without consequence, they noted: when individuals forge a collective identity as ‘the deplatformed’ or ‘the censored,’ the ethical and social harms can be far-reaching.

Commenting on key takeaways, Diana Acosta Navas said, “Managing our informational environments responsibly requires more than the moderation of content.” She added, “Panelists concurred on the importance of data and design-focused approaches to mitigating conflict online.”

Envisioning Technological Opportunities

The second panel, moderated by Veronica Rivera (Stanford), discussed technological opportunities with online platforms and featured insights from Susan Benesch (Harvard), Amy X. Zhang (University of Washington), Michael Bernstein (Stanford), and Deepti Doshi (New_Public). 

The panelists highlighted the nuanced ways in which platform affordances, societal norms, and online governance interact to influence online discourse and community dynamics. The discussion made clear that the design of digital spaces shapes how people interact online and has far-reaching implications for the behavior of social media users.

Understanding Political and Global Impacts

The final panel of the day, moderated by Avshalom Schwartz (Stanford), focused on the political and global impact of content moderation, featuring perspectives from Niousha Roshani (The Black Entrepreneurs Club), Oladeji M. Tiamiyu (University of Denver), Ravi Iyer (University of Southern California), and Marietje Schaake (Stanford). From exploring who defines narratives to highlighting more holistic methods of regulating micro-targeted advertisements, the conversation challenged attendees to think beyond the U.S. and to draw on knowledge from the broader world.

Collectively, the panel also addressed the scalability of content moderation with AI advancements, the impact of China’s AI-informed ‘Smart-Courts,’ and the potential for AI regulation to better align generative AI developments with democratic principles. Although the path of least resistance in addressing such complex questions would be to continue with the status quo, panelists approached these challenges by centering discussions on justice and accountability. 

Overall, the conference panelists found common ground on several goals: instilling democratic values in the development of digital technologies, fostering constructive dialogue while preventing the escalation of violence, and prioritizing design-centered approaches to mitigating conflict online.

Moving Beyond Moderation

What are the next steps for moving beyond moderation? Acosta Navas emphasized the importance of maintaining “a dialogue aimed at integrating insights from different disciplines and contexts” to achieve better solutions. While the Beyond Moderation conference was a successful first step, she also called for building a strong community of researchers and practitioners dedicated to understanding how emerging technologies can strengthen and shape democracy and how they can respond to rising violence and humanitarian crises.

Interested in learning more about tech ethics? Stanford students can apply for the Tech Ethics & Policy Fellowships. Round 2 applications are due February 1, 2024.

*Diana Acosta Navas is an assistant professor at the Quinlan School of Business, Loyola University Chicago. She is a past Embedded Ethics Fellow at the McCoy Family Center for Ethics in Society in partnership with the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and the Computer Science Department. 

**Ting-an Lin is an Interdisciplinary Ethics Fellow at the McCoy Family Center for Ethics in Society in partnership with the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

Makenzy Caldwell is a Research Associate in the Tech Ethics and Policy Rising Scholars Program at the McCoy Family Center for Ethics in Society. She works with the Embedded Ethics Program and the Ethics and Society Review. 

Benjamin Xie is an Embedded Ethics Fellow at the McCoy Family Center for Ethics in Society in partnership with the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and the Computer Science Department. He is a computing education researcher who uses human-computer interaction, psychometric, and social science methods to design critical and equitable interactions with data.

This blog post is adapted from Benjamin Xie’s article “Beyond Content Moderation” on Medium.