
Michael Swenson

Headshot of Michael Swenson in front of the Golden Gate Bridge with short, dark hair, glasses, and a red checkered shirt
Fellowship Year
2025

Bio

Michael R. Swenson serves as the Senior Community Manager for Strategic Response at Reddit. In this role, they support countless Reddit communities and moderator teams by developing the strategic response pillar and driving preventative crisis management initiatives.

Prior to Reddit, Michael was a Community Lead at the Integrity Institute and a Safety Policy Manager for Child Safety and Wellbeing at Meta. They also previously led a team of program managers at Discord, where they managed the moderator ecosystem: a collection of programs, resources, and initiatives focused on community moderators. This work bridged community moderators with product teams, academic researchers, and civil society organizations focused on fostering safety and belonging within online communities.

Michael holds an M.S. in Computer Science with a focus on human-computer interaction from the Georgia Institute of Technology, and a Master of Divinity with a focus on ethics and society from Duke University Divinity School. At these institutions, Michael delved into the intersections of ethics, technology, and their broader societal implications for online communities.

Fellowship Project

The internet is at a critical crossroads, where the increasing complexity of digital discourse, evolving regulatory landscapes, and persistent safety concerns place unprecedented demands on moderators across platforms. Content moderators — including those who volunteer as community moderators and administrators, and paid Trust & Safety professionals — are often the first line of defense against harmful content, harassment, and misinformation, and they face significant challenges. Central to this project is the growing tension between human-centered moderation and the rise of novel AI-based approaches to content moderation, which promise to help with the most difficult challenges moderators face but also raise near-existential questions about what the future of online moderation, and especially volunteer community-based moderation, will look like. Further exacerbating these challenges are a lack of standardized resources, inadequate professional and especially volunteer support, and limited collaboration across platforms. While individual companies and academic institutions have made strides in developing the Trust & Safety industry, the absence of centralized knowledge hubs and communities for volunteer contributors limits the scalability and effectiveness of moderation practices.

This project seeks to bridge that gap by fostering a dedicated community of practice — one where paid and unpaid moderators, researchers, civil society organizations, and potentially even tech companies can collaborate to develop and share critical resources, tools, and knowledge about how best to foster thriving digital spaces. The lack of structured support for moderators not only exacerbates issues like burnout but also impacts the safety and quality of online spaces, disproportionately affecting marginalized communities, who experience higher rates of online abuse. Without a shared repository of knowledge, each platform is left to reinvent solutions in silos, leading to inconsistent enforcement and opaque decision-making.

By exploring an open-access database of moderation resources, publishing policy briefs, and hosting expert-led discussions, this initiative aims to provide both immediate practical guidance and long-term systemic improvements. A thriving, self-sustaining community of practice will not only empower moderators with the tools they need but also drive meaningful policy and industry-wide shifts. Addressing this issue is essential for the broader tech ethics community, as the integrity of digital spaces directly impacts democracy, mental health, and online safety at scale.

This project seeks to address the fragmentation in online moderation by building a structured, collaborative community of practice where moderators across platforms can share knowledge, tools, and best practices. By developing a public repository of moderation resources — encompassing case studies, guides for handling common moderation scenarios, technical tutorials and auto-moderation tools, and expert-led discussions on advanced topics — this initiative will provide moderators with the guidance and infrastructure they currently lack. While previous efforts to support moderation, such as platform-specific training programs and industry roundtables, have been limited in scope or access, this project takes an open, cross-platform approach, ensuring that insights and best practices are widely available. Rather than focusing solely on reactive moderation tactics, this initiative will also emphasize proactive strategies, such as content governance frameworks and community-based approaches to safety, fostering more sustainable online spaces that can navigate an increasingly complex social landscape.

Timing is critical. As regulatory scrutiny on platform governance increases and trust in online spaces declines, there is an urgent need for more cohesive, transparent, and effective moderation strategies. Existing solutions — ranging from internal platform guidelines to ad-hoc academic research — have largely been siloed, leaving community and paid moderators to navigate challenges without a shared knowledge base. By integrating insights from academia, civil society, and industry, this project offers a scalable, inclusive, and dynamic approach to online safety. Success will be measured not only by the creation of tangible resources but also by the depth of engagement within the community, ensuring that moderators are not just recipients of knowledge but active contributors to the evolution of digital safety practices.