
Collaborative Research & Projects

ERB: Developing an Ethics Review Board for Artificial Intelligence

Project leads: Michael Bernstein (Computer Science), Margaret Levi (CASBS; Political Science), David Magnus (Medicine), Debra Satz (Philosophy)

 

Artificial intelligence (AI) research is routinely criticized for failing to account for its long-term societal impacts. We propose an Ethics Review Board (ERB): a feedback process that researchers must complete before receiving funding from a major on-campus AI institute. Researchers write an ERB statement detailing the potential long-term effects of their research on society, including inequality, privacy violations, loss of autonomy, and incursions on democratic decision-making, and propose how to mitigate or eliminate those effects. The ERB panel provides feedback and works iteratively with researchers until the proposal is approved and funding is released. Our goal is to scale the ERB so it can support a large volume of proposals.
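To make the review loop concrete, here is a minimal sketch, in Python, of the iterative cycle described above. The Proposal fields, the revision step, and the panel interface are our own illustrative assumptions, not the ERB's actual process or tooling.

from dataclasses import dataclass, field

@dataclass
class Proposal:
    # The ERB statement: anticipated societal effects plus proposed mitigations.
    erb_statement: str
    feedback: list = field(default_factory=list)
    approved: bool = False
    funded: bool = False

def review_cycle(proposal, panel_review):
    """Iterate until the panel approves; funding is released only afterward."""
    while not proposal.approved:
        ok, comments = panel_review(proposal.erb_statement)
        if ok:
            proposal.approved = True
        else:
            proposal.feedback.append(comments)
            # The researcher revises the statement in response to feedback.
            proposal.erb_statement += f" [revised: {comments}]"
    proposal.funded = True  # precondition met: funding released
    return proposal

# Example: a stub panel that requests one revision, then approves.
reviews = iter([(False, "address privacy risks"), (True, "")])
done = review_cycle(Proposal("Initial ERB statement."), lambda _: next(reviews))
print(done.approved, done.funded, done.feedback)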

Amplifying Linguistic Biases at Scale: The Effects of AI-Mediated Communication on Human Discourse

PI: Hannah Mieczkowski (PhD Candidate, Communication)

Collaborators: Jeffrey Hancock (Communication); James Landay (Computer Science)

 

Over the past few years, artificial intelligence has played an increasingly large role in how people communicate with one another. In some forms of communication, such as email or instant messaging, the technology acts only as a channel for messages. In contrast, AI-Mediated Communication (AI-MC) is interpersonal communication that is not simply transmitted by technology but modified, or even generated, by AI to achieve communication goals. AI-MC can take forms ranging from text response suggestions to deepfake videos. The influence of AI on human communication could extend far beyond the effects of a single conversation on any one platform, yet there has been minimal research on this topic to date.

Text-based AI, such as Google’s Smart Reply, tends to be trained on linguistic data from platform users whose demographics skew strongly toward white men of higher socioeconomic status. At best, the ubiquity of AI-generated language may be erasing crucial cultural variation in language; at worst, it may be privileging certain gendered and racialized forms of discourse, reinforcing systemic inequalities. We are interested in how these potential biases may affect the social dynamics of human communication at scale and over time.
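To make this concern concrete, the toy simulation below shows one way such homogenization could be measured: as a larger share of replies is drawn from a suggestion model trained on a skewed corpus, the variety (Shannon entropy) of the message stream shrinks. The corpora, the frequency-based stand-in for a suggestion model, and the acceptance rates are all illustrative assumptions, not the project’s data or methods.

import math
import random
from collections import Counter

def entropy(tokens):
    """Shannon entropy (bits) of the token distribution: lower = less varied."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy "training" corpus over-representing one group's usage patterns;
# the suggestion "model" is just its three most frequent words.
training = "thanks great sounds good thanks great perfect thanks".split()
suggestions = [w for w, _ in Counter(training).most_common(3)]

# Toy human messages with more cultural and lexical variation.
human = "cheers ta gracias merci obrigado thanks lovely brilliant".split()

random.seed(0)
for accept_rate in (0.0, 0.5, 0.9):
    # Each message is an accepted suggestion with probability accept_rate.
    stream = [random.choice(suggestions) if random.random() < accept_rate
              else random.choice(human) for _ in range(1000)]
    print(f"acceptance {accept_rate:.0%}: entropy = {entropy(stream):.2f} bits")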

Bio Futures Fellows

Project Lead: Megan Palmer (Bioengineering; Bio.Polis)

Collaborators: Carissa Carter (d.school), Paul Edwards (STS), Connor Hoffmann (FSI-CISAC), Peter Dykstra (Bioengineering), Drew Endy (Bioengineering)

Biology is a defining technology of the 21st century. Our abilities to alter, engineer, and even create entirely new living systems are rapidly maturing. To ensure these advances serve public interests, we need to equip leaders who can wisely guide innovation in both technology and public policy. Currently, graduate students and postdoctoral scholars have few opportunities to engage substantively with issues at the intersection of biotechnology, ethics, and public policy, despite being well positioned both to benefit from such engagement and to bring interdisciplinary interests together.
 
This grant supports a pilot cohort of graduate student and postdoctoral “Bio Futures Fellows” (BFFs). The part-time program aims to recognize emerging leaders from a range of disciplines and empower them to engage with these issues at Stanford and in the next stages of their careers. The fellows will work closely with Dr. Megan J. Palmer, Executive Director of Bio Policy & Leadership Initiatives, as well as faculty and staff in Bioengineering, the Program in Science, Technology and Society (STS), the d.school, and other units. They will co-design teaching, research, and engagement initiatives at the interface of biotechnology, ethics, and public policy. These initiatives aim to seed and inform future programs at Stanford, ensuring trainee interests are factored into their design from conception.

Can Affective Digital Defense Tools Be Used to Combat Polarization and the Spread of Misinformation Across the Globe?

Project Leads: Tiffany Hsu (Psychology), Jeanne Tsai (Psychology), Brian Knutson (Psychology) 

Collaborators: Mike Thelwall (Data Science, University of Wolverhampton), Jeff Hancock (Communication), Michael Bernstein (Computer Science), Johannes Eichstaedt (Psychology), Yu Niiya (Psychology, Hosei University), Yukiko Uchida (Psychology, Kyoto University)

Recent research demonstrates that affect and other emotional phenomena play a significant role in the spread of socially disruptive processes on social media around the world. For instance, expressions of anger, disgust, hate, and other “high arousal negative affect” have been associated with increased political polarization and the spread of fake news on U.S. social media. However, despite the clear link between specific types of affect and socially disruptive processes, governments and social media companies have not used this knowledge to defend users. Users are literally left to their own devices.

Our understanding of why specific types of affect are particularly viral in the U.S. and other countries is also limited. In the proposed work, we will develop an affectively oriented “digital self-defense” tool that consumers can use to reduce their exposure to specific types of affective content, and we will examine whether the tool reduces users’ political polarization and exposure to fake news. We will also test the tool’s effectiveness in different cultural contexts, starting with the U.S. and Japan, with the aim of developing culturally sensitive algorithms to combat political polarization and the spread of misinformation across the globe.
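As a rough illustration of what such a tool might involve, the sketch below filters a feed using a toy high-arousal negative-affect score; the lexicon, the scoring rule, and the threshold are placeholder assumptions, not the culturally sensitive classifiers the project aims to develop.

# Toy lexicon of high-arousal negative-affect words (illustrative only).
HIGH_AROUSAL_NEGATIVE = {"outrage", "disgusting", "hate", "furious", "traitor"}

def affect_score(post):
    """Fraction of tokens flagged as high-arousal negative affect."""
    tokens = post.lower().split()
    if not tokens:
        return 0.0
    return sum(t.strip(".,!?") in HIGH_AROUSAL_NEGATIVE for t in tokens) / len(tokens)

def filter_feed(posts, threshold=0.15):
    """Hide posts whose affect score exceeds the user's chosen threshold."""
    return [p for p in posts if affect_score(p) <= threshold]

feed = [
    "New paper out on cross-cultural emotion norms.",
    "This is disgusting. I hate everything about this outrage!",
    "Lovely weather in Kyoto today.",
]
for post in filter_feed(feed):
    print(post)

In practice, the project’s classifiers would need to be trained and validated separately for each cultural context, rather than relying on a fixed English lexicon as this sketch does.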