2022 Collaborative Research & Projects
Ethical, Responsible, and Human-Centered Data for Machine Learning Software Development
- HARI SUBRAMONYAM (EDUCATION; INSTITUTE FOR HUMAN-CENTERED AI)
- DIANA ACOSTA NAVAS (ETHICS IN SOCIETY)
- KATIE CREEL (ETHICS IN SOCIETY)
- NEEL GUHA (COMPUTER SCIENCE)
- MICHAEL BERNSTEIN (COMPUTER SCIENCE)
- MITCHELL STEVENS (EDUCATION)
Data is the backbone of machine learning applications (e.g., voice assistants, face ID, product recommendations). Yet, a fundamental challenge for software product teams is specifying ethical, representative, and human-centered data for machine learning. Consequently, end-users encounter frustrating experiences that range from systematic injustice to system failure.
Our goal is to establish best practices for specifying data requirements for machine learning applications. We propose conducting a series of design workshops with diverse stakeholders, including engineers, user researchers, end-users, domain experts, ethicists, and legal experts, to develop guidelines and processes for machine learning data. We expect the collaborative and participatory nature of our workshop to surface new ethical concerns and, in turn, broaden and enhance our understanding of the ethical dimensions of data specification, collection, and usage.
Insights from this work will inform our future research on developing collaboration support tools for data specifications and validation, applying our findings to machine learning and data science courses, and contributing to the practice of ethical machine learning software development.
Teaching Dark Patterns: A cross-disciplinary approach
- JENNIFER KING (INSTITUTE FOR HUMAN-CENTERED AI)
- LUCY BERNHOLZ (DIGITAL CIVIL SOCIETY LAB)
Teaching Dark Patterns will be the first university class to focus on the growing problem of dark patterns in everyday life. Dark patterns (manipulative and deceptive designs in digital systems) are found everywhere online and increasingly in our built environment. We will use the Dark Patterns Tip Line website to introduce students to the concept of dark patterns, ask them to identify and classify them, and produce proposed plans for technological, design, and/or public policy solutions to them.
The class will address the ethical implications of certain choices in design and business objectives, as well as teach students to be critical consumers of digital technologies.
We intend to build the Tip Line into a university-wide/cross-university hub for research and teaching. Regulatory interest in dark patterns is on the rise in the U.S. (FTC), Europe (German government Dark Pattern Decoder Project) and globally (New Web Foundation Tech Policy Lab). However, we have not found classes dedicated to Dark Patterns at any universities. While this class may be a first, we will work with colleagues at UCLA, UVA, and Harvard who are interested in developing curriculum, sharing pedagogical activities, and conducting research using the Tip Line.
CS+Social Good Fellowships
- OLIVIA YEE (COMPUTER SCIENCE, SYMBOLIC SYSTEMS)
- JESSICA YU (COMPUTER SCIENCE)
- ANDY JIN (COMPUTER SCIENCE)
CS+Social Good, founded in 2015 and now in its seventh year, is a student group on campus dedicated to maximizing the benefits of technology while minimizing its harms. The CS+Social Good Fellowship supports Stanford undergraduates in pursuing 9-week, full-time work experiences at organizations around the world that use technology to address social issues. Fellows gain firsthand experience working on social impact technology under the mentorship of industry experts, through which they discover impactful ways to leverage their technical skills in government, nonprofit, and other social-impact roles that serve as alternatives to traditional tech jobs. In partnership with the Haas Center, the CS+Social Good Fellowship provides stipends to cover students’ work expenses, and it has supported over 30 summer fellows in the past 5 years.
The fellowship program gives students the opportunity to grapple with the ethical implications of technology through direct work. This EST Hub Grant is supporting CS+Social Good to meet the incredibly high and increasing student demand for the fellowship program, as well as to increase programming for fellows to reflect on and meaningfully address the practical ethical challenges they encounter in their internship experiences.
Developing and refining new conceptualizations of algorithmic bias through cognitive research and curricular design
- VICTOR LEE (EDUCATION)
- PARTH SARIN (CENTER FOR EDUCATION POLICY ANALYSIS)
- TAMARA SOBOMEHIN (EDUCATION)
- DANIELA GANELIN (EDUCATION)
- VICTORIA DELANEY (EDUCATION)
- HARI SUBRAMONYAM (EDUCATION)
Awareness of “algorithmic bias” as a major ethical problem in contemporary data science and artificial intelligence is growing, but what it means for someone to conceptualize and understand the problems of “algorithmic bias” is unclear. We need both foundational research on how people recognize and explain socially problematic uses of big data and efforts to help people think about this challenge in ways that lead to solutions.
This project has dual aims. One is to conduct cognitive interview-based research with a mix of high school students and adults with different amounts of formal training in artificial intelligence, computing, and data science. Using these interviews, we will generate new research information about what people think is happening in situations that are commonly labeled as having problematic algorithmic bias.
Second, we will be developing and testing a concise curriculum and interactive tools primarily for high school-aged students to help improve understandings of algorithmic bias and how larger systems of data, algorithms, and social inequities are differently implicated. Through combined participation of practicing high school teachers, graduate and undergraduate students, education, psychology, and computer science researchers, the long-term goal is to advance “Artificial Intelligence Literacy” for all.
SSTAR:QUEST - Sustainable Space Travel And Research: Questioning and Understanding the Ethics of Space Technologies
- SONIA TRAVAGLINI (AERO & ASTRO ENGINEERING)
The SSTAR:QUEST (Sustainable Space Travel And Research: Questioning and Understanding the Ethics of Space Technologies) project will host a series of symposia to initiate conversations about the ethics of space technologies, space exploration, and the sustainable use of space resources. Each symposium will feature a panel of expert guest speakers considering key topics, followed by student-facilitated discussions.
The recent rapid expansion of space technology, including satellite constellation systems and privately funded commercial space missions, has increased the focus on emerging ethical issues generated by advances in space technologies. From the question of who is responsible for cleaning up space debris, to the challenges of equity in space exploration and habitation, a wide range of ethical and legal questions are emerging for engineers and a range of other disciplines to consider.
The SSTAR:QUEST project, organized by Dr. Sonia Travaglini (Department of Aeronautical and Astronautical Engineering), will create an interdisciplinary forum within Stanford for undergraduate students, graduate students, researchers, staff, and faculty in the Department of Aeronautical and Astronautical Engineering, the Law School, the Department of Archaeology, and various other disciplines, to spark and foster conversations on ethical topics connected to space, and to collaboratively explore the impact of technologies developed for extraterrestrial exploration.
Towards an Inclusive Machine Learning Model for Race Talk: Developing Computational Tools Informed by the Psychology of Race
- CINOO LEE (PSYCHOLOGY)
- ESIN DURMUS (COMPUTER SCIENCE)
Many online communication platforms (e.g., Facebook, Twitter) rely on machine learning algorithms to screen for and predict which content contains bias and incivility. Yet, current moderation algorithms have trouble reliably distinguishing between toxic race conversations and productive race conversations. For instance, posts from Black users who share their experiences of racism are often wrongly censored as hate speech. This biased moderation technology can lead to at-scale marginalization and exclusion of Black users’ voices and experiences, which causes psychological and economic harm and stifles important and productive race conversations that we need to have to better function as a diverse society.
We propose to develop a more inclusive and nuanced racism detection model, informed by the psychology of race and race relations. We will create a more informed taxonomy of race talk using real-world conversation data from social media. This taxonomy will include categories of constructive race talk (e.g., disclosing personal experiences of discrimination) and toxic race talk (e.g., racism) that are common on online platforms. Building this fine-grained taxonomy will help to develop more nuanced models and allow us to analyze the language of constructive vs. toxic race talk, which will be helpful to further inform future moderation work.
BMIR Multidisciplinary Workgroup: Ethical learnings from developing an EHR-based lung cancer database for predictive oncology
- MADELENA NG (BIOMEDICAL INFORMATICS)
- TINA HERNANDEZ-BOUSSARD (BIOMEDICAL INFORMATICS RESEARCH)
- ANNA DE HOND (BIOMEDICAL INFORMATICS RESEARCH)
- MARIEKE VAN BUCHEM (BIOMEDICAL INFORMATICS RESEARCH)
- VAIBHAVI BHAVIESH SHAH (MEDICINE)
- SEAN DOWLING (MEDICINE)
The Stanford Center for Biomedical Informatics Research (BMIR) Multidisciplinary Workgroup will bring together the diverse multidisciplinary community at BMIR to collectively grapple with ethical concerns embedded in patient data-dependent projects and promote a more standardized approach to model development across the AI lifecycle. We hope the workgroup may serve as a unifying space for synergistic learning, transparent discussion, and ad hoc collaboration for early stage AI researchers.
We propose leading the workgroup with real-time illustrative challenges encountered through the step-by-step development of an EHR-based lung cancer database for predictive oncology purposes. Ethical learnings from previously developed databases and frameworks at BMIR will also help guide our initial discussions. Through this approach, we hope to establish feedback loops across all BMIR projects to help expedite the identification of their unique ethical and societal implications on patient care. Furthermore, the workgroup also aims to equip the next generation of AI researchers to be proactive in the objective scrutiny of their data projects and the spearheading of ethical and trustworthy AI endeavors.
- DIANA ACOSTA NAVAS (ETHICS IN SOCIETY)
- TING-AN LIN (ETHICS IN SOCIETY)
- HENRIK KUGELBERG (ETHICS IN SOCIETY)
- LINDA EGGERT (OXFORD UNIVERSITY)
Questions over content moderation, its ability to prevent violence, and its impact on public speech are pressing. Recent experiences around the world evince how digital platforms may enable hate speech and misinformation to quickly propagate and escalate political and ethnic conflicts. Automated moderation promises to enhance platforms’ ability to detect toxic speech in a timely manner and remove it from public discussion. At the same time, there is growing concern regarding the effect of automated moderation on free speech and its enablement of censorship and surveillance. This discussion fits within a long-standing philosophical debate regarding the appropriate balance between protecting freedom of speech and safeguarding the public interest. However, this chapter of the debate raises new questions regarding the impact of automation in speech-related decisions; the legitimacy of corporate actors; and the distinctive strategies for moderation enabled by platform architecture. Addressing these questions requires an interdisciplinary dialogue that integrates technical, ethical, political, social, and global aspects.
This project aims to convene a group of experts from diverse fields to discuss the viability, potential impact, and ethical dimensions of employing different kinds of content moderation as tools for conflict amelioration.
Revisiting Nuclear Ethics
- SCOTT SAGAN (POLITICAL SCIENCE)
- HERB LIN (CENTER FOR INT’L SECURITY AND COOPERATION)
- NITISH VAIDYANATHAN (FSI - CISAC)
We plan to analyze emerging issues related to the ethics of nuclear weapons programs and potential use in an era in which lower-yield weapons and more accurate missiles are being deployed. There are two components to this: a workshop discussing nine article manuscripts written by current and recent Stanford scholars, and a public event, featuring former Secretary of Defense William Perry, to discuss the issues of nuclear ethics more generally. Joseph Nye will also deliver a keynote address on his book “Nuclear Ethics” to workshop participants.
The central aim of the workshop is to critique nine draft papers written by current and recent Stanford scholars with Stanford and outside discussants, bringing an interdisciplinary approach to the nuclear debate. The papers presented represent this methodological diversity. Three papers use public opinion surveys to generate insight into the drivers of citizens’ willingness to use nuclear weapons. Other papers examine ethical issues around nuclear weapons use, such as countervalue targeting, leadership targeting, the use of force to stop nuclear proliferation, and obstacles to the abolition of nuclear weapons. Finally, two papers examine emergent issues such as First Amendment concerns around nuclear weapons and the interaction between cyberattacks and nuclear escalation.
Scenes from the Anthropocene - A video documentary of diverse voices and views on conservation from across society
- TATIANA BELLAGIO (BIOLOGY)
- KRISTY MUALIM (GENETICS)
- AVERY HILL (BIOLOGY)
- LAUREN GILLESPIE (COMPUTER SCIENCE)
- MEGAN RUFFLEY (CARNEGIE INSTITUTE)
- SHANNON HATELEY (CARNEGIE INSTITUTE)
Preserving biodiversity and ensuring the sustainability of natural resources are critical 21st century challenges. While a number of scientific solutions and social movements address these problems, efforts remain siloed within disciplines. Cooperation between experts, frontline communities, and the public is essential for developing comprehensive strategies to address the loss of biodiversity and unsustainable practices. We believe that inclusive communication will create locally-adapted solutions to our increasingly complex and urgent land management needs.
To empower collaboration while giving voice to those most impacted by land and nature management decisions, we aim to host panel discussions at Stanford focused on local issues highlighting major aspects of conservation across the Bay Area. The panels will include academics working in conservation as well as community representatives, facilitating conversations that prioritize equity and diverse opinions. The discussions will be recorded and incorporated into an engaging video documentary. Film screenings will take place at Stanford and across the Bay Area, showcasing local conservation issues to the public. The panels and documentary will provide a template for future symposia and documentaries at other locales. Our fundamental goal is to provide a platform to educate and spark discussions surrounding the diverse voices and views on conservation.
Convening on purposeful entrepreneurship and ethical innovation for equity, health and sustainability
- NARGES BANIASADI (BIOENGINEERING, EMERGENCE)
- KARI HANSON (INSTITUTE FOR COMPUTATIONAL AND MATHEMATICAL ENGINEERING)
- ANDREA CARAFA (EMERGENCE)
Emergence hosts an annual convening in spring on purposeful entrepreneurship and ethical innovation for tackling global challenges in the areas of climate change, inequity and societal health. We are inviting faculty, researchers, and students who are working in these areas as well as ecosystem partners in the investment and business community to join our conversation. The convening will have guest speakers, discussion sessions, and brainstorming sessions aimed at creating new partnerships between the participants, thought leadership, and interdisciplinary collaboration.
By bringing together stakeholders and galvanizing collaboration, we intend to build momentum and grow a community of researchers, entrepreneurs, business leaders, investors, and cross-sector innovators to catalyze the emergence of new innovations and ventures driven by purpose and a sense of urgency in addressing the most pressing challenges that humanity is facing. Together, we can create more opportunities for such an ecosystem to grow from the fringes into the mainstream of science and technology entrepreneurship.
The Pursuit of Good Work in Tech: Youth Participatory Action Research with Computer Science & Engineering Students at Stanford
- ASHLEY LEE (POLITICAL SCIENCE)