Past Collaborative Research & Projects
2022-23 Collaborative Research & Projects
Ethical, Responsible, and Human-Centered Data for Machine Learning Software Development
PROJECT LEAD: HARI SUBRAMONYAM (EDUCATION; INSTITUTE FOR HUMAN-CENTERED AI)
COLLABORATORS: DIANA ACOSTA NAVAS (ETHICS IN SOCIETY), KATIE CREEL (ETHICS IN SOCIETY), NEEL GUHA (COMPUTER SCIENCE), MICHAEL BERNSTEIN (COMPUTER SCIENCE), AND MITCHELL STEVENS (EDUCATION)
Data is the backbone of machine learning applications (e.g., voice assistants, face ID, product recommendations). Yet a fundamental challenge for software product teams is specifying ethical, representative, and human-centered data for machine learning. Consequently, end-users encounter frustrating experiences that range from systematic injustice to system failure.
Our goal is to establish best practices for specifying data requirements for machine learning applications. We propose conducting a series of design workshops with diverse stakeholders, including engineers, user researchers, end-users, domain experts, ethicists, and legal experts, to develop guidelines and processes for machine learning data. We expect the collaborative and participatory nature of our workshop to surface new ethical concerns and, in turn, broaden and enhance our understanding of the ethical dimensions of data specification, collection, and usage.
Insights from this work will inform our future research on developing collaboration support tools for data specifications and validation, applying our findings to machine learning and data science courses, and contributing to the practice of ethical machine learning software development.
Teaching Dark Patterns: A cross-disciplinary approach
PROJECT LEADS: JENNIFER KING (INSTITUTE FOR HUMAN-CENTERED AI) AND LUCY BERNHOLZ (DIGITAL CIVIL SOCIETY LAB)
Teaching Dark Patterns will be the first university class to focus on the growing problem of dark patterns in everyday life. Dark patterns (manipulative and deceptive designs in digital systems) are found everywhere online and increasingly in our built environment. We will use the Dark Patterns Tip Line website to introduce students to the concept of dark patterns, ask them to identify and classify them, and produce proposed plans for technological, design, and/or public policy solutions to them.
The class will address the ethical implications of certain choices in design and business objectives, as well as teach students to be critical consumers of digital technologies.
We intend to build the Tip Line into a university-wide/cross-university hub for research and teaching. Regulatory interest in dark patterns is on the rise in the U.S. (FTC), Europe (German government Dark Pattern Decoder Project) and globally (New Web Foundation Tech Policy Lab). However, we have not found classes dedicated to Dark Patterns at any universities. While this class may be a first, we will work with colleagues at UCLA, UVA, and Harvard who are interested in developing curriculum, sharing pedagogical activities, and conducting research using the Tip Line.
CS+Social Good Fellowships
PROJECT LEADS: OLIVIA YEE (COMPUTER SCIENCE, SYMBOLIC SYSTEMS), JESSICA YU (COMPUTER SCIENCE), AND ANDY JIN (COMPUTER SCIENCE)
CS+Social Good, founded in 2015 and now in its seventh year, is a student group on campus dedicated to maximizing the benefits of technology while minimizing its harms. The CS+Social Good Fellowship supports Stanford undergraduates in pursuing 9-week, full-time work experiences at organizations around the world that use technology to address social issues. Fellows gain firsthand experience working on social impact technology under the mentorship of industry experts, through which they discover impactful ways to leverage their technical skills in government, nonprofit, and other sectors that offer alternatives to traditional tech roles. In partnership with the Haas Center, the CS+Social Good Fellowship provides stipends to cover students’ work expenses, and it has supported over 30 summer fellows in the past 5 years.
The fellowship program gives students the opportunity to grapple with the ethical implications of technology through direct work. This EST Hub Grant is supporting CS+Social Good to meet the incredibly high and increasing student demand for the fellowship program, as well as to increase programming for fellows to reflect on and meaningfully address the practical ethical challenges they encounter in their internship experiences.
Developing and refining new conceptualizations of algorithmic bias through cognitive research and curricular design
PROJECT LEAD: VICTOR LEE (EDUCATION)
COLLABORATORS: PARTH SARIN (CENTER FOR EDUCATION POLICY ANALYSIS), TAMARA SOBOMEHIN (EDUCATION), DANIELA GANELIN (EDUCATION), VICTORIA DELANEY (EDUCATION), AND HARI SUBRAMONYAM (EDUCATION)
Awareness of “algorithmic bias” as a major ethical problem in contemporary data science and artificial intelligence is growing, but what it means for someone to conceptualize and understand the problems of “algorithmic bias” remains unclear. We need both foundational research on how people recognize and explain socially problematic uses of big data and ways to help people think about this challenge that lead to solutions.
This project has dual aims. One is to conduct cognitive interview-based research with a mix of high school students and adults with different amounts of formal training in artificial intelligence, computing, and data science. Using these interviews, we will generate new research information about what people think is happening in situations that are commonly labeled as having problematic algorithmic bias.
Second, we will develop and test a concise curriculum and interactive tools, primarily for high school-aged students, to help improve understanding of algorithmic bias and of how larger systems of data, algorithms, and social inequities are differently implicated. Through the combined participation of practicing high school teachers; graduate and undergraduate students; and education, psychology, and computer science researchers, the long-term goal is to advance “Artificial Intelligence Literacy” for all.
SSTAR:QUEST - Sustainable Space Travel And Research: Questioning and Understanding the Ethics of Space Technologies
PROJECT LEAD: SONIA TRAVAGLINI (AERO & ASTRO ENGINEERING)
The SSTAR:QUEST (Sustainable Space Travel And Research: Questioning and Understanding the Ethics of Space Technologies) project will host a series of symposia to initiate conversations about the ethics of space technologies, space exploration, and sustainable exploitation of space resources, featuring a panel of expert guest speakers considering key topics, followed by student-facilitated discussions.
The recent rapid expansion of space technology, including satellite constellation systems and privately funded commercial space missions, has increased the focus on emerging ethical issues generated by advances in space technologies. From the question of who is responsible for cleaning up space debris, to the challenges of equity in space exploration and habitation, a wide range of ethical and legal questions are emerging for engineers and a range of other disciplines to consider.
The SSTAR:QUEST project, organized by Dr. Sonia Travaglini (Department of Aeronautical and Astronautical Engineering), will create an interdisciplinary forum within Stanford for undergraduate students, graduate students, researchers, staff, and faculty in the Department of Aeronautical and Astronautical Engineering, the Law School, the Department of Archaeology, and various other disciplines, to spark and foster conversations on ethical topics connected to space, and to collaboratively explore the impact of technologies developed for extraterrestrial exploration.
Towards an Inclusive Machine Learning Model for Race Talk: Developing Computational Tools Informed by the Psychology of Race
PROJECT LEADS: CINOO LEE (PSYCHOLOGY) AND ESIN DURMUS (COMPUTER SCIENCE)
Many online communication platforms (e.g., Facebook, Twitter) rely on machine learning algorithms to screen for and predict which content contains bias and incivility. Yet, current moderation algorithms have trouble reliably distinguishing between toxic race conversations and productive race conversations. For instance, posts from Black users who share their experiences of racism are often wrongly censored as hate speech. This biased moderation technology can lead to at-scale marginalization and exclusion of Black users’ voices and experiences, which causes psychological and economic harm and stifles important and productive race conversations that we need to have to better function as a diverse society.
We propose to develop a more inclusive and nuanced racism detection model, informed by the psychology of race and race relations. We will create a more informed taxonomy of race talk using real-world conversation data from social media. This taxonomy will include categories of constructive race talk (e.g., disclosing personal experiences of discrimination) and toxic race talk (e.g., racism) that are common on online platforms. Building this fine-grained taxonomy will help to develop more nuanced models and allow us to analyze the language of constructive vs. toxic race talk, which will be helpful to further inform future moderation work.
BMIR Multidisciplinary Workgroup: Ethical learnings from developing an EHR-based lung cancer database for predictive oncology
PROJECT LEAD: MADELENA NG (BIOMEDICAL INFORMATICS)
COLLABORATORS: TINA HERNANDEZ-BOUSSARD (BIOMEDICAL INFORMATICS RESEARCH), ANNA DE HOND (BIOMEDICAL INFORMATICS RESEARCH), MARIEKE VAN BUCHEM (BIOMEDICAL INFORMATICS RESEARCH), VAIBHAVI BHAVIESH SHAH (MEDICINE), AND SEAN DOWLING (MEDICINE)
The Stanford Center for Biomedical Informatics Research (BMIR) Multidisciplinary Workgroup will bring together the diverse multidisciplinary community at BMIR to collectively grapple with ethical concerns embedded in patient data-dependent projects and promote a more standardized approach to model development across the AI lifecycle. We hope the workgroup may serve as a unifying space for synergistic learning, transparent discussion, and ad hoc collaboration for early-stage AI researchers.
We propose leading the workgroup with real-time illustrative challenges encountered through the step-by-step development of an EHR-based lung cancer database for predictive oncology purposes. Ethical learnings from previously developed databases and frameworks at BMIR will also help guide our initial discussions. Through this approach, we hope to establish feedback loops across all BMIR projects to help expedite the identification of their unique ethical and societal implications for patient care. Furthermore, the workgroup also aims to equip the next generation of AI researchers to be proactive in the objective scrutiny of their data projects and in spearheading ethical and trustworthy AI endeavors.
PROJECT LEAD: DIANA ACOSTA NAVAS (ETHICS IN SOCIETY)
COLLABORATORS: TING-AN LIN (ETHICS IN SOCIETY), HENRIK KUGELBERG (ETHICS IN SOCIETY), AND LINDA EGGERT (OXFORD UNIVERSITY)
Questions over content moderation, its ability to prevent violence, and its impact on public speech are pressing. Recent experiences around the world show how digital platforms can enable hate speech and misinformation to quickly propagate and escalate political and ethnic conflicts. Automated moderation promises to enhance platforms’ ability to detect toxic speech in a timely manner and remove it from public discussion. At the same time, there is growing concern regarding the effect of automated moderation on free speech and its enablement of censorship and surveillance. This discussion fits within a long-standing philosophical debate regarding the appropriate balance between protecting freedom of speech and safeguarding the public interest. However, this chapter of the debate raises new questions regarding the impact of automation in speech-related decisions; the legitimacy of corporate actors; and the distinctive strategies for moderation enabled by platform architecture. Addressing these questions requires an interdisciplinary dialogue that integrates technical, ethical, political, social, and global aspects.
This project aims to convene a group of experts from diverse fields to discuss the viability, potential impact, and ethical dimensions of employing different kinds of content moderation as tools for conflict amelioration.
Revisiting Nuclear Ethics
PROJECT LEADS: SCOTT SAGAN (POLITICAL SCIENCE), HERB LIN (CENTER FOR INT’L SECURITY AND COOPERATION), AND NITISH VAIDYANATHAN (FSI - CISAC)
We plan to analyze emerging issues related to the ethics of nuclear weapons programs and their potential use in an era in which lower-yield weapons and more accurate missiles are being deployed. There are two components to this: a workshop discussing nine article manuscripts written by current and recent Stanford scholars, and a public event, featuring former Secretary of Defense William Perry, to discuss the issues of nuclear ethics more generally. Joseph Nye will also deliver a keynote address on his book “Nuclear Ethics” to workshop participants.
The central aim of the workshop is to critique nine draft papers written by current and recent Stanford scholars with Stanford and outside discussants, bringing an interdisciplinary approach to the nuclear debate. The papers presented represent this methodological diversity. Three papers use public opinion surveys to generate insight into the drivers of citizens’ willingness to use nuclear weapons. Other papers examine ethical issues around nuclear weapons use, such as countervalue targeting, leadership targeting, the use of force to stop nuclear proliferation, and obstacles to the abolition of nuclear weapons. Finally, two papers examine emergent issues such as First Amendment concerns around nuclear weapons and the interaction between cyberattacks and nuclear escalation.
Scenes from the Anthropocene - A video documentary of diverse voices and views on conservation from across society
PROJECT LEADS: TATIANA BELLAGIO (BIOLOGY), KRISTY MUALIM (GENETICS), AVERY HILL (BIOLOGY), LAUREN GILLESPIE (COMPUTER SCIENCE), MEGAN RUFFLEY (CARNEGIE INSTITUTE), AND SHANNON HATELEY (CARNEGIE INSTITUTE)
Preserving biodiversity and ensuring the sustainability of natural resources are critical 21st century challenges. While a number of scientific solutions and social movements address these problems, efforts remain siloed within disciplines. Cooperation between experts, frontline communities, and the public is essential for developing comprehensive strategies to address the loss of biodiversity and unsustainable practices. We believe that inclusive communication will create locally-adapted solutions to our increasingly complex and urgent land management needs.
To empower collaboration while giving voice to those most impacted by land and nature management decisions, we aim to host panel discussions at Stanford focused on local issues highlighting major aspects of conservation across the Bay Area. The panels will include academics working in conservation as well as community representatives, facilitating conversations that prioritize equity and diverse opinions. The discussions will be recorded and incorporated into an engaging video documentary. Film screenings will take place at Stanford and across the Bay Area, showcasing local conservation issues to the public. The panels and documentary will provide a template for future symposia and documentaries at other locales. Our fundamental goal is to provide a platform to educate and spark discussions surrounding the diverse voices and views on conservation.
Convening on purposeful entrepreneurship and ethical innovation for equity, health and sustainability
PROJECT LEADS: NARGES BANIASADI (BIOENGINEERING, EMERGENCE), KARI HANSON (INSTITUTE FOR COMPUTATIONAL AND MATHEMATICAL ENGINEERING), AND ANDREA CARAFA (EMERGENCE)
Emergence hosts an annual convening in spring on purposeful entrepreneurship and ethical innovation for tackling global challenges in the areas of climate change, inequity and societal health. We are inviting faculty, researchers, and students who are working in these areas as well as ecosystem partners in the investment and business community to join our conversation. The convening will have guest speakers, discussion sessions, and brainstorming sessions aimed at creating new partnerships between the participants, thought leadership, and interdisciplinary collaboration.
By bringing together stakeholders and galvanizing collaboration, we intend to build momentum and grow a community of researchers, entrepreneurs, business leaders, investors, and cross-sector innovators to catalyze the emergence of new innovations and ventures driven by purpose and a sense of urgency in addressing the most pressing challenges that humanity is facing. Together, we can create more opportunities for such an ecosystem to grow from the fringes into the mainstream of science and technology entrepreneurship.
The Pursuit of Good Work in Tech: Youth Participatory Action Research with Computer Science & Engineering Students at Stanford
PROJECT LEAD: ASHLEY LEE (POLITICAL SCIENCE)
2021-22 Collaborative Research & Projects
Beyond Greenwashing: Rethinking the Role of the Fossil Fuel Industry for a Decarbonized World
PROJECT LEAD: KYLE DISSELKOEN (CHEMISTRY)
COLLABORATORS: SALLY BENSON (ENERGY RESOURCES ENGINEERING), MATT TIERNEY (ENERGY RESOURCES ENGINEERING), CHRISTOPHER CHIDSEY (CHEMISTRY), ADAM ZWEBER (PHILOSOPHY), SARAH HOLMES (CHEMISTRY), KEN SHOTTS (BUSINESS; POLITICAL SCIENCE), MELISSA ZHANG (BUSINESS), DAVID HARRISON (BUSINESS), JOSH PAYNE (BUSINESS), and HALEN MATTISON (MECHANICAL ENGINEERING)
While global efforts to reduce carbon emissions are imperative, the fossil fuel industry has contributed to decades of carbon build-up in the atmosphere and will continue to do so in the foreseeable future. Therefore, our project will organize and facilitate a summit that brings together leaders across the fossil fuel industry to engage in open and honest dialogue about the future of the industry, create a strategic blueprint, and commit to investments in technology with the potential to not just mitigate carbon emissions, but reverse them. Invited Stanford students will actively participate, preparing them for leadership positions where they will confront questions at the intersection of technology and ethics.
We believe that fossil fuel companies, by introducing carbon to the energy system, have both an ethical obligation and a unique position from which to tackle the challenge of re-capturing carbon at scale. Our project aims not just to advance our understanding of a many-faceted ethical problem, but also to provide a meaningful component of the solution. Combating climate change requires all of us, and we need to bring the fossil fuel industry into the fight.
A Design Approach to the Anticipation and Communication of Ethical Implications of Ubiquitous Computing
PROJECT LEADS: NAVA HAGHIGHI (COMPUTER SCIENCE), MATTHEW JÖRKE (COMPUTER SCIENCE)
COLLABORATORS: JAMES LANDAY (COMPUTER SCIENCE), CAMILLE UTTERBACK (ART AND ART HISTORY), AND JENNIFER KING (HAI PRIVACY AND DATA POLICY FELLOW)
Machine learning and pervasive sensing have vast potential for social good. However, as increasingly invisible sensors collect increasingly intimate data, analyzed using complex and uninterpretable algorithms, the risk of ethics and privacy violations is growing as rapidly as technological progress. Anticipation and communication of ethical threats of technology remain a significant challenge for both technologists and legal scholars.
We believe that threat identification should be anticipatory, involving the imagination of possible futures. We draw on speculative design methodologies (often used in the arts, science fiction, and design) as a framework for imagining possible threats before they arise. We also propose the concept of implication design, whereby products intuitively embed the ethical and privacy threats they contain in their design, instead of solely relying on privacy policies or legislation.
We will be conducting an interdisciplinary workshop to examine how our proposed methodologies can best be used for anticipating and communicating ethical implications of technology by practitioners. We will use the insights from this workshop to identify design patterns, develop new standards, and offer a set of reusable practices and methodologies as a means for conveying real-world consequences of emerging technology.
Bio Jam: Growing Community through Art, Culture, and BioMaking
PROJECT LEADS: CALLIE CHAPPELL (BIOLOGY), BRYAN BROWN (EDUCATION), MEGAN PALMER (BIOENGINEERING), ROSHNI PATEL (GENETICS), WING-SUM LAW (MECHANICAL ENGINEERING)
COLLABORATORS: RODOLFO DIRZO (BIOLOGY), BRIANA MARTIN-VILLA (BIOENGINEERING), JONATHAN HERNANDEZ, CAROLINE DAWS (BIOLOGY), KELLEY LANGHANS (BIOLOGY), JOSUÉ GIL-SILVA (MECHANICAL ENGINEERING), PALOMA VAZQUEZ JUAREZ (HUMAN BIOLOGY), PAGÉ GODDARD (GENETICS), MARCO PIZARRO (COMPUTER SCIENCE)
BioJam is a year-long academic program that engages high school students from underserved communities in the California Bay Area in bioengineering and human-centered design. Specifically, we collaborate with teens in the East Bay, San Jose, Salinas, and South Monterey County. The BioJam leadership team integrates Stanford undergraduates, PhD students, and professors with Bay Area educators and community organizations.
Our mission is to engage teens through their own creativity and culture in bioengineering/biomaterial design and create pathways for them to share their learning in their home communities. Our vision is to: (1) Nurture teen knowledge, confidence, and curiosity as they grow into science practitioners and educators, (2) provide a research and training opportunity for scientists, and (3) create accessible entry points for community engagement in biotechnology.
Our program starts with a 2-week synchronous summer camp and continues with academic-year programming in which teens develop community engagement activities based on what they learned during camp. Our focus includes grown biomaterials, biomaterial recipes, circuitry components, and live mycelium.
Stanford Consensus Conference on the Ethical Use of Social Media to Reduce COVID-19 Vaccine Disinformation
PROJECT LEADS: MICHAEL GISONDI (MEDICINE) AND RACHEL BARBER
COLLABORATORS: DANIEL CHAMBERS, JEFFREY HANCOCK (COMMUNICATION), JONATHAN BEREK (MEDICINE), MATTHEW STREHLOW (MEDICINE), SEEMA YASMIN (MEDICINE), TOUSSAINT NOTHIAS (DIGITAL CIVIL SOCIETY LAB)
Social media and other digital platforms are routinely leveraged to misinform the public about a range of social and political issues. In light of the COVID-19 public health crisis, there is an ethical imperative for social media companies to prevent the exploitation of their platforms in ways that further disinformation. We hypothesize that vaccine misinformation, accessed via social media and other digital platforms, will reduce public vaccine acceptance and vaccination rates. Social media platforms may be the most powerful tools we have for educating the public and improving vaccine acceptance.
We will host a consensus conference on the ethical obligations of social media companies to mitigate COVID-19 vaccine disinformation. The consensus conference will engage experts in the fields of biomedical ethics, public health, and cyber policy with representatives from social media companies, popular blog sites, and the public. We hope to establish (1) an ethical mandate to address vaccine disinformation, (2) best practices for conducting a vaccine safety campaign on social media, (3) improved public confidence in vaccine safety, and (4) a prioritized research agenda to sustain future work on this topic.
Student-Facilitated Ethics Training for Life Scientists
PROJECT LEADS: JOSH TYCKO (GENETICS), RACHEL UNGAR (GENETICS), SEDONA MURPHY (GENETICS), OLIVIA DE GOEDE (GENETICS), ROSHNI PATEL (GENETICS), EMILY GREENWALD (GENETICS)
The Genetics Advocacy Committee (TGAC) is a trainee-led organization of students, post-docs, and faculty collaborators that advocates for long-term improvements to our training program and organizes community solidarity actions. In a recent TGAC survey of roughly 100 past and present PhD students, 85% of trainees responded that there should be more ethics training. The existing offerings, while helpful, are not sufficient to cover the breadth of critical topics relevant to 21st century biosciences trainees.
We will meet the need for more ethics training for life scientists by 1) incorporating ethics into first-year training camp, 2) engaging our entire department in monthly facilitated conversations with experts in bioethics followed by small group breakout discussions, and 3) developing a student-facilitated course on ethics for life scientists. This 3-part approach creates engaging ethics training opportunities for everyone in our community, and will shift our culture towards the integration of life science ethics into the research environment itself. Importantly, the student-led model of TGAC prioritizes the requests of trainees, and is particularly concerned with equity. We will generate curricula, speaker lists, and actionable steps which student advocates in other departments could use to enhance their efforts, multiplying our impact on this common problem across training environments.
Technology and Racial Justice Graduate Fellowship Program
PROJECT LEAD: DANIEL MURRAY (CENTER FOR COMPARATIVE STUDIES IN RACE AND ETHNICITY)
COLLABORATORS: LUCY BERNHOLZ (CENTER ON PHILANTHROPY AND CIVIL SOCIETY), SHARAD GOEL (MANAGEMENT SCIENCE AND ENGINEERING), MICHELE ELAM (ENGLISH), MICHAEL BERNSTEIN (COMPUTER SCIENCE), IRENE LO (MANAGEMENT SCIENCE AND ENGINEERING), DAN JURAFSKY (LINGUISTICS), JENNIFER BRODY (THEATER AND PERFORMANCE STUDIES)
The Technology and Racial Justice Graduate Fellowship Program will catalyze cross-disciplinary research and interventions, while developing a pipeline of scholars engaged in critical issues at the intersection of technology and racial justice. The program will create an interdisciplinary space to workshop graduate student research while supporting public-facing collaborations that expand the understanding of racial justice and technology. This multiracial cohort of graduate fellows will participate in a bi-weekly workshop overseen by faculty from engineering, social sciences, humanities, and other fields.
The impacts of new technologies on racial justice range from system design (bias in datasets and models, the implications of tech workforce diversity) to application (surveillance, affect recognition, smart cities) to policy/politics (local ordinances, electoral politics, and social movements). Avoiding harm and advancing racial justice requires interdisciplinary, cross-sector research and interventions by those engaged in AI system design and those who research the power and politics that shape society.
In addition to shaping graduate student research, the program supports students to develop interventions such as workshops, tools and position papers designed to help academic researchers better understand the racial justice implications of their research, as well as products designed for non-academic audiences and marginalized communities.
The program is led by the Center for Comparative Studies in Race & Ethnicity, in partnership with Stanford Digital Civil Society Lab, Stanford HAI, and the School of Engineering Office of Diversity and Inclusion.
Stanford Public Interest Tech Student Leadership Committee
PROJECT LEADS: LESLIE GARVIN (HAAS CENTER FOR PUBLIC SERVICE), DANIEL MURRAY (CENTER FOR COMPARATIVE STUDIES IN RACE AND ETHNICITY)
COLLABORATORS: COLLIN ANTHONY CHEN (CENTER FOR ETHICS IN SOCIETY), ASHLYN JAEGER (ETHICS, SOCIETY, AND TECHNOLOGY HUB)
This grant will support the coordination and initiatives of the Stanford Public Interest Tech Student Leadership Committee (SLC), which is comprised of the leaders of PIT-related student organizations. Participating organizations have included Code the Change, CS + Social Good, Stanford Social Entrepreneurial Students' Association, the Stanford Society of Black Scientists and Engineers, The PIT Lab, and Women in Computer Science.
The purpose of the SLC is to collaborate to make PIT more visible and accessible to students in various fields of study; to enhance PIT student organization recruitment and member engagement; and, to promote and facilitate PIT research, internship, mentorship, and fellowship and career opportunities. A key deliverable of the SLC will be the completion of the Stanford PIT Guide. The Stanford PIT Guide is a response to growing interest in the general field of Public Interest Technology among Stanford students from all schools and majors, but a simultaneous lack of cohesion and student understanding of how to get involved in PIT. The PIT Guide will feature student organizations and their PIT-related initiatives, PIT research projects in departments and centers, PIT courses, and PIT-related internships and fellowships at Stanford.
Ethical Partnerships among Humans, Nature, and Machines
PROJECT LEADS: JAMES HOLLAND JONES (STANFORD EARTH), MARGARET LEVI (CASBS; POLITICAL SCIENCE), ZACHARY UGOLNIK (CASBS)
We are in a crisis of our own design. Our relationship with nature and machines is unsustainable. Greenhouse gas emissions—largely from power generation, transportation, agriculture, and industry—are changing the climate at an alarming rate. We witness these effects daily from melting ice caps to prolonged drought, deforestation, forest fires, plant and animal extinction, rising sea levels, worsening air quality, and an increased rate of emergence of novel infectious diseases. COVID-19, for example, is intensifying economic inequality at the same time as it increases our technological dependence. As automation increases its impact upon more sectors of society, we must ask how jobs—and the lives we build around them—will be transformed. How do we ensure sustainable jobs and how do we optimize our relationship with machines to best serve our values and the planet? If we are to survive, we need new relationships: an ethical partnership with nature, machines, and each other.
Great work is currently being done on the human-machine relationship and the human-nature relationship. But few efforts combine these sectors. This multi-year project fills that niche. This first phase facilitates interdisciplinary collaboration in the behavioral and social sciences, evolutionary biology, neuroscience, and artificial intelligence.
Precise AND Accurate - Learning to Support Individual Identity, Autonomy and Justice in Precision Medicine
PROJECT LEADS: CHRISTIAN ROSE (MEDICINE) AND JENNIFER NEWBERRY (MEDICINE)
Precision medicine relies on accurate data. In the face of growing hunger for data to feed our clinical models, issues of accessibility immediately permeate our data sets. But how do we think about people and their histories when we attempt to convert their spectrum of experiences and unique traits into quantifiable numbers? This ethical conundrum may have deeper, insidious effects due to how we value each other, implicitly or explicitly, as evidenced by how we record and interpret data. Precise models trained on data skewed by years of limited access to quality care or structural racism may find erroneous correlations or exacerbate disparities instead of mitigating them.
With the support of the EST Hub Seed Grant, we plan to bring together a diverse, interdisciplinary group of scholars and employ a three-step modified Delphi method to identify key ethical challenges in the use of big data for precision medicine in the emergency care setting. After identifying and coming to consensus on the most pressing challenges, we will then host a speaker series to delve further into possible solutions and foster ongoing discussion so that we can continue to provide the best possible patient-centered care in the digital age.
2020-21 Collaborative Research & Projects
ERB: Developing an Ethics Review Board for Artificial Intelligence
PROJECT LEADS: MICHAEL BERNSTEIN (COMPUTER SCIENCE), MARGARET LEVI (CASBS; POLITICAL SCIENCE), DAVID MAGNUS (MEDICINE), DEBRA SATZ (PHILOSOPHY)
Artificial intelligence (AI) research is routinely criticized for failing to account for its long-term societal impacts. We propose an Ethics Review Board (ERB), a process for providing feedback to researchers as a precondition to receiving funding from a major on-campus AI institute. Researchers write an ERB statement detailing the potential long-term effects of their research on society—including inequality, privacy violations, loss of autonomy, and incursions on democratic decision-making—and propose how to mitigate or eradicate such effects. The ERB panel provides feedback, iteratively working with researchers until the proposal is approved and funding is released. Our goal is to grow the ERB to help support a large number of proposals.
Amplifying Linguistic Biases at Scale: The Effects of AI-Mediated Communication on Human Discourse
PROJECT LEAD: HANNAH MIECZKOWSKI (PHD CANDIDATE, COMMUNICATION)
COLLABORATORS: JEFFREY HANCOCK (COMMUNICATION); JAMES LANDAY (COMPUTER SCIENCE)
Over the past few years, artificial intelligence has played an increasingly large role in how people communicate with each other. In some types of communication, such as email or instant messaging, the technology acts only as a channel for the messages. AI-Mediated Communication (AI-MC), by contrast, is interpersonal communication that is not simply transmitted by technology but modified or even generated by AI to achieve communication goals. AI-MC can take the form of anything from text response suggestions to deepfake videos. The influence of AI on human communication could extend far beyond the effects of a single conversation on any platform, yet there has been minimal research on this topic to date.
Text-based AI, such as Google's Smart Reply, tends to be trained on linguistic data from users of various platforms, whose demographics skew strongly towards white males of higher socioeconomic status. At best, the widespread use of AI-generated language may be erasing crucial cultural variations in language; at worst, it may be privileging certain gendered and racialized types of discourse, reinforcing systemic inequalities. We are interested in how these potential biases may affect the social dynamics of human communication at scale and over time.
Bio Futures Fellows
PROJECT LEAD: MEGAN PALMER (BIOENGINEERING; BIO.POLIS)
COLLABORATORS: CARISSA CARTER (D.SCHOOL), PAUL EDWARDS (STS), CONNOR HOFFMANN (FSI-CISAC), PETER DYKSTRA (BIOENGINEERING), DREW ENDY (BIOENGINEERING)
Biology is a defining technology of the 21st century. Our abilities to alter, engineer, and even create entirely new living systems are rapidly maturing. To ensure these advances serve public interests, we need to equip leaders who can wisely guide innovations in both technologies and public policies. Currently, graduate students and postdoctoral scholars have few opportunities to engage substantively with issues at the intersection of biotechnology, ethics, and public policy, despite being well positioned to benefit from and bring together interdisciplinary interests.
This grant supports a pilot cohort of graduate student and postdoctoral “Bio Futures Fellows” (BFFs). The part-time program aims to recognize young leaders from a diversity of disciplines and empower them to engage these issues at Stanford and in the next stages of their careers. The fellows will work closely with Dr. Megan J. Palmer, Executive Director of Bio Policy & Leadership Initiatives, as well as faculty and staff in Bioengineering, the Program in Science, Technology and Society (STS), the d.school, and other units. They will co-design teaching, research and engagement initiatives at the interface of biotechnology, ethics and public policy. These initiatives aim to seed and inform future programs at Stanford, ensuring trainee interests are factored into their design from conception.
Can Affective Digital Defense Tools Be Used to Combat Polarization and the Spread of Misinformation Across the Globe?
PROJECT LEADS: TIFFANY HSU (PSYCHOLOGY), JEANNE TSAI (PSYCHOLOGY), BRIAN KNUTSON (PSYCHOLOGY)
COLLABORATORS: MIKE THELWALL (DATA SCIENCE, UNIVERSITY OF WOLVERHAMPTON), JEFF HANCOCK (COMMUNICATION), MICHAEL BERNSTEIN (COMPUTER SCIENCE), JOHANNES EICHSTAEDT (PSYCHOLOGY), YU NIIYA (PSYCHOLOGY, HOSEI UNIVERSITY), YUKIKO UCHIDA (PSYCHOLOGY, KYOTO UNIVERSITY)
Recent research demonstrates that affect and other emotional phenomena play a significant role in the spread of socially disruptive processes on social media around the world. For instance, expressions of anger, disgust, hate, and other “high arousal negative affect” have been associated with increased political polarization and the spread of fake news on U.S. social media. However, despite the clear link between specific types of affect and socially disruptive processes, governments and social media companies have not used this knowledge to defend users. Users are literally left to their own devices.
Our understanding of why specific types of affect are particularly viral in the U.S. and other countries is also limited. In the proposed work, we will develop an affectively-oriented “digital self-defense” tool that consumers can use to reduce their exposure to specific types of affective content, and examine whether this tool reduces users’ political polarization and exposure to fake news. We will also test the effectiveness of this tool in different cultural contexts, starting with the U.S. and Japan, with the aim of developing culturally-sensitive algorithms that can be used to combat political polarization and the spread of misinformation across the globe.
Stanford Rewired
PROJECT LEAD: JASON ZHAO (PHILOSOPHY; COMPUTER SCIENCE)
COLLABORATORS: ENYA LU, RAYAN KRISHNAN, MATTHEW KATZ (COMPUTER SCIENCE), IRENE HAN (COMPUTER SCIENCE; ENGLISH), EMILY ZHONG (ENGINEERING-MASTERS), CRYSTAL NATTOO (ELECTRICAL ENGINEERING), BEN ESPOSITO (POLITICAL SCIENCE; PHILOSOPHY), ALESSANDRO VECCHIATO (POLITICAL SCIENCE; CYBER POLICY CENTER; STANFORD PACS), KAYLIE MINGS (PRODUCT DESIGN), CHRISTOPHER TAN (COMPUTER SCIENCE), MARK GARDINER (ANTHROPOLOGY; PWR)
Stanford Rewired is a digital magazine where technology and society meet. We publish themed issues quarterly, with articles, illustrations, and multimedia sourced from the Stanford community. Make sure to check out our first issue, Governance, at https://stanfordrewired.com/ and sign up for our email list to receive updates about events and open submissions!
As an organization, we're rewiring the conversation to create a central space for thoughtful, accessible, and community-driven discourse around tech. Our five core values are: (1) Emphasize human and societal impact, (2) Amplify marginalized groups and voices, (3) Bridge academic and practical disciplines, (4) Curate a diverse collection of individual perspectives, and (5) Produce accessible, just, and intentional narratives. In line with our values, contributors of all backgrounds are welcome; no journalism or writing experience is required. If you are interested in contributing, submit to us when our next round opens or reach out at hello[at]stanfordrewired[dot]com.
The PIT Lab at Stanford: Empowering the Next Generation of Thoughtful and Informed Public Interest Technologists
PROJECT LEADS: CONSTANZA HASSELMANN (SOCIOLOGY) AND NIK MARDA (POLITICAL SCIENCE; COMPUTER SCIENCE)
COLLABORATORS: JEFF ULLMAN (COMPUTER SCIENCE) AND MARGARET HAGAN (LAW; D.SCHOOL)
The Public Interest Technology Lab (PIT Lab) is a newly formed, student-facing organization that reflects on and advocates for a more thoughtful approach to the development and role of technology. They recognize that lines of code and chips of silicon are powering, and therefore shaping, systems and structures that affect real people. Hence, they believe Stanford students should not build technology in a vacuum, but rather should grapple with the broader political, economic, and social forces at play.
The PIT Lab envisions a Stanford community with the interdisciplinary and diverse perspectives needed to successfully build technology in the public interest. To contribute to this larger ecosystem, they host a broad range of discussions, classes, projects, research, and advocacy to help students explore public interest technology, grapple with its interpretations, and improve its implementation in practice. This includes projects around tech recruitment pipelines, research into racial justice in the tech ecosystem, and courses at the intersection of tech and policy. In turn, they hope to steer the development of technology towards the improvement of societal structures and systems.
Stanford Existential Risks Initiative
PROJECT LEADS: VINJAI VALE (MATHEMATICS & COMPUTER SCIENCE), AMY DUNPHY (ELECTRICAL ENGINEERING & HISTORY), KUHAN JEYAPRAGASAN (MATHEMATICAL AND COMPUTATIONAL SCIENCE)
COLLABORATORS: HENRY BRADLEY (PUBLIC POLICY), ZIXIAN MA (COMPUTER SCIENCE AND BIOLOGY), FELIPE CALERO FORERO (COMPUTER SCIENCE), JACK RYAN (MATHEMATICS), MAURICIO BAKER (POLITICAL SCIENCE), LUCAS SATO (MATHEMATICS, PHILOSOPHY, AND COMPUTER SCIENCE), JASPREET PANNU, (MEDICINE), HARSHU MUSUNURI (CHEMISTRY AND COMPUTER SCIENCE), SYDNEY VON ARX (COMPUTER SCIENCE), STEVE LUBY (PROFESSOR OF MEDICINE AND HEALTH RESEARCH AND POLICY; FSI), PAUL EDWARDS (CENTER FOR INTERNATIONAL SECURITY AND COOPERATION; SCIENCE, TECHNOLOGY & SOCIETY)
The Stanford Existential Risks Initiative (SERI), founded in January 2020, is a collaboration between faculty and students dedicated to mitigating existential risks. Existential risks are those that could threaten the long-term potential of humanity, ranging from risks that lock in suboptimal trajectories, in which ideal futures are never realized, to risks that cause the extinction of the human species. Among the most pressing are risks from transformative AI, biosecurity and pandemics, nuclear weapons, and extreme climate change.
SERI focuses its efforts in three areas: student career development, longtermism and existential risk advocacy, and faculty engagement. In summer 2020 we held our inaugural summer undergraduate research fellowship, through which a cohort of twenty Stanford undergraduates carried out research projects related to existential risk. We look forward to funding a second undergraduate research fellowship this winter, and to hosting a conference in the spring aimed at bringing together people focused on existential risk. In the long run, we hope to initiate discussions about the importance of impact-oriented research addressing the most pressing issues, and to instigate a shift in the priorities of the Stanford community towards existential risk-focused research and careers.
CS+Social Good Fellowships
PROJECT LEADS: JULIA MELTZER (ETHICS IN SOCIETY; SYMBOLIC SYSTEMS), JESSICA YU, SASANKH MUNUKUTLA (COMPUTER SCIENCE), STONE YANG (ECONOMICS), ANDY JIN (COMPUTER SCIENCE)
COLLABORATORS: VALERIE CHOW (HAAS CENTER FOR PUBLIC SERVICE)
CS+Social Good, founded in 2015 and now in its sixth year, is a student group on campus dedicated to maximizing the benefits of technology while minimizing its harms. The CS+Social Good Fellowship supports Stanford undergraduates in pursuing 9-week, full-time work experiences at organizations around the world that use technology to address social issues. Fellows gain firsthand experience working on social impact technology under the mentorship of industry experts, and discover impactful ways to leverage their technical skills in government, nonprofit, and other sectors that serve as alternatives to traditional tech roles. In partnership with the Haas Center, the CS+Social Good Fellowship provides stipends to cover students’ work expenses, and it has supported more than 24 summer fellows in the past 4 years.
The fellowship program gives students the opportunity to grapple with the ethical implications of technology through direct work. This EST Hub Grant is supporting CS+Social Good in meeting high and growing student demand for the fellowship program, as well as in expanding programming that helps fellows reflect on and meaningfully address the practical ethical challenges they encounter in their internship experiences.
Opening the Loop and the AI from Above
PROJECT LEAD: MUHAMMAD KHATTAK (ECONOMICS; PHILOSOPHY; COMPUTER SCIENCE)
COLLABORATORS: JOE KHOURY (CALIFORNIA INSTITUTE FOR THE ARTS), MICHELLE ELAM (HUMANITIES; HAI), ROB REICH (POLITICAL SCIENCE; HAI), RUSSELL BERMAN (HUMANITIES; COMPARATIVE LITERATURE), RUTH STARKMAN (WRITING & RHETORIC)
Opening the Loop is the tentative name for a documentary film being directed and produced by Muhammad Khattak (Stanford) and Joe Khoury (California Institute of the Arts). The film critically interrogates how broader issues of power, culture, and ethical priorities shape current developments in artificial intelligence. Focusing primarily on AI’s regulatory uses, it challenges the predominant attitude that technology is merely a neutral tool and examines AI’s gradual expansion into the public realm. This is part of a broader project to promote non-technical engagement with AI.
In conjunction with the film, Joe and Muhammad are collaborating with Stanford artists and educators to organize an AI art exhibit and a high school outreach program on the societal implications of AI. The latter introduces ethical questions about AI into secondary education, while the former promotes unique engagement with current technological changes. Together, these three projects foreground experiences that are often marginalized in popular representations of AI and endorse more open-ended, philosophical, and artistic ways of describing AI in current conversations.
The Stanford Tech History Project
PROJECT LEADS: JULIA INGRAM (ENGLISH), NIK MARDA (POLITICAL SCIENCE; COMPUTER SCIENCE)
The Stanford Tech History Project seeks to document how Stanford’s tech ecosystem has changed over the last decade. The project will crowdsource contributions from students with expertise in ten domains essential to the tech ecosystem: administration, culture, curricula, diversity, entrepreneurship, ethics, external relationships, funding, recruitment, and research. The students’ deep familiarity with the communities and topics they are studying will provide a detailed, nuanced perspective inaccessible to outside researchers and journalists. Contributors will analyze historical data and conduct interviews with alumni and faculty, culminating in a final report to be published in Spring 2021.
Ultimately, we aim to conduct the first comprehensive analysis of trends within Stanford’s tech ecosystem over the last ten years, while drawing conclusions about Stanford’s values, priorities, attitudes, and role in the broader tech ecosystem and society at large. The final report will propose recommendations for University decision-makers with an eye toward maintaining Stanford’s status as a top innovation and engineering hub, increasing diversity and inclusion, balancing the conflicting interests of external stakeholders, and creating more technology with ethics and public interest in mind.
Crossing Narratives: Tech and Freedom in a Pandemic-Torn World
PROJECT LEAD: ARIANNA TOGELANG (ECONOMICS)
COLLABORATORS: MICHELLE LY (ETHICS IN SOCIETY; SYMBOLIC SYSTEMS), ALISHA ZHAO (HISTORY; POLITICAL SCIENCE; INT'L COMPARATIVE AND AREA STUDIES), CHARLES EESLEY (MS&E)
At a time when our nation's technologization and polarization are both rapidly accelerating, our project will explore and address the gaps that remain in students' understanding of free speech across interdisciplinary fields.
Adopting legal, technological, and ethical lenses, the project will compile key insights from individuals on campus and from community leaders working directly on these issues in the government, nonprofit, and private sectors, bringing those voices to the forefront for a disconnected student body. We hope to gain insight into questions such as: How do experiences with speech, freedom, and equity, as they pertain to technology, differ across student and expert perspectives? How does the growing role of Big Tech affect our different communities from human rights, economic, and legal standpoints? How does all this directly affect our students, and how can we best mobilize our student body to begin working on these issues in their day-to-day lives?
The project will serve as a multimedia platform to collect, analyze, and disseminate crucial information on the impact of evolving technology on communities across the nation. We will work closely with students across all disciplines to delve into formerly unexplored gaps in how we think about these issues, integrating research-based information and interviews with community leaders, and ultimately building a narrative of today’s freedom of speech that is comprehensive, powerful, and constantly evolving.
2020-21 Covid Rapid Response Grants
Exploring the COVID Journey through Photovoice: Experiences Across the Socioeconomic Spectrum
PROJECT LEAD: MALATHI SRINIVASAN (MEDICINE)
COLLABORATORS: CATHERINE JOHNSON (MEDICINE), LATHA PALANIAPPAN (MEDICINE), CHRISTOPHER SHARP (MEDICINE), STACIE VILENDRER (MEDICINE), KENJI TAYLOR (MEDICINE), JONATHAN SHAW (MEDICINE), MAJA ARTANDI (MEDICINE), CARLA PUGH (MEDICINE), SONOO THADANEY (MEDICINE)
The 2020 COVID-19 pandemic has unveiled the enormous disparities in disease exposure and care received by vulnerable communities. We want to understand barriers to high quality healthcare, and improve health equity for our vulnerable populations.
Through this grant, we will conduct a photovoice study to understand the healthcare journeys of 24 Stanford COVID-19 patients, including people who are medically vulnerable (safety net insurance) and well resourced (private insurance), across 4 ethnic groups (African American, Latinx, Asian, and non-Hispanic White). Photovoice qualitative research invites participants to take photos of objects, places, and situations in their lives based on guiding questions – in this case, about their pre-COVID, COVID, and post-COVID lives, and their hopes for the future. We then discuss COVID-19 experiences and insights with participants based on their photos, focusing on the National Academy of Medicine “Quintuple Aim”, which prioritizes equity and inclusion. Based on our findings, we will then convene a national conference to identify relevant, reportable Equity Metrics for healthcare organizations. We will also use study results to improve equity in our Virtual Health/telemedicine program. Finally, we will create a website, “The COVID Journey”, to share participants’ stories.
StageCast: A Digital Theater Tool
PROJECT LEAD: MICHAEL RAU (THEATER AND PERFORMANCE STUDIES)
COLLABORATORS: TSACHY WEISSMAN (ELECTRICAL ENGINEERING); KEITH WINSTEIN (COMPUTER SCIENCE); DUSTIN SCHROEDER (GEOPHYSICS)
Due to the recent coronavirus crisis, nearly all theaters across the US have been shuttered, and the artists who work on these stages have been laid off. To help these artists continue to make theatrical performances, an interdisciplinary team of Stanford faculty, graduate students, and undergraduates is collaborating to develop a series of interlinked technologies that allow theater artists to create performances online. These range from improved audio and video streaming, to new tools for live video switching and editing, to applications involving machine learning and computer vision. We will use this funding to develop prototypes of these new technologies, as well as to hold analysis sessions and seminars with Stanford faculty and industry professionals to evaluate and discuss the ways in which this technology could affect the theater industry.
Real-time COVID-19 Education and Preparing for Future Crises Education
PROJECT LEADS: JASSI PANNU (MEDICINE), RISHI MEDIRATTA (PEDIATRICS), KARA BROWER (BIOENGINEERING), ALANA O'MARA (MEDICINE), KIMBERLY DEBRULER (MEDICINE)
COLLABORATORS: YVONNE (BONNIE) MALDONADO (PEDIATRICS; HEALTH RESEARCH AND POLICY), DAVID RELMAN (MEDICINE; MICROBIOLOGY; IMMUNOLOGY), PETE KLENOW (ECONOMICS), DAVID LEWIS (PEDIATRICS; IMMUNOLOGY), SEEMA YASMIN (MEDICINE)
The COVID-19 Elective was opened to all Stanford students in Spring 2020, arranged in less than 10 days after Stanford students were notified that all in-person classes were cancelled. Hundreds of undergraduate and graduate students enrolled, demonstrating their intense desire to understand the pandemic, get information from trustworthy sources, and engage with other students during this tumultuous time of self-isolation. The course featured lectures from experts in the field, as well as from Stanford students actively engaged in COVID-19 work. It was conducted via Zoom and Slack, with live expert Q&A and student discussion sections.
This year, the COVID-19 Elective (PEDS 220) will return with lectures focusing on the societal and technological aspects of COVID-19. Vaccine developments, racial disparities, mental health, and the economic effects of the pandemic on low- and middle-income countries are just a few of the topics that will be covered. Seed funding has allowed us to expand this course to an optional 2-credit version that includes student projects. With these projects, we aim to combat isolation and COVID-19 fatigue, and to empower our students to continue taking action on this ongoing crisis.
Zoom Fatigue: Understanding the Effects of Videoconferencing on Well-Being
PROJECT LEAD: JEFFREY HANCOCK (COMMUNICATION)
COLLABORATORS: JEREMY BAILENSON (COMMUNICATION), MUFAN LUO (COMMUNICATION), GERALDINE FAUVILLE (COMMUNICATION), ANNA QUEIROZ (EDUCATION)
Given that society will continue to rely on videoconferencing technology for “distant socializing” during the COVID-19 pandemic, a core issue is to understand the effects of videoconferencing at this scale on human society. Using survey methods, our project will address two inter-related but distinct questions: 1) How does videoconferencing affect psychological well-being over the medium and long term? and 2) How and why does videoconferencing lead to exhaustion?
Our findings will shed light on the longitudinal effects of distant socializing on loneliness and other well-being outcomes during the COVID-19 pandemic, and will examine the nature of the 'Zoom fatigue' so often reported in the media. The project will illuminate the societal impact of the worldwide shift to videoconferencing for social interaction, in terms of both the immediate effects of exhaustion on mental health and the long-term effects on psychological well-being. While filling this gap in our knowledge is urgent, our ultimate goal is to create new guidelines and best practices for how families, businesses, students, and teachers can use videoconferencing in ways that enhance well-being and minimize the possible risks, such as exhaustion, that come with the large-scale shift to videoconferencing.