From Algorithms to Accountability: How Veronica Rivera Makes Technology Safer for Everyone
Photography by Patrick Beaudouin
A high-profile investigation into Uber’s sexual assault crisis revealed how algorithmic choices can trigger profound real-world harm. Researcher Dr. Veronica Rivera studies those same risks, offering a framework for safer, more equitable technology built on transparency, collaboration, and ethical reflection.
Earlier this year, a New York Times investigation drew public attention to a troubling pattern inside Uber’s core matching system. The reporting outlined how rider-driver pairing created quiet channels for harm, often leaving survivors with limited recourse and little transparency around corporate response. Many proposed safety features remained sidelined for financial or legal reasons. Safety concerns rose while user (and worker) trust fell.
This very dilemma defines Dr. Veronica Rivera’s work. Rivera, a recent Embedded Ethics Postdoctoral Fellow (2023–2025) at Stanford’s McCoy Family Center for Ethics in Society, studies how digital platforms introduce strangers to one another, and how those algorithmic introductions affect people’s physical safety. She calls this phenomenon algorithmically mediated offline introductions: moments when digital systems shape how strangers meet in physical space. A ride-share match, a dating app encounter, or a delivery assignment can carry unseen risks, especially for women.
Rivera began with a desire to understand people rather than code alone. Her early training in computer science sparked curiosity about the lived conditions surrounding digital tools. That interest led her into human-computer interaction and eventually into research focused on harm, risk, and ethical responsibility in digital systems. “I wanted to keep doing computer science,” she said, “but I really wanted to focus on people and how computer science could help society.”
Uber’s case illustrates the stakes of her approach. When platform leaders treat safety as an add-on rather than a design requirement, users shoulder the burden. Survivors must document incidents, navigate reporting channels, and protect themselves through a patchwork of individual strategies. That pattern mirrors what Rivera’s research uncovers again and again. Digital systems often push the burden of safety onto users while platforms maintain control over the very conditions that create risk.
In recent months, Uber has taken steps to address some of these longstanding concerns. A new feature, piloted in San Francisco, allows women riders to request women drivers, a move widely seen as a response to criticism around rider safety. For Rivera, such tools are welcome but insufficient on their own. They reflect a familiar pattern: placing the burden of managing risk on individual users rather than addressing the broader structural conditions that create it.
From Code to Care: Rethinking Digital Safety
Rivera’s research advances the idea of post-digital safety, a framework that treats digital and physical harms as inseparable. She draws inspiration from earlier scholars who pushed for broader definitions of security beyond phishing, scams, or malware. Their work emphasized how technologies influence relationships, power dynamics, and personal safety, factors that traditional cybersecurity often overlooks.
Digital life and daily life, in Rivera’s view, are intertwined. She argues that platforms must understand how a notification, a match, a message, or a rating system influences in-person encounters. Digital systems rarely account for those dynamics, yet users navigate them constantly.
Her research on online dating and gig work shows how risk multiplies when platforms treat offline harm as outside their scope. Users often adopt protective behaviors—screening, self-disclosure, and careful documentation—out of necessity, not preference. According to Rivera, these actions reveal gaps in platform responsibility, not user failure.
To restore organizational responsibility, Rivera calls for a shift in engineering culture: honest reporting systems, clearer disclosures, and safeguards grounded in the experiences of those most affected. These include women, immigrants, and precarious workers.
Self-protection becomes a lot harder when platforms are not transparent about the risks that users might experience.
Designing for Tension, Not Perfection
A recurring theme in Rivera’s work involves the productive value of ethical tension. She studies how organizations balance conflicting priorities, and how engineers weigh values like transparency, privacy, autonomy, convenience, equity, and cost. Rivera rejects the idea of a perfect system, seeing ethical design as the result of navigating competing values and making compromises.
Gig work serves as a clear example. Her research on women gig workers reveals how platform systems downplay or ignore women’s experiences. Women drivers often avoid reporting harassment because they fear lower ratings or de-platforming, outcomes that threaten their income. Their silence reflects structural tension rather than personal choice, a predictable outcome of platform design.
Quiet Activism and the Ethics of Practice
Rivera uses the phrase quiet activism to describe how practitioners advocate for ethical choices from within organizations. While teaching ethics in a professional HCI program at UC Santa Cruz, she used this concept to help students recognize how their particular roles position them to raise ethical concerns in ways that are both effective and sustainable within institutional constraints. Her research with UX professionals suggests that improving user experience, communicating more effectively, and designing thoughtfully can serve as powerful entry points for ethical influence.
Quiet activism focuses on communication. It involves understanding the motivations of stakeholders, learning how to speak in terms that resonate across teams, and calling attention to risks that colleagues may overlook. As she explained, “Usually these ethical dilemmas arise from a mismatch of different goals and priorities that different stakeholders might have.” Education forms a large part of her ethics work, encouraging practitioners to advocate for user safety while navigating product timelines, limited resources, and organizational pressures.
Quiet activism means advocating for users and helping others in an organization understand the risks they might overlook.
Toward a Safer Digital Future
Rivera is optimistic about the future of ethics in technology. She takes inspiration from the growing number of students, designers, and engineers who want to advocate for user well-being. She also points to embedded ethics programs in universities, including Stanford’s own efforts, as signs of cultural change. These programs teach future technologists how to analyze harm, recognize tension, and design with care.
The Uber case demonstrates the consequences of treating safety as optional. Rivera’s work offers a different path. Her research encourages companies to acknowledge tension, empower users, and adopt transparent reporting practices.
In Rivera’s view, ethical technology grows not from isolated innovation, but from shared understanding and mutual accountability among technologists, users and their communities, and policymakers. Her collaborative approach invests in long-term partnerships with communities that are often excluded from technical decision-making. Rather than treating participants as data sources, Rivera centers their insights, concerns, and lived experiences.
Her vision calls for technology that supports people and locates responsibility within the systems that shape digital life, not just on individual users. Her work offers both a critique of current practices and a hopeful blueprint for what safer, more equitable technology can become.
Dr. Veronica Rivera is an incoming assistant professor at the Georgia Institute of Technology in the School of Cybersecurity and Privacy. She is currently a postdoctoral researcher at the Max Planck Institute for Security and Privacy. Her research is in human-centered security and privacy. She uses empirical and design methods to make digital technologies safer and more equitable for everyone. Previously, Dr. Rivera was an Embedded Ethics Postdoctoral Fellow (2023–2025) at Stanford University. During her time at Stanford, she also worked with the Empirical Security Research Group in the Computer Science Department. Dr. Rivera received her PhD in Computational Media from the University of California, Santa Cruz, and her bachelor's degree in Computer Science and Math from Harvey Mudd College, where she was a President’s Scholar.
The McCoy Family Center for Ethics in Society is committed to bringing ethical reflection to bear on important social issues through research, teaching, and community engagement. Drawing on the established strengths of Stanford’s faculty and students, the Center develops interdisciplinary ethics initiatives that relate to pressing public problems.