Artem Trotsyuk: AI, Health Data and Ethics in Medicine

Welcome back to our blog series featuring current and former Ethics Center Graduate Fellows! Our goal is to highlight the often under-recognized ethics-related work graduate students are doing across the university, whether it be through their research, advocacy, mentoring, or community building.

While completing his PhD in Bioengineering and his Master’s in Computer Science with a specialization in Artificial Intelligence, Artem Trotsyuk became increasingly interested in the ethical ramifications of using patients’ health data to advance science and create more personalized medicine at the cost of patient privacy. This interest led him to apply for a McCoy Family Center for Ethics in Society graduate fellowship in 2020. Inspired by his peers in the fellowship, he decided to pursue this area of research further with the Stanford Center for Biomedical Ethics (SCBE) after completing his graduate studies. As a postdoc at SCBE, Artem is exploring the role of AI in biomedical research, namely the dual-use implications of algorithm development: whether the data and/or algorithms could be used to cause harm.

The Center for Ethics in Society spoke with Artem about his research into the dual-use implications of AI algorithms and his work on creating guidelines to address them. An edited version of the conversation appears below.

What led you to research the intersection of AI, genomic data and ethics at Stanford, and how did the Ethics Center fellowship influence your thinking?

I began college as a premed student, but as a laboratory scientist, you're mostly thinking about how to get your project to work, your algorithm to function, and the data you’ll need to make all of that happen. That means you don’t have much time to take a step back and analyze what's going on more broadly. So I eventually came to realize that I was more interested in thinking about the broader implications of using medical research data and sourcing it ethically.

When I went to grad school, I was looking for an opportunity to expand on my undergrad interest in translational research — taking something from the laboratory bench and moving it to the bedside. This led to my PhD work on wearable electronics and smart bandages that would allow patients to see how their wounds were healing so they could receive appropriate treatment.

But in order to develop smart bandages, we needed to use patient health data to build algorithms, and this led me back to thinking about the ethical ramifications of using such data. In other words, where does the data behind these algorithms come from, and was it sourced ethically? Because these questions fall within the biomedical ethics space, I applied for a fellowship at the McCoy Family Center for Ethics in Society to study them in the company of scholars from a variety of disciplines. This community helped me think about the broader implications of patient data and privacy, specifically around my PhD work.

Now, I am building on this foundational work by studying the dual-use implications of AI algorithms in biomedical research.

What are dual-use implications?

Essentially, dual-use implications arise when research conducted to serve the public or private good can also be used to cause harm. For instance, imagine that someone is writing an algorithm to figure out how to develop a new drug. They’ll need to access a lot of data to do this, and we assume that they will actually use it to develop new drugs. But someone could potentially access the algorithm and the data on which it is based and use it to create the next COVID-19 virus. Unfortunately, not all researchers think about the potentially negative implications of their research.

To address this proactively, I am analyzing research pipelines to help researchers recognize how their algorithms could be used to produce negative outcomes and determine what they can do to prevent that from happening.

Have you and the team you’re working with figured out how to safeguard algorithms against such uses?

The short answer is no. This is a fairly new project and will probably take a few years to complete. The goal of the project is to produce guidelines that would be adopted by regulatory bodies and impact policy. Guidelines already exist in other spaces, such as gene editing, but I am focusing exclusively on AI algorithms within biomedical research.

Bodies such as the US National Research Council and the US National Institutes of Health (NIH) are addressing these issues, but they typically only make recommendations. I am trying to figure out how to get researchers to adhere to certain principles and codes of conduct that prevent harm while also allowing for robust research.

One possibility is for the FDA, for instance, to have a set of specific guidelines for AI algorithm approval that says, “If you want your AI algorithm approved, this is what you need to do in order to prevent XYZ from happening.” That could be a more directed way of getting people to proactively address certain unintended consequences of their work. I am also thinking about how to communicate this work to policymakers who can influence law, since laws are more enforceable than guidelines.

We look forward to bringing you more stories that focus on the vast array of ethics-focused research being done by grad students affiliated with the Center.

Donna Hunter is a freelance writer, editor, and tutor living in San Francisco. She has a Ph.D. in English from UC Berkeley and was an Advanced Lecturer in Stanford’s Program in Writing and Rhetoric.