Katherine Ortiz
Bio
Katherine Ortiz is currently a Technical Program Manager at a major technology company, working in cloud enterprise data governance and compliance. She is also a Major in the United States Army Reserve, where she has served as an Intelligence Officer and an Information Operations Planner for over 18 years. She has a BA in History from Barnard College and an MS in Computer Science from the Naval Postgraduate School in Monterey, California.
Katherine is thrilled to be a part of the inaugural Ethics and Technology Practitioner Fellowship. She is a proud alumna of the Stanford Online Ethics, Technology + Public Policy for Practitioners course, which kick-started her passion for grappling with and exploring new ways of interacting with technology as it continues to shape and transform our daily lives. She lives in Connecticut with her husband, two kids, two dogs, and two cats, and when she’s not working she enjoys adventuring with her family, eating good food, biking, and reading.
Fellowship Project
Current consent frameworks used in human-technology transactions are outdated, static, and lack meaning for end users. Examples of such transactions range from the mundane, like signing up for a new digital service or filing an insurance claim online, to more sensitive interactions, like speaking with a medical chatbot, an AI companion, or a doctor in an office equipped with data-collecting technology. Engaging in these transactions often leaves us feeling powerless to make decisions about their nature and terms.
In most cases, we are presented with lengthy terms-of-service notices that seemingly “inform” us of the terms of these transactions (and that often include problematic clauses on data sharing and use), with little recourse to voice an opinion or propose any counter-condition based on our expectations of the transaction and our personal values. Choices within these transactions are limited or non-existent, resulting in coerced “consent” flows designed in ways that undermine human decision-making, autonomy, and power in digital spaces.
The vast majority of existing consent frameworks are grounded in the theory of “informed consent”, a concept that originated in the medical field over a century ago, when technology did not play as dominant a role in our daily lives as it does today. Our lives are now saturated with technology, and it is rapidly taking on new dimensions in the form of virtual reality, agents, robotics, and more. Given this new reality, it is imperative that we seek out new ways to obtain consent in human and socio-technical interactions, ways that provide value and agency to the humans involved. If we don’t, we risk further erosion of human power in digital spaces and a perpetuation of transactions that do not prioritize human values.
This project seeks to explore alternative ways to think about human values and expectations within human-technical transactions and to develop new approaches to designing consent frameworks that prioritize the human(s) engaged in those transactions. To do this, we will conduct a comprehensive literature review of existing consent theory, its applications, limitations, and considerations within the digital space, and then apply this body of knowledge to a set of consent-related scenarios around which we will develop a series of questions. These questions will then be explored through a combination of individual and community discussions and workshops.
This project aims to kick-start a dialogue on technology consent frameworks based on human values and expectations rather than on non-negotiable informational terms. We do not anticipate fully solving the problem articulated above, but we do hope to provide valuable data for follow-on research, while also giving individuals and communities some tools to better advocate for their agency in certain human-technical interactions.
There is a great deal of prior work in consent studies from which we can draw inspiration as we conduct this project. The work of Helen Nissenbaum and her concept of “contextual integrity” is a foundation on which we will continue to build. This project differs in that it is conducted not from the domain vantage point of data privacy but from a normative context of human values and community expectations. Much of the work around consent has focused on how to improve upon the idea of “informed consent”, with the assumption that information and knowledge are the driving factors of a valuable consent framework. This project looks to challenge these assumptions through individual and community dialogue around specific scenarios, in order to gain a deeper understanding of what actually makes consent meaningful.