Outgoing Postdoc Spotlight: Kathleen Creel

While some debate whether AI will develop “consciousness” in the future, Kathleen Creel, Stanford’s first Embedded EthiCS fellow at the McCoy Family Center for Ethics in Society, is much more compelled by the ways that “machine learning can help humans to better understand ourselves and our world and therefore better arrange our social, epistemic, and scientific practices.” 

Creel has been interested in the connections between humans and computers since she double majored in philosophy and computer science as an undergraduate at Williams College. After graduation, she worked as a software engineer at MIT Lincoln Laboratory, where she enjoyed solving challenging new puzzles every day. But she longed to do “what philosophers do best … figure out how individual puzzles fit into a broader epistemic and normative context.” As she and her team developed next-generation satellite terminals, she wondered about the essential opacity of software systems “so large and complex that no one person could keep the whole codebase in their mind.” “What,” Creel asked herself, “would we need to know about such systems to feel that they were trustworthy and reliable? What about transparency is epistemically valuable?” When she went on to philosophy graduate school at Simon Fraser University (MA) and the University of Pittsburgh (PhD), these questions became foundational to her research and to her first publication, “Transparency in Complex Computational Systems,” in which she identifies three forms of knowledge about opaque computational systems.

Still invested in understanding complex software systems, Creel has made one of her primary projects as an Embedded EthiCS fellow, in partnership with the Institute for Human-Centered Artificial Intelligence (HAI), exploring “the role of homogenization and standardization that results from using the same or similar machine learning algorithms across a whole sector or domain of life.” For example, when companies use machine learning to screen resumes, systems typically use an algorithm to eliminate 90–95% of resumes before any human evaluates the job applications that remain. Creel surmises that “if such algorithms are making some systemic mistake, the same individual people could be rejected for many different jobs, compounding the group-level biases that we already know result in lower consideration rates, on average, for people from marginalized groups.” She has been collaborating across disciplines with computer scientists and a labor economist to ascertain whether this phenomenon is happening in real algorithmic hiring systems. And once Creel has demonstrated that it is, she “can conduct a careful philosophical analysis of why, and in what cases, this phenomenon is morally wrong and a concern for democracy.” 

In addition to pursuing her own research interests, Creel has been working directly, as an Embedded EthiCS fellow, with computer science faculty members to help them weave ethics into their core courses. Rather than relegating ethics to standalone classes that suggest it is ancillary to computer science, the goal is to design core courses that ask students to “think about the ethical considerations that come with each new learned technical capacity.” While Creel thinks this work is important to society, she also finds that students “really want to know how to do the right thing and how to avoid making mistakes or oversights they would regret.” 

Reflecting on her past two years at Stanford, Creel shares: “It's been a great joy straddling these two worlds of computer science and philosophy. And one of the fantastic things about joining the Center right out of grad school has been developing a clearer way to bring my two intellectual loves together: to figure out what philosophy can bring to computer science and how computer science can help us do better philosophical analysis.” This fall, Creel is excited to share these insights with her colleagues and students as an assistant professor at Northeastern University in the Khoury College of Computer Sciences and the Department of Philosophy and Religion.

Donna Hunter is a freelance writer, editor, and tutor living in San Francisco. She has a Ph.D. in English from UC Berkeley and was an Advanced Lecturer in Stanford’s Program in Writing and Rhetoric.