Tech Ethics & Policy Summer Fellowships

Sponsored by the Ethics, Society, & Technology Initiatives and the Stanford Institute for Human-Centered Artificial Intelligence (HAI)

This page is for undergraduates only. If you are looking for graduate student summer opportunities, please visit the Tech Ethics and Policy Summer Fellowships for Graduate Students page. Coterms are eligible to apply to both opportunities.

Tech Ethics and Policy Fellowship for Undergrads

Through the Tech Ethics & Policy Fellows Program, Stanford undergraduate students have the opportunity to engage in the technology field as it intersects with public policy and social impact. The program runs from April to October and includes a course on ethics, technology, and public policy, as well as a paid summer opportunity at a technology company, civil society organization, federal agency, or Congressional office. Fellows will also be matched with mentors from the technology, ethics, and public policy fields.

Applications for Summer 2024 are now closed. Check back in Fall 2024 for the next round of applications.

Throughout the program, students will learn directly from faculty and an array of affiliated mentors with long experience in the tech industry, civil society, and government. 

Please review the position descriptions from previous years for a sense of potential placement opportunities. 

Fellows will interview for summer internship placements during Winter Quarter. Each Fellow who matches with an unpaid internship placement receives a base stipend of $7,500 to cover most of the essential costs associated with an unpaid service experience. Up to $2,000 in financial aid and supplemental funding are available to students who qualify. Fellows who match with paid internship placements may also be eligible for supplemental funding. 

These fellowships are made possible in part by Frank McCourt in association with Stanford’s partnership with the Project Liberty Institute.

Eligibility and Requirements


Undergraduates from all academic disciplines are encouraged to apply, and applicants may vary in academic interests, technical experience, and policy knowledge. 


Selected fellows are expected to begin service following the completion of spring quarter classes and no later than July 1, 2024. All fellows are required to work with their host organizations at least 35 hours/week for nine consecutive weeks. Fellows are expected to work on-site at their host organizations when possible, but hybrid or fully virtual experiences are also available. Fellows must have a designated full-time professional staff member on-site as their supervisor/mentor.

Other commitments include the following:

Spring Quarter

  • Enroll in Spring Fellowship Course: ETHICSOC 85/ ETHICSOC 285 Introduction to Tech Ethics and Policy Career Pathways (Wednesdays 1:30 PM-3:20 PM)
  • We also strongly recommend students enroll in (or have previously taken) one of the following courses:
    • CS 182: Ethics, Public Policy, and Technological Change (COMM 180, ETHICSOC 182, PHIL 82, POLISCI 182, PUBLPOL 182)
    • CS 152: Trust and Safety Engineering
    • COMM 230A: Digital Civil Society (AFRICAAM 230A, CSRE 230A)
    • AFRICAAM 151: Ethical STEM: Race, Justice, and Embodied Practice (ARTSINST 151C, CSRE 151C, ETHICSOC 151C, STS 51D, SYMSYS 151D, TAPS 151D)
    • COMM 154: The Politics of Algorithms (COMM 254, CSRE 154T, SOC 154, SOC 254C)
    • DESIGN 283Q: Tinkering with Inequity in Emerging Tech
  • Fellows must meet with their assigned TEP mentor at least once during Spring Quarter

Summer

  • Share a learning plan with their site supervisor and update accordingly.
  • Arrange a visit for other Fellows with their site and with their mentor/supervisor, if possible.
  • Meet with their TEP mentor at least once.
  • Attend summer workshop(s).
  • Submit a final report, complete a program evaluation, and correspond with fellowship donor(s) as requested by fellowships program staff.

Fall Quarter

  • Participate in the Fall closing reception. 
  • Participate in outreach activities to share experiences and help publicize the program.

Selection Process

This fellowship is intended for individuals whose application and interview demonstrate

  • an integration of the fellowship experience with applicant’s academic, personal and/or career goals
  • prior demonstrated interest or involvement in the areas of ethics, technology, public policy and/or social impact, including related coursework
  • strong potential for the fellowship experience to enlarge a candidate’s understanding of an identified issue or challenge in the field of ethics, technology, and public policy

Round 1 applications:

  • Due on Friday, December 1 at 11:59pm PT. Students who apply in this round will have the best chance at securing their preferred placements. Once positions are filled during Round 1, they will not be available during Round 2. Interviews for finalists will take place the week of January 8 for early applicants.

Round 2 applications:

  • Due on Thursday, February 1 at 11:59pm PT. Interviews for finalists will take place the week of February 19 for Round 2 applicants.

Complete applications are screened, finalists interviewed, and fellows selected by the host organizations' staff, with the intention of awarding fellowships in mid-January for Round 1 applicants and at the end of February for Round 2 applicants. If all fellowship spots are filled after Round 1, applications will no longer be accepted; in that case, we will update the SOLO, HAI, and EST Initiatives websites to reflect the change. Students are strongly encouraged to apply early.

Applicants should respond promptly (within 48 hours) via email to a fellowship offer, or the offer will be rescinded. Once an applicant accepts a fellowship offer, the student should promptly notify all other Stanford and non-Stanford programs to which they have applied that they have accepted another offer and to withdraw their candidacy.


November 1  Round 1 Applications Open

December 1  Round 1 Applications Due

January 8-12  Round 1 Finalists Interviews

January 15  Round 1 Fellows Selected

January 17 Round 2 Applications Open

February 1   Round 2 Applications Due

February 12-16   Round 2 Finalists Interviews

February 19  Round 2 Fellows Selected 

April 3-June 14  Spring quarter course

June 17  Summer Internships Begin

Advising is available at the Ethics Center to help students develop their applications.

For questions, please contact estinitiatives[at]stanford[dot]edu.

Tech Ethics and Policy Fellowships for Graduate Students

If you are looking for graduate student summer opportunities, please visit the Tech Ethics and Policy Summer Fellowships for Graduate Students page. See the comparative chart below for differences between the two programs. Coterms are eligible to apply to both opportunities.

Comparative Chart

Number of Fellows

  • Graduate and Coterm Students: 10-15
  • Undergraduate and Coterm Students: 20-25

Eligibility

  • Graduate and Coterm Students: those who have completed at least one computer science or symbolic systems course (with a preference for multiple courses), or multiple courses in the School of Engineering.
  • Undergraduate and Coterm Students: students from all academic disciplines are encouraged to apply, and applicants may vary in academic interests, public service involvement, and experience.

Job Placement (Summer 2024)

  • Graduate and Coterm Students: federal government agencies, U.S. Congressional offices, think tanks, and civil society organizations in DC
  • Undergraduate and Coterm Students: technology companies, non-profit organizations, non-governmental organizations, or public agencies

Compensation / Housing

  • Graduate and Coterm Students: a base stipend of $10,000 for all fellows, in addition to roundtrip airfare to DC and housing in DC
  • Undergraduate and Coterm Students: a base stipend of $7,500 to cover most of the essential costs associated with an unpaid service experience; up to $2,000 in financial aid and supplemental funding are available to students who qualify

Location

  • Graduate and Coterm Students: Washington, D.C.
  • Undergraduate and Coterm Students: in-person, remote, or hybrid

Tech Ethics & Policy Mentors

Samidh Chakrabarti

Samidh has spent his career at the intersection of technology and social impact. He most recently founded and led the Civic Integrity team at Facebook (now Meta), where he was responsible for keeping its community of 3 billion people safe from societal-level harms, including driving the company's product work on election integrity and humanitarian crises. Prior to Facebook, Samidh was at Google, where he led work on Civic Engagement products. Currently, Samidh is the Chief Product Officer at Groq, where he is bringing responsibility to the core of a novel AI computing platform. Samidh holds graduate degrees from MIT in Artificial Intelligence, from Oxford in Modern History, and from Cambridge in Public Policy. He is still working to perfect his wild mushroom risotto.

Deepti Doshi

Deepti is the Co-Director of New_ Public. Her work has focused on the intersection of social media, community organizing, and leadership development over the last two decades. Prior to New_ Public, she set up Meta's New Product Experimentation team and established Meta's Community Partnerships team to build products (namely, Groups), programs, and partnerships that support community leaders. Deepti is a graduate of the Harvard Kennedy School and the Wharton Business School and holds a bachelor's degree in Psychology. She is a TED Fellow, an Aspen Institute First Movers Fellow and Ideas Scholar, and her work has been featured in multiple publications.

Camille François

Camille François specializes in how organized actors use digital technologies to harm society and individuals. Her work to understand and mitigate digital harms spans from cyberwarfare to online harassment. She has advised governments and parliamentary committees on both sides of the Atlantic, from investigating Russian interference in the 2016 U.S. presidential election on behalf of the U.S. Senate Select Intelligence Committee, to leading the French government's 2022 inquiry into the economic opportunities and social challenges presented by the metaverse. She currently serves as the Senior Director for Trust & Safety at Niantic.

Margaret Gould Stewart

Margaret Gould Stewart is a global leader in the field of user experience design and has led some of the most consequential design teams in consumer technology at companies like Meta/Facebook, YouTube, and Google over the past 20 years. She was named by Fast Company Magazine as one of the Most Creative People in Business. Margaret currently acts as an independent investor and advisor to a range of venture capital funds, startups, and nonprofits.

Margaret spent over ten years as a design and product leader at Meta/Facebook, where she most recently served as VP of Product Design & Responsible Innovation. During her tenure there, she established Facebook's Responsible Innovation team to help surface and address potential harms to people and society in the company's products. She also led design and user research for a variety of teams including AI, Privacy, Workplace, and New Product Experimentation. Earlier in her time at Meta/Facebook, she built the design practice on the business side of the company from the ground up, growing it to a team of over 450 design and user research professionals worldwide.

Prior to her time at Meta/Facebook, Margaret served as Global Head of Design for YouTube, where she led all aspects of user experience, including consumer, creator, and advertiser products across desktop and devices. She and her team architected the first-ever global redesign of YouTube. Before YouTube, Margaret led design and research for Google Search and Consumer Products, including Google News, Google Finance, and Google Developer Tools. Additional previous roles included serving as VP of Design & Usability at Wachovia Bank, and as the founding Creative Director and later the General Manager of Tripod, Inc., a first-generation startup.

Margaret currently serves on the Board of Trustees of the Smithsonian Cooper Hewitt National Design Museum as well as the Board of Trustees of the Williamstown Theatre Festival, a Tony-award-winning summer theatre festival in the Berkshire Mountains of Massachusetts. Margaret holds a Master's degree in Interactive Telecommunications from NYU's Tisch School of the Arts. She earned her B.A. in Communication & Theatre from Boston College. She is a frequent speaker at conferences such as TED, Grace Hopper, CHI, and AIGA.

Jon Iwata

Jon Iwata is a Founder and the Practice Leader of the Yale Program on Stakeholder Innovation and Management (Y-SIM) at Yale School of Management, where he is an Executive Fellow and Lecturer. Y-SIM was established in 2022 to help leaders more effectively create economic and societal value. The program is based on the work Jon and his collaborators led exploring stakeholder capitalism's impact on leadership.

Jon is also Founding Executive Director of the Data & Trust Alliance, a not-for-profit organization which brings together leading businesses to develop and adopt responsible data and AI practices. Its first initiative, announced in 2021, was the development of algorithmic anti-bias safeguards for workforce practices, adopted by its member companies including American Express, CVS Health, GM, Humana, Mastercard, Meta, Nike and Walmart. Over a 35-year career at IBM, Jon held multiple roles, including Senior Vice President, Chief Brand Officer, and leader of the company's global marketing, communications and citizenship organization. He reported to three IBM CEOs during two decades of significant transformation. He was chairman of IBM’s corporate strategy committee and established the company's values and policy committee. According to Interbrand, IBM became the second most valuable brand in the world during Jon’s tenure as CMO.

Elaine Brechin Montgomery

Elaine is an award-winning, patent-holding Design Director at Meta/Facebook and has spent the last 8.5 years there leading teams in Ads, Business Integrity, and Privacy. Currently based in Seattle, she has over 27 years of industry experience in design research, product design, and management of products and services. Elaine is Meta's leader in Youth Privacy design and Regulatory Readiness, and co-leads Meta's Trust, Transparency & Control Labs, a deep collaboration between Meta Design and Policy teams that brings together global experts from a wide range of disciplines (academics, designers, industry peers, legal, policy makers, and data regulators) to co-create solutions for the industry-wide challenge of data use across the internet. Elaine is passionate about bringing creative design methods to hard problems and enjoys enabling a broad mix of expertise to collaborate in these Design Jams.

Prior to her time at Meta/Facebook, Elaine spent most of her career in Silicon Valley and London: leading design teams in corporate banking at Deutsche Bank and Barclays, leading consumer products at Google in the UK and USA (Blogger, Picasa, Wiki, Mobile Search), and working at WebEx and Cooper Design, following her first role in 1996 at Paul Allen's Interval Research, reimagining the digital living room.

Originally from Scotland, Elaine holds a Master’s degree in Computer Related Design (now Design Interactions) from The Royal College of Art in London, the first graduate course in interaction design in the world. She earned her B.A. in 3D Design from Robert Gordon University in Aberdeen, Scotland. She thought she was going to be a fine art painter!

Joaquin Quiñonero Candela

Joaquin Quiñonero Candela is a Technical Fellow at LinkedIn, where he focuses on AI: both the technology itself and ensuring its responsible use.

Joaquin is also a non-resident Senior Fellow at the Harvard Belfer Center and a member of the Spanish Government's Advisory Board on Artificial Intelligence and, until recently, served on the Board of Directors of the Partnership on AI, a non-profit partnership that focuses on using AI to advance positive outcomes for people and society.

Prior to LinkedIn, Joaquin worked at Facebook (now Meta) for more than nine years, where he built and led the AML (Applied Machine Learning) team and drove product impact at scale through applied research in machine learning, language understanding, computer vision, computational photography, augmented reality, and other AI disciplines. This work was foundational to the unified AI platform that powers all production applications of AI across the family of Meta products today. More recently, Joaquin served as the Distinguished Technical Lead for Responsible AI at Meta, where he led the technical strategy for areas like fairness and inclusiveness, robustness, privacy, transparency, and accountability.

Before Facebook, Joaquin built and taught a new machine learning course at the University of Cambridge (together with Carl Rasmussen), worked at Microsoft Research, and conducted postdoctoral research at three institutions in Germany, including the Max Planck Institute for Biological Cybernetics. Joaquin received his PhD from the Technical University of Denmark.

Outside work, Joaquin is an avid trail runner and triathlete, paella cook and amateur guitar player and folk singer.

Irene Solaiman

Irene Solaiman is an AI safety and policy expert. She is Policy Director at Hugging Face, where she is conducting social impact research and building public policy. She also advises responsible AI initiatives at OECD and IEEE. Her research includes AI value alignment, responsible releases, and combating misuse and malicious use. Irene formerly initiated and led bias and social impact research at OpenAI, where she also led public policy. Her research on adapting GPT-3 behavior received a spotlight at NeurIPS 2021. She also built AI policy at Zillow Group and advised policymakers on responsible autonomous decision-making and privacy as a fellow at Harvard’s Berkman Klein Center. Outside of work, Irene enjoys her ukulele, making bad puns, and mentoring underrepresented people in tech. Irene holds a B.A. in International Relations from the University of Maryland and a Master in Public Policy from the Harvard Kennedy School.

Nirav Tolia

Nirav Tolia is the founder and former CEO of Nextdoor. He has spent the last 24 years creating and leading early-stage, pioneering consumer Internet companies. After beginning his career as employee #84 at Yahoo!, he founded and served as CEO for three successive companies: Epinions (IPO on Nasdaq in 2004), Fanbase, and most significantly, Nextdoor (IPO on NYSE in 2021). Under Nirav’s leadership, Nextdoor grew into the world's largest local social network and was adopted by over 220k neighborhoods in 11 countries worldwide, including 90% of American neighborhoods. Nirav is currently Executive Chairman of Hedosophia, a leading technology investment firm. He holds a BA from Stanford University, and returned to his alma mater in 2019 as a visiting instructor for the Stanford in Florence program.

Meredith Whittaker

Meredith Whittaker is the President of the Signal Foundation and serves on its board of directors. The Signal Foundation's subsidiary, Signal Messenger LLC, created the Signal messaging app and Signal protocol, which enable encrypted instant messaging. Whittaker has had an esteemed career. She was formerly the Minderoo Research Professor at New York University (NYU) and the co-founder and faculty director of the AI Now Institute, which researched the social implications of artificial intelligence and related technologies. In the governmental realm, she was Senior Advisor on AI to Chair Lina Khan at the Federal Trade Commission. She has also testified to multiple congressional committees (including the powerful House Oversight Committee) on artificial intelligence, facial recognition, and the political economy of technology, among other things. Whittaker was also employed at Google for over a decade, founding Google's Open Research group and M-Lab. She left Google in 2019 after being one of the key organizers of the Google Walkout, in which employees demanded five concrete changes from the company: an end to forced arbitration; a commitment to end pay inequality; a transparent sexual harassment report; an inclusive process for reporting sexual misconduct; and the elevation of the Chief of Diversity to answer directly to the CEO, along with the creation of an Employee Representative role. Whittaker has published widely in prestigious periodicals such as Nature and The Nation.

Dave Willner

Dave Willner is the Head of Trust and Safety at OpenAI, the creator of ChatGPT. He was previously the head of community policy at Airbnb, where he built Airbnb’s global community policy team from scratch. Before that, he was head of content policy at Meta (then-Facebook), where he built and managed the team responsible for Facebook’s Community Standards.