
Beyond Intelligence: The Impact of Advanced AI Agents

Recent trends in AI point toward the development and proliferation of increasingly capable AI agents that can communicate fluently, engage in sophisticated reasoning, and act with little human supervision. AI assistants, a subset of AI agents that are more closely tethered to their users, are also advancing rapidly and may take the form of personal secretaries, life coaches, research assistants, creative partners, digital companions, and even caregivers. As these more agentic AI systems are developed, they will present opportunities and challenges in a complex, highly networked world.

On June 17, the McCoy Family Center for Ethics in Society, in collaboration with Google DeepMind, convened a group of technologists, philosophers, academics, and ethicists in a Chatham House Rule workshop to discuss the potential benefits, harms, and ethical implications of agentic AI systems. Following are some of the main takeaways from the workshop, including suggestions on how to manage the coming challenges as tech companies race to add more infrastructure and sophisticated AI to their offerings.

Advanced AI Agents Will Not Be Like Siri and Alexa

AI agents will be very different from AI voice assistants like Siri and Alexa or chatbots like ChatGPT. While those systems are capable of some automated thinking, AI agents are expected to combine thinking with doing – autonomously taking actions across many domains and in many environments. They will be capable of completing complex tasks without step-by-step instructions and with minimal human supervision. They will be able to integrate with robotics and other systems, and so will be increasingly general-purpose. With minimal instruction, they will be able to participate in negotiations with humans.

As AI Agents Proliferate, Their Impact Will Be Difficult to Predict

AI agents and personal assistants are designed to carry out the intentions and fulfill the needs of their users. That alignment may be somewhat clear-cut when an AI is closely tethered to an individual. It becomes much less so as AI agents proliferate, serving both developers and individual users. Over time, there could be millions of agents in the wild, autonomously interacting with one another as they negotiate to book appointments or schedule activities on behalf of different people. The impact of such a highly complex AI-agent ecosystem would be difficult to model because of the vast number of unpredictable, emergent behaviors that would arise at this scale.

Mitigating Potential Harms by AI Agents

AI agents also pose potential governance challenges. How are policymakers to decide what levels of autonomy AI agents should have or how much ambition or goal-directedness they should be allowed to exhibit in their efforts to satisfy users? What actions should they be allowed to take or not take? How anthropomorphic should AI assistants be?

Some participants wondered whether legal frameworks should limit the autonomy of an AI agent or prohibit certain functions. Some of the questions raised during the workshop include: Should the power of AI agents be limited? Should they have an “off switch”? Should they be banned from certain domains of interaction? Should we not be building them at all?

One solution many participants agreed on is that AI agents should have some means of identification. They could then be registered in a system that establishes reputation, creating an infrastructure that would both shape how AI agents interact with one another and give users and companies an incentive to buy or use the services of a registered agent. “At a minimum, we need a way to connect an agent with a human person,” one participant said. A registry would allow “traceability” – a way of establishing liability and giving some visibility to regulators.

Balancing Interests

Building an agentic AI system requires a careful balancing of interests – sometimes competing interests – including those of the developer (sometimes a company whose goal is to make a profit), those of society, and those of the user. As AI agents proliferate, there will be ever greater opportunities for misalignment, as agents working on the same project may communicate and cooperate effectively or work at cross purposes.

As developers build AI agents, they need to be aware of nuance and context. If AI agents are to be competent caregivers, for instance, they need to balance the goal of taking care of a person with the goal of empowering them. Care relationships are typically asymmetrical: one person can carry out tasks that the other cannot. The goal of caring, however, is not to do everything for the person being cared for but to enable them to be independent. AI caregivers must be built to know the difference. We need to make sure that, however sophisticated AI becomes, it preserves human connection and keeps humans in the loop.

To learn more about the ethics of advanced AI assistants, read the recent Google DeepMind paper.

Betsy Morris is an editor and writer, and former bureau chief who spent most of her career at Fortune Magazine and the Wall Street Journal. One of her assignments at the Journal was to write about tech. 

This event was funded by the Project Liberty Institute.