ContractsProf Blog

Editor: Jeremy Telman
Oklahoma City University
School of Law

Thursday, June 6, 2024

What’s All the Fuss About? Governing AI Agents

Periodically, when a new article shoots up the SSRN Top Ten charts, we find ourselves asking, “What’s all the fuss about?”  This column is where you can find the answers.  Before the series even had a name, we wrote about Yonathan Arbel and David Hoffman’s Generative Interpretation. Our first official post in this series was on Lawyering in the Age of AI, by Jonathan Choi, Amy Monahan, and Dan Schwarcz. Most recently, we posted about Debt Tokens, by Diane Lourdes Dick, Chris Odinet, and Andrea Tosato. Today, we tackle Governing AI Agents by Noam Kolt.

As has been the case with prior iterations of the What’s All the Fuss About feature, once you read the article, you will see immediately why everyone is downloading it.  Professor Kolt (left) is among the first to address an issue that has come upon us unawares and for which we have yet to develop appropriate legal doctrines and models. After a comprehensive and insightful but mercifully compressed review of the issues associated with AI Agency, he offers a systematic approach to the problem.  It is very self-consciously a first draft of how to adapt our theoretical constructs, economic and legal, for addressing human agency so as to accommodate the challenges that AI Agency poses.

Professor Kolt begins by reviewing a case we discussed here, in which Air Canada was held liable for misinformation that its bot provided to a customer about the availability of bereavement fares.  He defines AI Agents as “AI systems that have the technical capacity to autonomously plan and execute complex tasks with only limited human oversight” (9). He looks at these AI Agents through two analytical frameworks: the economic theory of principal-agent problems and the common-law agency doctrine (6), although Professor Kolt notes that the latter is merely an analytic tool, given the apparent consensus that AI Agents are not considered agents under the common law (10 & n. 26).

Image by DALL-E

The article makes three distinct contributions: it identifies and characterizes the problems arising from AI Agents; it examines the difficulties that emerge when principal-agent principles are applied to AI Agents; and it explores the implications of agency theory for designing and regulating AI Agents (7-8). After parts devoted to the development of the technology behind AI Agents and explorations of the relevant legal doctrines, Professor Kolt argues that a new technical and legal infrastructure is needed to address the reliability, safety, and ethical challenges posed by AI Agents (9).

In Part I, Professor Kolt tells us what AI Agents are and what they can do (11-17).  In short, they can do a lot.  Increasingly, they can act autonomously, which makes it tempting to delegate tasks to them.  However, as they become more autonomous, they may do things that their human principals would not authorize: hacking websites, colluding with other AI Agents to fix prices, or . . . let your sci-fi-inflected imagination run riot. Professor Kolt then seeks to deploy the economic theory of agency problems and common-law agency doctrine to address some of the risks associated with AI Agents.

In Part II, Professor Kolt explores problems in delegation to AI Agents (17-29).  The basic problem is the same as that in any principal-agent relationship: the efficiency gains achieved through delegation may be offset or negated because the agent does not conduct the principal’s business as the principal would.  To take a simple example, an AI Agent might be instructed to maximize profit. It might do so in a way inconsistent with the principal’s ethics. It would be very difficult for the principal to foresee all of the potential ethical issues that might arise and accordingly difficult to train the AI Agent in advance to avoid ethical pitfalls.

First, the problem of information asymmetry is especially acute with respect to AI Agents.  Users may not know the AI Agent’s capabilities, and the AI Agent may not have the capacity to comply with the expected common-law disclosure duties that obtain in the usual principal-agent relationship (20-22). Second, because instructions to the AI Agent will always contain gaps, there can be problems involving AI Agents exceeding their authority (23-24). Third, AI Agents might not be as easily bound by the fiduciary duty of loyalty as human agents can be. In part, this is because AI Agents are designed by for-profit corporations interested in the continued development of their technology.  Loyalty to the client might not be the AI Agent’s sole or even primary objective (25-27).  Finally, AI Agents can and do delegate to sub-agents to assist in their tasks, multiplying the pre-existing complexities attendant to AI Agency.  Professor Kolt suggests that common-law rules governing the use of sub-agents can be helpful in addressing the problems of AI sub-agents, but they do not offer a comprehensive solution (28-29).

Image by DALL-E

Part III examines three common-law mechanisms for addressing human agency problems and assesses their suitability for governing AI Agency (30-37). The incentive design mechanism is a poor fit for AI Agents, because they are not incentivized the way human agents are (30-32). The monitoring mechanism seems equally fraught. Monitoring gobbles up the savings that delegation is supposed to produce.  Human agents may not be capable of monitoring AI Agents, and using AI monitors just creates new monitoring problems (32-35). Even if you could monitor AI Agents, you would also need an enforcement mechanism, and there, just as with the incentive design mechanism, we run up against the problem that it is hard to design effective ways to discipline AI Agents (35-37).

Moving beyond the traditional mechanisms for taming agency problems, Professor Kolt recommends in Part IV a bespoke governance strategy for AI Agents, centered on the guiding principles of inclusivity, visibility, and liability (37-46).  Ordinarily, we want the agent’s interests aligned with those of the principal as much as possible.  However, that alignment might be undesirable with respect to AI Agents because of externalities that affect third parties and society at large. Hence, the first component of Professor Kolt’s governance strategy is inclusivity: designing AI Agents to take account of the interests of third parties and society, not only those of the principal (37-40). The second component is visibility, which involves tracking and monitoring the use of AI Agents. There are considerable technological challenges involved here, but Professor Kolt introduces a number of strategies for visibility that are already being developed (40-42). Finally, Professor Kolt proposes liability rules so that natural or legal persons can be held accountable for the harms caused by their AI Agents (43-46).

Professor Kolt is modest in his aims.  At this point in the development of the technology, one can only foresee potential problems and grope towards solutions.  Nonetheless, he has provided a framework that can get the conversation started, and it is a conversation in which legal minds, business leaders, experts in technology, and legislators/regulators desperately need to engage.

https://lawprofessors.typepad.com/contractsprof_blog/2024/06/whats-all-the-fuss-about-governing-ai-agents.html

Commentary, Contract Profs, E-commerce, Recent Scholarship, Web/Tech | Permalink
