Wednesday, December 6, 2023
I’ve been stewing over the power struggle at OpenAI for a couple of weeks, not sure what to think about it. It is either the biggest nonprofit law story of the decade, or not. And, unfortunately, we may never know which it is.
For those not in the know, OpenAI is the company that released ChatGPT about a year ago, revolutionizing the public perception of how far advanced AI technology is, and deeply freaking out professors who give open-internet exams. I didn’t know before a couple of weeks ago that OpenAI is a nonprofit/for-profit joint venture, and therefore a subject of academic interest to me, even if it doesn’t end up creating the robot overlords I will one day serve.

OpenAI, Inc. was created as a 501(c)(3) organization in 2015 “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by the need to generate financial return.” (That’s quoted from OpenAI, Inc.’s first Form 990.) OpenAI, Inc. raised over $130 million in tax-deductible contributions for that mission. However, according to OpenAI’s website, “[i]t became increasingly clear that donations alone would not scale with the cost of computational power and talent required to push core research forward, jeopardizing our mission.” So, in 2019, OpenAI, Inc. formed a joint venture with for-profit providers of equity capital (almost exclusively Microsoft), which is naturally called “OpenAI.” (They then began referring to the original OpenAI, Inc. as “Nonprofit OpenAI,” not to be confused with OpenAI GP LLC, a wholly owned subsidiary of Nonprofit OpenAI that serves as the “manager” of OpenAI.)

A couple of weeks ago, OpenAI’s board fired its founder Sam Altman for undisclosed reasons. Altman was immediately hired by Microsoft. Many employees and key figures at OpenAI then threatened to leave (possibly to go to Microsoft) unless the board re-hired Altman, which it immediately did as part of an agreement under which most of the board would be replaced by new members.
If this is the nonprofit law story of the decade, it’s because of the federal law of nonprofit joint ventures. First, it is important to distinguish between inurement (the possibility of nonprofit insiders benefiting themselves) and private benefit (the basis of the IRS’s rules about nonprofit joint ventures). My fellow blogger posted some thoughts on the risk of inurement in the OpenAI story, an issue I have worried about in general as well. But the OpenAI story is probably not primarily an inurement story; it is more likely a story about “private benefit.” The law on private benefit deals not primarily with the risk of insiders providing themselves with financial benefits, but rather with the risk that a charity could be diverted from its core charitable mission for other reasons, including benefiting outsiders. The worry is that, even without insiders financially benefiting themselves, the charity might abandon its mission.

The law of joint ventures is derived from this doctrine, and at the risk of wild simplification, that doctrine can be summed up in a single word – control. In a string of revenue rulings and court cases in the late 1990s and early 2000s, the defining characteristic of a joint venture was determined to be whether the nonprofit controlled the joint venture. If a nonprofit and a for-profit form a joint venture to carry out the nonprofit’s charitable mission and also provide profits to other members of the venture, the arrangement is permissible so long as the nonprofit effectively controls the venture and impermissible if the for-profit partners effectively control it. There was frustratingly indeterminate litigation about what exactly constitutes effective control on the margin, but it is clear that the nonprofit has sufficient control (as a legal matter) if a majority of the venture’s board is made up of directors who are “independent,” meaning they have no financial interest in the venture.
The control question is even more clear when the day-to-day management of the venture is controlled by a company controlled by the nonprofit rather than a company controlled by the for-profit partners. The embedded assumption is that so long as the venture is controlled by disinterested board members with a fiduciary duty to the charitable mission of the nonprofit, they serve as an adequate check on the nonprofit being diverted from its charitable mission to maximize the financial gains of the partners.
The OpenAI website states proudly that, “[w]hile our partnership with Microsoft includes a multibillion dollar investment, OpenAI remains an entirely independent company governed by the OpenAI Nonprofit. Microsoft has no board seat and no control.” At least formally, OpenAI’s independent board members did not have a financial interest in OpenAI and so were unconflicted in their duty to pursue OpenAI’s charitable mission. If this is the nonprofit law story of the decade, it would go like this: OpenAI was created as a charitable nonprofit, fueled by $130 million of charitable contributions, and grew into a nonprofit joint venture. But, when there was a conflict between the guardians of its charitable mission and Microsoft, Microsoft won. Microsoft’s champion, Sam Altman, returned to continue leading the venture, and the nonprofit board members stepped down, leaving the field open to the real goal of maximizing profit. In other words, the joint venture doctrine’s reliance on formal control just doesn’t work. If we care about protecting the integrity of the nonprofit sector, we need to find another legal doctrine to do so.
The key question about the OpenAI kerfuffle, then, is whether that story is true. I know extremely little about what actually is happening. The best coverage I’ve found is this, but the best analysis I’ve found is a podcast by Ezra Klein, and because I have been a fan of Klein and his work for a long time, I care about the fact that he says he is not convinced by the depressing nonprofit story I just told. For example, he very briefly discusses this issue (at minute 38:18) and takes seriously the idea that Altman’s return is not a concession by the nonprofit board, but instead a victory for the nonprofit in which, after the conflict, “maybe they have a stronger board that is better able to stand up to Altman” (at 39:20). So, who knows. I assume someone is writing a book about this that will appear in a few minutes, and then several minutes after that, we’ll get to watch a pretty exciting movie about it, hopefully starring Jonah Hill (who, by the way, I also think should play Sam Bankman-Fried).
In addition to the question of what The Law should do about nonprofit joint ventures in the future, there is an equally intriguing question to me about what for-profit investors will do. We know that Microsoft is the primary for-profit investor in the OpenAI joint venture, and it is tempting to ask why Microsoft agreed to make a “multibillion dollar investment” in a venture that is expressly devoted to charitable purposes rather than to maximizing Microsoft’s profits. I’m guessing Microsoft rarely makes naïve or stupid multibillion-dollar investments. Maybe they thought that, when push came to shove, their investment gave them sufficient functional control that it would all work out, and maybe their takeaway from the kerfuffle is that they were right. If other investors conclude the same, then I think we may see a significant strain on the credibility of the nonprofit signal. (See my post yesterday if you don’t know what I mean.) But what if investors take away the lesson that the kerfuffle was a loss for Microsoft, and they decide to avoid partnerships with nonprofits unless they, too, value the charitable purpose more deeply than their financial returns? That would be a win for the nonprofit sector.
Then, of course, the most interesting question is why OpenAI was formed as a charitable nonprofit in the first place. I’m hesitant to question Sam Altman’s charitable bona fides, but another founder of OpenAI was Elon Musk, who has very conveniently become an easily recognizable villain in the years since OpenAI’s founding. We don’t know who contributed the $130 million of charitable funds that the OpenAI Nonprofit raised over the years, but one wonders what exactly these contributors were thinking. Why did Elon Musk, for example, think that a charity was a better “investment” in the future of AI technology than a for-profit company, given that he’s had some success with for-profit companies? The media coverage is full of speculation on that score, but I’m still unsure what is true and what is not. I’m looking to you, Jonah Hill, to get to the bottom of this.