Friday, January 26, 2024
Are Lawyers, Lawmakers, and Law Professors Really Ready for AI in 2024?
We just finished our second week of the semester and I’m already exhausted, partly because I just submitted the first draft of a law review article that’s 123 pages with over 600 footnotes on a future-proof framework for AI regulation to the University of Tennessee Journal of Business Law. I should have stuck with my original topic of legal ethics and AI.
But alas, who knew so much would happen in 2023? I certainly didn’t, even though I spent the entire year speaking on AI to lawyers, businesspeople, and government officials. So I decided to change my topic in late November as it became clearer that the EU would finally take action on the EU AI Act and that the Brussels effect would likely take hold, requiring other governments and all the big players in the tech space to take notice and sharpen their own agendas.
But I’m one of the lucky ones because although I’m not a techie, I’m a former chief privacy officer, and I spend a lot of time thinking about things like data protection and cybersecurity, especially as they relate to AI. I also recently assumed the role of GC of an AI startup. So, because I’m tech-adjacent, I’ve spent hours every day immersed in the legal and tech issues related to large and small language models, generative AI (GAI), artificial general intelligence (AGI), APIs, singularity, the Turing test, and the minutiae of potential regulation around the world. I’ve become so immersed that I actually toggled between listening to the outstanding Institute for Well-Being In Law virtual conference and the FTC’s 4-hour tech summit yesterday, which featured founders, journalists, economists, and academics. Adding more fuel to the fire, just before the summit kicked off, the FTC announced an inquiry into the partnerships and investments of Alphabet, Inc., Amazon.com, Inc., Anthropic PBC, Microsoft Corp., and OpenAI, Inc. Between that and the NY Times lawsuit against OpenAI and Microsoft alleging billions in damages for purported IP violations, we are living in interesting times.
If you’ve paid attention to the speeches at Davos, you know that it was all AI all the time. I follow statements from the tech leaders the way other people follow their fantasy football stats or NCAA brackets. Many professors, CEOs, and general consumers, on the other hand, have been caught off guard by the rapid acceleration of developments, particularly in generative AI.
However, more members of the general public are now paying attention to the concept of deepfakes and demanding legislation, in part because the supernova that is Taylor Swift has been victimized by someone creating fake pornographic images of her. We should be even more worried about the real and significant threat to the integrity of the fifty global elections occurring in 2024, where members of the public may be duped into believing that political candidates have said things that they did not, such as President Biden telling people not to vote in the New Hampshire primary and to save their votes for November.
For those of us who teach in law schools in the US and who were either grading or recovering from grading in December, we learned a few days before Christmas that Lexis was rolling out its AI solution for 2Ls and 3Ls. Although I had already planned to allow and even teach my students the basics of prompt engineering and using AI as a tool (and not a substitute for lawyering) in my business associations, contract drafting, and business and human rights classes, now I have to learn Lexis’ solution too. I feel for those professors who still ban the use of generative AI or aren’t equipped to teach students how to use it ethically and effectively.
Even so, I’m excited and my students are too. The legal profession is going to change dramatically over the next two years, and it’s our job as professors to prepare our students. Thomson Reuters, the ABA, and state courts have made it clear that we can’t sit by on the sidelines hoping that this fad will pass.
Professionally, I have used AI to redraft an employee handbook in my client’s voice (using my employment law knowledge, of course), prepare FAQs for another client’s code of conduct in a very specialized industry, prepare interview questions for my podcast, and draft fact patterns for simulations for conferences and in class. I’ve also tested its ability to draft NDAs and other simple agreements using only ChatGPT. It didn’t do so well there, but I could tell only because I knew what I was looking for. And when I gave additional instructions, for example, about drafting a mutual indemnification clause and then a separate supercap, it did surprisingly well. But I know what should be in these agreements. The average layperson does not, something that concerns Chief Justice Roberts and should concern us all.
How have you changed your teaching with the advent of generative AI? If you’re already writing or teaching about AI or just want more resources, join the 159 law professors in a group founded by Professors April Dawson and Dan Linna. As for my law review article, I’m sure a lot of it will be obsolete by the time it’s published, but it should still be an interesting, if not terrifying, read for some.
https://lawprofessors.typepad.com/business_law/2024/01/are-lawyers-lawmakers-and-law-professors-really-ready-for-ai-in-2024.html