Appellate Advocacy Blog

Editor: Tessa L. Dysart
The University of Arizona
James E. Rogers College of Law

Thursday, July 6, 2023

Courts are Regulating Generative AI for Court Filings.  What Does This Mean for Legal Writers? 

Thursday’s Rhaw Bar: A Little Bite of All Things Rhetoric and Law—exploring ideas, theories, strategies, techniques, and critiques at the intersection of rhetoric and legal communication.

There’s been a flurry of court-initiated activity around using generative artificial intelligence (generative AI) to draft court filings. One court has sanctioned lawyers for misusing OpenAI’s large language model, ChatGPT. Perhaps as a result, at least four more courts have issued orders regulating the use of generative AI in legal writing.

What’s going on here?  And what does this activity mean for legal writers?

How It All Began:  A Federal Court Sanctions Lawyers’ “Bad Faith” Use of ChatGPT “Fake Cases” in a Court Filing

In March of this year, two lawyers filed a motion in the United States District Court for the Southern District of New York that included citations to multiple court opinions that did not exist.  In Mata v. Avianca, Inc., the plaintiff’s lawyers admitted that one of the lawyers had used ChatGPT, “which fabricated the cited cases.”  The lawyer said that he did not think at the time that ChatGPT could fabricate cases.  According to the court’s finding of fact, the lawyers persisted in representing the cases as real even after they became aware that they were fake.

In its order sanctioning the attorneys, the court noted that although “there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” lawyers must “ensure the accuracy of their filings.” As such, the court sanctioned the lawyers for citing the fake cases under Federal Rule of Civil Procedure 11(b)(2), which requires lawyers to certify that, after a reasonable inquiry, they believe that the “legal contentions [in a court filing are] warranted by existing law.” The court suggested that, perhaps, if the lawyers had “come clean” about the fake cases in a timely manner, they might not have violated Rule 11 simply by mistakenly citing them. But because the lawyers had engaged in acts of “conscious avoidance and false and misleading statements to the Court” and had continued to stand by the fake cases even after judicial questioning, they had acted in bad faith, which merited sanctions.

How Courts are Regulating Generative AI—And What They Appear to Be Concerned About

Between the time news reports about the fake citations began circulating and the Mata court’s issuance of its sanctions order, other courts acted to prospectively regulate generative AI use in cases before them. Their rationales for regulating generative AI use in court filings vary but focus on four concerns:

  • ensuring the involvement of human beings in checking generative AI’s accuracy;
  • ensuring that cited legal authority exists and is accurately described;
  • protecting sensitive information from inadvertent disclosure to others; and
  • ensuring lawyers do their own writing.

Human Beings Must Check Generative AI’s Output for Accuracy

In the United States District Court for the Northern District of Texas, one judge created a new “Judge Specific Requirement” that requires all attorneys and pro se litigants to certify for all filings in the case that either (1) they will not use generative AI to draft court filings or (2) a “human being” will check any portions generated by AI “for accuracy, using print reporters or traditional legal databases.”

The judge explained that “legal briefing” is not a good use of generative AI because it is “prone to hallucinations [i.e., inaccurate information] and bias.” Concerning bias, the judge said that because large language models like ChatGPT have not sworn an oath to “faithfully uphold the law and represent their clients,” they are “unbound by any sense of duty, honor, or justice” that applies to lawyers and act only according to “computer code” and “programming.”

The judge advised parties that they could, if they desired, move for leave to explain why generative AI “has the requisite accuracy and reliability for legal briefing.”  The judge provided a certification form that requires a guarantee that

[n]o portion of any filing in this case will be drafted by generative artificial intelligence or that any language drafted by generative AI—including quotations, citations, paraphrased assertions, and legal analysis—will be checked for accuracy, using print reporters or traditional legal databases, by a human being before it is submitted to the court. I understand that any attorney who signs any filing in this case will be held responsible for the contents thereof according to the applicable rules of attorney discipline, regardless of whether generative artificial intelligence drafted any portion of that filing.

A magistrate judge in the United States District Court for the Northern District of Illinois articulated a similar rationale when he added a certification requirement to his Standing Order for Civil Cases. The judge required that any party that uses any “generative AI tool” for “preparing or drafting” court filings must “disclose in the filing that AI was used and the specific AI tool that was used to conduct legal research and/or to draft the document.” The judge said that parties should “not assume” that relying on generative AI would “constitute reasonable inquiry” under Rule 11 of the Federal Rules of Civil Procedure. The Standing Order focused on the unreliability and inaccuracy of AI-assisted legal research as the reason for the certification requirement. It said that the judge would “presume” that the certification means that “human beings . . . have read and analyzed all cited authority to ensure that such authority actually exist[s].”

Court Filings Must Have Accurate Citations to Law and the Record

Another judge focused specifically on the accuracy of citations to the law in an order requiring that the use of “artificial intelligence” in court filings be disclosed. In a standing order, a judge sitting in the United States District Court for the Eastern District of Pennsylvania required all attorneys and pro se parties to make a “clear and plain factual statement” disclosing the use of “AI . . . in any way in the preparation” of court filings and to certify that “every citation to the law or the record . . . has been verified as accurate.”

Parties Must Protect Confidential and Business Proprietary Information from Disclosure to Generative AI

In the United States Court of International Trade, one judge issued an “order on artificial intelligence” to protect “confidential or business proprietary information” in court briefs.

In the Court of International Trade, specific rules protect “sensitive non-public information owned by any party before it” from disclosure. As such, the court requires filings to identify the sensitive information they contain. It also requires lawyers to file “non-confidential” versions of briefs that remove this information. Lawyers practicing before the Court of International Trade can receive sensitive information only if the court has certified them to do so.

In this context, the judge explained his concern that “generative artificial intelligence programs . . . create novel risks to the security of confidential information.”  Because lawyers might prompt these programs with confidential or business proprietary information to get generative AI to provide useful outputs, a risk arises that generative AI will “learn” from that prompt, thereby enabling the “corporate owner of the [generative AI] program [to retain] access to the confidential information.”  The order says this implicates “the Court’s ability to protect confidential and business proprietary information from access by unauthorized parties.”

Accordingly, the court ordered that all submissions drafted with the assistance of generative AI using “natural language prompts” be accompanied by (1) a disclosure identifying which generative AI “program” was used and which portions of the document had been drafted with generative AI assistance, and (2) a certification stating that the use did not result in any sensitive information being disclosed to “any unauthorized party.” The order also specifically allowed any party to seek relief based on the information in this notice.

Lawyers Must Do “Their Own Writing”

In Belenzon v. Paws Up Ranch, LLC, filed in the United States District Court for the District of Montana, a judge ordered that an out-of-state attorney admitted pro hac vice must “do her own work.” The court said that this included doing “his or her own writing.” As such, the court prohibited the pro hac vice lawyer from using “artificial intelligence automated drafting programs, such as Chat GPT.” The court did not explain its reasoning in the order.

What Should Legal Writers Do in This New Regulatory Environment?

These varying approaches to generative AI (as well as its availability) put pressure on legal writers to anticipate what they should do in this new environment. Here are some suggestions for taking action.

Check local court rules, standing orders, procedural orders issued in your case, or the published preferences of judges to see if a judge has rules on generative AI use. This is a quickly developing area, and you can expect that more judges—and perhaps even entire courts in their local rules—will begin to consider whether and how to regulate generative AI.

Read the new regulations carefully. How judges regulate AI in their courtrooms will likely vary, so read carefully and avoid assumptions. For example, the courts vary in how they refer to the technology they are concerned about, using both “generative AI” and “artificial intelligence” as identifiers. But these terms do not necessarily mean the same thing: “artificial intelligence” generally describes a broader category of tools than “generative AI.” Word’s Editor, for example, is powered by artificial intelligence. Lexis already uses “extractive artificial intelligence” in some of its research products. BriefCatch represents that it uses artificial intelligence in its products. These are all AI tools that do not fall within the category of generative AI.

A lawyer attempting to comply with AI regulation needs to know the scope of what the court wants to regulate. That is, does a court requiring a certification about “artificial intelligence” mean to include tools like those mentioned above? If you are not sure what the judge means, it might be wise to ask. (And judges should be as clear as possible about which artificial intelligence tools concern them so as not to unintentionally regulate writing tools too broadly. For example, Word’s Editor does not seem to raise the concerns the judges have identified, yet it fits within the category of “artificial intelligence.”)

In addition, courts vary in what they want you to do about generative AI. One court—in one specific circumstance—has prohibited its use. But the rest—so far—ask for various attestations about whether and how it has been used. As time progresses, you may appear before courts that regulate generative AI differently. Get clear on each court’s requirements and add them to your court-specific writing checklist.

If you use generative AI to help you write, treat it like any other writing tool. Generative AI does not replace you; you are responsible for the quality of your writing.  The courts are right: no currently available generative AI tool replaces a lawyer in producing written documents.   But there is potential for generative AI to help legal writers write more clearly, precisely, correctly, and persuasively.  This could mean better and more cost-effective results for clients—and more efficient and effective practice before the courts.  In other words, courts could benefit from lawyers competently and carefully using generative AI as a legal writing tool.

Plus, enterprise versions of generative AI tools are rapidly developing for use in the legal domain, which may make using generative AI for legal writing less risky. Some products already exist; others are on the way. These tools are meant for lawyers, and some lawyers are already using them. Unlike publicly available all-purpose large language models like ChatGPT and Bard, these fine-tuned and further-trained models will likely better protect confidential client information; produce more accurate, reliable, and verifiable results for legal research; and be more competent at generating effective legal writing. In other words, future generative AI writing tools will do more to address the courts’ concerns about generative AI.

Regardless of whether you use general-purpose or enterprise generative AI for your legal writing, one thing won’t change: you are ultimately responsible for the written work you produce. You are the human being the courts care about. You cannot outsource your judgment and competence to generative AI. It does not evaluate information, legally reason, or do legal analysis (even though it might appear to). It does not have a professional identity committed to the rule of law, just results, and fair play. What it does is this: it uses mathematical computations to predict the most appropriate words to provide in response to a prompt. Thus, to use generative AI ethically and responsibly, you must do the following.

Understand how generative AI works. Generally speaking, you have an ethical duty to be competent in using technological tools as part of your practice.  If you don’t have a basic understanding of natural language processing, machine learning, and large language models, you should get that understanding before you use generative AI.  There’s a strong argument that generative AI is here to stay as part of legal practice.  Learn all you can.
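
To make the “mathematical computations to predict the most appropriate words” point concrete, here is a toy Python sketch of next-word prediction. The probability table is invented purely for illustration; a real large language model computes comparable scores with billions of learned parameters, not a lookup table.

```python
import random

# A toy "language model": for each context word, invented probabilities
# for what the next word might be. This table is a stand-in for the
# learned parameters of a real large language model.
NEXT_WORD_PROBS = {
    "the":    {"court": 0.5, "lawyer": 0.3, "filing": 0.2},
    "court":  {"held": 0.6, "ordered": 0.3, "noted": 0.1},
    "lawyer": {"must": 0.7, "filed": 0.3},
}

def predict_next(word: str) -> str:
    """Sample the next word according to the toy probability table."""
    choices = NEXT_WORD_PROBS.get(word)
    if choices is None:
        return "."  # no known continuation; end the sentence
    words = list(choices.keys())
    weights = list(choices.values())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation, one predicted word at a time.
text = ["the"]
for _ in range(3):
    text.append(predict_next(text[-1]))
print(" ".join(text))  # e.g., "the court held ."
```

Notice what the sketch never does: it never checks whether “the court held” anything. The model selects statistically likely words, which is exactly why its output must be verified.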

Be careful about disclosing confidential information when prompting generative AI; know how your prompts are used and retained. How generative AI treats the information you give it is in flux. For example, when ChatGPT was released to the public, it had no setting to keep prompts from being used to train the large language model; it now does. It also now has a setting that limits the storage of prompts to 30 days. While these changes are good examples of generative AI’s rapid evolution in response to user feedback, they don’t solve all of a lawyer’s problems concerning sharing confidential client information with generative AI.

In my opinion, the question of what information can be shared with generative AI is a complex one to which only simple answers have been offered so far. Part of the complexity comes from variations in state ethics rules. Depending on your state’s ethics rules, you may have more or less leeway to ethically include client information in prompts. In addition, if disclosing client information in a prompt furthers the client’s interests, perhaps there is room for a lawyer to argue that the disclosure is warranted. Moreover, carefully crafted prompts might arguably fall within the “hypothetical” rule that appears in many states’ confidentiality rules. But, at this point, little certainty exists about how state bars will apply confidentiality rules when client information is shared in a generative AI prompt. I hope that bar regulators provide answers to these questions—perhaps in ethics opinions.
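
One practical precaution while that uncertainty persists is to strip client identifiers out of text before it goes into a prompt. Below is a minimal, hypothetical Python sketch of that idea; the CLIENT_TERMS table and scrub helper are inventions for illustration, and a simple find-and-replace like this will miss anything you have not listed, so it is no substitute for your own confidentiality judgment.

```python
import re

# Hypothetical, minimal scrubber: replace known client identifiers with
# neutral placeholders before including text in a generative AI prompt.
# Illustration only; it cannot catch identifiers not listed here.
CLIENT_TERMS = {
    "Acme Widget Co.": "[CLIENT]",
    "Jane Doe": "[CLIENT REPRESENTATIVE]",
    "No. 23-cv-1234": "[CASE NUMBER]",
}

def scrub(text: str) -> str:
    for term, placeholder in CLIENT_TERMS.items():
        text = re.sub(re.escape(term), placeholder, text)
    return text

draft = ("Summarize the breach-of-warranty argument Acme Widget Co. "
         "raised in No. 23-cv-1234, per Jane Doe's declaration.")
print(scrub(draft))
# Summarize the breach-of-warranty argument [CLIENT] raised in
# [CASE NUMBER], per [CLIENT REPRESENTATIVE]'s declaration.
```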

Know your legal obligations regarding data privacy and cybersecurity. The ethics rules about confidentiality don’t fully address the Court of International Trade judge’s concern about disclosing proprietary information. That information might be subject to other disclosure laws. Thus, you should also consider whether you have legal duties that extend to the protection and privacy of your clients’ and others’ information in the generative AI context. In addition, if you work for a law firm, the firm may have policies that address sharing and using information in its possession. You should know what those policies are.

And finally, check every AI-generated citation, fact, statement of law, and analytical statement. This is the dominant theme of the courts’ orders thus far: lawyers are failing to check the accuracy of generative AI’s output. But if you are a lawyer, you already know that ensuring the accuracy of the work you produce is a fundamental ethical obligation. So, no matter how confident you are in the output of a generative AI tool, you must always check any output that purports to be factual or authoritative. ChatGPT itself warns you about this; at the bottom of its chat window, it states, “ChatGPT may produce inaccurate information about people, places, or facts.” So, as you have always done with your legal writing, check the accuracy of every citation. Read every legal authority to ensure it stands for the legal propositions you claim. Update and validate your authorities. Double-check every fact. Ensure that every step in the argument is logical, reasonable, ethical, and persuasive. If you use generative AI to revise or edit your work, check every change to ensure it is correct.
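
To make the verification habit concrete, a writer could mechanically pull every case citation out of a draft into a checklist for human review. The sketch below is a rough illustration under an assumed, simplified citation pattern; real reporter citations vary far more than this regex captures, so it supplements, and never replaces, reading every authority yourself.

```python
import re

# Assumed, simplified pattern for federal reporter citations such as
# "678 F. Supp. 3d 443" or "925 F.3d 1339". Real citation formats are
# far more varied; this sketch only illustrates building a checklist.
CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\.\s?Ct\.|F\.(?:\s?Supp\.)?(?:\s?[23]d)?)\s+\d{1,4}\b"
)

draft = (
    "See Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023); "
    "cf. Varghese v. China S. Airlines, 925 F.3d 1339 (11th Cir. 2019)."
)

# Build a checklist of citations for a human being to verify against
# print reporters or a traditional legal database.
for citation in CITATION_PATTERN.findall(draft):
    print(f"[ ] verify: {citation}")
```

Fittingly, the second citation in this sample draft is one of the fabricated cases from the Mata matter; the script cannot know that, which is exactly why a human being must run down each entry on the checklist.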

What are your thoughts about generative AI and legal writing?

Kirsten Davis teaches at Stetson University College of Law in the Tampa Bay region of Florida. She is the founding director of the Institute for the Advancement of Legal Communication and currently serves as Stetson’s Faculty Director of Online Legal Education Strategies. Among other things, she is currently studying generative AI and its impact on legal communication. The views she expresses here are solely her own and are not intended to be legal advice. You can reach Dr. Davis at [email protected].

https://lawprofessors.typepad.com/appellate_advocacy/2023/07/courts-are-regulating-generative-ai-for-court-filings-what-does-this-mean-for-legal-writers-.html

Legal Ethics, Legal Profession, Legal Writing, Rhetoric, Web/Tech | Permalink
