Tuesday, October 3, 2023
Disclosing the Use of AI
Following well-publicized instances of lawyers using generative artificial intelligence to draft briefs that misrepresented the law, some courts now require lawyers (and pro se litigants) to certify whether, and if so, to what extent, they used AI in preparing briefs. These orders are not uniform and may require more disclosure than would be apparent at first blush. But before delving into what disclosures may or may not be required, let’s talk about AI.
Merriam-Webster defines AI as "the capability of computer systems or algorithms to imitate intelligent human behavior"[1] and as "a branch of computer science dealing with the simulation of intelligent behavior in computers."[2] Merriam-Webster defines generative AI as "artificial intelligence that is capable of generating new content (such as images or text) in response to a submitted prompt (such as a query) by learning from a large reference database of examples."[3] Generative AI includes tools like ChatGPT.
The instances where lawyers found themselves in trouble for using AI involved the use of generative AI. And it was those instances that prompted the orders requiring lawyers to disclose the use of AI. But tools like Grammarly and Word's "Editor" are AI; they're just not generative AI. And therein lies the problem: the orders requiring disclosure don't always distinguish between AI and generative AI. For example, Judge Baylson of the United States District Court, Eastern District of Pennsylvania, issued this order:
If any attorney for a party, or a pro se party, has used Artificial Intelligence (“AI”) in the preparation of any complaint, answer, motion, brief, or other paper filed with the Court and assigned to Judge Michael M. Baylson, they MUST, in a clear and plain factual statement, disclose that AI has been used in any way in the preparation of the filing and CERTIFY that each and every citation to the law, or the record in the paper, has been verified as accurate.[4]
On the other hand, Judge Starr of the United States District Court, Northern District of Texas, has issued an order that distinguishes between the use of AI and generative AI. That order says:
All attorneys and pro se litigants appearing before the Court must, together with their notice of appearance, file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being. These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them. Here’s why. These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why. Accordingly, the Court will strike any filing from a party who fails to file a certificate on the docket attesting that they have read the Court’s judge-specific requirements and understand that they will be held responsible under Rule 11 for the contents of any filing that they sign and submit to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing.[5]
Thus, a lawyer filing something in Judge Baylson's court should disclose the use of an AI tool like Grammarly or Word's "Editor" function in preparing the brief, whereas a lawyer filing something in Judge Starr's court need not disclose the use of those tools and must instead disclose only the use of generative AI.[6] While Judge Baylson's order suggests that he may have meant to require disclosure only of the use of generative AI (because he refers to checking citations), the language of the order sweeps more broadly and requires disclosing the use of any AI.
Given the increased use of AI and particularly generative AI, it’s likely that more courts will require the disclosure of the use of AI in preparing filings. It’s important that lawyers fully comply with those requirements.
[1] https://www.merriam-webster.com/dictionary/artificial%20intelligence
[2] Id.
[3] https://www.merriam-webster.com/dictionary/generative%20artificial%20intelligence
[4] https://www.paed.uscourts.gov/documents/standord/Standing%20Order%20Re%20Artificial%20Intelligence%206.6.pdf
[5] https://www.txnd.uscourts.gov/judge/judge-brantley-starr
[6] Disclosure: I used Word’s Editor in preparing this post.
https://lawprofessors.typepad.com/appellate_advocacy/2023/10/disclosing-the-use-of-ai.html