Appellate Advocacy Blog

Editor: Charles W. Oldfield
The University of Akron
School of Law

Tuesday, November 14, 2023

Stigmatizing AI Usage

Last month, fellow blogger Charles Oldfield posted about some courts requiring lawyers to disclose their use of AI in preparing briefs for the court. In the post, he noted that, while the goal seemed to be ferreting out the use of generative AI, the requirements may have inadvertently stretched beyond that scope. These requirements raise the questions of why and how to cite AI.

I’ve been attending a wonderful conversation group of legal writing professors, led by Professors Kirsten Davis from Stetson University and Carolyn Williams from the University of North Dakota, discussing legal writing and generative AI. And a recent discussion addressed whether and, if needed, how to cite generative AI in legal writing.

Professor Davis first addressed the question of how we, as legal writers, should view the role of generative AI: as the author or authority, as a co-author, as an assistant, or as a tool. She aptly pointed out that our view of the technology directly informs whether we should cite or disclose our use of generative AI. Professor Williams (author of the 7th Edition of the ALWD Guide to Legal Citation) then addressed the purposes of citation:

(1) allowing the reader to locate the source of the writer’s information;

(2) giving credit to the author of the words or ideas the writer used;

(3) showing the reader that the writer conducted proper research;

(4) protecting the writer from plagiarism;

(5) increasing the writer’s credibility with the reader; and

(6) providing additional information about the sources used and their connection to the writer’s assertions to aid the reader’s choices about whether to pursue the source.

These considerations made me wonder how the judges requiring disclosure are viewing AI and what purpose they believe disclosure serves. And it seems their concern has less to do with the technology itself and more to do with skepticism that lawyers will use it in a way that violates the rules of professional conduct.

ABA Model Rule 5.1(b) provides that “A lawyer having direct supervisory authority over another lawyer shall make reasonable efforts to ensure that the other lawyer conforms to the Rules of Professional Conduct.”[i] When a lawyer uses generative AI to draft motions, pleadings, briefs, or other filings with the court, the lawyer is treating the technology as a subordinate attorney and, therefore, should be reviewing the output for compliance with the rules of professional conduct. This review includes verifying that “each and every citation to the law, or the record in the paper, . . . [is] accurate”[ii] and does not reflect any “personal prejudices, biases, and beliefs.”[iii] But these same purposes are served when an attorney signs the document under Federal Rule of Civil Procedure 11.[iv] So including the certification seems superfluous, especially given that supervising attorneys do not habitually credit their subordinate attorneys’ work in drafting.

Requiring the disclosure also fails to serve any of the traditional purposes of citation. Because generative AI rarely, if ever, produces identical output in response to a repeated prompt,[v] a reader cannot use a citation to either verify the accuracy of any assertions or investigate the source any further. And, because generative AI uses predictive language, it is—by design—drawing on the ideas of others represented in the text used in its training; thus, citing it does not serve to give credit to the proper authority or even protect the writer from plagiarism. If a legal writer treats AI-generated drafts as work produced by a subordinate attorney, then the attorney will have already checked the accuracy and validity of legal assertions and associated citations to authority, so the added layer of citing the AI tool(s) used does not further the purpose of establishing thorough research.

With respect to establishing the writer’s credibility, disclosing the use of generative AI might very well have the opposite effect, considering recent, highly publicized follies involving generative AI and legal filings.[vi] And this negative effect is likely to be exacerbated by disclosure requirements rooted in skepticism.

Mr. Oldfield included as his final endnote that he “used Word’s Editor in preparing th[e] post.”  I assume the inclusion was done in jest to emphasize the absurdity and breadth of some of the existing disclosure requirements. But it raises an interesting point: by requiring lawyers to disclose their use of AI, are courts discouraging lawyers from using a potentially valuable tool?

In the small group I joined for our legal writing discussion on whether and how to cite generative AI-created content, we concluded that asking students to cite their use of AI on submissions would be futile: it would either discourage them from using AI or encourage dishonesty about whether they did. Requiring attorney disclosure feels the same.

And, if the true goal of requiring disclosure is to ensure ethical usage of AI, it is likely to have the opposite effect. Discouraging lawyers from using AI could cause violations of Rule 1.1, requiring lawyers to “provide competent representation to a client” through “legal knowledge, skill, thoroughness and preparation.”  Comment 8 expressly directs that the duty of competence requires lawyers to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”[vii] Discouraged usage might also result in violations of Rule 1.5, requiring only reasonable fees,[viii] if a lawyer avoids using generative AI where the AI could complete the same task in less time, resulting in a higher-than-necessary fee for a client.[ix] And, to the extent required disclosure imposes a stigma on lawyers using generative AI, disclosure requirements could encourage dishonesty about usage, causing violations of Rule 3.3’s duty of candor to the tribunal.

While generative AI has not yet reached a point where it can replace lawyers, it is certainly capable of being a valuable time-saving tool that benefits both lawyers and clients. Lawyers should be encouraged to learn about and understand it, rather than avoid it. And, to that end, disclosure requirements should be abandoned.


[i] ABA Model Rules of Professional Conduct, available at: https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_5_1_responsibilities_of_a_partner_or_supervisory_lawyer/

[ii] https://www.paed.uscourts.gov/documents/standord/Standing%20Order%20Re%20Artificial%20Intelligence%206.6.pdf

[iii] https://www.txnd.uscourts.gov/judge/judge-brantley-starr

[iv] “By presenting to the court a pleading, written motion, or other paper—whether by signing, filing, submitting, or later advocating it—an attorney . . . certifies that to the best of the person's knowledge, information, and belief, formed after an inquiry reasonable under the circumstances:

(1) it is not being presented for any improper purpose, such as to harass, cause unnecessary delay, or needlessly increase the cost of litigation;

(2) the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law;

(3) the factual contentions have evidentiary support or, if specifically so identified, will likely have evidentiary support after a reasonable opportunity for further investigation or discovery; and

(4) the denials of factual contentions are warranted on the evidence or, if specifically so identified, are reasonably based on belief or a lack of information.”

Fed. R. Civ. P. 11(b).

[v] Charles Ross, Does ChatGPT Give the Same Answer to Everyone?, Medium.com (March 20, 2023), available at:  https://medium.com/@charles-ross/does-chatgpt-give-the-same-answer-to-everyone-521e3e9355a4

[vi] See, e.g., Benjamin Weiser, Here’s What Happens When Your Lawyer Uses ChatGPT, New York Times (May 27, 2023), available at https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html.

[vii] ABA Model Rules of Professional Conduct, available at: https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_1_competence/comment_on_rule_1_1/

[viii] ABA Model Rules of Professional Conduct, available at:  https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_5_fees/

[ix] Brad Hise and Jenny Dao, Ethical Considerations in the Use of A.I., Reuters.com (Oct. 2, 2023), available at: https://www.reuters.com/legal/legalindustry/ethical-considerations-use-ai-2023-10-02/

https://lawprofessors.typepad.com/appellate_advocacy/2023/11/stigmatizing-ai-usage.html
