Thursday, April 6, 2023
Prompt Engineering for ChatGPT Can Improve Your Legal Writing—Even if You Never Use ChatGPT
Thursday’s Rhaw Bar: A Little Bite of All Things Rhetoric and Law—exploring ideas, theories, strategies, techniques, and critiques at the intersection of rhetoric and legal communication.
Generative artificial intelligence (AI), like ChatGPT and Bing's AI-powered chat, is motivating a fundamental reconsideration of the ethics and practicalities of how humans produce good writing. In the legal writing context, there's plenty of discussion about whether and how legal writers should use generative AI in their writing practice. I'm not going to delve into the questions about whether and how best to use generative AI like ChatGPT in legal writing (I've already done a little bit of that here); instead, I want to look at a skill necessary for working effectively with generative AI that I think can make you a better writer in general: prompt engineering, or prompt design. Understanding prompt engineering and applying it to your own writing can help you write better.
What is Prompt Engineering?
Prompt engineering is the process of choosing specific words and phrases, and deciding how to structure and organize them, to write instructions that improve generative AI's ability to provide a response useful to the human prompter. The emerging literature around generative AI suggests that prompt engineering will be a skill writers need in order to use large language models like ChatGPT competently.
Prompting ChatGPT is done in natural language, in everyday conversational English (for English speakers), because ChatGPT is trained to understand conversation and to respond in a conversational manner. So no specialized programming knowledge is needed to prompt ChatGPT; instead, a human prompter needs two things: (1) an understanding of the rhetorical situation to which ChatGPT is being asked to respond, and (2) an ability to communicate that rhetorical situation precisely and clearly to ChatGPT in everyday language.
Prompt Engineering and the Rhetorical Situation
Prompt engineering, at bottom, represents a human ability to carefully describe rhetorical situations that call for some kind of communicative response. In other words, prompts give ChatGPT a way to get to know the rhetorical situation confronting the human prompter, which includes the needs of the audience, the limitations and opportunities presented by the context, and the purpose of the writing in relation to the issues confronted by the writer. Like all other writing, writing generated by ChatGPT is more effective if it is written specifically for the rhetorical situation the human writer envisions; that is, it is written for a specific audience, with a specific purpose, and in an appropriate tone. The better engineered a prompt, the better chance there is of generating useful text appropriate for the rhetorical situation.
An Example of the Basic Techniques of Prompt Engineering
Ultimately, prompt engineering is a set of techniques that encourage ChatGPT to generate more useful responses. There’s quite a bit to know about prompt engineering, but reviewing the basic concept will help you understand how prompt engineering techniques can improve your own writing regardless of whether generative AI is part of the process. Let’s walk through an example of how we might prompt ChatGPT to generate content about an appellate standard of review.
“Write a paragraph that sets out the de novo standard of review.”
This is a basic prompt in the form of an instruction, and I used it to get this result from ChatGPT:
The de novo standard of review is a legal standard applied by courts when reviewing decisions made by lower courts or administrative agencies. Under this standard, the reviewing court gives no deference to the decision made by the lower court or agency, and instead, conducts a completely fresh review of the evidence and legal arguments presented. This means that the reviewing court does not consider the lower court or agency's decision as binding or controlling and has the power to make an entirely new decision based on the evidence and arguments presented to it. The de novo standard of review is typically used in cases where there are no factual findings to be reviewed, or when the issue being reviewed is a question of law.
Not too bad, but through prompt engineering, I might generate a better result. First, I can give ChatGPT context about the rhetorical situation, specifically describing the audience for whom it is writing:
“You are writing an appellate brief for the United States Court of Appeals for the 11th Circuit. Appellate court judges are your audience.”
I can also describe to ChatGPT the specific output I expect:
“Write a paragraph that sets out the de novo standard for the standard of review section of the appellate brief.”
So now, my more specific, detailed, and rhetorically aware prompt looks like this:
“You are writing an appellate brief for the United States Court of Appeals for the 11th Circuit. Appellate court judges are your audience. Write a paragraph that sets out the de novo standard for the standard of review section of the appellate brief.”
In this prompt, I've added more information about the rhetorical situation for ChatGPT to draw upon as it attempts to predict the best response to the prompt. (Remember, ChatGPT and other generative AI predict what words are appropriate in response to a prompt; they do not think, analyze, or evaluate. They predict using mathematical models of language.)
This was the response to my new prompt:
In this appellate brief, we are appealing a decision made by a lower court to the United States Court of Appeals for the 11th Circuit. As appellate court judges will be our audience, it is important to explain the standard of review that will be applied in this case. Under the de novo standard of review, the appellate court will review the lower court's decision without giving any deference to the lower court's findings or legal conclusions. The appellate court will instead conduct a fresh review of the record and make its own determination as to the appropriate legal outcome. This standard of review is typically applied in cases where the lower court's decision involved purely legal issues or when there were no factual findings made by the lower court. In this case, we argue that the de novo standard of review should be applied to our appeal.
Two notable things happened here. First, expert legal writers will recognize some stylistic junk in the response: a clumsy first sentence, an explicit nod to judges as the audience, and an express statement that it is important to explain the standard of review are all awkward choices for an appellate brief. But second, and more favorably, experts will recognize that the response is more rhetorically aware: it is less academic in tone and is written from the point of view of the appellant. While the voice of the paragraph needs more attention in revision, ChatGPT has improved the rhetorical sensibility of the response.
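For readers who like to tinker, the same comparison can be run outside the chat window. What follows is a minimal, illustrative sketch only: it assumes the openai Python package (version 1 or later), an API key set in your environment, and an example model name, and it simply sends the bare prompt and the engineered prompt to the model side by side.

```python
# Illustrative sketch only: send the bare prompt and the engineered prompt
# to a chat model and compare the two responses.
# Assumes the openai Python package (v1+) and an API key in the environment;
# the model name is just an example.
from openai import OpenAI

client = OpenAI()

bare_prompt = "Write a paragraph that sets out the de novo standard of review."

engineered_prompt = (
    "You are writing an appellate brief for the United States Court of "
    "Appeals for the 11th Circuit. Appellate court judges are your audience. "
    "Write a paragraph that sets out the de novo standard for the standard "
    "of review section of the appellate brief."
)

for prompt in (bare_prompt, engineered_prompt):
    response = client.chat.completions.create(
        model="gpt-4",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```

Notice that the engineered prompt is still everyday language; the "engineering" is in the rhetorical detail, not in any code.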
But I think I can get ChatGPT to do better.
Rather than having ChatGPT start over with a new paragraph, I’m going to give it a chance to revise the existing paragraph. I will ask ChatGPT to revise what it wrote, and I will give it more detailed information about the rhetorical situation (in other words, I will add more input data for it to work with in generating a response). I’ll tell ChatGPT that I am dealing with an appeal from summary judgment and that I’d like some case citations:
“Revise this paragraph to focus on the standard of de novo review for summary judgment motions. Add citations to cases from the United States Court of Appeals for the Eleventh Circuit that support the assertions in the paragraph.”
Here’s the response:
In the United States Court of Appeals for the Eleventh Circuit, the standard of review for summary judgment motions is de novo. This means that the appellate court will review the lower court's decision on summary judgment without any deference to the lower court's findings or legal conclusions. The appellate court will conduct a fresh review of the record to determine whether there are any genuine issues of material fact and whether the moving party is entitled to judgment as a matter of law. As stated in Gil v. Winn-Dixie Stores, Inc., 257 F.3d 1258 (11th Cir. 2001), "we review a grant of summary judgment de novo, applying the same legal standards as the district court and viewing the evidence in the light most favorable to the nonmoving party." Similarly, in Rice-Lamar v. City of Fort Lauderdale, 232 F.3d 836 (11th Cir. 2000), the Eleventh Circuit held that "we review the grant of summary judgment de novo, drawing all inferences in favor of the nonmoving party." Therefore, we ask the appellate court to apply the de novo standard of review in this case to ensure a fair and impartial review of the lower court's decision on summary judgment.
A good appellate writer will recognize that there’s still some stylistic awkwardness in this paragraph that can be eliminated with some revision. For example, the paragraph can be made more concise by eliminating the quoted material from the case law and instead using the citations provided (assuming they are accurate—accuracy cannot be assumed with ChatGPT) to support the first three sentences of the paragraph. But what is good about the paragraph is that its detail, point-of-view, and tone are more tailored to the rhetorical situation confronting the appellate writer, and the paragraph is responsive to the human prompter’s details. At minimum, there is more richness in the content that the human drafter can use to write an effective statement of the standard of review.
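Mechanically, this kind of revision works because the chat keeps the earlier exchange in view as context for the new instruction. Continuing the illustrative sketch above (same assumptions about the openai package and the example model name), the revision prompt is simply appended to the running conversation rather than sent on its own:

```python
# Illustrative sketch of the revision step, continuing the sketch above:
# the earlier prompt and ChatGPT's draft stay in the message history, so the
# new instruction revises that draft instead of starting a fresh paragraph.
first_draft = "..."  # the paragraph the engineered prompt produced

messages = [
    {"role": "user", "content": engineered_prompt},
    {"role": "assistant", "content": first_draft},
    {
        "role": "user",
        "content": (
            "Revise this paragraph to focus on the standard of de novo review "
            "for summary judgment motions. Add citations to cases from the "
            "United States Court of Appeals for the Eleventh Circuit that "
            "support the assertions in the paragraph."
        ),
    },
]

revision = client.chat.completions.create(
    model="gpt-4",  # example model name
    messages=messages,
)
print(revision.choices[0].message.content)
```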
In sum, three ideas about writing emerge from our prompt engineering example:
- Good writing depends on the writer’s ability to carefully interrogate and explicitly articulate the features of the rhetorical situation.
- Being able to identify and articulate the specific writing task to be performed, the context for that task, the details of that context, and the output to be generated improves the chances that the writing will respond effectively to the rhetorical situation.
- Prompt engineering can improve the process of making targeted, thoughtful, and specific revisions.
Using Prompt Engineering in Your Own Writing (and in Mentoring Others’ Writing)
Even if a legal writer never uses a tool like ChatGPT to generate text, using the techniques of prompt engineering in the writing process can help that writer write better.
It’s likely true that experts in appellate writing subconsciously generate prompts like those we’ve examined here, and those subconscious prompts guide their writing. But remember that ChatGPT got better at its task when it received explicitly stated, detailed prompts. What might it look like to do the same thing in your own writing, to use prompt engineering as a conscious step? And could prompt engineering help expert writers mentor inexperienced ones?
Here’s an example of how prompt engineering might help a more experienced writer mentor a more novice one.
Imagine this case. A school district disciplines a high school student for refusing to participate in a school assembly honoring Veterans Day. The student asserts that her refusal was politically motivated and that the discipline violates her free speech rights. Both the school district and the student move for summary judgment before the trial court. The court grants summary judgment for the school district and denies it for the student. The student's lawyers, one senior appellate lawyer and one junior one, are working together on the appeal, arguing that, as a matter of law, the trial court decided the cross-motions incorrectly.
The junior lawyer has written the first draft of the brief, but the senior lawyer has found it lacking in persuasiveness, particularly because the junior lawyer has not been effective in supporting her argument with factually analogous cases that have outcomes favorable to the student’s position.
In guiding the junior lawyer’s revisions (whether in conversation or in writing), the senior lawyer could use prompt engineering techniques. The senior lawyer could give the junior a basic instruction like “Improve the quality of the analogies in the arguments.” But engineering that prompt could yield better results. First, a better instruction would clarify the task: “Add to the argument analogies to cases that are factually similar and support the outcome we seek.”
Then the prompt would include context that helps the junior lawyer see the rhetorical situation more clearly and from the point of view of the more experienced lawyer. For example, the senior lawyer could add:
“The judges will find analogies to cases persuasive. Cases where an appellate court has reversed summary judgment on similar facts are good for analogies. Ideally, you want to draw the court’s attention to cases where a student was silent or absent from a required school activity and asserted a political reason and the court thought the student was entitled to summary judgment.”
Even further, a good prompt from the senior lawyer could include the output expectation: “Revise your argument paragraphs to add comparisons to at least two cases (if you can find them) that are analogous on their facts and favorable on their outcome. Be specific about the analogies—use details to show how the cases are similar to our case.”
So, what's going on here? We've engineered a prompt—from senior lawyer to junior one—that is more likely to yield what the senior lawyer knows will be a more effective argument in the appellate brief. It includes detailed instructions, input data about useful analogies, audience information, and clear output instructions.
While this example reflects communicating a prompt between two people, you can be your own audience for a prompt. Before beginning a project, you might write a prompt that will guide the drafting. And when you are struggling with a particular part of a document, you might stop and ask, "What is my prompt for writing this? What instructions do I give myself here? What is the context, the audience, the purpose? What is the output I'm seeking?" You might even take a moment and write that prompt down to focus your efforts. By using the techniques of prompt engineering, you can slow down the process and explicitly analyze the rhetorical situation, which can improve the output.
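If it helps to make those pieces concrete, here is one last illustrative sketch. The labels are my own informal shorthand for the components discussed in this post (context, instruction, input data, output expectation), not an official taxonomy, and the same checklist works just as well on a legal pad as in code.

```python
# Illustrative sketch only: an engineered prompt assembled from the components
# discussed in this post. The field names are informal labels, not a standard.
from dataclasses import dataclass


@dataclass
class EngineeredPrompt:
    context: str      # the rhetorical situation: audience, document, posture
    instruction: str  # the specific task to be performed
    input_data: str   # details the writer should draw on
    output_spec: str  # what the finished text should look like

    def render(self) -> str:
        return "\n".join(
            [self.context, self.instruction, self.input_data, self.output_spec]
        )


# The senior lawyer's guidance to the junior lawyer, expressed as components.
prompt = EngineeredPrompt(
    context="The judges will find analogies to cases persuasive.",
    instruction=(
        "Add to the argument analogies to cases that are factually similar "
        "and support the outcome we seek."
    ),
    input_data=(
        "Cases where an appellate court has reversed summary judgment on "
        "similar facts are good for analogies."
    ),
    output_spec=(
        "Revise your argument paragraphs to add comparisons to at least two "
        "analogous, favorable cases, using specific factual details."
    ),
)

print(prompt.render())
```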
The Recap
Prompt engineering is a useful technique for working with generative AI because it can improve the quality of the responses generated. But prompt engineering can also be a useful technique for legal writers more generally because prompting forces writers to carefully articulate the demands of the rhetorical situation and to define precisely what response to that situation is appropriate. The prompt engineering method of creating precise writing instructions, contextualizing those instructions with detail about the rhetorical situation, and describing the desired output can help a writer generate text, revise existing text, or give good feedback to other writers. Prompt engineering can help with writing and revision at all levels, from drafting an entire document to revising individual sentences.
Kirsten Davis teaches at Stetson University College of Law in the Tampa Bay region of Florida. She is the Co-Director of the Institute for the Advancement of Legal Communication. The Institute's mission is to study legal communication issues and provide programming and training that improves legal communication skills. Among other things, she's currently working on a writing handbook written specifically for trial lawyers. The views she expresses here are solely her own and not intended to be legal advice. You can reach Dr. Davis at [email protected].
https://lawprofessors.typepad.com/appellate_advocacy/2023/04/prompt-engineering-for-chatgpt-can-improve-your-legal-writingeven-if-you-never-use-chatgpt.html