Thursday, March 2, 2023
The Rhetoric of ChatGPT: What ChatGPT Had to Say About Its Connection to Rhetoric and What We Can Learn from That Response
Thursday’s Rhaw Bar: A Little Bite of All Things Rhetoric and Law—exploring ideas, theories, strategies, techniques, and critiques at the intersection of rhetoric and legal communication.
I’ve been thinking about what ChatGPT might mean for the practice of appellate lawyers and their ability to be persuasive. So, I typed this as a prompt for ChatGPT:
I am writing a blog post for an audience of appellate lawyers. What should I tell them about the relationship between rhetoric and ChatGPT?
Here’s what ChatGPT replied:
Rhetoric is the art of using language effectively to persuade or influence an audience. In many ways, ChatGPT can be seen as an extension of this art, as it uses natural language processing to generate persuasive and compelling responses to a wide range of prompts.
One of the key advantages of using ChatGPT for rhetorical purposes is its ability to quickly generate high-quality content in a wide range of styles and formats. This can be particularly useful for appellate lawyers, who need to produce persuasive legal arguments and compelling briefs that are tailored to the specific needs of their clients.
At the same time, it's important to remember that ChatGPT is a machine learning model, and as such, it has limitations. While it can generate persuasive responses based on patterns it has learned from large volumes of text data, it cannot understand the nuances of human emotion, context, or intent in the same way that a human being can.
This means that appellate lawyers who use ChatGPT for rhetorical purposes must be careful to ensure that the responses generated by the model are appropriate and effective for their intended audience. They should also be aware that the use of ChatGPT in legal writing is still a relatively new and evolving area, and that best practices are still being developed.
Overall, ChatGPT can be a valuable tool for appellate lawyers looking to enhance their rhetorical skills and produce high-quality legal content. By using the model in conjunction with their own expertise and insights, lawyers can create persuasive and compelling arguments that effectively convey their clients' positions.
Analyzing ChatGPT’s Response
Not a bad response, generally speaking. But what has ChatGPT revealed about itself? Here’s a brief analysis of some things we can learn about ChatGPT from this response.
ChatGPT Might Assume Too Much About the Reader
ChatGPT starts with a correct definition of rhetoric and then asserts that ChatGPT is an extension of rhetoric because it uses natural language processing (NLP). But ChatGPT assumes the reader knows what NLP is. (And later it assumes that the reader knows what a “language processing model” is.) If you don’t know (I didn’t) what NLP is, here’s a quick definition sourced from IBM:
NLP combines computational linguistics—rule-based modeling of human language—with statistical, machine learning, and deep learning models. Together, these technologies enable computers to process human language in the form of text or voice data and to ‘understand’ its full meaning, complete with the speaker or writer’s intent and sentiment.
So, one thing we learn about ChatGPT from this passage is that it can wrongly assume a reader’s knowledge, something lawyers know better than to do. While we might think that better engineering of the prompt would solve the problem, recall that I told ChatGPT the audience for whom I was writing. That specific detail in the prompt did not seem to help ChatGPT recognize that many members of the intended audience would have limited knowledge of the computer science terms associated with it.
ChatGPT Thinks of Itself as “Art”
I’m intrigued that ChatGPT chose the term “art” when describing itself as an “extension” of rhetoric. This characterization seems counterintuitive because ChatGPT is the product of computer, data, and language science. Yet, ChatGPT suggests that because NLP is a human-like effort to understand the meaning of language (as the definition above reflects), ChatGPT sees itself as an “artistic” endeavor, at least regarding persuasion. And, at least in some cases, ChatGPT has proven that to be true: there’s something that feels like artistry in ChatGPT’s responses to prompts. (See, e.g., this poem by ChatGPT and the debate around whether ChatGPT’s poetry is “art.”)
If ChatGPT sees itself as art rather than science, then we should take it at its word and think of it as we do art and artists. That is, like an artist, ChatGPT brings a particular perspective to its responses. As such, ChatGPT’s response is a form of art—and only one of many responses available. Just as painters and sculptors can interpret their objects of study differently, ChatGPT’s interpretation of data in a particular response is one of many. Thinking of this another way, we might view ChatGPT’s responses to prompts as opinions, subject to issues of perspective, interpretation, accuracy, bias, and incompleteness.
ChatGPT Speaks Like It Wants to Sell You Something, but Should You Buy It?
Although I asked ChatGPT to describe a relationship between two things, ChatGPT’s response feels more like a sales pitch. It makes grand, authoritative-sounding claims about ChatGPT’s value to the user. It is confident that it can help lawyers persuade and compel by providing “high-quality content in a wide range of styles and formats.” ChatGPT reminds us that it has “key advantages,” that it can enhance the lawyer’s persuasive skill, and that it can generate high-quality content tailored to clients’ needs.
Why might ChatGPT sound so confident? Maybe ChatGPT’s confidence comes from the way it generates responses. ChatGPT generates responses by examining a vast amount of data, looking for patterns and relationships between words in the sample, and then predicting what response is appropriate for the context. Perhaps because I told it I was writing a blog post, ChatGPT drew on the patterns of blogs; and because many blogs are written in a tone designed to sell something, ChatGPT may have predicted that I expected that tone and adopted it in the response.
Regardless of the reason, this extra-confident tone should give us a reason to scrutinize ChatGPT’s claims more carefully. If we read closely, we see that the response doesn’t really explain what “high-quality content” means. On one hand, commentators point out that ChatGPT cannot tell the difference between true and false information and thus can deliver inaccurate responses. This aspect of ChatGPT, then, is not “high-quality.” Moreover, ChatGPT’s ability to sound very confident in its responses can lure readers into believing the responses’ truth. For lawyers, inaccurate information expressed in a confident tone is a malpractice minefield. For example, one would not want to take at face value what ChatGPT says about some point of law. (It’s probably useful to remind those in your office who could use ChatGPT of this important point.)
On the other hand, ChatGPT arguably can offer something of high(er) quality in that it can produce a “wide range of styles and formats.” You can prompt ChatGPT to write in a particular style—even your own! You can ask it to write content in the style and organization of a brief or a motion, and it will comply. You can ask it to assist in correcting your grammar. It’s pretty good at offering examples of different genres, styles, and tones of writing. This is because it is good at providing an approximation of whatever you’ve asked it to create based upon the patterns it recognizes in its dataset.
But here’s the catch, I think, with using ChatGPT to generate “style and format” content: the user must already understand the style and structure he or she seeks in order to evaluate the quality of ChatGPT’s response. In other words, if you don’t know already what a case caption for, say, the United States Court of Appeals for the Ninth Circuit looks like, how can you be sure ChatGPT will give you the right format for that caption? Likewise, if you aren’t sure what a good introduction to an appellate brief looks like, how can you know if ChatGPT has given you a good one in its response? ChatGPT would be good at giving you examples, but it can’t really tell you which one is “best” in your circumstances.
ChatGPT Knows Its Rhetorical Limits
Even though ChatGPT is confident in its capabilities, it admits that it has limits. Specifically, ChatGPT responds that it cannot understand emotion, context, or intent like a human does when crafting responses to persuade. This is a pretty significant admission. These limits are relevant to ChatGPT’s persuasive abilities; understanding emotion, context, and intent are elements of human communication that are central to rhetorical effectiveness. So, even though ChatGPT sees itself as an “extension” of rhetoric, it is a fairly limited extension.
ChatGPT reminds us that one of its limitations is that it generates persuasion by finding patterns in large amounts of existing data. That raises the problem of “garbage in, garbage out.” In other words, the quality of ChatGPT’s responses is only as good as the data it can access. If the dataset has both helpful and unhelpful patterns that inform ChatGPT’s responses, how does one know if what ChatGPT generated is something worth relying on? The only way to know is to already have the knowledge essential to evaluating the response.
In the End, ChatGPT is Deferential about Its Writing
ChatGPT says it is the appellate lawyer’s job to ensure what ChatGPT writes is effective for the intended audience. It talks about itself as a “model” that can help lawyers be persuasive and reminds lawyers that the model should be used in conjunction with lawyers’ expertise and insights. In other words, ChatGPT does not take the position that it replaces humans in the writing process—particularly where audience analysis and professional expertise are involved. Here, ChatGPT makes a significant rhetorical move—it reminds you that the technology is only as good as its user and, even after all of its confidence above, disclaims responsibility for the usefulness of its output. Fascinating.
ChatGPT notes that it is new enough that “best practices” for using ChatGPT in legal writing are still being developed. I take this as a challenge! Based upon ChatGPT’s rhetoric (i.e., the way it uses language to talk about itself), I’ll propose some best practices for you:
- Do not be misled by ChatGPT’s confidence. ChatGPT sounds confident and authoritative in its responses, but users should be skeptical about the legitimacy of that confidence.
- Rely on your own expertise, not ChatGPT’s. Appellate lawyers (and staff working for them) need an existing knowledge base to evaluate ChatGPT’s responses. In particular, evaluating the appropriateness of a response for an audience is essential. Outsourcing judgment to ChatGPT, like any outsourcing of judgment in legal practice, is dangerous. At the very least, for appellate lawyers to use ChatGPT effectively, they will need to become familiar with prompt engineering techniques that make ChatGPT more expert in the field and thus arguably improve its responses.
- Treat ChatGPT as opinionated, not authoritative. Although ChatGPT is the product of science, it should be seen as an artistic process, generating content that is more like opinion than unassailable fact. Treat ChatGPT as a conversation partner, muse, or collaborator that can help you “play” with ideas and text. (Check out Ian Bogost’s article insightfully concluding that “ChatGPT . . . doesn’t understand or even compose text. It offers a way to probe text, to play with text, to mold and shape an infinity of prose across a huge variety of domains . . . into structures in which further questions can be asked and, on occasion, answered.”)
- Remember that ChatGPT relies on pattern recognition, a limited mode of persuasion. ChatGPT does not have all the rhetorical capabilities of humans but can recognize patterns in data that might have persuasive impact. Even if that pattern recognition is persuasive, users must remember to look at persuasion from all aspects of the human experience, not just in the ways that ChatGPT looks at it.
- Remember that ChatGPT does not guarantee competent writing; you do. At bottom, ChatGPT does not claim it is superior to you in writing ability but rather remains deferential to your experience and expertise. It disclaims its ability to write effectively for your “local” audience. No one—not even a computer—knows your clients, your arguments, and your audience better than you do. Rely on your own judgment about competent writing.
What are your thoughts?
Kirsten Davis teaches at Stetson University College of Law and in the Tampa Bay region of Florida. She is the Co-Director of the Institute for the Advancement of Legal Communication. The Institute’s mission is to study legal communication issues and provide programming and training that improves legal communication skills. Among other things she’s up to right now, she’s currently working on a writing handbook written specifically for trial lawyers. The views she expresses here are solely her own and not intended to be legal advice. You can reach Dr. Davis at [email protected].