Tuesday, June 11, 2024
An Experiment in Using ChatGPT-4 to Draft ASP Practice Problems
These days, many (most?) ASPs use course-based support measures. In the "contextualized model," the doctrinal instructor teaches the doctrinal course, while the ASP instructor teaches a skills/ASP course built around problems from the doctrinal course. In the "embedded model," the ASP instructor teaches the doctrinal course and embeds ASP training into that course. In both models, practice problems, model answers, and feedback are key elements of training students on the skills they must build.
A perennial challenge is sourcing those problems. While some doctrinal instructors may write or edit such materials for the contextualized ASP course, time constraints often make this impossible. And drafting problems in the embedded model is no easier. As a result, those of us in this field end up writing practice problems ourselves and spending a great deal of time doing so.
I then read a post on the Faculty Lounge by Rick Bales of Ohio Northern called "Using AI to Help Flip the Law School Classroom." Building on Bridget Crawford's recent, helpful series of posts, Rick described specific steps for using ChatGPT-4 to draft effective practice materials.
Based on his recommendations, I decided to experiment. We are moving one of our ASP courses to the embedded model I described above. The new course in the program will be Criminal Procedure (Investigations), which I have not taught since the dawn of time (or at least back when Mapp v. Ohio was still a thing). Suffice it to say that I have a lot of drafting to do.
But using ChatGPT-4 to accomplish this seems promising. I entered a prompt (... is that what the kids call it these days?) similar to Rick's but using a Crim Pro fact pattern I have been dying to use. (Yes, it is Chief Quimby conducting a search of Moe's Tavern for illegal substances in the "Flaming Moe" cocktail and later interrogating Moe in a Rhode Island v. Innis sort of way.) I specified that I wanted ChatGPT to create an essay question with a model answer, five MBE-like multiple-choice questions, and thorough explanations of the correct answers and the analysis behind each MCQ.
In thirty seconds, ChatGPT created materials that would have taken me hours to write. The questions were strong, and the essay model answer and MCQ explanations were good but imperfect. I will edit a few substantive points and revise the materials to use gender-neutral language, but the answers' use of CREAC/IRAC was quite strong, the rule sections were good, and the analysis was solid.
https://lawprofessors.typepad.com/academic_support/2024/06/an-experiment-in-using-chatgpt-4-to-draft-practice-problems.html