Monday, February 27, 2023
ChatGPT, Assessments, and Cheating: Oh my!
Last week I gave a quiz to my undergraduates: 14 multiple choice questions plus a short answer question. It was an open note, open book quiz. I had an honor affirmation at the start of it for those using their laptops for e-textbooks and notes. And there was still cheating. And I am not surprised that there was, but I was surprised at how it happened.
ChatGPT (and other AI bots) is almost all we are talking about as a faculty these days. A student could very easily use one to answer a question on an exam, or to organize and write a paper, and it is extremely difficult to police. Additionally, at my school, we have recently begun slowly switching our platform from Blackboard to Canvas, and our pilot version of Canvas does not have any anti-plagiarism software attached to it yet. (I miss SafeAssign.) I have asked the bot to do all my written assignments and keep its responses handy for comparisons (and will add them to my platform's institutional database when we have the software). But here is the real conundrum for law schools: is there any way to assess students that cannot be hacked at this point, other than an old school closed book exam? Not every class can use the OG exam format for assessment.
There are, of course, two sides to the debate here. On the one hand, resourcefulness is a skill that we want students to have. We want them to ask the right questions in order to get answers that solve problems. Using a bot certainly can improve those skills. Lawyers do not actually go into any form of legal employment where they will be given three hours, a water bottle, and a proctor in order to resolve a client's case. We are not sure that the NextGen bar exam will ask anyone to memorize vast swaths of law anymore either. So, if there is a resource that can be helpful (and it seems at the moment to be free-ish), why not train students to use it? If we are in front of the use (and behind it as well), then we can frame the appropriateness of the use and have more control over it.
On the other hand, is this what we want our profession to be? Should we aim to be a group of educated and licensed typists? How can we assess learning about the law separately from learning how to use the legal resources available? In some ways, I suppose we all fear being replaced by machines. It is a common science fiction trope. A computer, however, no matter how sophisticated, may never be able to see the nuances of the human condition that a well-trained attorney can. A bot would probably not make a creative argument for a change in the law, since it is limited to the existing law and interpretations of it. Can we teach a bot to think like a lawyer? Probably. Can we teach it to have an off the record conversation with opposing counsel that hammers out a better deal because of something that cannot be said in court? Doubtful. There are unique spaces for human attorneys, even in this brave new world.
I suppose we could always ask ourselves how we would feel if our doctors typed our symptoms into a computer and then used what was spit back as a basis for treating us. I'm not sure I would still need a doctor to do that for me. I could also game the system by not telling it about symptoms where the answer it gives might frighten me. I could, in short, lie to the bot and get the answer I wanted if I asked it often enough and changed the variables I shared. My doctor sees through my bullshit, and that is why I trust her. She knows that I am more than the sum of my parts. Perhaps this gestalt is why human lawyers will always be superior.
In all honesty, I am not sure what the answer is here. There has to be a balance, and there also has to be some nimbleness on the part of law schools in finding it, and soon. In the meantime, I keep telling students that I am assessing their knowledge of what we discuss in class rather than their ability to look it up. And that is what I told the student who copied their short answer on the quiz (verbatim) from the textbook. I wasn't expecting outright handwritten plagiarism. I guess we still need to be vigilant at all levels of technology. I told the student that I enjoyed pg. 104 of our textbook as much as they did (which is why it was assigned), but just being able to identify these words as correct is not the same as understanding why they are correct. Parrots may make lovely pets, but they do not make good lawyers.
(Liz Stillman)
https://lawprofessors.typepad.com/academic_support/2023/02/chatgpt-assessments-and-cheating-oh-my.html