Saturday, May 22, 2010
Most of us remember the days when law school exams came in one shape and size: 100% of the grade; closed book, one day/time in one classroom, handwritten in blue books, and all essay.
Today, however, the shapes and sizes vary greatly.
- More exams are variations on open book: code/rule book only; own outline only; one sheet of paper; everything but a live human being.
- More exams are take home or variably scheduled: take home with several days to complete; take home with a set number of hours to complete; option to take the exam for a set number of hours on one of several days; self-scheduled exams; time and location accommodations for disabled students.
- Formats run the gamut: multiple choice; essay; short answer; true-false; court or practice documents; some mixture of these.
- More professors now have a percentage of the grade for participation, presentations, papers, exercises, or other assignments.
- And the blue book has been supplanted in part or entirely by the use of computers.
Are the changes in the law school exam positive or negative? It depends.
Open book: Proponents comment that open book exams more closely resemble what practice will be like. Attorneys will have their sources or notes in front of them as they write legal memoranda, consider strategies for client cases, and address juries or judges. Many argue that it is sensible to have code or rule books available rather than requiring students to memorize lengthy sections. Some also point out that a lawyer who can find the law is far more valuable than a rule-spouting robot.
On the negative side, however, some express concern that open book exams encourage students to gloss over the law rather than learn it in any depth. If students rely only on working memory instead of long-term memory, they will have no recollection of the basic law later when they reach bar review and practice. Others are concerned that open book exams do not really assess learning unless the professor has carefully designed application questions rather than pure information questions.
Open book exams pose traps for students, especially unsuspecting 1Ls. Students recount stories of not studying as thoroughly because they could "look it up during the exam" and then finding there was not enough time to do so. They also talk about time management problems because they felt compelled to look up everything to be certain, even though they knew the answers. Other students remark on wasting inordinate amounts of time before the exam tabbing books for what turned out to be non-essentials.
Variable schedules: Proponents argue that more flexible scheduling allows the professor to test students in formats different from the one-place/one-time exam with a strict time limit. For example, the professor might ask for a memo, brief, court document, or client letter as the answer format. In addition, proponents argue that answers are better analyzed, more organized, and better written when multiple-day take-home scheduling is used.
Certainly allowing disabled students to take exams with extra time as an accommodation is an important improvement in exam procedures - as is letting them have quiet rooms, readers, or scribes. Logistics need to be carefully worked out, of course.
Letting students choose which of several designated days to take an exam at the law school, with a set time limit on the chosen day, also seems sensible. Having students pick up the exam and return it to a proctor under time-stamped or clocked conditions allows for fairness with flexibility. This option eases scheduling difficulties for the upper-division student who would otherwise have more exams in a series of days than a classmate. It also lets a student decide when she feels ready to take the exam.
The time-limited take-home exam (for example, complete within 4 hours after the exam is opened) is manageable. The greatest risk here is that the student will be tempted to break the honor code and actually spend longer than allowed.
Personally, I worry about take-home exams that run over multiple days. First, they often fail to account for accommodations for disabled students; a take-home exam given over 4 days means that a student with double time has to plan for 8 days of work on it. Second, professors often give take-home exams that stretch far beyond the designated exam day for the course, thus encroaching on the intended study days in the schedule for the next exam (especially where 1L students are concerned). Third, students face the reality that many classmates will use the maximum possible hours on the exam, and they fear they must do the same to compete. Fourth, professors who tell students that they only need 4 hours to complete the exam over the 4 days are usually woefully incorrect about how long the exam will take the average student. If the professor truly thinks it is a 4-hour exam, then she should limit the time for taking it or give it at the regularly scheduled time.
Self-scheduled exams appeal to students because they can decide autonomously what day and time to take each exam for each course. I have experience with this system at a small liberal arts college. However, it can become a logistical nightmare as the student body and course enrollments increase. And it depends on a strong honor code system to work.
Format changes: No doubt some flexibility away from all fact-pattern essay exams is a plus because different course material may lend itself to different question formats. When I give exams, I mix formats for different kinds of assessment.
In jurisdictions where the MPRE is required, professional responsibility multiple-choice questions may make perfect sense. Some faculty will argue that multiple-choice should be used for MBE subjects as well. But what about the state bar essay questions? What about the performance exams given in various states? Do they require us to rethink our testing formats as well? Where is the balance between "testing to the bar exam" and assessment for law students?
I think we need to be careful to base these decisions on sound assessment reasons rather than on devotion to the bar, hunches, or our own grading convenience. Here are some thoughts:
- Writing good multiple-choice questions is not easy. Training may be necessary for us to avoid poorly crafted questions. After all, most faculty do not begin their careers with test construction expertise.
- The styles of multiple-choice questions used by faculty are all over the map. They often look nothing like MBE or MPRE questions. If the justification is to prepare students for these bar exams, then the questions need to mirror the bar formats. Otherwise, the questions should be tailored to the course material and assessment issues.
- Professors who have honed their multiple-choice questions over several years tend to guard their question pools (once found to be valid and reliable) so they do not need to write new questions. However, because each professor tends to write her own style of questions, students are blindsided if the professor does not release at least some practice questions beforehand.
- Without someone in academic affairs monitoring the formats used by faculty, it is all too possible that a section of the 1L class may end up with no essay exams at all. And I have talked to 2L and 3L students who have found the same because of the mix of courses in a semester. That unforeseen result implies that we see no merit in the fact-pattern essay. Do we really want our students to have limited essay experience?
- Word limits and page limits can arguably help students write more concise exam answers. However, we need to be careful that these limits reflect what a student can write concisely, not what a professor with expertise can write concisely. And we want to make sure that these limits serve the assessment goals for our questions and are not just convenient for grading.
I am lucky because my elective courses have relatively low enrollment caps. I still give comprehensive essay and short-answer exams that require my students to write a great deal. Because I do two reads of each exam (one for initial scoring and one for consistency with scoring on all papers), I create some burdens for myself. I understand the temptation that would exist to change the format if I had large classes of students. However, I hope if that day comes that I will weigh new assessment formats carefully and not lean toward my own needs for simplicity or convenience.
Multiple grades for a course: Many students tell me that they appreciate classes that do not make 100% of the grade dependent on the final exam. However, they often tell me it frustrates them when professors give out the details for those extra projects or presentations near the end of the semester (usually the last 2-3 weeks). In some cases, professors cannot give out information earlier because the project cannot be completed before certain material is covered in class or themes emerge. In other cases, however, it would certainly help well-organized students to be able to plan their work over multiple weeks when they have several courses with projects.
Participation grades trouble some students because they are not "talkers." In my seminars, I designate part of the grade for participation (usually no more than 20%) because I want a seminar to have discussion and not turn into a lecture course. In addition to the usual class discussion, I provide students with opportunities to discuss websites for current items in the news so they can plan their comments ahead of time. Another option could be electronic discussion boards. Throughout the semester, I caution students to remember their participation points and not to "forfeit" them.
Computers and blue books: A few years ago, students would sometimes express concerns to me that their typing skills were not fast and accurate enough to use the computer for an exam. I do not hear that concern very often any more. Now I find that students admit that they do not have the cursive penmanship background to handwrite an exam. As professors, we tend to take that skill for granted. There have always been law students with legibility problems, but today it is far more a problem of actually not having used the longhand method since they were children. Some tell me they were never taught cursive in their entire lives and can only print! (There has been an interesting discussion on the legal writing listserv recently about this very issue.)
Typed exams certainly are faster to read. Having had several bosses with terrible handwriting over the years, I am never fazed by student blue books because I can decipher almost anything. As a result, I do not think that my own students missed getting points because of handwriting. However, I can see that it could be an issue. And if students are printing rather than using longhand on an exam, it is likely to be slower than typing.
There always seem to be a few students whose computers crash and who end up having to complete the exam by hand. The stress and anxiety are usually huge. And most of them have no idea what they were typing before the mishap! Those who use scrap paper to organize answers before typing are less fazed by these problems because they can quickly get re-oriented.
The variations used today really do warrant the "it depends" response. Assessment comes with a myriad of decisions to make. The quest for balance needs to be carefully thought through by each professor for each course. (Amy Jarmon)