Monday, March 20, 2023
GPT-4 Just Passed the Bar Exam. That Says More About the Weakness of the Bar Exam Than the Strength of GPT-4.
It's official: AI has passed the Uniform Bar Exam. GPT-4, the upgraded AI program released last week by Microsoft-backed OpenAI, scored in the 90th percentile of actual test takers.
"Guess you're out of a job," my wife said when I told her the news.
Maybe she's right--unless, of course, the bar exam isn't actually an effective measurement of minimum competence to practice law.
That's the open secret of the legal profession. Bar exams do test a small handful of core legal skills, such as critical reading and basic legal analysis. But they're downright abysmal at measuring the multitude of skills that separate competent lawyers from incompetent ones, such as legal research, writing ability, factual investigation, crisis response, communication, practice management, creative problem solving, organization, strategic planning, negotiation, and client management.
I am hardly the first commentator to draw attention to this issue. In Shaping the Bar: The Future of Attorney Licensing--which should be required reading for anyone interested in the attorney-licensing conundrum--Professor Joan W. Howarth says this:
Bar exams are both too difficult and too easy. The exams are too easy for people who excel at multiple-choice questions. Wizards at standardized tests can pass the bar with little difficulty, perhaps with a few weeks spent memorizing legal rules, without showing competence in a greater range of lawyering skills or any practice in assuming professional responsibility.
And, bar exams are too difficult for candidates who do not excel at memorizing huge books of legal rules. An attorney would be committing malpractice by attempting to answer most new legal questions from memory without checking the statute, rules, or case law. Leon Green, the dean of Northwestern Law School in 1939, observed that "there is not a single similarity between the bar examination process and what a lawyer is called upon to do in his practice, unless it be to give a curbstone opinion." The focus on memorization of books of rules was silly in 1939, but today it is shockingly anachronistic, as attorneys asked for "curbstone opinions" would be carrying a complete law library on their phones. Extensive rule memorization makes bar exams less valid, meaning that they test attributes not associated with competence to practice law. Law graduates who would be great lawyers--too many of whom are people of color--are failing bar exams because they cannot drop everything else for two months to devote themselves to memorizing thick books of rules.
Against this backdrop, is it really a surprise that a literal learning machine beat 90% of the human test takers?
Predictably, the National Conference of Bar Examiners quickly issued a press release once the news broke about GPT-4 acing its exam. The NCBE said that human attorneys have unique skills, gained through education and experience, that "AI cannot currently match." And, on that score, I wholeheartedly agree. But that raises the question many of us have been asking for years: If "skills," "education," and "experience" (not mass memorization, regurgitation, and fact-pattern recognition) are what set the best lawyers apart, why aren't we using those qualities to measure minimum competence?
___________________________________________________________
Philip Seaver-Hall is a litigation attorney at Knox McLaughlin Gornall & Sennett, P.C. The views expressed in this post are the author's alone and are not necessarily shared by the Knox Law Firm.
https://lawprofessors.typepad.com/appellate_advocacy/2023/03/chatgpt-just-passed-the-bar-exam-that-says-more-about-the-exam-than-chatgpt.html