Thursday, November 12, 2009

Outcome Measures and Regulatory Failure in Legal Education

[Posted by Bill Henderson]

Over the last few years, the topic of outcome (or output) measures has been a recurring theme at association meetings and conferences on legal education.  Some of this discussion is driven by Department of Education initiatives that seek to establish a clear link between educational cost and economic return.  Some schools, however, believe that their fortunes will rise when they are judged on the results of three years of education (e.g., bar passage rates, employment, student satisfaction) rather than on the input measures that drive the U.S. News rankings.

It is hard to imagine a more daunting task than faculty from 190+ law schools reaching a "consensus" on outcome measures.  Yet consensus is not required.  The ABA Section of Legal Education and Admissions to the Bar, through its authority to accredit law schools, can require law schools to measure, collect, and report information that the Section determines is in the public interest.  In 2007, the Section created a "Special Committee on Output Measures" and asked it to "define appropriate output measures and make specific recommendations as to whether the Section should adopt those measures as a part of the [Accreditation] Standards."

So what happened?  The Special Committee's 76-page, single-spaced Final Report, issued in July 2008, made little headway in defining output measures or making specific recommendations regarding accreditation.  In a nutshell, the Committee recommended that the Standards be amended so that each law school would be free to define and measure its own outcomes.  In theory, these new Standards could be given teeth by the rigor of the outcome measures (or lack thereof) embodied in a school's self-study report and strategic plan (two documents already required for accreditation).  This excerpt from the Final Report puts the best possible spin on the Committee's recommendation:

[A]n approach that accords significant independence to law schools would make it possible for the schools to serve as laboratories for innovation and systemic improvement. ... As law schools experiment with various models of their own choosing, the data these schools generate will inform other schools' experiments and will provide a basis for fine-tuning models for instruction and evaluation. At some point in the future, it may be the case that our understanding of outcome measures has progressed so far, and that certain views have become so widely held, that the ABA Section of Legal Education and Admissions to the Bar will be in a position to demand greater specificity in the criteria in the [Accreditation] Standards and/or Interpretations. But, at least at the present time, the Committee believes that in drafting Standards and Interpretations, it is best to give law schools the latitude to experiment with a wide range of models.

When we step back, it is hard to believe that this let-a-thousand-flowers-bloom approach is the tack taken by the regulator charged with overseeing legal education.  To paraphrase the above passage: "Do what you want to do, but try a little harder.  When something works well and most schools adopt it, the Section can make it the new rule.  That way we can avoid difficult decisions that would upset our friends."

In truth, the Committee's approach turns the purpose of outcome measures on its head.  In the broader higher-education debate, outcome measures are sought because they enable an apples-to-apples assessment of an educational institution's effectiveness.  Indeed, the entire exercise is meant to facilitate comparisons.  Why?  Because meaningful comparative information levels the playing field between those providing the education (the schools) and those financing it (the students and citizens).  When outcome information is readily available, it changes behavior and alters powerful norms, including over-reliance on US News.  In the absence of apples-to-apples outcome information, the market adapts as it does now--by focusing on inputs (revenues, books, number of faculty, LSAT scores, UGPA, etc.).  It is the opaqueness of legal education that creates the vacuum filled by the US News rankings, which are nearly perfectly correlated with students' entering credentials.

The Committee shrinks from the task of defining specific, comparable outcomes because it knows (at least implicitly) that the very process of creating meaningful output measures would produce a large number of winners and losers among law schools.  Yet by refusing to act as a regulator that serves the public interest, the ABA Section of Legal Education and Admissions to the Bar makes law schools the winners and law students the losers.

If we evaluate outcome measures from the perspective of law students rather than law schools, there are at least three pieces of information that the Section should collect and publish annually in a format that facilitates school-to-school comparisons:

  1. Bar Passage.  Working in conjunction with the Law School Admission Council (LSAC) and the National Conference of Bar Examiners (NCBE), the Section should construct a database that compares scores on the Multistate Bar Exam (MBE) after controlling for entering credentials, jurisdiction, and law school attended.  Preliminary evidence suggests large variations--above and beyond entering credentials--in law schools' ability to get their students over the bar exam hurdle.  See Henderson Letter to Special Committee (January 30, 2008).  This information is crucial to diversifying the bar because minority students historically have had significantly lower bar passage rates.  Both educators and students need to know which schools are most effective at closing this gap.  Principled objections to the bar exam as an outcome measure (so often voiced by professors) need to be squared with the practical realities faced by students.  (A sketch of this kind of analysis appears after this list.)

  2. Employment Outcomes.  How many graduates are working in non-legal settings?  What are the salary ranges and distributions within legal and non-legal practice settings?  Is there any evidence that some schools have better placement records as a result of curricular initiatives?  Remarkably, no one in legal education knows the answers to these questions.  Schools should be required to submit a list of the employers and job titles for all of their graduates, and the Section should then code and compile these lists in a way that reveals the full range of outcomes, enabling meaningful school-to-school comparisons.  The lists themselves need not be published; the binning process would capture the useful information while preserving student anonymity.  (A sketch of this binning step also follows the list.)  There is a high probability that the current ABA coding system (e.g., "academia," "business") lumps together outcomes that would make a $120K legal education look like a bad investment.  The Section should follow up with these graduates to better understand their circumstances, including the decision-making process they relied upon.

  3. Debt Loads.  Because of the scholarship discounting practiced at virtually all law schools, tuition is a misleading indicator of law school cost.  Debt is a more accurate measure.  But means and medians are not enough; students need to see full distributions.  Specifically, they should have access to a histogram of a school's graduates' debt loads at graduation--and not just law school debt, but total educational debt and consumer debt as well.
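
To make item 1 concrete, here is a minimal sketch, in Python, of the kind of analysis the Section could commission.  Everything in it is hypothetical: the file graduates.csv and the columns mbe_score, lsat, ugpa, jurisdiction, and school are stand-ins for a dataset that does not yet exist in public form.  The point is simply that "controlling for entering credentials" is an ordinary fixed-effects regression, not an exotic exercise.

    # Hypothetical sketch: estimate each school's bar-exam "value added"
    # after controlling for entering credentials and jurisdiction.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("graduates.csv")  # one row per graduate; columns are stand-ins

    # School fixed effects capture variation in MBE scores above and
    # beyond what LSAT, UGPA, and jurisdiction alone would predict.
    model = smf.ols(
        "mbe_score ~ lsat + ugpa + C(jurisdiction) + C(school)",
        data=df,
    ).fit()

    # Rank schools by estimated value added.
    school_effects = model.params.filter(like="C(school)")
    print(school_effects.sort_values(ascending=False).head(10))

A school with a large positive fixed effect is doing something over three years that its students' entering credentials do not explain, which is exactly the comparison the Final Report declined to require.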

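The binning process in item 2 is similarly unexotic.  The sketch below, again in Python, is built entirely on hypothetical inputs: graduate_jobs.csv, the keyword lists, and the category names are illustrative, not a proposed ABA taxonomy.  It shows how raw employer/job-title lists could be coded into bins and published only in aggregate, preserving anonymity.

    # Hypothetical sketch: code raw employer/job-title lists into outcome
    # bins and publish only aggregate counts per school.
    import pandas as pd

    # Illustrative keywords; a real taxonomy would be set by the Section.
    BINS = {
        "private practice": ["llp", "law office", "attorneys at law"],
        "government/public interest": ["district attorney", "public defender", "legal aid"],
        "judicial clerkship": ["judge", "court of appeals", "district court"],
        "non-legal": ["insurance sales", "retail", "staffing agency"],
    }

    def code_job(employer: str, title: str) -> str:
        """Assign one graduate to an outcome bin via keyword matching."""
        text = f"{employer} {title}".lower()
        for category, keywords in BINS.items():
            if any(k in text for k in keywords):
                return category
        return "unclassified"  # flagged for manual review

    df = pd.read_csv("graduate_jobs.csv")  # school, employer, job_title
    df["outcome"] = [
        code_job(e, t) for e, t in zip(df["employer"], df["job_title"])
    ]

    # Only these aggregated counts would be published; the underlying
    # lists stay confidential.
    print(df.groupby(["school", "outcome"]).size())

The coding step is the only judgment call; once a category scheme finer than "academia" or "business" exists, the school-to-school comparison falls out of a single aggregation.
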
If the Section focused on the above approach, it would not need the let-a-thousand-flowers-bloom approach embodied in the Special Committee's Final Report.  In a market with better information, law schools will find and leverage their own competitive advantages in order to survive--and, let's be honest, some schools won't survive.  From a societal perspective, that is okay.  The Section of Legal Education and Admissions to the Bar needs to wake up to the fact that it is a regulator with a fiduciary responsibility to law students, not law schools.

https://lawprofessors.typepad.com/legal_profession/2009/11/a-starting-point-for-law-school-outcome-measures.html

Teaching & Curriculum | Permalink


Comments

Somewhere early in my 2nd year of law school I learned the following:

1) Most of the people I knew who graduated the year before were still unemployed, or ridiculously underemployed, or seeking yet another degree. This was a statistic which seemed impossible to reconcile with the school's employment surveys.

2) The school, and I'm quoting a dean here, "strongly recommended" that second-years plan to borrow an additional 10-15 thousand dollars in our third year, for a bar prep course and living expenses after graduation. This too seemed inconsistent with the idea of a mostly employed graduate population.

I am still shocked. And the more I learn about the student loan industry, and legal education in general, the more shocked and outraged I become.

Thanks for the article. It made my day.

Posted by: Liz | Nov 12, 2009 4:37:13 PM

In order for outcome measures to actually mean something, and not be as worthless as input measures, schools need to account for self-selection and self-reporting bias. I remember I had to fill out a survey while waiting in line to get my commencement seating assignment, but nobody cared if it was incomplete or unverified. People just checked a few random boxes and signed their names. Of course the 3Ls with biglaw jobs had no problem answering questions about employment outcomes, but the survey didn't care about people like me who were still job searching.

Posted by: Jaded | Nov 16, 2009 7:16:50 AM
