Monday, February 9, 2015
That’s the question raised by Jason Yackee in a working paper he recently posted to SSRN. The paper is a nice starting point for this conversation. But there are methodological problems that prevent us from drawing strong conclusions at this point.
Basically, the paper does a good job of collecting data on law school clinical offerings and employment outcomes. Jason then runs a basic OLS regression with employment outcomes as his dependent variable. The number of slots in clinical courses (scaled by enrollment) is the main variable of interest, and there are controls for school ranking and local employment conditions. The estimated effect of clinical opportunities is consistently negative, but its statistical significance varies depending on how school ranking is measured. Jason interprets his results as suggesting, counter-intuitively, that clinical offerings may hurt employment prospects.
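For readers who want to see the structure concretely, here is a minimal sketch of the kind of cross-sectional specification the paper appears to run. The column names (emp_rate, clinic_slots_per_student, usnews_rank, local_unemployment) and the data file are my hypothetical stand-ins, not Jason’s actual variables.

```python
# A minimal sketch of the cross-sectional OLS specification (my reconstruction,
# not the paper's actual code). All column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

schools = pd.read_csv("law_schools.csv")  # hypothetical one-row-per-school file

# Employment outcome regressed on clinical capacity plus controls for
# school ranking and local labor-market conditions.
model = smf.ols(
    "emp_rate ~ clinic_slots_per_student + usnews_rank + local_unemployment",
    data=schools,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.summary())
```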
The problem with this interpretation is that the causation story could well run backwards. Schools whose graduates are struggling may believe that clinical training will help, or at least that applicants will think it helps, and so expand their clinical offerings. That would produce exactly the negative correlation Jason observes. Dealing with this kind of endogeneity in data this coarse will be tough. As a first pass, though, it would be nice to see a dynamic model that traces the evolution of job outcomes, rankings, and clinical offerings over time. If lagged clinical offerings predict job outcomes, but not vice versa, that would support Jason’s story a bit more.
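One way to operationalize that check is a Granger-style cross-lag comparison on a school-by-year panel: ask whether lagged clinical slots predict current employment outcomes net of lagged outcomes, and symmetrically whether lagged outcomes predict current clinical slots. The sketch below is illustrative only, with the same hypothetical column names as above; it is not a full identification strategy.

```python
# Granger-style cross-lag sketch on a school-by-year panel (hypothetical
# column names; an illustration, not an identification strategy).
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("law_schools_panel.csv").sort_values(["school", "year"])

# One-year lags within each school.
panel["emp_rate_lag"] = panel.groupby("school")["emp_rate"].shift(1)
panel["clinic_slots_lag"] = (
    panel.groupby("school")["clinic_slots_per_student"].shift(1)
)

# Do lagged clinical offerings predict current outcomes, net of lagged outcomes?
fwd = smf.ols(
    "emp_rate ~ clinic_slots_lag + emp_rate_lag + usnews_rank", data=panel
).fit(cov_type="HC1")

# And the reverse: do lagged outcomes predict current clinical offerings?
rev = smf.ols(
    "clinic_slots_per_student ~ emp_rate_lag + clinic_slots_lag + usnews_rank",
    data=panel,
).fit(cov_type="HC1")

print(fwd.params["clinic_slots_lag"], rev.params["emp_rate_lag"])
```

If the forward coefficient is negative and significant while the reverse one is not, that pattern would be more consistent with Jason’s reading; the opposite pattern would point toward reverse causation.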
Some other details below the jump.
I’d also like to see some effort at refining the data. Jason uses single-year U.S. News rankings (or, alternatively, peer-reputation scores) as his measure of school reputation. We know that rankings are noisy, especially below the T-14, so a rolling average would probably make more sense. And why not use the professional-reputation score, which presumably reflects the views of at least some employers? A more thorough version would also include alternate measures of placement outcomes, lagged placement outcomes, better measures of employment conditions (e.g., employment rates for the major MSAs where graduates actually go, not just the rate where the school sits), and alternate measures of clinical positions available.
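Smoothing the rankings is easy once the data are in panel form; something like the three-year rolling average below would do, again with my invented column names.

```python
# Three-year rolling average of U.S. News rank to reduce year-to-year noise
# (hypothetical column names and file).
import pandas as pd

ranks = pd.read_csv("usnews_ranks.csv").sort_values(["school", "year"])
ranks["usnews_rank_smoothed"] = (
    ranks.groupby("school")["usnews_rank"]
         .transform(lambda s: s.rolling(window=3, min_periods=1).mean())
)
```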
There’s also a big omitted variable: student ability. U.S. News ranking is of course correlated with ability, but not perfectly (e.g., some schools in desirable job markets have higher LSAT averages than schools ranked well above them in U.S. News). At a minimum, I’d want to see 25th/75th percentile LSAT scores and a measure of student-body diversity (which I think is usually a positive for employers).
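In the regression sketch above, adding those controls would just mean extending the formula, along these lines (column names invented throughout):

```python
# Extending the earlier hypothetical specification with student-ability
# and diversity controls (invented column names throughout).
import pandas as pd
import statsmodels.formula.api as smf

schools = pd.read_csv("law_schools.csv")
model = smf.ols(
    "emp_rate ~ clinic_slots_per_student + usnews_rank + local_unemployment"
    " + lsat_25 + lsat_75 + pct_minority",
    data=schools,
).fit(cov_type="HC1")
print(model.params[["clinic_slots_per_student", "lsat_25", "lsat_75"]])
```

If the coefficient on clinical slots shrinks toward zero once ability controls go in, that would suggest the negative result is partly an artifact of weaker student bodies at clinic-heavy schools.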
But we should thank Jason for his efforts in taking a first step toward answering an important question for legal education.