Monday, February 9, 2015

Do Clinical Offerings Improve Law School Graduates' Job Outcomes?


That’s the question raised by Jason Yackee in a working paper he recently posted to SSRN.  The paper is a nice starting point for this conversation, but there are methodological problems that prevent us from drawing any strong conclusions at this point.

Basically, the paper does a good job of collecting data on law school clinical offerings and employment outcomes.  Jason then runs a basic OLS regression with employment outcomes as his dependent variable.  The number of slots in clinical courses (scaled by enrollment) is the main variable of interest, with controls for school ranking and local employment conditions.  The estimated impact of clinical opportunities is consistently negative, but its significance varies depending on how school ranking is measured.  Jason interprets his results as suggesting, counter-intuitively, that clinical offerings may hurt employment prospects.
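To make the setup concrete, here is a minimal sketch of this kind of specification. Everything below is illustrative: the variable names and all the numbers are invented, not taken from Jason's dataset, and the solver is a bare-bones normal-equations OLS rather than whatever software the paper actually used.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination with partial pivoting."""
    k = len(X[0])
    xtx = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for col in range(k):
        # Pivot on the largest remaining entry in this column.
        pivot = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c]
                                for c in range(r + 1, k))) / xtx[r][r]
    return beta

# Toy rows: [intercept, clinic slots per student, USNWR rank, local unemployment %].
# All values are made up purely to show the shape of the regression.
X = [
    [1.0, 0.20, 15, 4.5],
    [1.0, 0.35, 40, 5.0],
    [1.0, 0.10,  5, 4.0],
    [1.0, 0.50, 80, 6.1],
    [1.0, 0.25, 55, 5.5],
]
y = [0.88, 0.71, 0.95, 0.55, 0.66]  # invented employment rates
beta = ols(X, y)
```

The coefficient on the clinic-slots column is the quantity of interest; the paper's finding is that it comes out negative across specifications.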

The problem with this interpretation is that the causation story could well run backwards.  Schools whose graduates are struggling may believe that clinical training will help---or, at least, that applicants will think it helps.  That would produce the negative correlation Jason observes.  Dealing with this kind of endogeneity in data this coarse will be tough.  As a first pass, though, it would be nice to see a dynamic model that traces the evolution of job outcomes and rankings over time.  If lagged clinical offerings predict job outcomes, but not vice versa, that would lend Jason's story a bit more support.

Some other details below the jump.

I’d also like to see some effort at refining the data.  Jason uses single-year U.S. News rankings, or alternatively peer-reputation scores, as his measure of a school's reputation among employers.  We know that rankings are noisy, especially below the T-14, so a rolling average would probably make more sense.  And why not use the professional-reputation score, which presumably reflects the views of at least some employers?  A more thorough version would also include alternate measures of placement outcomes, lagged placement outcomes, better measures of employment conditions (e.g., employment rates for the major MSAs where graduates go, not the rate where the school sits), and alternate measures of the clinical positions available.
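The rolling-average fix is simple to state in code. A trailing moving average smooths year-to-year ranking noise; the rank history below is invented for illustration.

```python
def rolling_mean(values, window):
    """Trailing moving average; windows are shorter at the start of the series."""
    return [sum(values[max(0, i - window + 1): i + 1]) /
            (i - max(0, i - window + 1) + 1)
            for i in range(len(values))]

# Hypothetical five-year rank history for one school (lower = better).
ranks = [52, 47, 58, 50, 61]
smoothed = rolling_mean(ranks, 3)
```

Using `smoothed` rather than a single year's rank as the control would damp exactly the below-T-14 volatility at issue.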

There’s also a big omitted variable: student ability.  U.S. News ranking is of course correlated with ability, but not perfectly (some schools in desirable job markets have higher LSAT averages than schools ranked well above them, for example).  At a minimum, I’d want to see 25th/75th-percentile LSAT scores and a measure of student-body diversity (which I think is usually a plus for employers).

But we should thank Jason for his efforts in taking the first steps toward answering an important question for legal education.



Brian, thanks for the post. I appreciate the engagement with the paper. The empirical analysis is indeed modest, as I hope I take pains to say in the paper. On the other hand, even a simple one-year scatterplot can be suggestive, and better than nothing, which is otherwise what we seem to have. A dynamic analysis would indeed be nice. I am worried about doing one, though, given the very poor reputation that self-reported employment data has in the pre-Law School Transparency era. I am not sure that the 2013 LST data is objectively as high-quality as we would want it to be, but I am pretty sure that earlier data is of much worse quality. (As a future-oriented caveat, the ABA's decision to change employment reporting to 10 months from 9 months out will also make a time-series analysis more difficult than it might have been.)

As for using the peer-review scores, I do include analysis in the paper using them. They are highly correlated with rank (0.88), and the basic take-away point is the same. At the suggestion of a blog commenter, I also ran a new model that strips out the law-school-funded jobs from the LST employment score. Clinics remain insignificant and/or wrongly signed, and USNWR rank/prestige remains positive and significant, but the magnitude declines somewhat, as we would expect, since it was mostly higher-ranked schools that funded a lot of jobs. I’ve revised the paper to include that analysis and can provide it on request; I will post it on SSRN when I get a chance. I could, I suppose, also add clerkships back in, which would probably just counteract the effect of taking out school-funded jobs, but I am not sure I will bother doing that at this point.

The suggestion to average USNWR rankings is a potentially interesting one. (On the other hand, the peer-review scores don’t move all that much.) To the extent that year-to-year rankings fluctuations are random, averaging shouldn’t affect the analysis. FWIW, I did average the 2014 rankings with the 2013 rankings (the two that I have in the dataset), but it didn’t really change the results.

Anyway, the paper is designed to be a starting point for discussion. I have been really struck by the lack of much in the way of empirical studies of the underlying question, and using “off-the-shelf” data in a modest, transparent, and simple way (which is all that my research budget affords at the moment) seems like a helpful first step.

Posted by: Jason Yackee | Feb 9, 2015 2:00:08 PM

Brian, I can't disagree with any of your caveats, but let me ask this: is there any reason we would think clinical offerings would in fact improve employment outcomes? Most clinics are 2L/3L offerings. Large firms, at least, tend to hire from their summer associate classes, and they select those classes between the 1L and 2L years, before students have firmly committed to, much less taken, clinical courses. I can't see clinical education having anything more than a marginal effect on employment outcomes, at least at 1st- and 2d-tier schools.

Posted by: Adam Levitin | Feb 9, 2015 7:01:23 PM
