April 5, 2007
The Grain of Salt I Usually Take With The USNWR Rankings
My general feeling has always been that there is a very small nugget of useful information in the USNWR rankings. I apologize for the JPG format at left, but if you click on the picture, you can see the detail. The x-axis represents peer assessment scores from 1.2 at the far left to 4.9 at the far right. Each column represents the number of schools at that score. (I have reversed the data on the x-axis in response to Nancy Rapoport's request below.)
My understanding is that the peer assessment score is derived from a survey that US News sends to the "select" voters at each school (deans, associate deans, FAC chairs, etc.). So if you do as I did, and re-order the rankings solely by peer score, WYSIWYG: a distribution of law schools that accurately reflects wholly subjective and probably either visceral or uninformed impressions on the part of that select group of voters, probably more akin to consumer branding than any rigorous analysis. (My empirical basis for saying this is my one interview with such a voter, who will remain nameless, but who told me that he in fact had no clue how to rank most of the schools.)
My hypothesis going in was that there would be something of a bell curve, and by God, I was right. I'm not enough of a statistician to do a standard deviation analysis on this, but it's pretty clear what this is saying, at least as to that uninformed and visceral reaction:
1. There are roughly 16 schools at the top end that shouldn't even be included in the bell curve. There aren't many of them, and they are separated by a quantum from the rest of the pack. I haven't done a comparison over time, but I'll bet it changes very slowly.
2. The top 100 (or the "first tier") on this distribution ends somewhere inside of a 2.2 peer assessment. That is somewhere in the apex of the main body of the curve. (Reflecting the weight I understand US News gives to the peer assessment, I guess it's not surprising that there are only five otherwise "Tier 3" or "Tier 4" schools that work into the top 100 on this reordering.)
3. For large groups of schools, there is no perceived difference in reputation, and getting all worked up about a move up or down by a couple of places seems misplaced.
4. Just as there probably isn't a whole lot of difference between the performance of the students who get between a 3.1 and a 2.6 GPA equivalent on your exams with a mandated mean of 2.85, there really isn't a whole lot of difference within the group bunched between a 2.8 peer assessment score (the schools at that level are BYU, Florida State, Alabama, Miami, Oregon, Pittsburgh, and San Diego) and a 1.7 peer assessment score (the schools at that level are California Western, Capital, New England, Northern Illinois, Roger Williams, South Texas, St. Mary's, Texas Wesleyan, and Touro). Perhaps a jump from a 1.7 to a 2.8 would be meaningful, but most of the moves inside that block are essentially meaningless.
5. I haven't done the same thing for lawyer assessments, but I'd be surprised if it didn't come out about the same way. Assuming it came out exactly the same way, it would tell me, were I applying to law school nowadays, that in fact there is a significant brand value that attaches at the far right, but that the distinctions quickly evaporate as you move left through the main body of the curve. Indeed, as the deans keep telling us, there are myriad reasons to select a school other than where it happens to plop down between number 36 (Alabama) and "Tier 4" (see above).
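For readers who want to try the re-ordering exercise themselves, the tallying behind the chart is simple enough to sketch. This is a minimal illustration with made-up peer scores (not the actual USNWR data), counting how many schools land at each score (the columns of the chart) and computing the mean and standard deviation the post skips:

```python
# Illustrative sketch only: the scores below are hypothetical, not the
# actual USNWR peer assessment data.
from collections import Counter
from statistics import mean, pstdev

# Hypothetical peer assessment scores on the 1.2-to-4.9 scale
scores = [4.9, 4.8, 4.6, 3.5, 2.8, 2.8, 2.5, 2.2, 2.2, 2.0, 1.7, 1.7, 1.5]

# Number of schools at each score -- one histogram column per score
counts = Counter(scores)
for score in sorted(counts):            # low scores at left, high at right
    print(f"{score:.1f}: {'#' * counts[score]}")

# The "standard deviation analysis" the post declines to do
print(f"mean = {mean(scores):.2f}, std dev = {pstdev(scores):.2f}")
```

With real data (roughly 190 ABA-accredited schools), the same few lines would show whether the distribution is genuinely bell-shaped and how far the top cluster sits from the mean.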
Thanks, Jeff! Nice job--is it possible to move the schools w/higher scores to the right, so that the chart complies w/the convention that the "better stuff" is to the right of the mean?
Posted by: Nancy Rapoport | Apr 4, 2007 3:24:43 PM