Sunday, September 16, 2018
The Sisk rankings, which rank US law faculties by mean and median citation counts, came out last month. Many deans and faculty members spend a lot of time discussing which faculties are most impactful based on the rankings. After a conversation with a friend at another institution, I am convinced that the Sisk rankings get it (partially) wrong. While it is interesting to see which non-scholars drag down particular faculties in the school-wide rankings, or which significant individual pickups lead to a big jump (Orin Kerr and Herb Hovenkamp, for example, this last time around), school-wide rankings do not accurately reflect a school's impact. The rankings tend to favor schools with smaller faculties, where one or two highly cited faculty members make up for a number of less productive or inactive scholars.
I propose an alternative measurement to augment the Sisk rankings, and I draw on my NBA-watching experience to explain it. The biggest difference between the NBA regular season and the playoffs is largely one of a shrinking rotation: you want your better players on the floor longer because that is how you win games. Typically, though not always, your starting five are your best players and get the most minutes. Why don't we treat faculty rankings the same way?
The Sisk rankings provide the ten most cited people over the last five years on a given faculty. Typically (and there are caveats about why this is not always true), these people are the most important scholars on that faculty and the ones most responsible for its reputation. For purposes of a competitive system, we don't know by name, and don't care about, the marginal professor at a given school, just as we don't care about the 11th man in the Knicks' rotation. The people who create a school's scholarly reputation are its top performers, much the way the players who generate the most offensive value in the NBA, as measured by win shares, are the ones we care about most.
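To make the proposal concrete, here is a minimal sketch of how the "top 10" measure would differ from a school-wide mean. All of the citation numbers below are invented for illustration; they are not real Sisk data, and the two schools are hypothetical.

```python
# Sketch of the proposed "top 10" measure vs. a school-wide mean.
# All citation counts are invented for illustration, not real Sisk data.

def top_n_mean(citations, n=10):
    """Mean citation count of a faculty's n most-cited members."""
    top = sorted(citations, reverse=True)[:n]
    return sum(top) / len(top)

def faculty_mean(citations):
    """School-wide mean across the whole faculty."""
    return sum(citations) / len(citations)

# Hypothetical School A: small faculty carried by two stars.
school_a = [1200, 1100, 100, 90, 80, 70, 60, 50, 40, 30]
# Hypothetical School B: larger faculty with a deep, solid "rotation".
school_b = [400] * 10 + [50] * 10

# The school-wide mean favors A's two stars; the top-10 measure
# favors B's deeper bench, flipping the ranking.
print(faculty_mean(school_a), top_n_mean(school_a))  # 282.0 282.0
print(faculty_mean(school_b), top_n_mean(school_b))  # 225.0 400.0
```

Under the school-wide mean, the small star-driven faculty ranks first; under the top-10 measure, the school with the deeper rotation does, which is exactly the difference the NBA analogy is meant to capture.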
It would be interesting to see what such rankings would look like. Sisk et al. have this information, and I would encourage them, or Brian Leiter, to run a comparison ranking as an appendix using this methodology.