September 14, 2012
Beyond Cite Stuff by Law Profs
Should cluster analysis supplement ordinal rankings of law journals to improve citation metrics? According to Theodore Eisenberg and Martin T. Wells (both Cornell Law), the answer is "yes" in their Ranking Law Journals and the Limits of Journal Citation Reports [SSRN]. And then there is The Cite Stuff: Inventing a Better Law Faculty Relevance Measure [SSRN] by James Cleith Phillips and John Yoo (both Berkeley Law). You remember Yoo, right? -- author of the Torture Memos, here turned amateur information scientist. Apparently both articles have "discovered" what has been commonly accepted informetric knowledge for decades. For the uninitiated, a rough sketch of what clustering journals on citation data involves appears below.
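The sketch below is a minimal illustration of the general technique, not the Eisenberg-Wells method; the journal names and citation figures are entirely hypothetical, invented only to show the mechanics of grouping journals rather than ranking them 1 through n.

```python
# Minimal illustration: grouping journals by citation features with k-means.
# Journal names and numbers are hypothetical placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

journals = ["Journal A", "Journal B", "Journal C", "Journal D", "Journal E", "Journal F"]
# Columns: [impact factor, cites per article] -- invented values.
features = np.array([
    [4.1, 12.0],
    [3.9, 11.5],
    [1.2, 3.0],
    [1.1, 2.8],
    [0.3, 0.9],
    [0.2, 0.7],
])

# Standardize so neither feature dominates the distance calculation.
X = StandardScaler().fit_transform(features)

# Three clusters instead of a strict 1-through-6 ordinal ranking.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for name, label in zip(journals, labels):
    print(f"{name}: cluster {label}")
```

The point of such an exercise is that journals landing in the same cluster are statistically hard to distinguish, so fine-grained ordinal ranks among them convey less than they appear to.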
At least there is some hope for raising the bar on scholarship produced by members of the legal academy. See Triangulating Judicial Responsiveness: Automated Content Analysis, Judicial Opinions, and the Methodology of Legal Scholarship [SSRN] by Chad M. Oldfather (Marquette Law), Joseph P. Bockhorst (Wisconsin-Milwaukee Department of Electrical Engineering and Computer Science) and Brian P. Dimmer (Petit & Dommershausen):
The increasing availability of digital versions of court documents, coupled with increases in the power and sophistication of computational methods of textual analysis, promises to enable both the creation of new avenues of scholarly inquiry and the refinement of old ones. This Article advances that project in three respects. First, it examines the potential for automated content analysis to mitigate one of the methodological problems that afflicts both content analysis and traditional legal scholarship — their acceptance on faith of the proposition that judicial opinions accurately report information about the cases they resolve and courts' decisional processes. Because automated methods can quickly process large amounts of text, they allow for assessment of the correspondence between opinions and other documents in the case, thereby providing a window into how closely opinions track the information provided by the litigants. Second, it explores one such novel measure — the responsiveness of opinions to briefs — in terms of its connection to both adjudicative theory and existing scholarship on the behavior of courts and judges. Finally, it reports our efforts to test the viability of automated methods for assessing responsiveness on a sample of briefs and opinions from the United States Court of Appeals for the First Circuit. Though we are focused primarily on validating our methodology, rather than on the results it generates, our initial investigation confirms that even basic approaches to automated content analysis provide useful information about responsiveness, and generates intriguing results that suggest avenues for further study.
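To see why even "basic approaches" can be informative, here is a toy sketch of one such approach: scoring the textual overlap between a brief and the resulting opinion with TF-IDF cosine similarity. The texts are hypothetical placeholders, and this is only an illustration of the general idea, not the authors' actual responsiveness measure.

```python
# Toy responsiveness score: cosine similarity between a brief and an opinion
# over TF-IDF vectors. The texts are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

brief_text = "Appellant argues the district court erred in excluding expert testimony ..."
opinion_text = "We review the exclusion of expert testimony for abuse of discretion ..."

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform([brief_text, opinion_text])

# A higher score suggests the opinion engages more directly with the brief's language.
score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
print(f"responsiveness (cosine similarity): {score:.3f}")
```

Aggregating such scores across many case files would let a researcher compare how closely opinions track the parties' arguments, which is roughly the kind of correspondence the article tests at scale.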
Remember, the science of citation metrics grew out of the content analysis of foreign newspapers performed by WWII military intelligence staff. Citation analysis remains an accepted but evolving screening tool for assembling the data sets used in content analysis. But content analysis takes far more work than merely spitting out absurd rankings based on raw numbers, which is what our amateur law prof "info scientists" have done. [JH]