March 7, 2013
Automation and Its Discontents: A Review of “The Case for Curation: The Relevance of Digest and Citator Results in Westlaw and Lexis”
Susan Nevelow Mart has recently completed a seminal study, “The Case for Curation: The Relevance of Digest and Citator Results in Westlaw and Lexis” [forthcoming in Legal Reference Services Quarterly]. She found that curation, or human indexing, makes Key Numbers (KN) significantly more precise than the largely automated “Lexis Topics” (LT), and significantly more precise than an entirely automated Lexis feature, “More Like This Headnote” (MLTH). She also found that Shepard’s outperforms KeyCite when the user applies either to identify cases citing a targeted case for the same point of law.
Susan’s evidence represents a milestone achievement, not least by establishing a rigorous empirical standard. Her study involved review of “over 450 [landmark] cases to find 90 suitable cases, in addition to the ten cases from [her] previous study.” Students reviewed the Westlaw and Lexis versions of each of these cases to locate a Westlaw-Lexis pair of comparable headnotes. They used the KN and LT assigned to the paired headnotes to find other cases classified under the same KN or LT. The students also applied MLTH to the Lexis headnotes in the 90 cases. Finally, the students limited KeyCite and Shepard’s results to just those cases citing each of the 90 cases with respect to the designated headnote pair. The students judged relevance against the point of law stated in the paired headnotes, subject to jurisdictional and other restrictions. In all, they reviewed more than 4,000 cases for relevance, and an additional statistical review confirmed that their relevance judgments were reliable.
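The statistical review is reported here only in summary, and the study is not quoted on which statistic it used. Agreement between raters on binary relevance judgments is commonly measured with Cohen’s kappa, which corrects raw agreement for chance; the following minimal Python sketch shows such a check, with entirely hypothetical raters and data rather than figures from the study:

```python
# Minimal sketch of an inter-rater reliability check using Cohen's kappa.
# The raters and data below are hypothetical illustrations, not figures
# from Mart's study.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability both raters pick the same label at
    # random, estimated from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# 1 = relevant, 0 = not relevant, for ten hypothetical cases.
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")  # ~0.52 here
```

On this example the raters agree on 8 of 10 cases, but because both label most cases relevant, a fair amount of that agreement is expected by chance, and kappa lands near 0.5 rather than 0.8.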
About 62% of the cases found through KN were judged relevant; by contrast, about 63% of the cases found through LT, and 52% of those found through MLTH, were judged not relevant. Susan concludes that editorial indexing gives KN a decided advantage over LT and MLTH in precision, that is, the percentage of total cases retrieved that are relevant. Compared with KN, MLTH and LT each returned a third or fewer of the unique, relevant cases. These findings suggest to me that users of a digest, if they have a choice and limited time, would do better to start with KN than with LT or MLTH.
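To spell the metric out: precision is the share of retrieved cases that are relevant. The figures below simply restate the percentages above, with the LT and MLTH values derived from their reported not-relevant rates:

\[
\text{precision} = \frac{\text{relevant cases retrieved}}{\text{total cases retrieved}},
\qquad \text{KN} \approx 0.62, \quad \text{MLTH} \approx 1 - 0.52 = 0.48, \quad \text{LT} \approx 1 - 0.63 = 0.37.
\]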
KeyCite’s and Shepard’s respective algorithms assign citing cases to headnotes. Susan identifies a winner in this “battle of the algorithms”: Shepard’s had “the edge” by about 15 percentage points in helping the student researchers identify relevant cases. But at precision rates of about 43% and 28%, respectively, neither Shepard’s nor KeyCite appears to work at all well for the application at issue. Moreover, Susan found that Shepard’s yielded “twice as many unique relevant results as KeyCite.” These findings suggest to me that researchers pressed for time should start with Shepard’s, but otherwise use both.
No law librarian has undertaken a study of this scale. Her sample size and statistical review provide evidence that appears generalizable. So we now have good reason to believe that automation has not come close to superseding the human indexing that distinguishes KN from LT and MLTH. And where automation has taken over, the evidence on Shepard’s and KeyCite hardly encourages enthusiasm about its effectiveness, even if Shepard’s has an “edge” for results limited to headnotes. Susan’s groundbreaking study should inform instruction everywhere in the use of these services.