Sunday, February 21, 2010
Last week we blogged about how the algorithm used by WestlawNext to lead legal researchers from search terms to search results doesn't take into account the perspective of novice law students. For legal research scholar Chris Wren, it raises anew the drawback that all user-indexed computer research engines pose: they lack the consistency and predictability of a common, discernible indexing system. The prescient words Chris and Jill Wren authored back in 1988 still hold true today:
The widespread use of computers to retrieve legal authorities based on words in their text diminishes the leveling effect that has resulted from the profession's reliance on a common indexing system. Indexes channel researchers dealing with similar facts and legal issues to a common pool of authorities; a researcher's level of skill at using the indexes affects mainly the amount of time required to find relevant authorities, not whether the researcher will find those authorities. When professionally indexed printed resources provided the only realistic avenue for locating legal authorities to support an assertion, all participants in a legal proceeding could, with a sufficient amount of diligence, reasonably expect to locate all or nearly all of the same authorities. But different researchers looking online for authorities addressing the same issue can create entirely different indexes and locate disparate pools of authorities; each researcher's results will depend on his or her skill at indexing online documents.
As the use of CALR services spreads, and in some places supplants the use of printed research tools, the special skills required to use the databases (and the possible need for assistance from specialists) will make gaining access to the law more difficult for some groups. Clients who cannot afford to hire lawyers in firms with the resources to devote to keeping on top of database searching, as well as laypeople who want to do their own legal research, will find their access to the law diminished in comparison to the access that would have been available through printed materials.
Chris worries that the problem faced by novice users of all CALR services might be compounded by those products that use an algorithm to generate results.
The more the research details get turned over to unseen computer algorithms, the more dependent the researcher becomes on the skill and insight of those who create the algorithms. That's especially dangerous for the novice, who, at the outset, stands a good chance of not seeding the algorithm with suitable search terms, and then likely does not have any experiential benchmark for assessing whether the algorithm has retrieved a relevant collection of documents or accurately ranked those documents.
It's always great to get the insights of a giant in the field like Chris. And you, our dear readers, are clearly the beneficiaries.
I am the scholarship dude.