September 14, 2011
The WestSearch Straitjacket For Legal Research - Thinking Beyond The Keyword: Part II
In Part 1, we examined evidence for two kinds of limitations on WestSearch’s effectiveness. Ron Wheeler has studied limitations related to how WestSearch ranks documents for importance and relevance. (Does WestlawNext Really Change Everything?, 103 Law Libr. J. 359 (2011)) He found its use of crowdsourcing (to assess importance) may not work well for finding “esoteric content”: the needle may remain buried in a haystack of irrelevant search results (low precision). He also found its automated linking of related content (to assess relevance) may not work well for broad searches: a broad search may produce “a far narrower range of related content,” with missed search results that would otherwise interest a legal researcher (low recall).
Another kind of limitation applies to all keyword search platforms, and not just WestSearch. It arises from the failure of keywords and concepts to march together. We saw how synonymy, ambiguity, and syntactic complexity can disable WestSearch - problems of language that disable all search platforms, not just this one. As two of WestSearch's developers concede, “[v]arious commercial claims to the contrary, it is not yet possible to solve the findability problem by automation alone. This is not merely a matter of natural language processing; there is also a need for domain knowledge, not all of which may be represented explicitly.” (Peter Jackson & Khalid Al-Kofahi, Human Expertise And Artificial Intelligence In Legal Search (2010), at 7) This concession would surprise us only if we trusted the way Thomson Reuters - Legal (TR Legal) markets WestlawNext (WN). TR Legal has marketed WN as a platform that quickly - and consistently - produces the right answers.
In Part 1, we also bridged a gap between reality and marketing at Customers.WestlawNext.com. WN has been designed to accommodate the reality of limitations on WestSearch. WN developer Mike Dahn promises us, “Don’t worry. We didn’t take anything away.” (Meet The People Who Made It Possible, media clip) In fact, WN provides users with alternative research methods for correctly answering questions that present problems of language. These alternatives will cost more to use in WN than in Westlaw Classic (WC).
So for what types of questions would legal researchers use WestSearch to achieve its promise of delivering “fast” answers with “confidence”? At Customers.WestlawNext.com, TR Legal has identified two types of questions whose answers depend on use of descriptive terms. For both types of questions, WestSearch performs as advertised, with high precision, or with both high precision and recall. However, TR Legal has selected types of questions that predictably increase the odds of WestSearch’s success. The sample questions allow for use of sufficiently descriptive terms to avoid or minimize problems of synonymy, ambiguity, and syntactic complexity. Under these circumstances, other case-law search engines, such as Google Scholar, also have predictable advantages. These other search engines may even approach the high precision of WestSearch, depending on how closely the keywords selected designate the concepts intended.
We will discuss why TR Legal should not advertise WestSearch by using biased questions. But we will first examine the nature of the bias. It affords us important clues on the scope of WestSearch’s benefits. The benefits fall within a narrow, but significant range of legal research questions. These are “descriptively specific” questions - questions that can be described in terms descriptive or distinctive enough to specify the right legal concepts. WestSearch’s usefulness increases with the descriptive specificity of keywords. We can develop a general rule to define characteristics of such keywords. We can then apply two related strategies to gain the most benefit from WestSearch, across a matrix of the usual competing research imperatives - time, cost, accuracy, and comprehensiveness.
We cannot leverage WestSearch’s benefits unless our rule and strategies reflect how WestSearch has been designed to achieve higher recall and precision. WestSearch is supposed to achieve higher recall and precision: (i) the more you can identify descriptive keywords that relevant documents will have; (ii) the more these keywords designate the right concepts; (iii) the more of these keywords you include in your WestSearch query; (iv) the more you know about the jurisdiction and particular source(s) for documents having these keywords; and (v) the more other such documents receive citations, and the more other WN users access them.
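To make these design factors concrete, here is a minimal, purely illustrative sketch of how a ranking score might blend such signals. The weights, field names, and sample documents are invented for the example; nothing here claims to reproduce WestSearch's actual, unpublished algorithm.

```python
# Illustrative only: a toy ranking that blends the kinds of signals described above
# (share of descriptive query terms matched, citation counts, prior user access).
# Weights, fields, and documents are invented for this example.

def toy_score(doc, query_terms, w_match=0.6, w_cited=0.25, w_usage=0.15):
    text = doc["text"].lower()
    # factors (i)-(iii): share of descriptive query terms that actually appear in the document
    match = sum(term.lower() in text for term in query_terms) / len(query_terms)
    # factor (v): normalized citation and usage ("crowdsourcing") signals
    cited = min(doc["citations"] / 100.0, 1.0)
    usage = min(doc["views"] / 1000.0, 1.0)
    return w_match * match + w_cited * cited + w_usage * usage

docs = [
    {"name": "Doc A", "text": "employing unit ... foreign corporation ... one employee",
     "citations": 40, "views": 250},
    {"name": "Doc B", "text": "unemployment compensation generally",
     "citations": 90, "views": 900},
]
query = ["employing unit", "foreign", "corporation", "employee"]
for doc in sorted(docs, key=lambda d: toy_score(d, query), reverse=True):
    print(doc["name"], round(toy_score(doc, query), 3))
```

Even in this toy version, a document that matches every descriptive term outranks a more-cited, more-viewed document that matches none, which is the point of factors (i) through (iii).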
What features increase the descriptive specificity of keywords? Ironically, the bias in a WN study helps us identify these features. TR Legal asked the Legal Research Center (LRC) to conduct an allegedly independent “efficiency study,” the Online Research Product Comparison Study: Westlaw and WestlawNext, Apr. 19, 2010. (Click here for the summary and full report.) The study looks impressive. “Research that takes almost 11 minutes using Westlaw,” LRC states in its summary, “takes less than 4 minutes using WestlawNext … [for] a productivity gain of 64%!”
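As an aside, the headline percentage is just the time saved expressed as a share of the Westlaw Classic time. A quick check with the rounded figures from the summary reproduces it, treating "almost 11" and "less than 4" minutes as 11 and 4, which is an assumption on our part.

```python
# Reproducing LRC's headline arithmetic with rounded figures from its summary.
# (11 and 4 minutes approximate "almost 11" and "less than 4".)
wc_minutes = 11
wn_minutes = 4
gain = (wc_minutes - wn_minutes) / wc_minutes
print(f"Claimed productivity gain: {gain:.0%}")  # -> 64%
```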
LRC engaged 50 pairs of attorneys to answer five questions. One member of the pair used Boolean or Natural Language searches in WC to answer the question; the other used Boolean or “plain English” searches in WN. Suspicion of bias begins with the first of the “key findings” - that “[b]oth research methodologies produced equally accurate results.” Why did both methodologies yield “equally accurate results” - a “testament to the efficiency and utility of both Westlaw and WestlawNext”? They yielded equally accurate results because of the sample questions LRC used - “targeted-issues” questions. Each targeted-issue question has just one correct answer, involving just one legal authority. Use of “targeted-issues” questions arouses further suspicion of bias, because legal researchers typically must answer more complicated types of questions.
Not surprisingly, LRC’s “targeted-issues” questions achieve perfect recall and high precision with WestSearch. For each question, a keyword searcher can anticipate just the highly descriptive keywords needed to answer it. One of the study’s questions suffices to demonstrate this point:
Is a foreign corporation with only one employee in West Virginia an ‘employing unit’ subject to unemployment compensation laws?
The correct answer is W. Va. Code § 21A-1A-14. A keyword searcher can already anticipate the descriptive shape the answer must assume. The “targeted” code section must have the phrase “employing unit” and the words “one,” “employee,” “foreign,” and “corporation.” The code section belongs to a family of sections under a chapter of the West Virginia code on “unemployment compensation.” We can therefore predict a successful outcome if we enter these descriptively specific terms in a well-designed search engine, and run the search in a database with the West Virginia code. Indeed, we can predict high precision, whether we use WestSearch or Google, notwithstanding enormous differences in the volume and variety of content searched. WestSearch will work better than Google, in part because WestSearch has a unique design, and in part because the WV-State collection has content limited to a distinct set of legal documents. But the results of the following Google search still underscore the nature of the bias: “foreign corporation with only one employee in West Virginia an “employing unit” subject to unemployment compensation laws.” The top-ranked result is “WV Code Chapter 21A - West Virginia Legislature.” Moreover, the same Natural Language search in WC (see below) performs just as well when done in the unannotated West Virginia Code: in a search run on Sept. 2, 2011, the section appeared as the second top-ranked result.
URL encoding of WC's Natural Language search: foreign+corporation+with+only+one+employee+in+West+Virginia+an+%22employing+unit%22+subject+to+unemployment+compensation+laws
It is unclear why attorneys in the study took up to 11 minutes to identify this code section as the correct answer when they ran their Natural Language searches. (LRC study, at 32-33)
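The encoded query shown above is simply the plain-English question with its spaces and quotation marks percent-encoded. A quick sketch reproduces the encoding, assuming only Python's standard urllib; nothing here calls Westlaw itself.

```python
# Reproduces the encoded query string shown above. This only demonstrates the
# URL encoding; the query text comes straight from the LRC study question.
from urllib.parse import quote_plus

query = ('foreign corporation with only one employee in West Virginia '
         'an "employing unit" subject to unemployment compensation laws')
print(quote_plus(query))
# foreign+corporation+with+only+one+employee+in+West+Virginia+an+%22employing+unit%22+...
```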
The other four issue-questions of the LRC study (at 25) have the same advantages that bias the study toward its favorable conclusion. Given the language and nature of the questions, the study’s participants can identify descriptively specific keywords that the “targeted” authorities must have. That is, they already know from the questions how to select keywords that allow their searches to circumvent problems of synonymy, ambiguity, and syntactic complexity.
We will later return to the unusual circumstances surrounding the study. LRC’s biased questions turn out to be helpful. If we know the kinds of keywords that work well in WestSearch, we can develop a rule for when WestSearch might be useful. The biased questions allow us to identify the characteristics of such keywords as they appear in relevant documents. They include
- phrases or words that unambiguously designate the right concepts (“employing unit,” “gambling advisory commission,” “elected official”)
- names (“President Nixon,” “Dr. Seuss,” “Cat In The Hat,” “Ohio”)
- citations (New York state “CPLR 301”)
- terms that together strongly tend to designate the right concepts (an “issue 3” search with the keywords “bank” and “robbery” and “conviction” and “president” and “Nixon” and “Reagan”)
As a rule, then, legal researchers should consider using WestSearch whenever keywords have these characteristics. Related strategies can help them decide whether they should use WestSearch.
Let us start with an overall strategy. Use WestSearch if, and only if, (i) you can reduce your question to highly descriptive terms that a relevant document must have, regardless of the document’s source or type of source; (ii) you value speed over cost; and (iii) you value precision over recall. To understand the rationale for this strategy, consider an example from Customers.WestlawNext.com of what we may call a “context” question - one whose correct answer covers a body of legal authorities and helps the researcher better understand a legal topic for solving a related legal problem. When context questions have high descriptive specificity, you may be able to achieve high precision with WestSearch (but not with a Natural Language search in WC). As we saw in Part I, to answer a context question, it will usually be more cost-effective and instructive to use a secondary source than to run a WestSearch. But let us suspend disbelief in the idea that keyword searching should exhaust legal research methods.
In a media clip, The Science of WestSearch, the narrator offers this “real life” question to demonstrate WestSearch’s performance: “Can a municipality be liable for civil rights violations by its employees?” The WN user enters this question in “plain English” in an unrestricted WestSearch. The two top-ranked cases among the search results are Monell v. New York City Dept. of Social Servs., 436 U.S. 658 (1978), and Canton v. Harris, 489 U.S. 378 (1989). The narrator identifies Monell as “the seminal case.” And so it is for violations of interest under 42 U.S.C. § 1983.
How do these impressive results compare with the results of the same search of federal case law in Google Scholar? Predictably enough, searching Google Scholar, you will find that Monell and Canton also appear as the two top-ranked search results, and in the same order. However, these cases are not among the top-ranked results of the equivalent Natural Language search in ALLCASES or in ALLFEDS [URL encoding: Search&query=Can+a+municipality+be+liable+for+civil+rights+violations+by+its+employees]. Unlike WestSearch and Google Scholar, WC does not include a crowdsourcing feature for Natural Language queries. The second ResultsPlus document for the ALLCASES search does represent a promising start, especially as it provides a fuller context.
At any rate, the bias behind the marketing example involves stacking the deck with a question having enough descriptive specificity to invite comparisons of Google Scholar and WestSearch.
Of course, for questions like this one, we would need much more testing to compare the precision of Google Scholar and WestSearch. WestSearch will very likely win that contest. But we have a different goal. What does this biased example at Customers.WestlawNext.com tell us about WestSearch’s benefits? It tells us to use WestSearch for similar questions if we value speed over cost and precision over recall.
If a researcher has a little more time to answer the “municipal liability” question, the researcher can save money by using WN’s alternatives to keyword searching. So suppose that in lieu of WestSearch, a researcher used WN to browse the table of contents of a treatise - Nahmod, Civil Rights & Civil Liberties Litigation: The Law of Section 1983. A researcher could quickly identify a relevant chapter:
Chapter 6. "Every Person": Governmental Liability
§ 6:1. Introduction
§ 6:2. The law prior to Monell: the origin of § 1983 governmental immunity in Monroe v. Pape
§ 6:3. The law in the circuits prior to Monell: local government bodies and their agencies
§ 6:4. The law prior to Monell: suing individual officers who are persons
§ 6:5. Monell v. Department of Social Services: local government bodies are now persons
§ 6:6. Monell's official policy or custom requirement for local government liability: a duty analysis
A WN user may not have ready access to a library with this treatise or one like it. Under a “commercial” pricing plan, it would cost $42 to retrieve each section at a “retail,” transactional rate (WestlawNext Pricing Guide for Commercial Plans (Feb. 2010)) - expensive enough, but cheaper than running a WestSearch first at $60 and then retrieving specific sections of such “specialty” treatises. Under a comparable, “private” pricing plan, the “retail,” transactional rate is $24 per section, or almost 50% less than the cost of WN retrieval. (Westlaw Pricing Guide For Private Price Plans (Apr. 2010))
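A back-of-the-envelope comparison of the quoted "retail" transactional rates, using only the figures cited above; the number of sections retrieved (three) is a hypothetical chosen for illustration.

```python
# Checking the "almost 50% less" comparison of the two quoted per-section rates.
commercial_rate = 42   # per section, WestlawNext commercial plan guide (Feb. 2010)
private_rate = 24      # per section, private plan guide (Apr. 2010)
print(f"{1 - private_rate / commercial_rate:.0%} less per section")  # ~43%, i.e. "almost 50%"

# Retrieving, say, three sections after a $60 WestSearch: 60 + 3 * 42 = $186,
# versus 3 * 42 = $126 by browsing the table of contents directly.
sections = 3
print(60 + sections * commercial_rate, sections * commercial_rate)
```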
In addition, the table of contents for The Law of Section 1983 aids research in a way WestSearch cannot. The table of contents provides a useful context, especially for a legal researcher new to section 1983 practice. If you are researching an unfamiliar area of law, context will help you know what to look for and where to look for it. You may even end up redefining your question. Along the way, you may uncover relevant legal authorities that you would not think to look for from exclusive use of WestSearch. You will thus compensate for WestSearch’s failings in recall. Without the context, you are also likely to view more documents than you need to view, at WN’s increased cost per retrieval. Using WestSearch by default fits TR Legal’s marketing imperative. It also risks unnecessarily inflating costs.
You will have another reason to use the recommended WestSearch strategy if you value precision over recall. As we saw in Part I, TR Legal advertises high precision and recall for WestSearch. (WestSearch: WestlawNext Search Technology) But the claim appears misleading. While WestSearch may improve overall recall and precision, as precision rises, recall falls. The inverse relationship between precision and recall has consequences. Wheeler appears to have described one such consequence. (Wheeler, at 371-372) He tested use of highly descriptive words in a broad, “plain English” search. His unrestricted WestSearch query, abortion trimester constitutional, retrieved 317 cases on “the constitutionality of abortion and the timing measured in trimesters.” But his Natural Language query in WC retrieved almost two and a half times as many cases. A researcher reviewing the larger set of cases may find ones of interest, even if they are “not at first glance, directly relevant.”
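For readers who want the measures pinned down: precision is the share of retrieved documents that are relevant, and recall is the share of relevant documents that are retrieved. The sketch below uses invented document counts, not Wheeler's data, to show why trimming a result set tends to raise precision while lowering recall.

```python
# Standard definitions of precision and recall; the document counts are invented.
def precision(retrieved, relevant):
    return len(retrieved & relevant) / len(retrieved)

def recall(retrieved, relevant):
    return len(retrieved & relevant) / len(relevant)

relevant = set(range(40))                      # suppose 40 relevant cases exist
broad = set(range(30)) | set(range(100, 170))  # broad search: 100 results, 30 relevant
narrow = set(range(20)) | set(range(100, 110)) # trimmed search: 30 results, 20 relevant

print(precision(broad, relevant), recall(broad, relevant))    # 0.30, 0.75
print(precision(narrow, relevant), recall(narrow, relevant))  # ~0.67, 0.50
```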
Wheeler’s outcome should not surprise us. WestSearch relevancy ranking depends on a method for boosting recall that may also have the effect of limiting it. In Part I, we examined a patent for WestSearch technology. The inventors describe a central purpose of its new relevance ranking:
At least one problem the present inventors recognized with this effective and highly successful system [WC] is that it does not fully appreciate the "one good case" methodology that many, if not most, legal researchers uses when conducting their research. This method generally entails a user running a relatively broad or intermediate query, manually identifying one highly relevant case law document from the search results, and then leveraging that good document to find other relevant documents ... Accordingly, the present inventors have recognized a need for improvement of information-retrieval systems for legal documents and potentially other document retrieval systems. (U.S. Patent Application No. 20080033929, at [0006-0007])
By linking topically-related documents to top-ranked search results, WestSearch is supposed to increase recall. But a legal researcher may find many documents of interest that are not linked even to the most relevant results of a highly descriptive search query. WestSearch may improve precision if it shifts the most relevant documents to the top. But by tying other relevant documents to top-ranked search results, WestSearch may, in fact, limit recall. Precision cannot be the tail that wags the recall dog.
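A deliberately simplified sketch shows how tying results to a seed “good case” can cap recall: relevant documents that happen not to be linked to the seed never surface, however relevant they are. The link graph and the additional case names are invented for illustration and do not describe WestSearch's actual citation network.

```python
# Deliberately simplified: results are limited to documents linked to the seed case.
# The link graph is invented; it only illustrates the recall limitation discussed above.
links = {
    "Monell": {"Canton", "Pembaur"},
    "Canton": {"Monell"},
}
relevant = {"Monell", "Canton", "Pembaur", "Praprotnik", "Tuttle"}

seed = "Monell"                                  # the "one good case"
surfaced = {seed} | links.get(seed, set())       # seed plus documents linked to it
recall = len(surfaced & relevant) / len(relevant)
print(sorted(surfaced), f"recall = {recall:.0%}")  # relevant but unlinked cases are missed
```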
We can now turn to another strategy for effectively using WestSearch. Start with a specific (type of) source for use of WestSearch if: (i) you can describe your question in terms more or less specific to how you know that (type of) source would describe the answer; and (ii) you do not know how specifically your terms match the right conceptual descriptions in the target source. To work, this strategy depends on a researcher’s ability to anticipate a sufficient fit between descriptive keywords and the technical language, or “legal jargon,” of a primary source, such as a legislative or administrative code. We can better understand this strategy by comparing two questions as our examples. The first question we have already discussed. It implies a very tight fit between descriptive words and the right concepts, making the strategy unnecessary. The second question involves a looser fit, making the strategy helpful:
- The tight-fitting question: “Is a foreign corporation with only one employee in West Virginia an ‘employing unit’ subject to unemployment compensation laws?” (LRC study, at 25) (Answer: W. Va. Code § 21A-1A-14)
- The looser-fitting question: “Is it lawful to use deadly force to defend a motor vehicle in the state of Georgia?” (Wheeler, at 366-367 and n.42 (2011)) (Answer: “[Y]ou need to read several statutory sections, including Ga. Code Ann. §§ 16-3-23, 16-3-23.1, 16-3-24, 16-3-24.1 (2007). However, after reading them all, it is § 16-3-24.1 that ultimately defines the term ‘habitation’ to include motor vehicles in a way that answers the question.”)
Both questions are tied to state codes. The first question has more descriptive specificity than the second, because it is a targeted-issues question. Just one code section provides the correct answer, and the keyword searcher can anticipate a phrase (“employing unit”) and several words that the section must have. We would expect high precision from a “plain English” or Boolean use of WestSearch to answer the first question, even if we search “WV-State” content in WN. (WestSearch algorithms override full Boolean functionality, unless the Boolean searcher uses WestSearch’s “Advanced Search” template. Wheeler, at 370, n.54)
We need to apply the strategy to the second question. Although “deadly force” appears in one of the relevant code sections (§ 16-3-23.1), a different description of the concept - “force which is intended or likely to cause death” - occurs in other relevant sections (including § 16-3-24.1). So the second question has a complication of language: its answer has synonymous expressions. To try to answer it, a WestSearch user could enter the question in “plain English,” without any filtering restriction, as Wheeler did. A “plain English” search can also start with “GA-State” content in WN, or even just the Georgia code. Wheeler found that the “statutory section [he] was seeking appeared ninth in the list of statutory sections retrieved.” (367) WestSearch precision increases, however, if the WestSearch user starts with just the Georgia code. The relevant code sections then appear among the top-ranked results. TR Legal, however, markets WN as if the user need never first identify a specific source. So we have bridged another gap between reality and TR Legal’s marketing of WN. Moreover, if a legal researcher has another two or three minutes, the researcher can avoid online cost altogether. LexisNexis hosts a free version of the Georgia code under contract with the state. The researcher can run a “natural language” search, deadly force defend motor vehicle. The search engine ranks this code section as the top result:
O.C.G.A. § 16-3-21 (2011), TITLE 16. CRIMES AND OFFENSES, CHAPTER 3. DEFENSES TO CRIMINAL PROSECUTIONS, ARTICLE 2. JUSTIFICATION AND EXCUSE, § 16-3-21
It is the wrong section. But it belongs to the neighborhood of a correct answer. So the researcher could scan the topical outline of the code under Title 16, Chapter 3, Article 2, and find the relevant code sections that way:
TITLE 16. CRIMES AND OFFENSES
CHAPTER 3. DEFENSES TO CRIMINAL PROSECUTIONS
ARTICLE 2. JUSTIFICATION AND EXCUSE
§ 16-3-20. Justification
§ 16-3-21. Use of force in defense of self or others; evidence of belief that force was necessary in murder or ...
§ 16-3-22. Immunity from criminal liability of persons rendering assistance to law enforcement officers
§ 16-3-23. Use of force in defense of habitation
§ 16-3-23.1. No duty to retreat prior to use of force in self-defense
§ 16-3-24. Use of force in defense of property other than a habitation
§ 16-3-24.1. Habitation and personal property defined
§ 16-3-24.2. Immunity from prosecution; exception
§ 16-3-25. Entrapment
§ 16-3-26. Coercion
§ 16-3-27. Benefit of clergy
§ 16-3-28. Affirmative defenses
Again, context matters, even if it otherwise disappears from a list of search results.
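The synonymy complication in this Georgia example can be stated concretely. In the sketch below, a literal keyword filter for “deadly force” misses § 16-3-24.1, which uses the longer phrase “force which is intended or likely to cause death,” until the searcher (or the engine) supplies the synonymous wording. The section texts are paraphrased fragments for illustration, not the statute verbatim.

```python
# Paraphrased fragments of the Georgia sections discussed above (not verbatim text),
# used only to show why a literal keyword match misses synonymous phrasing.
sections = {
    "16-3-23.1": "no duty to retreat prior to use of deadly force in self-defense",
    "16-3-24.1": "use of force which is intended or likely to cause death ... "
                 "habitation includes a motor vehicle",
}

def match(sections, phrases):
    return [s for s, text in sections.items() if any(p in text for p in phrases)]

print(match(sections, ["deadly force"]))
# ['16-3-23.1']  -- misses 16-3-24.1
print(match(sections, ["deadly force", "likely to cause death"]))
# ['16-3-23.1', '16-3-24.1']
```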
It may seem more than odd to develop effective strategies for WestSearch by reverse engineering the biases in its corporate marketing. In fact, we should not have needed to pursue this exercise. The need to pursue it raises troubling concerns about the way TR Legal has marketed WestSearch.
We would have been better able to assess WestSearch’s strengths and weaknesses if WestSearch’s marketers had embraced an unbiased or evenhanded approach. Legal questions often present problems of synonymy, ambiguity, and syntactic complexity, but you would not know that if you relied on TR Legal’s preferred examples for marketing WestSearch. To demonstrate WestSearch’s effectiveness, TR Legal singles out questions that minimize these problems of language and thus play to the strengths of search engines. TR Legal also relies on questions that minimize problems with WestSearch’s criteria for ranking relevance and importance.
TR Legal not only fails to disclose WestSearch’s limitations, but charges WN customers more to overcome them. In Part 1, we compared “retail,” transactional costs that “commercial” WN and WC customers would incur to use secondary sources without a keyword search. It will cost them almost twice as much in WN to use tables of contents for viewing sections of secondary sources. Under these circumstances, TR Legal’s advertisements appear to convey false impressions to WN’s prospective customers.
Consider a sample advertisement - an excerpt from the media clip, Introducing WestlawNext:
Now you don't have to choose a database. Finding what you want is easy. Simply pick a jurisdiction and multiple content types are searched at once. In a single box, you can enter the way you are used to searching, the way you are used to speaking, or even by a specific citation. Behind this simple search is the power of the new WestSearch technology, which leverages our editorial assets much like an expert researcher would, only faster, giving you the most thorough research results, viewable by document type, and ranked by relevance.
Nothing in the advertisement as a whole changes the impact of this excerpt; it fairly represents the tenor of the whole, and of TR Legal’s other advertisements. Prospective customers would reasonably expect from such advertisements that there are no fundamental problems of language limiting WestSearch’s effectiveness. They would reasonably expect that WestSearch’s design does not create new and significant problems of keyword searching. And they would reasonably expect not to pay more in WN than in WC to circumvent any significant problems that migrate from WC to WN.
With one exception, advertisements at Customers.WestlawNext.com appear to mislead customers because they are incomplete. “[A] statement that contains only favorable matters and omits all reference to unfavorable matters is as much a false representation as if all the facts stated were untrue.” (Restatement (Second) of Torts § 529; see also § 551(2)(b)) The advertising omissions may also be material: they may affect the decisions of customers to switch from WC to WN. If sellers do not disclose material negative facts in their advertisements, they may violate the FTC Act and state mini-FTC acts.

One advertisement at Customers.WestlawNext.com promotes the LRC study itself:
VIEW WESTLAW NEXT EFFICIENCY STUDY
See how research that takes almost 11 minutes on Westlaw takes less than 4 minutes using WestlawNext.
Other advertisements, like Controlling Costs With Cutting Edge Technology, promote “lower costs for clients” due to the “significant time savings and efficiency of WestlawNext.” (at 10)
Customers who read the LRC study will learn that it has a “limited sample size,” and that it involves just one type of question that “does not completely reflect most real-world online research tasks.” (LRC study, at 22, italics added) Questions about targeted issues do not occur nearly as often as other “real-world online research tasks.” We are told that a follow-up study with “larger and less-defined issues could provide illuminating data,” even though we are also told that “targeted searches … remove the variability [of participant response] in more open-ended research projects.” The translation should be obvious enough. LRC studied performance for targeted issues because it could not undertake a study with “more illuminating data” from “less-defined” questions that “reflect most real-world research tasks.” A limitation of this kind already compromises the value of the study’s findings. A customer who reads the study may reach this conclusion, even if the customer does not also realize that, by selecting targeted issues, LRC biases the outcome in WestSearch’s favor.
Given this disclosure, TR Legal does not appear to mislead customers in the advertised study, although customers who neglect the full study may be misled. But under what circumstance would TR Legal engage a firm to conduct such a compromised study? We should not be surprised to learn that the study may lack the independence LRC claims for it. (LRC study, at 5) The following facts do not prove that the study lacks independence, but together they create the unwelcome appearance that it does.
Vicki C. Krueger, an LRC manager, led the “LRC research team” in preparing the study. She has conducted at least seven other studies on behalf of West Group - the predecessor to TR Legal - or about its services. According to a “white paper” that she co-authored, “LRC and West purchase each other’s services; however, LRC does not endorse any specific online research provider and utilizes several distinct online services." (Recovering Online Legal Research Costs: Best Practices for Enhancing Small Firm Profitability and Service to Clients (2006), at n.3) While LRC may not “endorse any specific online research provider,” it has had a history of significant relationships with West.
First, senior officers at LRC have had careers in sales and marketing at West. Christopher R. Ljungkull, LRC’s co-founder and CEO, worked at West Publishing Company between 1987 and 1994. (LRC Management Profiles) He marketed software products and “was responsible for WESTLAW advertising and for developing WESTLAW marketing plans." When Thomas W. Dwyer joined LRC in 2003 as a senior vice president of sales, he had served for seven years "in a variety of executive positions at West Group, the leading electronic publisher of legal and regulatory information." (LRC press release, July 15, 2003) He was a director and then a vice president of marketing. “The sales and marketing programs I developed within five months of joining Thomson Legal Publishing in 1995,” he says, “took my business unit from a fifth-place contributor to the number one position in less than a year.” (Tom Dwyer Linked-In Profile)

Second, in 1994, LRC and West Publishing Company agreed that West would “exclusively [refer] its customers requests for analytical legal research and writing to [LRC]." (10-KSB, filed with SEC, Apr. 2, 2001) The exclusive contract was in effect through at least 2000, although no public information about LRC reveals its current status. In 1997 and 1998, LRC reported that it paid West a “fee” for the referrals. (10-KSB filings with SEC, Mar. 27, 1997, and Mar. 30, 1998)

Third, with another investor, former West President Vance Opperman provided LRC an undisclosed amount of financing in 1999. (10-Q, filed with SEC, May 3, 1999)
Thus LRC and TR Legal have had ties significant enough to undercut LRC’s claim that it did an “independent” study on WN for TR Legal. Their ties mean not that LRC deliberately biased the sample questions, but that LRC may have had a financial incentive to do so. Their ties also reinforce a worry about the apparently misleading nature of advertisements at Customers.WestlawNext.com. TR Legal could have better promoted WN if it had avoided important omissions about WestSearch’s limitations.
Editor's Note: Part I of this two-part series titled "The WestSearch Straitjacket For Legal Research - Thinking Beyond The Keyword" was published on LLB last week here. Both posts in this series were written by an anonymous author, a first in LLB's six-plus years of publishing, unless one considers the rare cases where a post has been published anonymously but carried a byline something like "a law librarian" instead of the more traditional "Anon." "My bad," I guess. Pseudonyms, anyone? Considering the current state of affairs in the institutional buyer-vendor relationship, it may not be the last anonymous post published on LLB. I may have more to say about this matter later.
Until then, and in response to some inquiries, I will say this:
- The author is not a Lexis employee (or an employee of any other vendor)
- The author is not a disgruntled TR Legal contract employee (or a disgruntled contract employee of any other vendor).
- The author is not an ex-TR Legal employee (or an ex-employee of any other vendor).
Yes, all of the above questions were asked one way or another. See one comment to the first post in this series and my reply for an illustration.
The author is one of us: a professional law librarian. The author has been practicing law librarianship for X-decades, specializing in legal research.
The messenger is never more important than the message if the reader takes the time to examine the message closely by evaluating it on the basis of the content's merits. [JH]
First, very nice posts--there's a lot of good thought and info in there. On a different note, though, this has been bothering me since I saw Wheeler's LLJ piece:
"His unrestricted, WestSearch query [i.e. Westlaw Next] . . . retrieved 317 cases . . .. But his Natural Language query in [Westlaw Classic] retrieved almost two and half more cases."
I've got to be missing something here. I was of the understanding that ALL natural language searches in Westlaw Classic retrieved 100 results. (Yes, you can change that to a different number in your preferences, but 100 is the max and every search will always return the number you set.) So where did all of the extra results come from?
Posted by: CJM | Sep 18, 2011 3:11:13 PM