Saturday, September 21, 2013
This is a new article sent to me by a former guest blogger of ours, Nick Wagoner, who co-authored it along with Professor Drury Stevenson (South Texas). It's called Lawyering in the Shadow of Data and discusses the use of so-called "big data" to help lawyers more effectively practice law by offering clients a more accurate prediction about how their disputes are likely to be resolved. Nick says the article has recently been submitted for publication. You can download a copy here at SSRN. Here's an excerpt.
Previous technological advances mostly made firms more efficient at tasks that they were already doing—scheduling meetings, drafting documents, sharing ideas, and looking up cases. A few technologies considerably changed how lawyers approached a task. The most obvious example of this was the profession’s massive shift toward precision-timed billing (in minutes or fractions of hours) in the early 1980s, rather than ball-parked or “scheduled” fees, once computer software made such time-tracking more feasible.12 For the most part, however, the underlying nature of the work remained largely the same—researching, writing, meeting with clients and opposing counsel—and the technology merely made these tasks more convenient, or allowed lawyers to handle more cases. Even the research that attorneys now perform through Lexis and Westlaw is analytically analogous to the old approach in law libraries—finding cases in bound digests and reports, using intricate indexing systems like West’s Keys or Lexis’s Headnotes.
Big data, by contrast, invites lawyers to make a fundamental change in their approach to the law itself by looking to statistical patterns, predictors, and correlations, in addition to the legal rules that purportedly control outcomes—case law, statutory law, procedural rules, and administrative regulations. Traditional lawyering required knowledge of the pertinent legal rules and the ability to apply them to a given set of facts, whether in litigation or in transactional work. This application of law to facts would yield a type of estimate about probabilities; that is, a prediction of the likelihood that a given rule would govern a given scenario. The question was whether a feature of the client’s current situation would trigger a rule and its mandatory result. Analogies, comparisons, and normative judgments all figured into this assessment. Lawyers’ fees reflected, in theory, the time and resources required to determine the relevant law and analyze the likely outcome.
Big data turns this approach on its head. Rather than assuming that rules dictate outcomes as the basis for making specific predictions, big data looks for patterns and correlations. For example, aggregated historical litigation data might reveal a judge’s tendency to grant or deny certain types of pretrial motions, an opponent’s historical avoidance of expert witnesses, or a party’s typical timing for settlements; any of these may be more relevant for a client or lawyer than the published court opinions in prior cases that ran the full course of litigation.
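To make the idea concrete, here is a minimal sketch of the kind of aggregate query the authors describe, such as computing a judge's grant rate for a given motion type. The records, names, and field labels below are invented for illustration; a real system would draw on thousands of docket entries rather than a handful of hand-typed rows.

```python
# Hypothetical sketch: mining aggregated docket data for a judge's
# tendency to grant a given motion type. All records here are invented
# for illustration only.

def grant_rate(records, judge, motion):
    """Fraction of a judge's rulings granting a given motion type.

    Returns None when there are no matching rulings to aggregate.
    """
    relevant = [r["granted"] for r in records
                if r["judge"] == judge and r["motion"] == motion]
    return sum(relevant) / len(relevant) if relevant else None

rulings = [
    {"judge": "Judge A", "motion": "summary judgment", "granted": True},
    {"judge": "Judge A", "motion": "summary judgment", "granted": False},
    {"judge": "Judge A", "motion": "summary judgment", "granted": True},
    {"judge": "Judge B", "motion": "summary judgment", "granted": False},
    {"judge": "Judge B", "motion": "summary judgment", "granted": False},
]

# Judge A granted 2 of 3 summary judgment motions in this toy data set.
print(grant_rate(rulings, "Judge A", "summary judgment"))
```

The point is not the arithmetic, which is trivial, but the shift in what counts as legally relevant input: past behavior in the aggregate, rather than the reasoning of published opinions.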