December 18, 2009
Artificial Intelligence, Avatar* Lawyers, and Judgment
Posted by Jeff Lipshaw
I was an Avatar skeptic, but having read the reviews this morning and watched the trailer (available through the link), I'm going to see it. What about avatar lawyers? As part of my business lawyering judgment project, I've been surveying the literature on artificial intelligence and the law. Could a computer (or an avatar lawyer) ever make a difficult judgment? I want quickly to summarize the field (over-simply, no doubt) and make one central observation. I think the answer to that question is "yes," but it's probably not the kind of "yes" that makes us comfortable. [See Update below on my incorrect use of Avatar Lawyer. If it makes you feel better, think "Robo-Lawyer."]
First, let me pose a prototypical problem at the magnitude of difficulty (or complexity) I want to address (this is an excerpt from an in-process piece):
A small manufacturing firm makes plastic electrical connectors. It sells five million of them a year to the automotive industry at a price of fifty cents a unit. The firm's gross revenues are thus $2.5 million. The form purchase order from the automobile manufacturer provides that the supplier is responsible for all losses, including consequential damages, arising from any defect. If the connectors turned out to be defective, their replacement would require two hours of time from a service mechanic (roughly $100). The automobile manufacturer refuses to modify the form warranty provision. Should the firm sell the connectors?
Note the number of business and legal issues this hypothetical presents. It requires a lawyer (human or avatar) to understand the default rules under Section 2-207 of the Uniform Commercial Code and the negotiated alternatives to that statute, to predict possible legal outcomes, and to weigh the legal risks against a business opportunity with a known payoff, taking into account risk averseness and cognitive biases.
We now need to sort through different kinds of reasoning and their amenability to being replicated in a computer (our avatar lawyer). There's a certain aspect to this that is purely deductive, and capable of being programmed, even by a dolt like me. When I've taught UCC 2-207, I've used a flow chart to map the deductive system it incorporates. From a given set of assumptions, using a set of rules of inference, the program tells you whether you have a contract and on what specific terms. (See flow chart left.) Embedded within the flow chart, however, are questions whose answers are "yes" or "no", but whose reasoning involves something other than deduction. For example, the deductive system requires an assessment at Step 8 whether the terms in the expression of acceptance are "different from" or "additional to" those in the offer. That assessment is not deductive. It may be analogical - we look at other instances of terms being different or additional and decide whether our case is closer to the ones in which the answer was "yes" than to ones in which the answer was "no." Or we could say, perhaps, that the assessment is inductive. Here we are going to look at all the past cases interpreting "different from" and derive a general rule that distinguishes "yes" cases from "no" cases. (Kant referred to this, by the way, as "reflective judgment." I will get to why it's a judgment in a moment.) The other piece of the inductive process would be to take the rule thus derived and determine whether our case falls within the rule. (That's what Kant referred to as "determinant judgment.") We see the principle of "garbage in - garbage out" at work here; if the analogical or inductive reasoning along the way is poor, the deductive process of the flow chart isn't going to be very helpful.
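The purely deductive skeleton of that flow chart can be sketched in a few lines of code. The branch structure and labels below are my own toy simplification for illustration, not the statute's actual text - and notice that each boolean input (is the acceptance "expressly conditional"? does a term "materially alter" the offer?) is exactly the kind of question the deduction cannot answer for itself.

```python
# Toy sketch of the deductive skeleton of a UCC 2-207 flow chart.
# The steps and labels are an invented simplification for illustration.
# Each boolean *input* requires non-deductive judgment to supply.

def contract_under_2_207(acceptance_is_definite: bool,
                         acceptance_expressly_conditional: bool,
                         both_parties_merchants: bool,
                         terms_materially_alter: bool) -> str:
    """Walk the yes/no branches from assumptions to a conclusion."""
    if not acceptance_is_definite:
        return "no contract on these writings"
    if acceptance_expressly_conditional:
        return "acceptance operates as a counteroffer; no contract on the writings"
    if not both_parties_merchants:
        return "contract on the offer's terms; extra terms are mere proposals"
    if terms_materially_alter:
        return "contract on the offer's terms; altering terms drop out"
    return "contract including the additional terms"

print(contract_under_2_207(True, False, True, True))
# -> contract on the offer's terms; altering terms drop out
```

The deduction itself is trivial; everything interesting happens in deciding what to feed it.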
More below the fold.
But there's even another kind of reasoning, or insight, or mental process involved here, and that can take us in a couple of other directions. As to analogical reasoning, we now have to ask the question what makes a good analogy? I'm not going to dip into the cognitive science that addresses this issue for now - the best source on the intersection of metaphor, analogy, and the law is Steve Winter's A Clearing in the Forest. Suffice it to say that it has more to do with pattern-recognition than rule-recognition. As to inductive reasoning, how do you come up with the hypothesis that one particular rule fits the data (here, case results) better than another rule? This is what is called variously abductive reasoning, or inference to the best explanation. (The pioneer was Charles Sanders Peirce.) The best we can do for this kind of reasoning is to say it's a kind of educated guess, and probably related in a significant way to the pattern recognition the cognitive scientists are studying. As the behaviorists might say, it's far more likely to be a heuristic than a calculation.
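One mechanical stand-in for "inference to the best explanation" is to score candidate rules by how well each reproduces past case outcomes and keep the best scorer. The cases and candidate rules below are wholly invented for illustration - and note the sleight of hand: the candidate pool itself had to come from somewhere, which is the abductive guess the scoring can't supply.

```python
# Hypothetical sketch: "inference to the best explanation" as picking,
# from an invented pool of candidate rules, the one that best fits
# invented past case outcomes.

past_cases = [  # (term in the acceptance, was it held "different from" the offer?)
    ("price +5%", False),
    ("price +40%", True),
    ("arbitration clause", True),
    ("delivery +2 days", False),
]

candidate_rules = {
    "any change is 'different'": lambda term: True,
    "only large economic or dispute-resolution changes": lambda term:
        "40%" in term or "arbitration" in term,
}

def best_explanation(cases, rules):
    # Score each rule by how many past outcomes it reproduces.
    scores = {name: sum(rule(term) == outcome for term, outcome in cases)
              for name, rule in rules.items()}
    return max(scores, key=scores.get)

print(best_explanation(past_cases, candidate_rules))
# -> only large economic or dispute-resolution changes
```

The second rule fits all four cases, the first only two - but nothing in the scoring tells us how to have thought of the second rule in the first place.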
Let's go back to the hypothetical. Surely we can program the avatar lawyer to give us some very good input into the judgment we need to make - it could gather case data as well as empirical data on failure of electrical connectors, and probability calculations of various kinds. But how much input and how good? There's a debate from 2000 between several computer scientists and Cass Sunstein over whether artificial intelligence in law can ever be anything more than a glorified LEXIS or WESTLAW (8 U. Chi. L. Roundtable 1 (2001)), Cass's position being that computers can't do analogical reasoning. There's a response from Eric Engle (Richmond J. L. & Tech., Vol. 9, Issue 2) disagreeing with Cass, and arguing that the criticism is based on notions of static rules of computation, rather than dynamic rules of computation, in which the computer learns from its prior errors. Moreover, so-called "neural networks" already allow computers to undertake pattern recognition. These are computer programs designed to model the way that brain neurons process patterns. Again, highly oversimplified, these are programs that allow parallel rather than serial processing, and contain learning algorithms that allow the program to "learn" - that is, to reject choices available within the program. The program doesn't just find a solution - it finds the optimal solution (usually the solution that has the lowest cost).
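The "learning from its prior errors" idea can be made concrete with the simplest neural unit there is, a single perceptron: it adjusts its weights only when it misclassifies, and after enough passes it reproduces a simple pattern. This is a generic textbook illustration, not a sketch of any actual legal AI system.

```python
# Minimal sketch of an error-driven learning rule: a single perceptron
# nudges its weights whenever it misclassifies, "learning from its
# prior errors." Pure illustration, not a model of any legal AI.

def train_perceptron(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # nonzero only on a mistake
            w[0] += lr * err * x1       # weight update driven by the error
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy pattern (logical AND): output 1 only when both inputs are 1.
data = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)
# -> [0, 0, 0, 1]
```

The catch, for present purposes, is that the update rule itself - subtract, scale by lr, add - was chosen by the programmer, not learned.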
Here's why I think Sunstein is right, and why the complex judgment in my hypothetical won't ever - in concept - be finally amenable to an AI resolution (my central observation), even though - in concept - the avatar lawyer is possible. Human brains supply the rules to the program. Whether it's merely induction or it's an artificial neural network that can replicate analogical reasoning through pattern recognition, we need to undertake an abductive process to come up with the rule we are going to give the computer, either close to the surface (as in my simple flow chart), or deep down in the midst of the neural network. Even if the rule is a second- or third-order learning rule, we need to make a judgment about the best choice among various alternative rules for learning. This is the infinite regress that is the source of rule-skepticism. You decide to adopt Rule A as the optimizer for making a particular choice among possible patterns that could explain the visual data. What is the rule you used to adopt Rule A? Let's assume it was Rule A'. How did you decide to adopt Rule A' as opposed to some alternative? If it was by another rule, Rule A'', how did you choose A'' over its alternatives? At some point, you choose without a rule (hence, the rule-skeptics' view that there's no Rule of Law, only indeterminacy and the exercise of power).
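The regress can even be written down. Each "rule for choosing a rule" is itself chosen by a rule one level up, and the chain has to bottom out somewhere in a choice made without any rule at all - the brute judgment. A toy recursion, with the depth picked arbitrarily:

```python
# Sketch of the rule-skeptic's regress: a rule at each level is chosen
# by a rule one level up, until the chain bottoms out in a choice that
# is simply hard-coded -- the "brute judgment."

def choose_rule(level, max_depth):
    """To justify a rule at this level, consult a rule one level up."""
    if level == max_depth:
        # No further rule to consult: the programmer's brute choice.
        return f"brute choice at level {level} (no rule -- hard-coded)"
    return f"rule at level {level}, chosen by [{choose_rule(level + 1, max_depth)}]"

print(choose_rule(0, 3))
```

However deep the nesting goes (here, three meta-levels), the innermost bracket always reads the same way.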
There's no reason why we can't, in theory, make a really sophisticated, and perhaps even human-seeming, judgment computer (the gray eminence that is the avatar of old Mr. Cravath, for example) that could make the decision in my hypothetical. It would assess the case law, calculate the probabilities of defect and their cost consequences, determine the business's risk averseness, correct for cognitive biases, and give us a "yes" or "no" answer to the question. But as long as it's digital, it will contain, deep within its bowels, an inescapably brute judgment. What leaves us uncomfortable is our inability to confront the maker of that brute judgment.
*UPDATE: My son James, who also today suggested I was getting a haircut "under an illusion of agency" (you have to love a kid who can say that), tells me I have completely misused the idea of a virtual world avatar. Too bad.
*UPDATE-2: I have now seen the movie and James is correct. The correct concept would be "Robo-Lawyer," not Avatar Lawyer. And it is one spectacular movie.