Appellate Advocacy Blog

Editor: Tessa L. Dysart
The University of Arizona
James E. Rogers College of Law

Saturday, August 11, 2018

Of Robots and Law School


Perspectives: Teaching Legal Research & Writing is one of my favorite journals. It's dedicated to articles about teaching law students (and that is one of my favorite things). The editors put out an interesting call for papers recently. They asked: Will AI change how we teach the law? 

It got me thinking. 

As much as technology is disrupting the practice of law, it has the potential to downright flip the teaching of law on its head. After all, technology not only brings new ways of lawyering—which will change what we teach—but it's also changing how we teach. 

I thought it would be fun to brainstorm a bit. How could just one form of technology—artificial intelligence—change things for us law professors in the next few years? I have a big list, and I think it will be interesting to lawyers and law students, too. 

But first, some vocabulary. I'm confining things here to "artificial intelligence"—technology that aims to mimic human thinking and complete complex tasks that usually require human-level intelligence. There is a lot more technology that might disrupt lawyering and legal teaching, like virtual reality or online learning, but we will leave those for another day. 

Let me also say at the outset: AI should be exciting for anyone involved in teaching law students. The realities of today's legal market mean that students need more practical skills. Some smart folks predict that lawyering will become more and more project-based in coming years. The number of solo and small-team practitioners is likely to rise. Our students will need entirely new skill sets. They must be flexible, resourceful, and adaptable. And AI can help us close a lot of skill gaps and empower students in ways never possible before. 

Now to the AI. My running list of ways that autonomous tech will likely change legal education (and it is a running list; I'm sure there are loads more) includes:

  1. We need to prepare students for new AI-related laws;
  2. Students need to know how to deal with the ethical implications of AI;
  3. Students need practical training on new AI legal research tools;
  4. Students need to understand how to use AI-aided drafting and analysis tools (including the shortcomings of these tools);
  5. We need to show students how to deal with AI evidence in court;
  6. We need to give students an understanding of what AI can and cannot do for lawyers;
  7. We need to train students in fundamental skills for learning and using new technology tools, including those leveraging AI;
  8. We need to learn how to use AI to help us teach better.

First, the obvious change is that AI is already creating new substantive laws that we will need to teach our students about. For example, autonomous cars are a hot topic for legislatures across the country. But the big changes will come as AI penetrates deeper into the fabric of our civilization: AI investments; AI decision-making about benefits, hiring, or a million other business matters; autonomous weapons systems; safety regulations for AI; privacy regulations for AI; AI competition regulation; AI workforce regulations; and on and on.  

Indeed, Congress recently introduced the Future of AI Act (H.R. 4625) to broadly consider new AI legislative issues. Several other AI bills have been introduced: the AI JOBS Act of 2018, the SELF DRIVE Act, the AV START Act, and more. AI-related laws are going to be a bigger deal each year, and to equip our students for the future, we law professors will want to wade into this new doctrinal frontier.

Second, AI poses all sorts of legal-ethics issues. As I touch on below, AI will soon predict the outcomes of litigation with frightening accuracy. How much can a lawyer rely on those predictions in advising a client? In taking particular litigation positions? Can a lawyer rely solely on legal research done by AI (which is probably better than the research most folks do, anyway) and still comply with her ethical obligations? What about when a lawyer uses AI to file lawsuits autonomously? 

Next, AI legal research is probably the most obvious way our teaching has already changed. I recently touched on some groundbreaking AI legal research programs in another post. Not only can these tools find cases better than humans, but they can continuously crawl through the data and keep lawyers updated with changes in the law. These programs are already giving attorneys a tactical advantage—at least, for those daring and patient enough to learn them. So what better way for us to support our students than to teach them how to use these tools, too?

Which brings us to the meat: the drafting itself. AI is already offering help here—even with the complex analysis. AI programs can draft contracts, and even divorce decrees, better than us mortals. Take JP Morgan’s COntract INtelligence program, COIN, which analyzes legal documents with fewer errors than humans. Or ROSS, which can prepare legal memos with rule discussions that can be plugged directly into a brief.

Or Judicata’s new tool, Clerk, which picks apart the cases and legal analysis in your brief and offers suggestions for rewriting it all better. Here are a couple of screenshots of this tool in action. First, the AI plucks out an argument from a brief and figures out what other cases might be better:

[Screenshot: Clerk's case analysis]

Here is a snapshot of Judicata’s tool working to analyze the cases a brief uses to support an argument:

[Screenshot: case strength analysis]

Other tools can help not only with analyzing caselaw or arguments—but they can actually predict the likelihood that a particular argument will succeed with a particular judge. Lawyers can equip themselves with insight about what a judge may like in a brief, how opposing attorneys are likely to advocate, and more. Lex Machina, for example, applies natural-language processing to millions of court decisions to identify trends that can be used to help lawyers craft better arguments. The program can do things like summarize the legal strategies of opposing lawyers based on their case histories and determine the arguments most likely to convince specific judges:

[Screenshot: Lex Machina]

Other tools, like Premonition, can predict the winner of a case before it even goes to court, allowing lawyers to better advise clients at the outset of a case. These AI briefing and analysis tools can give our students a huge leg up as they enter practice (often opposite lawyers who have no idea these tools even exist).

Next, AI is already beginning to influence the evidence lawyers use in court. AI can process data in new ways, giving us accurate predictions and insights into the world. This will mean that, soon, we will see lawyers using AI to bolster factual evidence, expert valuations, and more. Imagine how much more accurate an AI would be at estimating damages or causation (with its ability to crunch massive sets of constantly updated data). Some of this is already happening. 

With all of the changes AI is bringing, lawyers (and thus our students) will need to understand enough about how AI works to speak the language and argue about its shortcomings. After all, lawyers still need to interpret all the data these AI tools spit out. For example, lawyers need to understand how AI can actually magnify data problems and why, sometimes, its conclusions can be undermined by bad inputs. A lot of lawyers hear about some technology tool and exclaim: “oh wow, a company says this analysis is 98% accurate? It must be trustworthy!”

But instead, the lawyer should know enough to at least ask the right questions, like: “Can you show me the set of metrics that this AI analysis used? Why did you select these? Why did you leave out what you did? What process did you use to get your underlying data?”

A related skill that students will need is the ability to discern between the good and the bad of technology. There are tons of new AI and other tech tools released each month, and much of it is garbage. For hapless lawyers who don’t understand what’s under the hood, it can be tough to tell the difference. Even simple best practices—like knowing to do manual quality checks of the results you get from a tool—can go a long way.

Let me give you two final ways AI should change how we’re teaching law. First, students need the basic tech skills that will allow them to research and pick up new AI tools as they roll out. Every tool will be different, with new user interfaces, new options, and new quirks to master. Students need to know the right questions to ask and the basics of how each type of tool works. For example, e-discovery platforms all have a common set of features (like tagging, search organization, and data fields). We can teach students these common features, as well as strategies for quickly learning the ins and outs of a new tool, so they don't have to rely on trial and error.

Finally, AI can help us more directly with our teaching. There are so many possibilities here that I will just leave you with a couple.

AI analytics can tell us which legal fields are the hottest—allowing us to focus more on those doctrinal areas so that students are better positioned for success.

AI can give us insights into learning data along various dimensions, helping us identify how to better structure the learning process and which teaching methods are most strongly correlated with learning outcomes. Is a teaching method working? Have an AI review your students' work before and after; it will be able to tell you. 

Something I am experimenting with a bit right now: AI can even do some of the teaching itself. Imagine an AI that can give your students live feedback on their legal writing—pointing out improper citations and guiding them in writing correct ones, pointing out grammar problems with examples of how to fix them, pointing out poor headings, poor rule explanations, or poor applications—or problems with formatting or any of a dozen other things we are constantly trying to teach our students (but never have enough time for). And how thankful would our students be for high-quality essay feedback, on demand? 

Look, I know there is some worry that AI programs might take over the world and turn us into their slaves. But in the meantime, AI can do a lot of good for us law professors, students, and lawyers. 

Joe Regalia teaches at Loyola University School of Law, Chicago and practices at the firm of Sidley Austin LLP. The views he expresses here are solely his own and not intended to be legal advice. Check out his other articles here.
