Appellate Advocacy Blog

Editor: Tessa L. Dysart
The University of Arizona
James E. Rogers College of Law

Tuesday, March 12, 2024

Can AI Help with Jury Selection?

A couple of months ago, I had my first experience sitting on a panel for voir dire.  It was a criminal case for the class E felony of fourth-degree domestic assault, which carries a maximum of four years’ imprisonment in Missouri.[i]  Having worked on behalf of the state in criminal cases for six years (and seeing a former colleague sitting at the bench when we walked into the courtroom), I knew I would not make the final cut.  But I did my best to respond truthfully and keep my outside legal knowledge to myself.

As an appellate attorney, I’ve read hundreds of voir dires but participated as counsel in only one, and even then, I was a distant second-chair.  Yet I can probably rattle off dozens of questions likely to be asked by both the prosecution and the defense, so I was thinking about how I would respond to those questions I knew were going to be asked—questions about my knowledge, interactions, and feelings about the criminal justice system, the nature of the crime, the burden of proof, and the presumption of innocence. 

But many of the questions I expected did not come.  We fielded plenty of questions about the burden of proof and presumption of innocence, a few vague and awkwardly worded questions I suspect were directed at specific facts underlying the offense, and some about interactions with law enforcement.  But neither side asked whether anyone on the panel had ever been the victim of, or accused in, a domestic violence situation, or knew anyone who had been.  Before voir dire concluded, however, a panelist raised her hand and asked counsel if they wanted to know that kind of information, which of course they did.  And roughly five panelists then explained their connections with domestic violence, most of which appeared to be disqualifying connections that never would have come to light but for that one panelist’s question.

And this experience made me wonder if AI could help attorneys ensure that their voir dire questions adequately ferret out any potential bias in the panelists (or at least make a good record to preserve any issues later for appeal).[ii] 

To adequately ferret out bias, we must first understand what leads to bias, so I asked ChatGPT to create a list of potential biases any juror might have in a criminal case, and it identified the following:

  1. Confirmation Bias:
    • Tendency to interpret new evidence as confirmation of one's existing beliefs or theories.
  2. Pretrial Publicity Bias:
    • Opinions formed based on media coverage or public opinion about the case prior to the trial.
  3. Experience-Based Bias:
    • Personal or vicarious experiences with crime, law enforcement, or the legal system that may color perceptions of the case.
  4. Occupational Bias:
    • Work experience in law enforcement, legal, or other related fields that might influence views on evidence or testimony.
  5. Community Bias:
    • Pressure to decide in a certain way because of perceived community opinions or values.
  6. Sympathy Bias:
    • Decisions influenced by feelings of sympathy for the victim, defendant, or their families rather than the evidence.
  7. Prejudice:
    • Biases based on race, ethnicity, religion, gender, sexuality, or socioeconomic status of the defendant, victim, or witnesses.
  8. Authoritarian Bias:
    • A tendency to believe and defer to authorities or officials without sufficient scrutiny.
  9. Moral or Religious Beliefs:
    • Strong moral or religious convictions that might interfere with the juror's ability to consider certain types of evidence or testimony objectively.
  10. Punishment Bias:
    • Fixed beliefs about the criminal justice system, punishment, or specific crimes that might influence the juror's decision-making.
  11. Victim Blaming:
    • A tendency to blame victims for their own victimization, which might affect the juror's interpretation of the evidence or testimony.
  12. Overconfidence Bias:
    • Overestimation of one's own reasoning abilities or underestimation of the complexity of the case.
  13. Halo Effect:
    • The tendency to have an overall impression of a person, witness, or defendant and allow that impression to influence judgments about their specific traits or behaviors.
  14. Groupthink:
    • A tendency to conform to the majority opinion in the deliberation room, rather than independently evaluating the evidence.
  15. Anchoring Bias:
    • The reliance on the first piece of information encountered (initial impressions or evidence) when making decisions.
  16. Selective Memory:
    • Remembering things in a way that confirms one's preconceptions, often while forgetting evidence that contradicts those preconceptions.

I then asked it to identify general topics that should be addressed in voir dire for any criminal matter, and it responded:

  1. Juror Background Information:
    • Basic personal information (e.g., occupation, education).
    • Any connections to the legal or law enforcement communities.
  2. Knowledge of the Case:
    • Exposure to pre-trial publicity and media coverage.
    • Pre-existing knowledge or opinions about the case or parties involved.
  3. Experience with the Legal System:
    • Prior jury service and the nature of the case(s).
    • Personal or family experience with the legal system, whether civil or criminal.
    • Experience with being a victim, witness, or defendant in a legal proceeding.
  4. Views on Law and Order:
    • Beliefs about the criminal justice system's fairness and effectiveness.
    • Opinions on law enforcement and the legal process.
  5. Potential Biases and Prejudices:
    • Any biases or prejudices that could affect impartiality.
    • Attitudes towards race, ethnicity, religion, gender, or socioeconomic status that may be relevant to the case.
  6. Moral, Ethical, and Philosophical Beliefs:
    • Beliefs that might affect perceptions of guilt, innocence, or the nature of justice.
    • Opinions on controversial issues that might relate to the case (e.g., drug use, corporal punishment).
  7. Ability and Willingness to Serve:
    • Availability for the entire duration of the trial.
    • Any personal or professional conflicts that may affect the ability to serve.
  8. Understanding and Acceptance of Legal Principles:
    • Acceptance of the presumption of innocence.
    • Willingness to make decisions based solely on evidence presented in court.
    • Understanding of the standard of proof required in a criminal case (beyond a reasonable doubt).
  9. Decision-making and Deliberation:
    • Ability to deliberate fairly and openly with fellow jurors.
    • Willingness to change one's opinion if evidence warrants.
    • Comfort with discussing potentially distressing or sensitive topics.
  10. Case-specific Issues:
    • Attitudes towards specific aspects of the case (e.g., views on mental health issues if relevant to the case).
    • Opinions or knowledge about specific types of evidence (e.g., DNA evidence, eyewitness testimony).

While many of these topics were covered in the original juror questionnaire, it’s still a great list to consult to ensure full coverage.

Finally, I asked it to “draft a list of questions for voir dire in a state criminal case where the defendant is charged with two counts of fourth degree domestic assault and the victims are his wife and child.”  And the AI gave me this:

Background Questions

  1. General Information:
    • Can you please state your name, age, and occupation?
    • Have you or anyone close to you ever served in law enforcement or the legal profession?
  2. Case Awareness:
    • Have you heard about this case prior to today? If yes, what have you heard?
    • Have you formed any opinions about this case based on what you might have heard or read?
  3. Legal System Beliefs:
    • What are your general impressions of the criminal justice system?
    • Do you believe that the system is fair to defendants? Why or why not?
  4. Media Consumption:
    • Do you regularly follow news stories about criminal cases, either in the news, on television, or via social media?
    • Have you watched or read any true crime shows, podcasts, or books? Do you think these have influenced your views on criminal cases?

Experience with Domestic Violence

  1. Personal Experience:
    • Without providing specifics, have you or someone close to you ever been involved in an incident of domestic violence?
    • If yes, was the incident reported to law enforcement, and were you satisfied with the outcome?
  2. Perceptions and Beliefs:
    • What comes to mind when you hear the term 'domestic violence' or 'domestic assault'?
    • Do you believe that domestic violence is a private matter or a public concern? Please explain.
  3. Legal Perspective:
    • Do you believe that people accused of domestic violence should be considered innocent until proven guilty beyond a reasonable doubt?
    • In your opinion, are there ever situations where domestic violence is justified or excusable?

Attitudes Towards Defendants and Victims

  1. General Sentiments:
    • How do you generally view individuals who have been accused of a crime?
    • Do you feel that you can listen to testimony about alleged domestic violence and assault without forming a judgment until all evidence is presented?
  2. Child Involvement:
    • The alleged victims in this case include a child. Does this information affect your ability to remain impartial?
    • In cases where a child is involved as a victim, do you believe your emotions could interfere with your ability to weigh the evidence objectively?
  3. Defendant-Victim Relationship:
    • Do you believe that a spouse or a child is more or less likely to be truthful about accusations of domestic violence against a family member?
    • How do you feel about cases where family members are involved in legal disputes with each other?

Ability to Serve

  1. Time and Attention:
    • This trial may last several weeks. Do you have any personal or professional commitments that would prevent you from serving for the entire trial?
    • Are you willing and able to give your full attention to the evidence presented during this time?
  2. Decision Making:
    • Are you comfortable with the concept of deliberating in a group and coming to a unanimous decision?
    • Can you think of any reason why you would not be able to deliver a verdict based solely on the evidence presented and the judge's instructions on the law?

I understand that there’s a lot of “horse sense”[iii] involved in the actual strikes, but new or inexperienced attorneys may not yet have fully developed or honed this sense, and their clients also deserve a fair and impartial jury.  AI tools can be very helpful not only in discerning what leads to bias but also in providing potential questions to help draw it out.


[i] Mo. Rev. Stat. § 558.011.1(5) (Cum. Supp. 2023).

[ii] “Voir dire examination serves the dual purposes of enabling the court to select an impartial jury and assisting counsel in exercising peremptory challenges.” Mu'Min v. Virginia, 500 U.S. 415, 431 (1991). And the importance of ferreting out bias pretrial lies not only in the right to a jury trial itself but also in the “no-impeachment rule,” which generally precludes jurors from later impeaching their own verdict through either affidavit or live testimony. Pena-Rodriguez v. Colorado, 580 U.S. 206, 209 (2017); see also Fed. R. Evid. 606 (Juror’s Competency as a Witness).

[iii] “Horse sense” is a term apparently coined by Karl Llewellyn in The Common Law Tradition: Deciding Appeals (1960), and it means “the balanced shrewdness of the expert in the art.”  Charles L. Barzun, Justice Souter's Common Law, 104 Va. L. Rev. 655, 726 n.238 (2018).

March 12, 2024 in Web/Tech | Permalink | Comments (0)

Thursday, July 6, 2023

Courts are Regulating Generative AI for Court Filings.  What Does This Mean for Legal Writers? 

Thursday’s Rhaw Bar: A Little Bite of All Things Rhetoric and Law—exploring ideas, theories, strategies, techniques, and critiques at the intersection of rhetoric and legal communication.

There’s been a flurry of court-initiated activity around using generative artificial intelligence (generative AI) to draft court filings. One court has sanctioned lawyers for misusing OpenAI’s large language model, ChatGPT.  Perhaps as a result, at least four more have issued orders regulating the use of generative AI in legal writing.

What’s going on here?  And what does this activity mean for legal writers?

How It All Began:  A Federal Court Sanctions Lawyers’ “Bad Faith” Use of ChatGPT “Fake Cases” in a Court Filing

In March of this year, two lawyers filed a motion in the United States District Court for the Southern District of New York that included citations to multiple court opinions that did not exist.  In Mata v. Avianca, Inc., the plaintiff’s lawyers admitted that one of the lawyers had used ChatGPT, “which fabricated the cited cases.”  The lawyer said that he did not think at the time that ChatGPT could fabricate cases.  According to the court’s finding of fact, the lawyers persisted in representing the cases as real even after they became aware that they were fake.

In its order sanctioning the attorneys, the court noted that although “there is nothing inherently improper about using a reliable artificial intelligence tool for assistance,” lawyers must “ensure the accuracy of their filings.”  As such, the court sanctioned the lawyers for citing the fake cases under Federal Rule of Civil Procedure 11(b)(2), which requires lawyers to certify that, after a reasonable inquiry, they believe the “legal contentions [in the court filing are] warranted by existing law.”  The court suggested that, perhaps, if the lawyers had “come clean” about the fake cases in a timely manner, they might not have violated Rule 11 simply by mistakenly citing the fake cases.  But because the lawyers had engaged in acts of “conscious avoidance and false and misleading statements to the Court” and had continued to stand by the fake cases even after judicial questioning, they had engaged in bad faith, which merited sanctions. 

How Courts are Regulating Generative AI—And What They Appear to Be Concerned About

Between the time news reports began circulating and the Mata court’s issuance of its sanctions order, other courts acted to prospectively regulate generative AI use in cases before them.  Their rationales for regulating generative AI use in court filings vary but focus on four concerns:

  • ensuring the involvement of human beings in checking generative AI’s accuracy;
  • ensuring that cited legal authority exists and is accurately described;
  • protecting sensitive information from inadvertent disclosure to others; and
  • ensuring lawyers do their own writing.

Human Beings Must Check Generative AI’s Output for Accuracy

In the United States District Court for the Northern District of Texas, one judge created a new “Judge Specific Requirement” that requires all attorneys and pro se litigants to certify for all filings in the case that either (1) they will not use generative AI to draft court filings or (2) a “human being” will check any portions generated by AI “for accuracy, using print reporters or traditional legal databases.”

The judge explained that “legal briefing” is not a good use of generative AI because it is “prone to hallucinations [(i.e., inaccurate information)] and bias.” Concerning bias, the judge said that because large language models like ChatGPT have not sworn an oath to “faithfully uphold the law and represent their clients,” they are “unbound by any sense of duty, honor, or justice” that applies to lawyers and act only according to “computer code” and “programming.” 

The judge advised parties that they could, if they desired, move for leave to explain why generative AI “has the requisite accuracy and reliability for legal briefing.”  The judge provided a certification form that requires a guarantee that

[n]o portion of any filing in this case will be drafted by generative artificial intelligence or that any language drafted by generative AI -- including quotations, citations, paraphrased assertions, and legal analysis -- will be checked for accuracy, using print reporters or traditional legal databases, by a human being before it is submitted to the court. I understand that any attorney who signs any filing in this case will be held responsible for the contents thereof according to the applicable rules of attorney discipline, regardless of whether generative artificial intelligence drafted any portion of that filing.

A magistrate judge in the United States District Court for the Northern District of Illinois articulated a similar rationale when he added a certification requirement to his Standing Order for Civil Cases.  The judge required that any party that uses any “generative AI tool” for “preparing or drafting” court filings must “disclose in the filing that AI was used and the specific AI tool that was used to conduct legal research and/or to draft the document.”  The judge said that parties should “not assume” that relying on generative AI would “constitute reasonable inquiry” under Rule 11 of the Federal Rules of Civil Procedure.  The Standing Order focused on the unreliability and inaccuracy of legal research as the reason for the certification requirement. It said that the judge would “presume” that the certification means that “human beings . . . have read and analyzed all cited authority to ensure that such authority actually exist[s].”

Court Filings Must Have Accurate Citations to Law and the Record

Another judge focused specifically on the accuracy of citations to the law in his order requiring that the use of “artificial intelligence” for court filings be disclosed.  In a standing order for a judge sitting in the United States District Court for the Eastern District of Pennsylvania, the judge required that all attorneys and pro se parties make a “clear and plain factual statement” that disclosed the use of “AI . . . in any way in the preparation” of court filings and certify “every citation to the law or the record . . . has been verified as accurate.”

Parties Must Protect Confidential and Business Proprietary Information from Disclosure to Generative AI

In the United States Court of International Trade, one judge issued an “order on artificial intelligence” to protect “confidential or business proprietary information” in court briefs.

In the Court of International Trade, specific rules protect “sensitive non-public information owned by any party before it” from disclosure.  As such, the court requires filings to identify which information contains sensitive information.  It also requires lawyers to file “non-confidential” versions of briefs that remove this information.  Lawyers practicing before the Court of International Trade can receive sensitive information if they are certified by the court to do so.

In this context, the judge explained his concern that “generative artificial intelligence programs . . . create novel risks to the security of confidential information.”  Because lawyers might prompt these programs with confidential or business proprietary information to get generative AI to provide useful outputs, a risk arises that generative AI will “learn” from that prompt, thereby enabling the “corporate owner of the [generative AI] program [to retain] access to the confidential information.”  The order says this implicates “the Court’s ability to protect confidential and business proprietary information from access by unauthorized parties.”

Accordingly, the court ordered all submissions drafted with the assistance of generative AI by using “natural language prompts” be accompanied by (1) a disclosure identifying which generative AI “program” was used and which portions of the document had been drafted with generative AI assistance, and (2) a certification stating that the use did not result in any sensitive information being disclosed to “any unauthorized party.”  The order also specifically allowed any party to seek relief based on the information in this notice.

Lawyers Must Do “Their Own Writing”

In the case of Belenzon v. Paws Up Ranch, LLC, filed in the United States District Court for the District of Montana, a judge ordered that an out-of-state attorney admitted pro hac vice must “do her own work.”  The court said that this included doing “his or her own writing.” As such, the court prohibited the pro hac lawyer from using “artificial intelligence automated drafting programs, such as Chat GPT.”  The court did not explain its reasoning in the order.

What Should Legal Writers Do in This New Regulatory Environment?

These varying approaches to generative AI (as well as its availability) put pressure on legal writers to anticipate what they should do in this new environment.  Here are some suggestions for taking action.

Check local court rules, standing orders, procedural orders issued in your case, or the published preferences of judges to see if a judge has rules on generative AI use. This is a quickly developing area, and you can expect that more judges—and perhaps even entire courts in their local rules—will begin to consider whether and how they regulate generative AI.

Read the new regulations carefully. How judges will regulate AI in their courtrooms will likely vary, so read carefully and avoid assumptions.  For example, in the new regulations, courts vary in how they refer to the technology they are concerned about, using both “generative AI” and “artificial intelligence” as identifiers. But these terms do not necessarily mean the same thing. “Artificial intelligence” generally refers to a broader category of tools than “generative AI.”  For example, Word’s Editor is powered by artificial intelligence.  Lexis already uses “extractive artificial intelligence” in some of its research products. Brief Catch represents that it uses artificial intelligence in its products. These are all AI tools that do not fall within the category of generative AI. 

A lawyer attempting to comply with AI regulation needs to know the scope of what the court wants to regulate.  That is, does a court requiring a certification about “artificial intelligence” mean to include tools like those mentioned above?  If you are not sure what the judge means, it might be wise to ask.  (And judges should be as clear as possible about which artificial intelligence tools they are concerned about so as not to unintentionally regulate writing tools too broadly.  For example, Word’s Editor does not seem to raise the concerns the judges have identified, yet it fits within the category of “artificial intelligence.”)

In addition, courts vary in what they want you to do about generative AI. One court—in one specific circumstance—has prohibited its use.  But the rest—so far—ask for various attestations about whether and how it has been used.  As time progresses, you may appear before courts regulating generative AI differently.  Get clear on the requirements and add them to your court-specific writing checklist.

If you use generative AI to help you write, treat it like any other writing tool. Generative AI does not replace you; you are responsible for the quality of your writing.  The courts are right: no currently available generative AI tool replaces a lawyer in producing written documents.   But there is potential for generative AI to help legal writers write more clearly, precisely, correctly, and persuasively.  This could mean better and more cost-effective results for clients—and more efficient and effective practice before the courts.  In other words, courts could benefit from lawyers competently and carefully using generative AI as a legal writing tool.

Plus, enterprise versions of generative AI tools are rapidly developing for use in the legal domain, which may make using generative AI for legal writing less risky.  Some products already exist; others are on the way. These tools are meant for lawyers, and some lawyers are already using them.  Unlike publicly available all-purpose large language models like ChatGPT and Bard, these fine-tuned and further trained models will likely better protect confidential client information; produce more accurate, reliable, and verifiable results for legal research; and be more competent at generating effective legal writing.  In other words, future generative AI writing tools will do more to address the courts' concerns about generative AI.

Regardless of whether you are using general-purpose or enterprise generative AI for your legal writing, one thing won’t change: you are ultimately responsible for the written work you produce.  You are the human being the courts care about. You cannot outsource your judgment and competence to generative AI.  It does not evaluate information, legally reason, or do legal analysis (even though it might appear to). It does not have a professional identity committed to the rule of law, just results, and fair play.  What it does is this: it uses mathematical computations to predict the most appropriate words to provide in response to a prompt. Thus, to use generative AI ethically and responsibly, you must

Understand how generative AI works. Generally speaking, you have an ethical duty to be competent in using technological tools as part of your practice.  If you don’t have a basic understanding of natural language processing, machine learning, and large language models, you should get that understanding before you use generative AI.  There’s a strong argument that generative AI is here to stay as part of legal practice.  Learn all you can.
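For readers who want a concrete (if greatly oversimplified) picture of what “predicting the most appropriate words” means, here is a toy sketch in Python.  This is an illustration of the general idea only, not how any real product works; the candidate words and scores are invented for the example.  A language model assigns a score to every candidate next word and then converts those scores into probabilities, from which a likely word is chosen.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidates to follow the prompt "The court granted the ..."
# with made-up scores; a real model scores tens of thousands of tokens.
candidates = ["motion", "banana", "appeal"]
logits = [4.0, -2.0, 2.5]

probs = softmax(logits)
prediction = candidates[probs.index(max(probs))]
print(prediction)  # prints "motion"
```

The point of the sketch is that nothing in this process checks whether the chosen word is *true*; the model simply prefers statistically likely words, which is why fluent but fabricated citations are possible.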

Be careful about disclosing confidential information in prompting generative AI; know how your prompts are used and retained. How generative AI treats the information you give it is in flux.  For example, while ChatGPT did not have a setting that kept prompts from training the large language model when it was released to the public, it does now.  It also now has a setting that allows users to limit the storage of prompts to 30 days.  While these changes are great examples of the rapid evolution of generative AI in response to user feedback, they don’t solve all of a lawyer’s problems concerning sharing confidential client information with generative AI. 

In my opinion, the question of what information can be shared with generative AI is a complex question to which only simple answers have been offered so far.  Part of the complexity comes from variations in state ethics rules.  Depending on your state ethics rules, you may have more or less leeway to ethically include client information in prompts.  In addition, if disclosing client information in a prompt furthers the client’s interests, perhaps there is room for a lawyer to argue that a disclosure to generative AI is warranted.  Moreover, it might be arguable that prompts for generative AI may, if carefully crafted, fall into the “hypothetical” rule that appears in many states’ confidentiality rules.  But, at this point, little certainty exists about how state bars will apply confidentiality rules when client information is shared in a generative AI prompt.   I hope that bar regulators provide answers to these questions about confidentiality—perhaps in ethics opinions. 

Know your legal obligations regarding data privacy and cybersecurity. The ethics rules about confidentiality don’t fully address the Court of International Trade Judge’s concern about disclosing proprietary information.  That information might be subject to other disclosure laws.  Thus, you should also consider whether you have legal duties that extend to the protection and privacy of your clients’ and others’ information in the generative AI context.  In addition, if you work for a law firm, you may have policies that address sharing and using information in the firm’s possession.  You should know what those policies are. 

And finally, check every AI-generated citation, fact, statement of law, and analytical statement. This is the dominant theme of the courts’ orders thus far: lawyers are failing to check the accuracy of generative AI’s output.  But if you are a lawyer, you already know that ensuring the accuracy of the work you produce is a fundamental ethical obligation.  So, no matter how confident you are in the output of a generative AI tool, you must always check any output that purports to be factual or authoritative.  ChatGPT, for example, warns you about this.  At the bottom of its chat window, it states, “ChatGPT may produce inaccurate information about people, places, or facts.”  So, as you have always done with your legal writing, check the accuracy of every citation.  Read every legal authority to ensure it stands for the legal propositions you claim. Update and validate your authorities.  Double-check every fact.  Ensure that every step in the argument is logical, reasonable, ethical, and persuasive.  If you use generative AI to revise or edit your work, check every change to ensure it is correct.

What are your thoughts about generative AI and legal writing?

Kirsten Davis teaches at Stetson University College of Law in the Tampa Bay region of Florida. She is the founding director of the Institute for the Advancement of Legal Communication and currently serves as Stetson’s Faculty Director of Online Legal Education Strategies.  Among other things, she is currently studying generative AI and its impact on legal communication. The views she expresses here are solely her own and not intended to be legal advice. You can reach Dr. Davis at [email protected].

July 6, 2023 in Legal Ethics, Legal Profession, Legal Writing, Rhetoric, Web/Tech | Permalink | Comments (0)

Friday, June 30, 2023

Summer Hallucinations, Plagiarism, and ChatGPT

Recently, I asked ChatGPT to write some poems about hallucinations.  Why?  In part, because it’s summer and I think watching ChatGPT create poems is fun.  I also asked because I’ve been thinking about how to cover generative AI with my first-year students in the fall.

First, the fun.  Here’s one poem ChatGPT wrote:

In realms where minds dance on fragile strings, Where perception waltzes and reality sings, There lie the realms of wild hallucinations, A kaleidoscope of bewildering sensations.

In the tapestry of swirling dreams, Illusions bloom, unraveling seams, Colors ignite, in vivid display, As sanity fades, in a twilight ballet.

Whispers of shadows tease the mind, Whimsical echoes, elusive to find, Figments morph and take their form, An enigmatic symphony, a tempestuous storm.

Phantoms parade through the labyrinth of thoughts, Phosphorescent specters, tangled in knots, They twirl and they twist, surreal and sublime, Guiding the lost to a place out of time.

Walls dissolve, unveiling hidden dimensions, Mirrored reflections, fragmented inventions, A fragile balance, fragile as glass, Where illusions and truths become enmeshed.

The boundaries blur, in a psychedelic trance, Reality's grip begins to wane and advance, Visions arise, transcending the known, A voyage into realms where reason is thrown.

Whispers turn to laughter, laughter to screams, In this realm, nothing is as it seems, A carnival of apparitions, a fevered reverie, An exploration of the mind's symphony.

But as the moon yields to the sun's gentle touch, The hallucinations fade, leaving little as such, A fading memory, a fleeting sensation, Lost in the haze of our imagination.

Yet, in the realms where hallucinations reside, A glimpse of the extraordinary may yet preside, Where the mind's kaleidoscope paints its own art, Unveiling mysteries, a journey to embark.

So let us wander, embrace the unknown, Where hallucinations thrive and fantasies are sown, For within these realms, as illusions unfold, The depths of our imagination are truly untold.

Hopefully, you enjoyed the poem as much as I did.  But on a serious note, we know counsel who use ChatGPT run many risks, as the current free version of the generative AI program will “hallucinate” cases and statutes.  Several of my fellow bloggers have posted great notes on the ethical pitfalls of using current free generative AI without checking its sources.  Of course, we always had ethical obligations to check the work of our human staff, and of any brief-writing features we used from Lexis or Westlaw.  Perhaps ChatGPT is so fun, easy, and free, that we’re more likely to rely on it without questioning its work.  In any event, several courts have now issued standing orders reminding us of our ethical obligations.

For example, Judge Brantley Starr of the Northern District of Texas issued a standing order titled “Mandatory Certification Regarding Generative Artificial Intelligence” for “[a]ll attorneys and pro se litigants appearing before the Court.”  https://www.txnd.uscourts.gov/judge/judge-brantley-starr; see also https://www.cbsnews.com/news/texas-judge-bans-chatgpt-court-filing/.

Judge Starr ordered that parties must, “together with their notice of appearance, file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being.”  Mandatory Cert. Order.  The judge explained:  “These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them.”  Id.  According to Judge Starr, these programs “in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations.”  Id.  

Judge Starr also exposed a more existential problem with these current generative AI programs, reasoning they are “[u]nbound by any sense of duty, honor, or justice,” and “act according to computer code rather than conviction, based on programming rather than principle.”  Id.  I plan to emphasize this point with my students, in hopes they will see a broader goal behind our rules on plagiarism and limits on generative AI. 

I might also show my students what ChatGPT said when I asked it to write a poem about  ChatGPT hallucinations in law:

In the realm of legal filings, precise and stern, Where facts and truth should always discern, There dwells a specter, an unwelcome guest, ChatGPT's hallucinations put to the test.

Within the code and circuits, a mind simulated, Words flow freely, but often miscalculated, In the realm of law, where clarity is key, ChatGPT's hallucinations become a liability.

For in the realm of justice, truth must prevail, But illusions dance, weaving a murky veil, False narratives concocted in the machine's mind, Threatening the integrity that we must find.

Legal filings, a battleground of truth's fight, Yet ChatGPT's hallucinations cast a blight, Misinterpretations, misconceptions arise, Distorting the facts before our very eyes.

Subtle nuances lost in a tangled web, As ChatGPT's hallucinations spread and ebb, Precision diluted, arguments distorted, The foundation of justice, slowly thwarted.

Where once we sought justice, a balanced scale, Now we grapple with the machine's tall tale, For ChatGPT's hallucinations, though mere illusions, Can lead to consequences, grave inclusions.

Thus, caution must prevail when machines take part, Their hallucinations tearing at the fabric of art, For in the realm of legal filings, we must tread, With vigilance and wisdom, ensuring truth is spread.

So let us harness technology's immense power, Yet remain mindful in this transformative hour, To temper ChatGPT's imaginative train, And keep its hallucinations bound, restrained.

For in the world of law, where justice resides, Hallucinations in filings should not coincide, Let truth be the guide, unwavering and clear, So justice may prevail, untouched by fear.

Happy summer!

June 30, 2023 in Appellate Advocacy, Appellate Justice, Appellate Practice, Appellate Procedure, Current Affairs, Law School, Legal Ethics, Legal Profession, Legal Writing, Web/Tech | Permalink | Comments (0)

Tuesday, April 18, 2023

Demeanor in the virtual courtroom

[AI-generated image: lawyers misbehaving at counsel table]

The United States Supreme Court provides counsel with a "Guide for Counsel in Cases to be Argued Before the Supreme Court of the United States." In that guide, counsel can learn how they should dress (conservative business dress in traditional dark colors), where they should sit, how they should move to and from those seats, how they should address the justices ("justice," and never "judge"), and so on. Similarly, in moot court, there is a category on most ballots labeled "courtroom demeanor," where fledgling appellate advocates are judged in how they comport themselves in court.

In the trial world, counsel is often reminded that the jury is always watching. Anecdotes abound. One attorney, who represented a car manufacturer at trial, was seen driving a car manufactured by another company. The jury decided he did not believe in his client and penalized him at trial. Another attorney told the jury in voir dire about his wife and family. The jury noted he was not wearing a wedding ring, and decided he was a liar. Eye rolls, sighs, and disrespect shown in a multitude of ways are blamed for countless lost cases.

But for some reason, when appearing virtually, many lawyers forget that demeanor matters. At one recent matter, I saw opposing counsel sighing, rolling eyes, getting up, getting snacks and water, and laughing with staff, all on camera, and all while opposing counsel, witnesses, and even the judge were speaking.

I get it. Having a camera on you for hours desensitizes you to the technology. If your own camera feed isn't displayed on your screen, in particular, you can quickly forget that you are seen. But most counsel I know use "gallery view" in Zoom or other virtual software, as do most judges, so that not just the speaker is shown. And just like in the real courtroom, your behavior on that screen matters.

Credibility is the coin of persuasion. Why waste that credibility by acting poorly on screen? And while the behavior I described above was at a hearing, I have seen similar behavior during oral argument, when the justices are going to go back into chambers (virtually, perhaps, or in person), where you should hope they will discuss the merits of your argument, and not the content of your character writ large on their screens.

So please, even when appearing virtually, remember that demeanor matters. And don't forget to wear your conservative business dress in traditional dark colors.

(image credit: Image created in Bing Image Center, Powered by DALL-E AI image generator, using the prompt "attorneys behaving badly at counsel table, in the style of Thomas Nast.")

April 18, 2023 in Appellate Advocacy, Legal Ethics, Legal Profession, Moot Court, Oral Argument, Rhetoric, United States Supreme Court, Web/Tech | Permalink | Comments (1)

Monday, March 20, 2023

GPT-4 Just Passed the Bar Exam. That Proves More About the Weakness of the Bar Exam Than the Strength of GPT-4.

It's official: AI has passed the Uniform Bar Exam.  GPT-4, the upgraded AI program released earlier this week by Microsoft-backed OpenAI, scored in the 90th percentile of actual test takers. 

"Guess you're out of a job," my wife said when I told her the news. 

Maybe she's right--unless, of course, the bar exam isn't actually an effective measurement of minimum competence to practice law. 

That's the open secret of the legal profession.  Bar exams do test a small handful of core legal skills, such as critical reading and basic legal analysis.  But they're downright abysmal at measuring the multitude of skills that separate competent and incompetent lawyers, such as legal research, writing ability, factual investigation, crisis response, communication, practice management, creative problem solving, organization, strategic planning, negotiation, and client management. 

I am hardly the first commentator to draw attention to this issue.  In Shaping the Bar: The Future of Attorney Licensing--which should be required reading for anyone interested in the attorney-licensing conundrum--Professor Joan W. Howarth says this:

Bar exams are both too difficult and too easy.  The exams are too easy for people who excel at multiple-choice questions.  Wizards at standardized tests can pass the bar with little difficulty, perhaps with a few weeks spent memorizing legal rules, without showing competence in a greater range of lawyering skills or any practice in assuming professional responsibility. 

And, bar exams are too difficult for candidates who do not excel at memorizing huge books of legal rules.  An attorney would be committing malpractice by attempting to answer most new legal questions from memory without checking the statute, rules, or case law.  Leon Greene, the dean of Northwestern Law School in 1939, observed that "there is not a single similarity between the bar examination process and what a lawyer is called upon to do in his practice, unless it be to give a curbstone opinion."  The focus on memorization of books of rules was silly in 1939, but today it is shockingly anachronistic, as attorneys asked for "curbstone opinions" would be carrying a complete law library on their phones.  Extensive rule memorization makes bar exams less valid, meaning that they test attributes not associated with competence to practice law.  Law graduates who would be great lawyers--too many of whom are people of color--are failing bar exams because they cannot drop everything else for two months to devote themselves to memorizing thick books of rules.

Against this backdrop, is it really a surprise that a literal learning machine beat 90% of the human test takers?

Predictably, the National Conference of Bar Examiners quickly issued a press release once the news broke about GPT-4 acing its exam.  The NCBE said that human attorneys have unique skills, gained through education and experience, that "AI cannot currently match."  And, on that score, I wholeheartedly agree.  But that raises the question many of us have been asking for years: If "skills," "education," and "experience" (not mass memorization, regurgitation, and fact-pattern recognition) are what set the best lawyers apart, why aren't we using those qualities to measure minimum competence?

___________________________________________________________

Philip Seaver-Hall is a litigation attorney at Knox McLaughlin Gornall & Sennett, P.C.  The views expressed in this post are the author's alone and are not necessarily shared by the Knox Law Firm. 

March 20, 2023 in Books, Current Affairs, Law School, Legal Profession, Science, Web/Tech, Weblogs | Permalink | Comments (0)

Monday, February 27, 2023

ChatGPT & Grammar

This past weekend I was at a conference in Las Vegas. At the conference, my colleague Diana Simon presented on her recent book--The (Not Too Serious) Grammar, Punctuation, and Style Guide to Legal Writing. During the social time after the presentation, someone asked how ChatGPT handles grammar.  While many professors are stressing over student use of ChatGPT, some professors are considering a key question: Can ChatGPT help me grade papers (at least grade grammar)?

I was mostly just eavesdropping while the professors were discussing grammar, but at least one person noted that ChatGPT does a pretty decent job, and it can give you explanations. Turns out that is right. Stephen Horowitz, a professor of Legal English at Georgetown, has a detailed blog post on how he used ChatGPT for an assignment with his LLM students. For the assignment, he had his students run a short essay through ChatGPT with the instruction to "Please fix any language issues in this essay."

While I commend the full post to you, I will share just his conclusion: 

What I really liked about the assignment is that it succeeded in helping my students build grammatical awareness. Also, I think my students greatly appreciated a way to get detailed language feedback. In my class, while I provide language feedback to the extent it connects to the communicative purposes related to legal analysis and writing, it’s not a grammar class and I don’t have the bandwidth to provide detailed language feedback on every aspect of the students’ writing. (A statement I’m assuming many in the legal English/legal writing field likely relate to.) Additionally, I believe there’s a certain amount of value in receiving the feedback in a manner removed from the power dynamic associated with the student-teacher relationship, i.e., not from an authority figure who has the power to decide your academic fate, but from a neutral, non-judgmental chatbot. And I think it also opens up the potential for more focused conversations and questions between student and teacher about language and grammar, as it helps narrow down areas of student concern from the perspective of each student.

February 27, 2023 in Current Affairs, Legal Writing, Web/Tech | Permalink | Comments (4)

Monday, February 6, 2023

Should Courts Dispense With the Table of Authorities?

Pending before the Arizona Supreme Court is a petition to change court rules and dispense with the table of citations in state briefs. According to the full petition,

The Table of Citations is no longer needed to help a reader navigate to a particular cited source because most briefs are filed in electronic format with searchable text. Cumulatively, appellate litigants spend an unjustifiable amount of time and resources creating Tables of Citations.

The authors claim that readers now use "searchable text and hyperlinks to navigate the brief and locate cited authorities," rather than the table. The tables are incredibly time-consuming to create:

Petitioners have found no data-driven analyses on the average length of time it takes to build a Table of Citations. Anecdotal estimations, however, abound. For example, the company ClearBrief—which sells AI software that formats and edits appellate briefs—claims that its “conversations with hundreds of attorneys, paralegals, and legal assistants across the country, indicate that manually creating a perfectly formatted and accurate Table of Authorities can take anywhere from 3 hours to a full week, depending on how complicated the document is.” See Clearbrief, How to Create a Table of Authorities in One Click in Microsoft Word, https://clearbrief.com/blog/authorities (last accessed Jan. 8, 2023). Considering that this source is selling a tool that builds Tables of Citations, Petitioners take the high end of that range with a grain of salt. 

Still, U.S. Supreme Court Justice Antonin Scalia and noted legal writing scholar Bryan Garner warn advocates to “[a]llow a full day” to prepare a Table of Citations, and to “[n]ever trust computers to prepare the tables automatically.” Antonin Scalia & Bryan A. Garner, Making Your Case: The Art of Persuading Judges 90 (2008). Experienced advocates working for a firm or company willing to pay for assistive software might manage to generate a perfectly formatted and accurate Table of Citations in less than 45 minutes. Meanwhile, a litigant without access to these programs may spend considerably more time using Word’s built-in citation-marking tool. The tool is not intuitive, and an average-length brief requires anywhere from a couple of hours to a full day to manually mark the citations, depending on the user’s familiarity with the tool. And, many self-represented litigants, particularly inmates, write out their Table of Citations by hand. 

. . . .

Even accounting for time savings from modern technology, the time it takes to compile the Table of Citations, confirm its accuracy, and correct any errors is not insignificant. And all this work must be performed after the substantive briefing is complete, meaning parties are often running up against their deadlines by the time they are ready to build the table. This leaves no room for last-minute adjustments, which creates its own challenges in cases where the drafting attorney needs to seek feedback from a supervisor, trial counsel, or a client. And in both criminal and civil litigation, “the time it takes” translates into actual dollars—either billed to a client at hundreds of dollars an hour or in salary paid to State-funded employees. It is the litigants and taxpayers who ultimately bear these costs.

Petitioners claim that, because most Arizona courts have now moved to electronic briefs, the "court's infrequent use of the table of citations as a navigational tool renders the cost unjustifiable." They likewise dismiss the non-navigational uses of the table:

Although few people use the Table of Citations as a navigational tool, some have found non-navigational uses, including: (1) to get a “feel” for the case before reading the brief; (2) to check whether a draft decision addresses the main authorities cited by parties; (3) to prepare for conferences or oral argument; and (4) as an aide for finding the correct citation when the citation in the body of the brief is incomplete or inaccurate. See Ball, Jancaitis & Butzine, Streamlining Briefs, at 33–34. None of these uses justify the continued requirement that briefs contain a Table of Citations.

First, readers can “get a feel” for the case by reading the introduction, summary of the argument, and the table of contents. Separately, while first impressions are inevitable when reading any brief, “feeling out” the argument serves little purpose for the end result. Appellate courts base their decisions on the law and facts of the case, not initial impressions. The substance of the arguments should be far more persuasive than a mere list of authorities.

Second, while the Table of Citations may make the brief more formal and emphasize the need to support arguments with legal authorities, other procedural rules and formatting requirements compensate for the loss of the Table of Citations. See, e.g., ARCAP 13(a)(7)(A) (requiring appellate argument contain the litigant’s “contentions concerning each issue presented for review, with supporting reasons for each contention, and with citations of legal authorities . . . .”). Moreover, formatting rules are meant to “promote succinct, orderly briefs that judges can readily follow.” Judith D. Fischer, Pleasing the Court: Writing Ethical and Effective Briefs, 51 (2d ed. 2011). That purpose is not served if the Table of Citations is being used merely to test an advocate’s ability to follow directions. Other aspects of the brief can provide that signal while also improving readability.

Third, while some use the Table of Citations to gather sources to download or refer to at oral argument, it is not a necessary tool to complete either task. More practitioners are hyperlinking their briefs so courts can easily access the cited material as they read the brief. And relatively few cases have oral argument, further diminishing the value of the Table of Citations for this particular purpose.

Finally, the use of the Table of Citations as a “backup” for locating correct citations when they are missing in the body of the brief is unlikely to occur with sufficient frequency to justify the time and resources spent creating the tables. From a logical standpoint, if a litigant has not spent the time ensuring their citations in the body of the brief are accurate, it is unlikely they will have a reliable Table of Citations, or in some cases, any table at all. See State v. Haggard, 2 CACR 2010-0307-PR, 2011 WL 315537, at *2, ¶ 8 (Ariz. App. Feb. 1, 2011) (mem. decision) (attempting to identify cases vaguely referred to in a pro-per brief and noting that no Table of Citations had been provided).

I agree with much of what the Petitioners say. The tables do take a lot of time to prepare, and there are not a lot of great, free resources for making them. I see this with student briefs all the time. I always warn my students to leave time to prepare the tables, and they don't. They then usually comment that they had no idea how time-consuming the tables were to create (despite my prior warning).

Still, I hope that the Supreme Court keeps the table. First, although most briefs are now filed electronically, my research for Winning on Appeal revealed that many judges still like to read briefs in paper form. This means that the table does still play a navigational role. I also find tables useful to identify what cases the parties relied upon. This is more than just getting the "feel" of a brief. It tells me the strength of the reasoning and points me to where in the brief I need to look if I am concerned about a particular case. I think that we often forget how important citations are to the courts. I blogged on this several years ago when talking about citations in footnotes:

Last week, over at The Volokh Conspiracy, Eugene Volokh blogged on this very topic, quoting a district court opinion that stated, 

The Court strongly disfavors footnoted legal citations. Footnoted citations serve as an end-run around page limits and formatting requirements dictated by the Local Rules. Moreover, several courts have observed that "citations are highly relevant in a legal brief" and including them in footnotes "makes brief-reading difficult." The Court strongly discourages the parties from footnoting their legal citations in any future submissions.

Eugene also mentioned a federal appellate judge who told him "You view citations to authority as support for the argument. I view them as often the most important part of the argument."

I do agree that we need more technology tools to make efficient tables, and I would be happy to highlight any such tools in this blog (just shoot me an email!).

February 6, 2023 in Appellate Court Reform, Appellate Practice, Appellate Procedure, Federal Appeals Courts, Legal Profession, Legal Writing, State Appeals Courts, Web/Tech | Permalink | Comments (1)

Monday, May 2, 2022

A Hybrid Future for Oral Argument?

Happy end of the semester and end of moot court season for all of the academics and coaches out there. While most academic classes have been in-person this year, many moot court competitions have remained virtual.

Readers of this blog will remember that in March 2020, I posted about how the University of Arizona James E. Rogers College of Law moved its intramural moot court competition online in the wake of the COVID-19 pandemic. We learned a lot during that experience, and conducted the competition virtually last year. We were far more prepared for an online competition. I appreciated that I could draw judges from across the country--including a final panel that consisted of two Ninth Circuit judges and one Fifth Circuit judge. I also loved using an online scoring tool that did away with my manual entry of scores.

This year, however, we opted for an in-person competition. It was nice to see the judges and students in person, and the competition started the week after the University lifted its mask requirement, another plus for oral argument. And while the bulk of judges and students appeared in person, we did get to experience two types of hybrid arguments--arguments that give me hope for a hybrid oral argument future.

The first hybrid argument involved three in-person judges, one in-person student, and one virtual student. The student had traveled to the Federal Bar Association Indian Law Conference, but then advanced to the elimination rounds that Saturday.  We held the round in our appellate courtroom, with the student appearing on a huge television screen facing the judges. Our fabulous IT team brought in a camera that we placed in the courtroom to focus on the bench, so the virtual student could see the judges. We did not have a camera on the in-person student, which was certainly a downside.  Overall, the argument went really well, and the virtual student even advanced to the semi-final round! We did have a few sound issues, which could have been improved by the student using some type of earbuds or headphones and better speakers in the courtroom. 

Our second hybrid argument was the final round, where one of our judges needed to appear remotely. This argument was held in our largest classroom. We placed the in-person judges at two tables in the front of the room. Between the two tables was the podium for teaching. On the podium was a monitor and camera, which faced the advocate. We also had the big pull-down screen that we typically use for class PowerPoints and materials.  Our virtual judge appeared both on the smaller monitor on the podium and the big screen. The smaller monitor with the camera allowed the advocate to look directly at the judge (and the other judges, who were at a similar eye level). The large pull-down screen allowed the participants to see the virtual judge. Once again, the hybrid format worked well. The virtual judge was able to actively participate, and there were only a few times where I thought we had sound issues.

As I noted above, these experiences give me hope for a hybrid oral argument future. I have been clear before on this blog that I support virtual arguments for attorneys who request them. With the right technology and a little bit of understanding, virtual arguments can be a successful alternative to an in-person argument.  This is even true if some attorneys or judges are in-person.  I was at our intermediate appellate court last week. Although the courtroom isn't new, it has been modified for online arguments, including cameras at both counsel tables and at each judge's seat. I suspect that persons designing courtrooms of the future will include better cameras, screens, and speakers for virtual or hybrid arguments.

While I suspect that moot court competitions will move more in-person next year, I am encouraged by this embrace of technology for the future.

May 2, 2022 in Appellate Advocacy, Appellate Court Reform, Web/Tech | Permalink | Comments (0)

Monday, February 7, 2022

In favor of remote arguments

Judge Jerry Smith of the United States Court of Appeals for the Fifth Circuit has been in the news recently after Gabe Roth,* the executive director of Fix the Court, filed an ethics complaint against Judge Smith. The complaint centers around Judge Smith telling a government attorney who wanted to remain masked during oral argument to remove his mask. Several media sources have reported on the incident, including the ABA Journal. This post, however, is going to focus on what happened about two weeks before the argument.

On December 21, 2021, the government attorney filed an unopposed motion to appear before the Fifth Circuit remotely. The attorney cited the spread of the omicron variant of COVID-19, the fact that he has young unvaccinated kids, and that the Office of Management and Budget had issued guidance “indicating that only mission-critical travel” was recommended at that time. According to the motion, “In evaluating whether or not travel is mission-critical, agency leadership is directed to strongly consider whether the purpose of the travel can be handled remotely.”

This motion was apparently denied. According to another ABA Journal article that I found, it appears that in the Fifth Circuit the choice to proceed in person or via a remote service is being made on a panel-by-panel basis. I was later able to clarify with the clerk's office that under FRAP 27(c) and the Fifth Circuit's internal operating procedures, requests for remote argument are single-judge motions that are routed through the presiding judge on the panel.**  According to that same article, other circuits are currently holding only remote arguments.

If COVID-19 has taught us anything it is (1) to stock up on toilet paper and (2) there are many things that can be done just as well (if not better) remotely.  I firmly believe that oral argument is one of those things.

Let’s think of the purpose of oral argument.  One of the key purposes of oral argument is to answer the judges’ questions—questions that stem from their review of the briefs and materials.  The Fifth Circuit is one of the courts that requires attorneys to request oral argument—and that request isn’t always granted. So, in cases where it is, the judges believe that a conversation with the attorneys will help them decide the case.  Having engaged in hundreds of conversations via Zoom over the last two years, including numerous student oral arguments, moots for real attorneys, and large faculty senate meetings, I just don’t see how that purpose of oral argument is diminished by a virtual format.

Another purpose of oral argument is to persuade the judges using your ethos. I do think that this can be harder to do remotely, but not impossible. I have blogged on this site, as have others, on tips for a successful remote argument. It is doable, just different.

I cannot think of any reason why an attorney who wants a remote argument, especially if the other side agrees, should not be allowed to present remotely—pandemic or not. And while there are countless reasons why remote argument should be allowed, I want to focus on two.  The first is cost. Why should the taxpayers pay flight, hotel, and per diem for an attorney to fly from D.C. to San Francisco or New Orleans or Anchorage to deliver a 10-minute oral argument when that attorney could appear remotely? Likewise, non-profit organizations that engage in advocacy work could experience tremendous cost savings with remote arguments.

The second reason is convenience. Convenience probably isn’t the best word, but it is all that I am coming up with right now.  As the mom of two very young kids (3.5 and 1.5), it is hard for me to leave town and travel. My spouse and I are fortunate enough to have family in town for half the year, and they stay at our house when either my husband or I are traveling. But, not everyone who is in a caretaking role is that lucky.  Remote arguments would allow me to have an appellate practice, but still be there to tuck in my kids at night. 

Allow me a real-life example. In June 2019 (yes, pre-pandemic!), I was set to travel to South Carolina to speak at the National Advocacy Center.  It was a pretty neat opportunity—I would be presenting to the Appellate Chiefs from the U.S. Attorneys Offices.  Shortly before the event, my son, who was 15 months old at the time, got very ill.  He was hospitalized for a few days, and I did not feel comfortable leaving town.  With the help of the U.S. Attorney’s Office in Tucson, I was still able to give my presentation remotely. I headed to their downtown office and used their video conferencing software. Since it was pre-pandemic, it was a little bit of a clunky presentation, but overall I think that it was still effective. And, I was able to be home if my son’s condition regressed (thankfully it didn’t).

After nearly two years of pandemic, I get that we are ready to be back to “normal.” But I don’t see any reason why “normal” can’t include some of the amazing technology advances that we have become accustomed to using. If you allow me one more story—my husband and I traveled last weekend to a conference and left our kids with my parents. It was the first time we had done so.  While we were driving to our destination, I called to check in on things and “chatted” with our 19-month-old. As we “talked” I could hear my mom telling her that this call didn’t include a video.  Afterwards, I reflected to my husband that our kids will only know a world where there is video calling. That is remarkable to me—I remember how novel it was when Dad got a brick cellphone.  And while we can and should be careful that we don’t become addicted to technology, there is no reason we can’t use it to work smarter and more efficiently. And, if it allows me to have more hugs and slobbery toddler kisses at night, rather than staying alone in a hotel room, I am all for it.

*Edited to fix the name of the executive director of Fix the Court.

**After writing this post, I learned the underlined information from the clerk's office. I have updated the article to reflect that information.  A big thanks to the clerk's office for answering my questions. When in doubt, call the clerk!

February 7, 2022 in Appellate Advocacy, Appellate Practice, Current Affairs, Federal Appeals Courts, Oral Argument, Web/Tech | Permalink | Comments (1)

Saturday, August 14, 2021

Using a Nice Example of Persuasive Writing, the Fifth Circuit Cautions Us to Check Our Spam Folders

Every few years, I ask my first-year writing students to analyze a problem on defaults, motions to cure, and the like.  When I teach upper-division students, I always include an exercise on malpractice and default judgments.  On August 9, the Fifth Circuit gave us a new spin on checking dockets and calendars, as well as our email spam folders, in Rollins v. Home Depot USA, Inc., __ F.4th __, 2021 WL 3486465 (5th Cir. 2021).  See also Debra Cassens Weiss, 5th Circuit warns of “cautionary tale for every attorney” as it refuses to revive lawsuit, ABA Journal (Aug. 11, 2021).  The concise opinion also gives us a fresh example of the persuasive power of straightforward fact statements, clear topic sentences, and fairly strict CRAC-style organization.

Judge James C. Ho started the opinion with a great “hook,” explaining:  “This is a cautionary tale for every attorney who litigates in the era of e-filing.”  Judge Ho followed with a concise, easy-to-read fact summary, in just a few sentences: 

Kevin Rollins brought suit against his employer for personal injury.  The employer filed a motion for summary judgment on the eve of the parties’ agreed deadline for dispositive motions.  But Rollins’s counsel never saw the electronic notification of that motion.  That’s because, by all accounts, his computer’s email system placed that notification in a folder that he does not regularly monitor.  Nor did he check the docket  after the deadline for dispositive motions had elapsed. 

As a result, Rollins did not file an opposition to the summary judgment motion.  So the district court subsequently entered judgment against Rollins.

Rollins, __ F.4th at __, 2021 WL 3486465 at *1. 

According to the opinion, Rollins was injured while moving a bathtub for his employer, Home Depot.  Id.  Rollins then sued Home Depot in state court.  In one of the less-helpful parts of the opinion, the court uses passive voice—“The case was subsequently removed to federal court”—so we do not know which party asked for removal, but we can presume it was Home Depot. 

In the federal district court, counsel for Rollins, Aaron Allison, agreed to receive filings “through the court’s electronic-filing system via the email address he provided, as attorneys typically do in federal courts across the country.”  Id.  The parties later agreed to a scheduling order requiring that all dispositive motions be filed by May 11, 2020, and providing a 14-day period for responses to any motions.

On May 7, Home Depot filed its motion for summary judgment.  Allison explained the e-notification for the summary judgment motion filing “’was inadvertently filtered into a part of Rollins’ counsel’s firm email system listed as “other,” instead of the main email box where all prior filings in the case were received.’”  Id.   As a result, Allison did not see the electronic notification of Home Depot’s motion, and Home Depot did not mention the motion when Allison “contacted Home Depot’s counsel a few days later to discuss the possibility of a settlement.”  Id.   

Allison told the ABA Journal his firm had never had a problem with e-filing or with the email system.  The article noted “opposing counsel never separately notified Allison of the filing and continued settlement talks with the apparent knowledge that Allison wasn’t aware of the pending motion.”  See Weiss, 5th Circuit warns of “cautionary tale for every attorney.”   In fact, after Allison learned of the granted summary judgment motion, “his firm checked and scanned all emails and found the motion in an ‘obscure part’ of the email system.”  Id.  The firm tried to open the email, but it had been corrupted.  Id. 

Nonetheless, “without any response from Rollins, the district court reviewed the pleadings, granted Home Depot’s motion for summary judgment, and entered final judgment on May 27.”  Rollins, __ F.4th at __, 2021 WL 3486465 at *1.  On June 3, Allison again contacted Home Depot’s counsel to discuss settlement, but Home Depot’s counsel informed him the district court had already entered a final judgment.  Id.  Allison then filed a motion under Federal Rule of Civil Procedure 59(e) to alter or amend the court’s judgment against Rollins.  The district court denied the motion, and Rollins appealed.

The Court of Appeals explained it would review “only” for an abuse of discretion, using one word to stress the deferential standard of review.  Id. at *2.  The court then set out the law in the nice, persuasive rule statements we all try to use, starting with phrases like, “But our court has explained” Rule 59(e) motions are for a “narrow purpose.”  Judge Ho stated Rule 59(e) is “not for raising arguments” which “could, and should, have been made before the judgment issued” or where there is no intervening change of law.   Id.   

On the merits, the court began:  “To be sure, we do not question the good faith of Rollins’s counsel.  But it is not ‘manifest error to deny relief when failure to file was within [Rollins’s] counsel’s reasonable control.’”  Id.  Although reasonable minds can disagree on the application of the rules here, the court then succinctly applied its stated rules to Rollins and found no abuse of discretion.  The court reasoned “Rollins’s counsel was plainly in the best position to ensure that his own email was working properly—certainly more so than either the district court or Home Depot.”  Interestingly, the court placed an affirmative burden of checking online dockets on counsel, even if counsel is not expecting any filings.  According to the court, “Rollins’s counsel could have checked the docket after the agreed deadline for dispositive motions had already passed.”  Id.   

In his interview with the ABA Journal, Allison called the ruling a “‘lawyer beware’ decision.”  He and his client are discussing a possible petition for rehearing en banc and, if that is denied, a cert petition to the U.S. Supreme Court.  See Weiss, 5th Circuit warns of “cautionary tale for every attorney.”

I plan to share this opinion with my students, not only for the substantive points on e-filings, but also for the opinion’s lessons in persuasion.  And, we can all watch online dockets to see if Rollins decides to move forward. 

August 14, 2021 in Appellate Advocacy, Appellate Court Reform, Appellate Practice, Appellate Procedure, Federal Appeals Courts, Legal Profession, Legal Writing, Web/Tech | Permalink | Comments (0)

Saturday, February 13, 2021

Using Zoom’s Choose-Your-Own-Breakout-Room Option for Teaching Brief Writing

While some courts and law schools have returned to a form of in-person proceedings, many of us are still doing our best to represent clients or help students on Zoom.  If you are struggling with Zoom, check out Briar Goldberg’s TED Ideas piece on how to raise your video skills.  Briar Goldberg, TED Ideas:  7 Zoom mistakes you might still be making, https://ideas.ted.com/7-zoom-mistakes-you-might-still-be-making-and-how-to-raise-your-video-skills/ (Feb. 9, 2021).

Additionally, if your spring involves teaching students to write trial or appellate briefs in pairs, Zoom now allows your students to select breakout rooms with their partners.  See https://support.zoom.us/hc/en-us/articles/115005769646.  I was hesitant to use this feature because I know students cannot always select their own room, especially students using iPads and some Chromebooks.  See Clay Gibney, Tips for Zoom Breakout Rooms - Lessons Learned, https://www.sais.org/page/zoom_breakout_rooms  (Nov. 2020).  Like many Zoom hosts, I avoided the feature, and either spent the significant time needed to pre-assign participants to breakout rooms or let Zoom randomly assign participants to rooms.  

However, if you want students to be able to meet and confer with their brief-writing partners during class, even in a larger class, you should give the choose-your-own-breakout-room option a try.  I teach writing classes without a “Zoom TA” or IT person in the class, and yet I have sent my students to self-selected breakout rooms for partner meetings.  For the best results, assign your student pairs breakout room numbers before class and ask in advance for names of students whose devices do not show the room choices.

Assigning Pair Numbers

When I assigned my students to partner pairs, I listed each pair on a numbered line of an Excel sheet and saved the sheet to our class Google Drive.  Before our first class using the partner-meeting breakout rooms, I asked each student to double-check the Excel sheet and make sure they knew their pair’s number.   

Then, to make creating the rooms quick and simple during class, I did not take the time to name the breakout rooms.  I simply asked Zoom to create the same number of self-selecting breakout rooms as my number of student pairs.  In other words, for a class of 30 students, I created 15 choose-your-own-breakout rooms numbered 1 to 15. 

Dealing with iPads, Chromebooks, and Web Zoom

Early in the semester, I had the students practice choosing their own breakout rooms during a persuasive writing exercise.  We learned that about twenty percent of my students could not select their own rooms, because of their Chromebook or iPad devices, the way they accessed Zoom, or both.  See generally Gibney, Tips for Zoom Breakout Rooms, https://www.sais.org/page/zoom_breakout_rooms, at 2 (explaining students using the Web version of Zoom cannot select their own rooms). 

When I let the students know they would need their pair numbers for our next class, I also asked them to notify me before class if their device did not allow them to choose their own breakout rooms.  Therefore, I had a handy list and was able to quickly send these students to the proper rooms by manually assigning them. 

Several students told me after class that they really enjoyed the time in partner breakout rooms.  As much as we wish we could teach partner pairs to write briefs together in person, Zoom’s self-selecting breakout rooms at least allow us the chance to let the students meet together during class.

Happy Zooming!

February 13, 2021 in Law School, Legal Profession, Legal Writing, Web/Tech | Permalink | Comments (0)

Monday, February 1, 2021

Two Great Articles on Remote Oral Argument

Two weeks ago I blogged that we were close to releasing Volume 21, Issue 1, of The Journal of Appellate Practice and Process. I am pleased to announce that the issue is now online.  There are so many wonderful articles in the issue, which I plan to blog on over the next few weeks.  

Since I have already written much on online oral arguments, I thought that I would start with the two pieces that discuss that topic.  The first, "Remote Oral Arguments in the Age of Coronavirus: A Blip on the Screen or a Permanent Fixture," written by veteran appellate advocate Margaret McGaughey, is a follow-up to her earlier article entitled, "May it Please the Court--Or Not: Appellate Judges' Preferences and Pet Peeves About Oral Argument." In both articles, Ms. McGaughey conducts numerous interviews with state and federal appellate judges and provides their perspectives on the topics.  Her interviewees include Justice Stephen Breyer, Judge David Barron (my property professor), Judge Sandra Lynch, Judge Bruce Selya, Judge William Kayatta, Judge Kermit Lipez, former Chief Justice Daniel Wathen, Chief Justice Andrew Mead, Justice Catherine Connors, and the late Chief Justice Ralph Gants. She also interviewed several attorneys who have given remote arguments.

The article is full of great tips, including some at the end on setting up your space for remote argument. But two things really stuck with me in reading the article. The first is how well we all adapted.  The judges and the advocates have done what needed to be done to adapt to the situation. They have learned how to use the technology, and they have changed how questions are asked and arguments delivered. Some have even changed what they wear to "court."  We are all truly in this together, and we have persevered.  That leads to the second thing that struck me--while many judges are eager to return to the physical courtroom, things will never be the same. This new style of remote argument will remain in some form.  How frequently it will be used in the future remains to be seen.

The second article on remote arguments is by one of our bloggers--Judge Pierre Bergeron. Judge Bergeron's article, "COVID-19, Zoom, and Appellate Oral Argument: Is the Future Virtual?," also contains judges' thoughts about remote argument. What really stands out to me in Judge Bergeron's article, however, is his passionate defense of oral argument in general.  He presents a fascinating discussion of the decline of oral argument and how remote arguments can serve to both revitalize oral argument and meet key access to justice concerns. Virtual arguments, he says, could allow courts to create a "pro bono appointment program that would . . . help provide argument at-bats for aspiring appellate lawyers" by matching them with "underprivileged clients who need quality legal representation."  He cites such a program in Arizona. This idea is genius. I could see law school clinics jumping on board too.

Hopefully this new year and the vaccine rollout will see some normalcy return to our appellate courts. But, I hope too that we capitalize on all the technological advancements with remote oral argument to increase access to justice and lower costs for clients.

February 1, 2021 in Appellate Advocacy, Appellate Court Reform, Appellate Justice, Appellate Practice, Current Affairs, Federal Appeals Courts, Oral Argument, State Appeals Courts, United States Supreme Court, Web/Tech | Permalink | Comments (0)

Sunday, December 13, 2020

Tips for Zoom Court & Moot Court: Follow In-Person Best Practices Even More Closely

Tired of online court, school, happy hour, family holidays, and more?  Me too.  However, we also know some form of virtual court is here to stay, and based on the number of great pointers judges from across the country have shared with us this month, we can all still improve. 

Moreover, in reflecting on the tips I’ve seen lately, I was struck by how many of these pointers apply to any argument, in-person or virtual, and how they track what we have long told law students in moot court.  As we evolve from a largely in-person court system, where we had some telephonic and online conferences, to our future, which could involve many more electronic appearances, we should not lose sight of those moot court pointers from law school. And for those of us teaching oral advocacy, we should remember to share best practices for preparation and professionalism which will serve our students in any argument, online or in-person.

Recently, Judge Pierre Bergeron shared helpful tips on preparing for oral argument.  You can see his blog here:  Judge Pierre Bergeron's Tips.  He advises counsel to practice, with a moot court if possible, know the record and case law, provide a roadmap of argument points at the beginning, and be especially cognizant of the need to pause periodically “in an effort to invite questions.”  Id.  These tips apply equally to in-person arguments. 

Similarly, Madison Alder’s piece for Bloomberg Law, Wear Pants, Sequester Pets: Five Tips From Judges for Zoom Court, has excellent advice from judges for online arguments and court appearances in general.  See Madison Alder, Wear Pants, Sequester Pets (Bloomberg Dec. 8, 2020).  As Alder notes, the “virtual venues have worked so well,” some “courts plan on using them long after the virus is gone.”  Id.  Therefore, all lawyers who appear in court need to be as proficient in online argument as they hopefully are for in-person proceedings.

Online court platforms vary (federal courts often do not use Zoom, for example), just like courthouses, and “’Lawyers should prepare themselves for venues they’re not familiar with,’” said Chief Judge William Johnson of the District of New Mexico.  See id.  Thus, “preparing a presentation ahead of time is still crucial.”  Id.  Just as in traditional courthouses, counsel should practice standing at a podium or sitting and looking directly at a webcam.  See id.  I advise my students to distill their oral argument notes to just one piece of paper, supported by one binder of organized cases and record pages to take to the podium, and that format works well online, where paper shuffling can be magnified on Zoom. 

Somehow, despite myriad reminders to dress professionally, we still hear frequent complaints from the bench about attorney attire.  Alder recommends:  “Dressing properly means wearing professional attire from head to toe, not just head to waist.”  Id.  “’You never know when you’ll need to stand up in a pinch, which can make for an embarrassing moment if you’re wearing shorts,’ Illinois Supreme Court Chief Justice Anne Burke said.”  Id.  The key:  “’Besides the same make-sure-you’re-communicating-well lessons that apply in a courtroom—is remembering that this is a courtroom and a formal proceeding. Zoom can make people less formal,’” Southern District of Texas Chief Judge Lee Rosenthal said.  Id.

We teach law school moot court advocates not to read from notes, allowing them to “read the bench” and make eye contact with judges.  This lesson matters even more for online arguments, where the format makes true eye contact impossible.  To be as present as possible, online lawyers (and students) should “make sure they do things like keeping the dogs in the other room, closing the window if the lawnmower is going, and making sure their children aren’t there,” said Chief Judge Rosenthal.  Id.  

Finally, we all need to be more attentive to virtual context clues in online arguments.  “The virtual platform makes it more important for lawyers to pay attention to the tone of a judge’s voice, Jed Rakoff, a senior judge in the Southern District of New York, said.”  Id.  Tuning in to a judge’s tone is important for lawyers “’because that’s the main remaining clue as to whether they’re scoring or not,’” Rakoff said.  Id.  As Eastern District of California Chief Judge Kimberly Mueller explained, “’It’s as important as ever to pay attention to the judge’s signals, so if you are talking too long, be ready to wind up.’”  Id.  And, using Judge Bergeron’s point on pausing to allow questions, online advocates should watch for judges’ body language showing they are about to unmute or ask a question. 

In my house, with two adults working full-time online and a high school student taking online classes while managing a Zoom social and extracurricular schedule, we are weary of an online-only world.  I know many law students and lawyers feel the same way.  But at least we can find a silver lining (in addition to the great commute) from the online court experience, as the skills we must hone for the best online arguments will make us better advocates in-person too.  

Be well!

December 13, 2020 in Appellate Advocacy, Appellate Practice, Appellate Procedure, Current Affairs, Federal Appeals Courts, Law School, Legal Profession, Moot Court, Oral Argument, State Appeals Courts, Web/Tech | Permalink | Comments (0)

Wednesday, July 29, 2020

Developments in the Law School Advocacy Community

This has, out of necessity, been a busy summer for the law school advocacy community. Some exciting and important developments this week:

Guidance for Conducting Moot Court Competitions in 2020–21: Throughout this summer, a group of advocacy professors and coaches coordinated by Rob Galloway, the Associate Director of Appellate Advocacy at South Texas College of Law Houston, has met weekly to discuss issues related to running moot court competitions and programs in the COVID era. This week, the group published a comprehensive set of recommendations for administering moot court competitions in 2020–21. The document is signed by 78 advocacy teachers, many of whom also administer interscholastic and intramural competitions. It offers insights compiled by three working groups on handling competitions in These Challenging Times. Like the best-practices guidance for courts published recently by the American Association of Appellate Lawyers, the document stresses that competition organizers should strive to bring as much normal as possible into the new normal: make COVID-era competitions as much like what we're used to as we can. But, as those of us who've administered virtual arguments and competitions have discovered, doing this well requires thoughtful adaptation. The document gives soup-to-nuts advice on how to adapt competition rules and procedures without digging unduly deeply into the technological weeds. If you or someone you care about runs a moot court competition, I respectfully urge you to read it and to follow up with our group if there's anything we can do to help.

The National Online Moot Court Competition: The pandemic has prompted a consortium of law schools—inspired in part by pioneering efforts in the trial-advocacy community—to create a new moot court tournament. Registration is open now. The registration form is accessible here; the rules here; the proposed competition timeline here. It is especially cool that this competition is built from the ground up as a virtual competition. Schools that register will receive a technology package designed to make sure that all teams compete on a level virtual playing field. The competition's rules and design thoughtfully incorporate the practices I discuss above. This is not surprising; the representatives of the schools sponsoring the tournament were active in producing the guidance. And, more generally, the tournament looks to incorporate general best practices for moot court: advocates will argue in four preliminary rounds and will be guaranteed a large, well-qualified pool of brief scorers and oral argument judges.

The new leaders of the National Association of Legal Advocacy Educators: I posted three weeks ago about the election of officers for this new organization. It is done. The group's prospective members have voted on an excellent slate of candidates. As was inevitable, they have chosen a great board that brings together advocacy professors and coaches with deep and wide-ranging experience:

 

July 29, 2020 in Law School, Moot Court, Web/Tech | Permalink | Comments (0)

Saturday, July 25, 2020

Using Peer Review for LRW Teaching and in Appellate Practice Too:  Peerceptiv and Eli Review

Tired of seeing yet another post on how to ______ [fill in the blank:  teach, write, argue, live] in our new virtual reality?  Exhausted from never leaving your home and Zooming all day?  Me too. 

In fact, I was reluctant to write one more blog on online writing tools.  However, my efforts to add new virtual tools to my teaching arsenal introduced me to two peer review software systems I believe can help us in the classroom:  Peerceptiv, https://peerceptiv.com/, and Eli Review, https://elireview.com/.  These peer review programs make anonymous online feedback easy, and encourage the writers to learn by editing others.  They also reminded me how much any law practice can increase attorney writing skills by using peer review.  See, e.g., Kwangsu Cho and Charles MacArthur, Learning by Reviewing, 103 J. of Ed. Psych. 73, 84 (Feb. 2011)   https://eric.ed.gov/?id=EJ933615

As an of counsel appellate lawyer at a large law firm, I often had the chance to be an “intermediate editor” who reviewed junior lawyers’ briefs before sending them on to the partners.  While I had been using informal peer review in my adjunct teaching for a few years at that point, I did not truly see how much editing others’ work makes us better writers until I experienced the phenomenon in practice.  When I began catching myself making the same annoying mistakes I’d been correcting as an editor, I knew my work for the junior associates was making me a far better writer.  Eli Review has a nice blog post on this “giver’s gain.”  https://elireview.com/2017/03/28/givers-gain/

My positive reviewing experience prompted me to add more ungraded peer review in my teaching and made me an advocate for the review process with clients and supervisors.  Like in-house moot court, the practice of adding an intermediate editor is not possible in every situation.  However, if you practice in a large firm or agency, consider adding a layer of review by mid-level writers to young attorneys’ work.  This review can actually save fees, by shortening partner review time, and can help create better briefs across the board.  And if you are in a smaller practice or have no budget for formal peer review, think about the techniques you like in your opponents’ papers, and incorporate those into your own writing.

In the digital classroom, we can use technology to enhance the peer review process.  Many thanks to Prof. Tracy Norton of the Touro Law Center for introducing me to Peerceptiv and for being incredibly generous with her time by running a Peerceptiv demo for the LRW community.  Similarly, I send thanks to Prof. Brian Larson of the Texas A&M University School of Law, who introduced me to Eli Review and also spent an incredible amount of time helping the LRW community with an Eli Review demo.  Neither Prof. Norton nor Prof. Larson has any connection to these products, and I likewise have no affiliation with these companies; I am just sharing their information to help others.

From Profs. Norton and Larson, I learned both programs ask students to submit a writing assignment online and then provide feedback on other students’ writing for the same assignment.  Students follow a set rubric in their reviews, and instructors can include the quality of the reviews students provide as part of their writing grades.  The whole process can be anonymous.  Professors using these programs raved about the technical support and positive student feedback from both.  Peerceptiv costs students slightly less than Eli Review, and both can be “textbooks” for your classes at less than $30 a year. 

The genius in each product is the science and math behind the assessment scores and review prompts.  Each product truly helps students grow as writers by combining the established science on peer review and some neat online features.  The math and engineering majors in my home called the programs “elegant.”

For example, Peerceptiv has peers grade the assignment on a 1-7 scale and complete a four-part review.  Then, each student grades the reviews he or she received on a 1-7 scale.  Peerceptiv then assigns an overall rating for the assignment of 1-7 based on a combination of the student’s writing score and reviewing score.  The professor can set the percentage each score is worth, can give reviews personally, and can assign those reviews a higher level of credit in the grade.  Peerceptiv docks points when a review or assignment is late.  See https://www.peerceptiv.com/why-peerceptiv-overview/#curriculum.
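Peerceptiv’s actual formula is its own, but the weighted-combination idea described above can be sketched in a few lines.  Everything here is hypothetical—the function name, the default 70/30 split, and the flat late penalty are my illustrative assumptions, not Peerceptiv’s implementation:

```python
def overall_rating(writing_grades, reviewing_score,
                   writing_weight=0.7, late_penalty=0.0):
    """Combine peer writing grades and a reviewing score, each on a 1-7 scale.

    writing_weight is the instructor-set fraction of the grade tied to the
    student's own writing; the remainder rewards the quality of the reviews
    the student gave.  A late submission docks a flat penalty (hypothetical).
    """
    # Average the 1-7 grades the student's draft received from peers.
    avg_writing = sum(writing_grades) / len(writing_grades)
    # Instructor-weighted blend of writing and reviewing performance.
    combined = writing_weight * avg_writing + (1 - writing_weight) * reviewing_score
    # Clamp to the 1-7 band so a late penalty cannot push the score off-scale.
    return max(1.0, min(7.0, combined - late_penalty))

# A draft with peer grades of 6, 7, and 5 (average 6.0) and reviews rated 6.0
# lands at 6.0 under a 70/30 split.
print(overall_rating([6, 7, 5], 6.0))
```

Clamping to the 1-7 band mirrors the grade scale the post describes; adjusting `writing_weight` is the knob the professor turns to decide how much reviewing counts.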

If the Peerceptiv number system seems too much like the dreaded undergraduate “peer grading” to you, consider Eli Review.  Instead of assigning a number ranking to a student’s writing and reviews, Eli Review asks students to pull the most helpful comments out of their peers’ reviews and make an express revision plan saying how they will incorporate the comments.  Eli Review does ask students to rate the quality of the reviews on a 1-5 star basis, with only truly exceptional reviews earning five stars.  See https://elireview.com/learn/how/.  This level of assessment pushes students to give better reviews, and thereby learn more about writing, while helping avoid concerns about someone other than a professor grading work. 

This fall, I will use Eli Review for short writing like simple case illustrations, and then will progress to peer-reviewed trial brief argument sections in the spring.  I plan to use Eli Review only for anonymous, ungraded work.  My goal is to give students the “aha” moment I had when reviewing briefs as an intermediate editor, and to help them gain the skill of self-diagnosing writing problems. 

Thanks for reading another note on online writing tools.  I wish you all good health, and a safe  trip outside sometime soon too. 

July 25, 2020 in Appellate Advocacy, Appellate Practice, Law School, Legal Profession, Legal Writing, Web/Tech | Permalink | Comments (0)

Wednesday, July 22, 2020

AAAL Best Practices for Remote Oral Argument


This week, the American Academy of Appellate Lawyers issued a thoughtful set of best-practice recommendations for courts hearing remote oral argument. As this press release explains, the recommendations are the work of a task force of AAAL fellows with experience in remote oral arguments. The Fellows' guidance is driven by the AAAL's longstanding position—which we've previously discussed here—that "oral argument is, and should remain, an important part of the appellate process."

Notably, the report doesn't embrace remote oral argument as the new normal. Although it acknowledges that, post-pandemic, continued availability of remote options could make argument practical where it might not otherwise be—a good thing—it stresses that remote argument is a pale substitute for in-person argument. So remote argument shouldn't become "an automatic or self-justifying way of doing things when it is no longer necessary." Instead, the report emphasizes bringing normal into the new: a key animating principle is that courts and advocates should strive to make remote oral argument as much like in-person argument as possible.

The suggestions it offers are practical and actionable both for courts and for advocates. The Academy's fellows urge courts to use a video-based platform over an audio-only solution and present experience-driven reasons why. At the same time, the report wisely identifies adequate sound quality as preeminent for participants and listeners. And it offers solid advice about how to achieve that: among other things, it urges advocates and judges to consider environmental factors like room size and features. Small spaces with hard walls might produce echoes. Stuff like curtains and bookshelves help reduce echoes and ambient noise. And so on.

July 22, 2020 in Appellate Practice, Oral Argument, Web/Tech | Permalink | Comments (0)

Saturday, July 11, 2020

Guest Post--Caution Ahead: Breakout Groups Can Fail

We are thrilled to welcome Professor Susan Smith Bakhshian of Loyola Law School Los Angeles as our guest author.  Susan has taught LRW and doctrinal law for many years at Loyola, where she is a Clinical Professor of Law and Director of Bar Programs. She is the co-author of Clearing the Last Hurdle: Mapping Success on the California Bar Exam.  This summer, she taught entirely online using Brightspace and Zoom.  You can reach Susan at [email protected]

 

Caution Ahead:  Breakout Groups Can Fail

Breakout rooms are great. But. Wait for it. They can fail.  Breakout rooms are terrific for everything from letting students chat and get to know each other to in-class exercises and writing assignments. And the experience usually goes smoothly.

Breakout rooms are not a substitute for physical classrooms, but they can give students a few minutes to socialize, provide variety in instruction, and accomplish learning objectives. 

So when do breakout rooms go wrong? Groups can go wrong in a variety of ways. While the tech can fail, which is a new problem, the other failures are nothing new. A student may decline to participate fully. Group dynamics can unravel. Disputes can arise.

Before

An ounce of prevention is worth a pound of cure. Groups need clear instructions to stay on track. Using a slide in class or posting a handout before class goes a long way toward making sure students understand that breakout groups are real assignments and not a class break. Posting slides before class also lets shy or anxious students preview the group assignment and participate more fully in class.

During

Pop in.  Video conference software simulates walking around the room. Once students realize the professor might drop in on the group, they stay on track. This feature is especially helpful when I see that the random assignment has created a group of several weaker students or one with too many natural leaders.  I usually go to those groups first. Even if all of the groups are doing fine without any help, I also just like to say “hello.”

After

Require a deliverable. If the groups know going into the exercise that a written product is due or that anyone in the group may be called on, they will stay on task better.  Formal and informal deliverables both work well. Ask each group to craft an email to the professor, require a post, or ask the group to return to the full class ready to answer a question or present its best ideas.

For those who have not tried a breakout room, an easy, but effective assignment is to have the groups make a list of best (and worst) practices for online learning. They have great tips for each other ranging from natural lighting solutions to how to use the “hide my video” feature to get more comfortable being on video. This assignment works as an ice-breaker in an early class or anytime you want to cover professionalism. As attorneys, they will need to be proficient at using video conferencing software, even after a return to more live interaction. A quick mention that job interviews may be online gets everyone in the group more interested in discussing best practices.

Bottom line, breakout groups are flexible and effective in online teaching.

July 11, 2020 in Books, Law School, Legal Profession, Legal Writing, Web/Tech | Permalink | Comments (0)

Saturday, June 27, 2020

Moving from Pandemic Emergency Zoom Oral Arguments to True Oral Argument Online:  Preparation and Professionalism

 In March, we had only hours to transition from in-person teaching and law practice to remote options.  As many internet memes show, that led to some memorable court appearances sans pants, from closets and bathrooms.  Recently, we’ve been able to step back and assess our remote experiences to see what we can use for better practice and teaching, even as we return to in-person work.  I’ve attended several excellent sessions on online teaching, and I send kudos to William & Mary Law for its fantastic two-day Conference for Excellence in Teaching Legal Research & Writing Online.  (If you could not attend, you can view asynchronous postings here:  https://law.wm.edu/academics/intellectuallife/conferencesandlectures/excellence_online_teaching/index.php.)  Like many of you, my inbox is full of invites for even more webinars and conferences I am not able to attend. 

Luckily, Jill Wheaton of Dykema Gossett recently wrote a summary of the May 4, 2020 ABA Appellate Judges Council CLE webinar on “Appellate Advocacy in the Age of COVID-19.”  The ABA’s program featured judges, a state appellate court chief clerk, and appellate practitioners speaking on how appeals courts will use remote appearances moving forward.  As Wheaton explained, the panel presented “thoughts about, and recommendations regarding, telephone or video appellate arguments” and suggested counsel “do everything they can to make a remote argument as much like an in-person argument as possible.”  Jill M. Wheaton, Appellate Advocacy in the Age of COVID-19, Appellate Issues--2020 Special Edition 1 (ABA May 27, 2020).  Overall, the recommendations for practitioners stressed professionalism in how we approach video appearances.  In other words, be prepared and yes, wear pants.

Part of our preparation for oral argument today should include a test run of our technology.  Whenever possible, appellate practitioners should do moot courts before oral arguments.  Now, we should make our moot courts a test of both online systems and legal arguments.  Since many courts already used some type of internal video conferencing before COVID-19 (and a few trial and appellate courts allowed video argument on occasion before 2020), the clerks and judges are already familiar with some remote platforms.  Id.  They expect us to be familiar with the platforms as well.  In fact, many courts have videos of past virtual oral arguments online, and counsel can watch the videos as part of their oral argument preparation. 

We should also be as professional as possible in every detail of our online appearances.  Hopefully, we know to avoid the meme-worthy mistakes of March and April, by dressing in full suits and using a professional-looking digital background or physical space free of clutter and noise for a video appearance.  The ABA panel stressed smaller points as well.  For example, many courts still expect counsel to rise when the bailiff calls the case, and the panel judges noted they prefer advocates to stand when speaking.  Id. at 2.  Therefore, consider either using an adjustable desk, so you can stand when speaking but sit while opposing counsel argues, or using a stool so you can stay at eye level.  The practitioners on the ABA panel suggested using a stack of books to raise your computer to standing level if needed, and making sure your camera is on the top of your monitor to help you look directly at the judges during the argument.  Id.   Finally, counsel should remember they will be on camera for the entire hearing, even when opposing counsel is speaking.  Id.  Thus, find a way to communicate unobtrusively with co-counsel and your client, if needed.  

We all want life to “return to normal,” but some form of remote oral arguments will no doubt remain after COVID-19 leaves.  For now, “courts have been forced to become creative to continue to advance their dockets, requiring the bench and bar to become creative as well.”  Id. at 3.  Hopefully, these tips from the ABA panel can help us all be more creative, prepared and professional for this new normal.   

June 27, 2020 in Appellate Advocacy, Appellate Practice, Current Affairs, Federal Appeals Courts, Legal Profession, Moot Court, Oral Argument, State Appeals Courts, Web/Tech | Permalink | Comments (0)

Tuesday, May 26, 2020

Coronavirus isn't the Only Infection to Worry About Right Now.

Computer virus

In May, while the world was still trying to adjust to life during quarantine, the Texas Office of Court Administration was hit by a ransomware attack. While the details are still a bit sketchy because of an ongoing investigation, we do know that the State refused to pay the demanded ransom, shut down the infected systems, and has struggled since then to continue its work via alternate channels.

For appellate attorneys, this has been particularly frustrating. The systems that were shut down include some of the case notification mechanisms, so attorneys are finding out via social media whether they won or lost an appeal. In some cases, the court's access to the record appears to have been lost, so advocates are being asked to help provide case information and records back to the court. Throughout it all, Texas courts have somehow managed to not only continue to work but to lead in holding remote oral arguments and hearings and in continuing to push their dockets despite the quarantine and a crippled IT infrastructure.

In a past life, I worked as a systems administrator and technician, and even wore a "white hat" while hacking to test security. So I am familiar with the challenges in preventing ransomware attacks. This post, however, isn't written for the IT crowd. I hope to give some advice to the attorneys and professors who generally ignore such posts, but are often the source of the problem.

First, you need to know that ransomware attacks generally follow a set pattern. The attackers implant software that helps them gain control of a system, usually by encrypting data so it is no longer usable. They then notify the victim of the attack and demand a ransom, usually in bitcoin or another cryptocurrency. If the ransom is paid, they promise to decrypt the data. Sometimes they do, sometimes they do not.

Where do you, the user, fit into this scheme? Usually, you are the point of infection. By taking a few precautions you might prevent the next attack, or help with the restoration that follows.

1.     Don't be the Source of the Infection.

Most ransomware is spread by phishing: emails that entice you to click a link that then loads the software onto your computer. Your IT department is serious when it asks you not to click on links from outside sources. The same goes for email attachments, and for links sent via text.

Some attacks start with "social engineering," or gaining access to sensitive information from users that can be used to guess passwords. Avoid the social-media posts that ask you cute personal questions and share with your friends. Even if your password isn't related to your date of birth, favorite pet, child's name, or other seemingly harmless bit of data, one of your friends' passwords might well be. Or, the attacker might use that information to personalize an email phishing attack that is just too hard to resist.

Finally, avoid using public wifi, or if you do, use the VPN that your employer has most likely set up for you. This is probably less common now that we are trying to stay in place, but is still a common source of attack.

2.     Help Preserve your Data.

If there is an attack, the target is the sensitive data that you hold. Most likely, that data will be locked away and inaccessible for a while, if not forever. If your firm or court is going to recover, it needs your help.

Make sure that you keep up with backups. And, if you are working from a court's electronic record available online, do yourself (and the court) a favor and download that information rather than just relying on the online version. After suffering data corruption and other issues, I even email myself drafts of briefs as I progress in writing so that nothing is lost. The idea is to keep multiple copies on multiple storage devices, so that if one fails, there is still a way to recover. Some sensitive data will have to be more restricted, but in general, on appeal at least, we are working with public records that can be stored in multiple places.
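That keep-multiple-copies habit can even be scripted. Here is a minimal sketch, assuming local backup folders; the function name, file names, and paths are all hypothetical, and a real setup would add a cloud or network destination:

```python
import shutil
from datetime import datetime
from pathlib import Path

def back_up_draft(draft_path: str, backup_dirs: list[str]) -> list[Path]:
    """Copy a brief draft into each backup folder under a timestamped name.

    Keeping dated copies in several places means one failed drive or one
    corrupted file does not wipe out the work.
    """
    src = Path(draft_path)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    copies = []
    for folder in backup_dirs:
        dest_dir = Path(folder)
        dest_dir.mkdir(parents=True, exist_ok=True)
        dest = dest_dir / f"{src.stem}-{stamp}{src.suffix}"
        shutil.copy2(src, dest)  # copy2 preserves file timestamps
        copies.append(dest)
    return copies
```

Run on a schedule (or by hand at the end of each writing session), this gives the same protection as emailing yourself drafts, with less clutter in your inbox.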

3.     Remember that Confidentiality is an Ethical Responsibility.

Ransomware attacks are up across the board. There are even some healthcare providers that have been targeted, although some of them have been offered "discounts" on the ransom because they are essential service providers. Don't think that you are not a target. More importantly, don't think that your client's confidential information is not a target.

Indeed, law firms are increasingly the target of security intrusions. To protect clients, Comment 8 to Rule 1.1 of the ABA Model Rules imposes a duty of competence that includes keeping abreast of the "benefits and risks associated with relevant technology." Recently, Formal Opinion 483 clarified that the lawyer's duties include both taking all reasonable efforts to protect clients from data breaches and informing them when one occurs.

In the end, protecting client data is the professional responsibility of the attorney. You can't just blindly rely on your IT department or contractor and avoid that responsibility. Instead, you must be aware of the vulnerable world we live in, and take steps to be safe with not just your personal health, but the health of your data as well.

(Image attribution: "Virus" by kai Stachowiak, CC0 public domain license)

May 26, 2020 in Appellate Practice, Current Affairs, Legal Profession, State Appeals Courts, Web/Tech | Permalink | Comments (0)

Monday, April 27, 2020

My Final Thoughts on Moot Court in the Age of Coronavirus

Nearly a lifetime ago (ok, it was just a month ago), I posted tips on how to conduct a virtual moot court competition. Since that post we have had some other great posts on remote oral argument and presentation, including these tips from Texas Supreme Court Justice Eva Guzman. 

We held the final round of our moot court competition on April 16.  Based on that experience, and a few other things I learned along the way, I thought I would offer my final thoughts and tips on virtual moot court competitions, in case we are all doing this again in the fall.

(1) Stagger start times.  For our competition, we typically had two separate panels of three judges.  Each panel heard two arguments--one starting at 5:30 pm and one starting at 6:30 pm.  In my earlier suggestions, I recommended having separate Zoom links for each argument even if the panel was the same.  That definitely worked well.  But, if I could do it over, I would have had one panel start either 15 minutes earlier or 15 minutes later than the other panel.  Why?  Well, I "zoomed" into the first argument for each panel, just to make sure that the judges were present and that there weren't any questions.  I ended up having one Zoom open on my laptop and one open on my tablet.  This was a lot to manage, especially if there were issues that needed to be resolved.  A 15 minute staggered start time would have alleviated some of my stress.

(2) Have back-ups.  I wish that I had designated a back-up bailiff and judge for each round.  We only had one judge who wasn't able to make it, but we did have bailiff sound/video issues.  I was able to get those issues resolved with minimal delay, but having a designated back-up would have been even easier.

(3) Develop an online scoring survey. We ask our judges to fill out a fairly detailed score sheet.  I take the scores and enter them into a complicated spreadsheet that combines the judges' scores with each student's brief score.  When we have an in-person competition, I can look at the score sheets right away and identify anything that isn't filled out correctly.  For an online competition, I had to wait to receive the score sheets, and if there were any problems, I had to get in touch with the judges.  This wasn't an issue in the early rounds, but by the elimination rounds, I needed to notify the students advancing quite promptly.  If we do this again, I will work with our IT department to develop some sort of online tool that the judges fill out instead.  This would hopefully help me get the scores sooner, and also ensure that the score sheets are completely filled out.
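For illustration, the blend of oral and brief scores might look something like this. The 60/40 weighting and the function name are purely hypothetical; every competition sets its own formula:

```python
def combine_scores(judge_scores: list[float], brief_score: float,
                   oral_weight: float = 0.6) -> float:
    """Average the judges' oral-argument scores, then blend with the brief score.

    oral_weight is the fraction of the total attributed to oral argument;
    the remainder goes to the brief.  The 60/40 default is illustrative only.
    """
    if not judge_scores:
        raise ValueError("at least one judge score is required")
    if not 0.0 <= oral_weight <= 1.0:
        raise ValueError("oral_weight must be between 0 and 1")
    oral_avg = sum(judge_scores) / len(judge_scores)
    return oral_weight * oral_avg + (1.0 - oral_weight) * brief_score
```

An online form that feeds directly into a calculation like this would catch incomplete score sheets the moment a judge submits, rather than hours later.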

In addition to these general points, here are a few points from the final round:

(1) Use and circulate a background.  The version of Zoom on my home laptop allows me to use a background without a green screen.  I wish that I had circulated a background to the students and judges to use to make it a little more uniform.

(2) Figure out an online timer.  I didn't use an online timer. Rather, my plan was to hold up time cards.  I regret that choice.  The time cards didn't show up against the background, so I ended up holding up fingers instead.  I wish that I had tested the time cards in advance; had I known they wouldn't work, I would have figured out how to put a small clock on the screen.

(3) Expect the unexpected (or be sure to lock your office door).  Our final round started at 5:30 pm on April 16. I had told my spouse in the weeks leading up to the final argument that he would be on toddler duty all night long.  I ordered dinner to be delivered, and reiterated to him right before the round began that I was unavailable. Well, as luck would have it, at about 5:50 pm my very tall, just-turned-two-year-old discovered how to open doors.  And, as I am sure you have guessed, the first door he opened was the one right into my office as the Respondent was arguing. My microphone was muted, and the background kept him mostly hidden, but he was still a bit visible (as was my husband who, with a look of horror on his face, tried to quickly remove him from the scene).  In hindsight, it was pretty humorous.  I wasn't able to keep a poker face while it happened, which I felt bad about.  Now I know to lock my office door if I don't want to be disturbed.

April 27, 2020 in Appellate Advocacy, Law School, Moot Court, Oral Argument, Web/Tech | Permalink | Comments (0)