Tuesday, January 31, 2017
Nick Parrillo has posted on SSRN a draft of his article, The Endgame of Administrative Law: Governmental Disobedience and the Judicial Contempt Power. Here’s the abstract:
Scholars of administrative law focus overwhelmingly on lawsuits to review federal government action while assuming that, if plaintiffs win such lawsuits, the government will do what the court says. But in fact, the federal government’s compliance with court orders is imperfect and fraught, especially with orders compelling the government to act affirmatively. Such orders can strain a federal agency’s resources, interfere with its other legally required tasks, and force it to make decisions on little information. An agency hit with such an order will often warn the judge that it badly needs more latitude and more time to comply. Judges relent, cutting slack and extending deadlines. The plaintiff who has “won” the suit finds that victory was merely the start of a tough negotiation that can drag on for years.
These compliance negotiations are little understood. Basic questions about them are unexplored, including the most fundamental: What is the endgame? That is, if the judge concludes that the agency has delayed too long and demanded too much, is there anything she can do, at long last, to make the agency comply?
What the judge can do, ultimately, is the same thing as for any disobedient litigant: find the agency (and its high officials) in contempt. But do judges actually make such contempt findings? If so, can judges couple those findings with the sanctions of fine and imprisonment that give contempt its potency against private parties? If not, what use is contempt? The literature is silent on these questions, and conventional research methods, confined to appellate case law, are hopeless for addressing them. There are no opinions of the Supreme Court on the subject, and while the courts of appeals have handled the problem many times, they have dealt with it in a manner calculated to avoid setting clear and general precedent.
Through an examination of thousands of opinions (especially of district courts), docket sheets, briefs, and other filings, plus archival research and interviews, this Article provides the first general assessment of how federal courts handle the federal government’s disobedience. It reaches four conclusions. First, the federal judiciary is willing to issue contempt findings against agencies and officials. Second, while several federal judges believe they can (and have tried to) attach sanctions to these findings, the higher courts have exhibited a virtually complete unwillingness to allow sanctions, at times swooping down at the eleventh hour to rescue an agency from incurring a budget-straining fine or its top official from being thrown in jail. Third, the higher courts, even as they unfailingly thwart sanctions in all but a few minor instances, have bent over backward to avoid making pronouncements that sanctions are categorically unavailable, deliberately keeping the sanctions issue in a state of low salience and at least nominal legal uncertainty. Fourth, even though contempt findings are practically devoid of sanctions, they have a shaming effect that gives them substantial if imperfect deterrent power.
The efficacy of litigation against agencies rests on a widespread perception that federal officials simply do not disobey court orders and a concomitant norm that identifies any violation as deviant. Contempt findings, regardless of sanctions, are a means of weaponizing that norm by designating the agency and official as violators and subjecting them to shame. But if judges make too many such findings, and especially if they impose (inevitably publicity-grabbing) sanctions, they may risk undermining the perception that officials always comply and thus the norm that they do so. The judiciary therefore may sometimes pull its punches to preserve the substantial yet limited norm-based power it has.
Tuesday, January 24, 2017
Now on the Courts Law section of JOTWELL is Linda Mullenix’s essay, Infusing Civil Rulemaking with Economic Theory. Linda reviews Paul Stancil’s recent article, Substantive Equality and Procedural Justice, which is forthcoming in the Iowa Law Review.
Friday, January 20, 2017
Aaron Bruhl has posted on SSRN a draft of his article, One Good Plaintiff is Not Enough. Here’s the abstract:
This Article concerns an aspect of Article III standing that has figured in many of the highest-profile controversies of recent years, including litigation over the Affordable Care Act, immigration policy, and climate change. Although the federal courts constantly emphasize the importance of ensuring that only proper plaintiffs invoke the federal judicial power, the Supreme Court and other federal courts have developed a significant exception to the usual requirement of standing. This exception holds that a court entertaining a multiple-plaintiff case may dispense with inquiring into the standing of each plaintiff as long as the court finds that one plaintiff has standing. This practice of partially bypassing the requirement of standing is not limited to cases in which the plaintiffs are about to lose on other grounds anyway. Put differently, courts are willing to assume that all plaintiffs have standing as long as one plaintiff has it and then decide the merits either for or against all plaintiffs despite doubts as to the standing of some of those plaintiffs. We could call this the “one-plaintiff rule.”
This Article examines the one-plaintiff rule from normative and positive perspectives. On the normative side, the goal is to establish that the one-plaintiff rule is erroneous in light of principle, precedent, and policy. All plaintiffs need standing, even if all of them present similar legal claims and regardless of the form of relief they seek. To motivate the normative inquiry, the Article also explains why the one-plaintiff rule is harmful as a practical matter, namely because it assigns concrete benefits and detriments to persons to whom they do not belong. The Article’s other principal goal is to explain the puzzle of how the mistaken one-plaintiff rule could attain such widespread acceptance despite the importance usually attributed to respecting Article III’s limits on judicial power. The explanatory account assigns the blame for the one-plaintiff rule to the incentives of courts and litigants as well as to the development of certain problematic understandings of the nature of judicial power.
Thursday, January 19, 2017
Joanna Schwartz has posted on SSRN a draft of her article, How Qualified Immunity Fails. Here’s the abstract:
Qualified immunity is a judicially created doctrine that shields government officials from constitutional claims for money damages, even if those officials have violated plaintiffs’ constitutional rights, so long as those constitutional rights are not clearly established. Courts and commentators share the assumption that the doctrine affords a powerful protection to government officials. And the Supreme Court has repeatedly explained that qualified immunity must be as powerful as it is to protect government officials from burdens associated with participating in discovery and trial. Yet the Supreme Court has relied on no empirical evidence to support its assertions that litigation imposes these burdens on government officials, or that qualified immunity doctrine protects against them.
This Article reports the results of the largest and most comprehensive study to date of the role qualified immunity plays in constitutional litigation, with particular attention paid to the frequency with which qualified immunity disposes of cases before discovery and trial. Based on my review of 1183 cases filed against law enforcement defendants in five federal court districts, I find that qualified immunity infrequently functions as expected. Fewer than 1% of Section 1983 cases in my dataset were dismissed at the motion to dismiss stage and just 2% were dismissed at summary judgment on qualified immunity grounds. After describing my findings, this Article considers the implications of these findings for descriptive accounts of qualified immunity’s role in constitutional litigation, the extent to which qualified immunity doctrine meets its policy goals, and possible adjustments to the balance struck between individual and government interests in qualified immunity doctrine.
Will Baude has posted on SSRN a draft of his article, Is Qualified Immunity Unlawful? Here’s the abstract:
The doctrine of qualified immunity operates as an unwritten defense to civil rights lawsuits brought under 42 U.S.C. § 1983. It prevents plaintiffs from recovering damages for violations of their constitutional rights unless the government official violated “clearly established law,” usually requiring a specific precedent on point. This article argues that the doctrine is unlawful and inconsistent with conventional principles of statutory interpretation.
Members of the Supreme Court have offered three different justifications for imposing such an unwritten defense on the text of Section 1983. One is that it derives from a common law “good faith” defense; another is that it compensates for an earlier putative mistake in broadening the statute; the third is that it provides “fair warning” to government officials, akin to the rule of lenity.
But on closer examination, each of these justifications falls apart, for a mix of historical, conceptual, and doctrinal reasons. There was no such defense; there was no such mistake; lenity ought not apply. And even if these things were otherwise, the doctrine of qualified immunity would not be the best response.
The unlawfulness of qualified immunity is of particular importance now. Despite the shoddy foundations, the Supreme Court has been reinforcing the doctrine of immunity in both formal and informal ways. In particular, the Court has given qualified immunity a privileged place on its agenda reserved for few other legal doctrines besides habeas deference. Rather than doubling down, the Court ought to be beating a retreat.
Wednesday, January 18, 2017
I have posted my latest article, Trade Secrets, Extraterritoriality, and Jurisdiction, to SSRN.
Twenty years ago, Congress passed the Economic Espionage Act of 1996 (EEA), which criminalized trade secret misappropriation and authorized broad domestic and international enforcement measures against it. At the time of its passage, the EEA was lauded by the business community, but it was heavily criticized by scholars who worried that the statute was too broad and too protectionist. In the intervening years, the business sector renewed its complaints about the insufficiency of U.S. trade secret laws, and scholars continued to express skepticism about using criminal law to enforce trade secret policy. Congress recently passed a new statute, the Defend Trade Secrets Act of 2016 (DTSA), which creates a federal private right of action under the EEA for trade secret misappropriation and economic espionage, and authorizes a variety of remedies, including injunctions, damages, and seizure of property.
In 2003, I published a student note examining the EEA and arguing that the broad statutory language and potential for extraterritorial enforcement created problems for the United States given our commitments to the Agreement on Trade-Related Aspects of Intellectual Property Rights (“TRIPS agreement”). Given the recent legislative efforts to expand the EEA to include private enforcement, it is time to revisit and update research on the EEA. This Article examines the new problems and challenges private enforcement of the EEA might present. In particular, this Article considers whether the problems of extraterritorial criminal enforcement extend to the civil context.
This Article proceeds in three parts. Part I gives a brief overview of the DTSA and its relationship to the EEA. Part II demonstrates that expanding the EEA to include civil enforcement creates personal jurisdiction problems. Part III argues that the doctrine of forum non conveniens presents yet another barrier to DTSA proceedings in U.S. courts. The Article concludes by noting that the jurisdictional necessities of civil enforcement under the DTSA set businesses on a collision course with the direction of personal jurisdiction and forum non conveniens law for which they have largely advocated the past few decades. In other words, viewing the DTSA through a jurisdictional lens reveals some of the underlying, understated, and confused purposes of the statute.
Thursday, January 12, 2017
Curtis Bradley and Neil Siegel have published Historical Gloss, Constitutional Conventions, and the Judicial Separation of Powers, 105 Geo. L.J. 255 (2017). Here’s the abstract:
Scholars have increasingly focused on the relevance of post-Founding historical practice to discerning the separation of powers between Congress and the Executive Branch, and the Supreme Court has recently endorsed the relevance of such practice. Much less attention has been paid, however, to the relevance of historical practice to discerning the separation of powers between the political branches and the federal judiciary—what this Article calls the “judicial separation of powers.” As the Article explains, there are two ways that historical practice might be relevant to the judicial separation of powers. First, such practice might be invoked as an appeal to “historical gloss”—a claim that the practice informs the content of constitutional law. Second, historical practice might be invoked to support nonlegal but obligatory norms of proper governmental behavior—something that Commonwealth theorists refer to as “constitutional conventions.” To illustrate how both gloss and conventions enrich our understanding of the judicial separation of powers, the Article considers the authority of Congress to “pack” the Supreme Court and the authority of Congress to “strip” the Court’s appellate jurisdiction. This Article shows that, although the defeat of Franklin Roosevelt’s Court-packing plan in 1937 has been studied almost exclusively from a political perspective, many criticisms of the plan involved claims about historical gloss; other criticisms involved appeals to constitutional conventions; and still others blurred the line between those two categories or shifted back and forth between them. Strikingly similar themes emerge in debates in Congress in 1957–1958, and within the Justice Department in the early 1980s, over the authority of Congress to prevent the Court from deciding constitutional issues by restricting its appellate jurisdiction. 
The Article also shows—based on internal Executive Branch documents that have not previously been discovered or discussed in the literature—how Chief Justice John Roberts, while working in the Justice Department and debating Office of Legal Counsel head Theodore Olson, failed to persuade Attorney General William French Smith that Congress has broad authority to strip the Court’s appellate jurisdiction. The Article then reflects on the implications of historical gloss and conventions for the judicial separation of powers more generally.
Wednesday, January 11, 2017
David Jaros (University of Baltimore) and Adam Zimmerman (Loyola LA) have posted Judging Aggregate Settlement to SSRN.
While courts historically have taken a hands-off approach to settlement, judges across the legal spectrum have begun to intervene actively in “aggregate settlements”—repeated settlements between the same parties or institutions that resolve large groups of claims in a lockstep manner. In large-scale litigation, for example, courts have invented, without express authority, new “quasi-class action” doctrines to review the adequacy of massive settlements brokered by similar groups of attorneys. In recent and prominent agency settlements, including ones involving the SEC and EPA, courts have scrutinized the underlying merits to ensure settlements adequately reflect the interests of victims and the public at large. Even in criminal law, which has lagged behind other legal systems in acknowledging the primacy of negotiated outcomes, judges have taken additional steps to review recurrent settlement decisions routinely made by criminal defense attorneys and prosecutors.
Increasingly, courts intervene in settlements out of a fear commonly associated with class action negotiations—that the “aggregate” nature of the settlement process undermines the courts’ ability to promote legitimacy, loyalty, accuracy and the development of substantive law. Unfortunately, when courts step in to review the substance of settlements on their own, they may frustrate the parties’ interests, upset the separation of powers, or stretch the limits of their ability. The phenomenon of aggregate settlement thus challenges the judiciary’s duty to preserve the integrity of the civil, administrative, and criminal justice systems.
This Article maps the new and critical role that courts must play in policing aggregate settlements. We argue that judicial review should exist to alert and press other institutions—private associations of attorneys, government lawyers, and the coordinate branches of government—to reform bureaucratic approaches to settling cases. Such review would not mean interfering with the final outcome of any given settlement. Rather, judicial review would instead mean demanding more information about the parties’ competing interests in settlement, more participation by outside stakeholders, and more reasoned explanations for the trade-offs made by counsel on behalf of similarly situated parties. In so doing, courts can provide an important failsafe that helps protect the procedural, substantive, and rule-of-law values threatened by aggregate settlements.
Thursday, January 5, 2017
Now running on the Courts Law section of JOTWELL is my essay, Comparative Avoidance. I review Erin Delaney’s recent article, Analyzing Avoidance: Judicial Strategy in Comparative Perspective, 66 Duke L.J. 1 (2016).
Wednesday, December 28, 2016
Aaron-Andrew Bruhl has posted on SSRN a draft of his article, The Jurisdictional Canon, which is forthcoming in the Vanderbilt Law Review. Here’s the abstract:
This Article concerns the interpretation of jurisdictional statutes. The fundamental postulate of the law of the federal courts is that the federal courts are courts of limited subject-matter jurisdiction. That principle is reinforced by a canon of statutory interpretation according to which statutes conferring federal subject-matter jurisdiction are to be construed narrowly, with ambiguities resolved against the availability of federal jurisdiction. This interpretive canon is over a century old and has been recited in thousands of federal cases, but its future has become uncertain. The Supreme Court recently stated that the canon does not apply to many of today’s most important jurisdictional disputes. The Court’s decision is part of a pattern, as several cases from the last decade have questioned the canon’s validity, a surprising development given what appeared to be the canon’s entrenched status.
This state of flux and uncertainty makes this an ideal time to assess the merits and the likely future trajectory of the canon requiring narrow construction of jurisdictional statutes. This Article undertakes those tasks. First, it conducts a normative evaluation of the canon and its potential justifications. The normative evaluation requires consideration of several matters, including the canon’s historical pedigree, its relationship to constitutional values and congressional preferences, and its ability to bring about good social outcomes. Reasonable minds can differ regarding whether the canon is ultimately justified, but the case for it turns out to be weaker than most observers would initially suspect. Second, the Article attempts, as a positive matter, to identify the institutional and political factors that have contributed to the canon’s recent negative trajectory and that can be expected to shape its future path. The canon’s future is uncertain because it depends on the interaction of a variety of matters, including docket composition, interest-group activity, and the Supreme Court’s attitude toward the civil justice system.
This Article’s examination of the jurisdictional canon has broader value beyond the field of federal jurisdiction because it sheds some incidental light on the more general questions of why interpretive rules change, how methodological changes spread through the judicial hierarchy, and how the interpretive practices of the lower courts vary from those of the Supreme Court.
Tuesday, December 27, 2016
Lonny Hoffman has an essay up on the University of Chicago Law Review Online, Plausible Theory, Implausible Conclusions. Lonny responds to William Hubbard’s recent article, A Fresh Look at Plausibility Pleading, 83 U. Chi. L. Rev. 693 (2016).
Monday, December 19, 2016
Now on the Courts Law section of JOTWELL is Jay Tidmarsh’s essay, Discovery Costs and Default Rules. Jay reviews a recent paper by Brian Fitzpatrick and Cameron Norris, One-Way Fee Shifting After Summary Judgment.
Tuesday, December 13, 2016
Shirin Sinnar has posted on SSRN a draft of her article, The Lost Story of Iqbal, which is forthcoming in the Georgetown Law Journal. Here’s the abstract:
The Supreme Court’s 2009 decision in Ashcroft v. Iqbal, which transformed pleading standards across civil litigation, is recognized as one of the most important cases of contemporary civil procedure. Despite the abundant attention the case has received on procedural grounds, the Court’s representations of Javaid Iqbal, the plaintiff in the case, and the post-9/11 detentions out of which his claims arose have received far less critique than they deserve. The decision presented a particular narrative of the detentions that may affect readers’ perceptions of the propriety of law enforcement practices, the scope of the harm they impose on minority communities, and their ultimate legality. This Article contests that narrative by recovering the lost story of Iqbal. It first retells the story of Iqbal himself — the Pakistani immigrant and cable repair technician whom the opinion presented only categorically as a foreigner, a terrorist suspect, and, at best, a victim of abuse. Drawing on the author’s interview of Iqbal in Lahore, Pakistan, in 2016 and other available evidence, the Article reconstructs the facts of Iqbal’s immigrant life, his arrest and detention in the wake of the September 11 attacks, and the enduring consequences of being labeled a suspected terrorist. Second, the Article recounts the role of race and religion in the post-9/11 immigrant detentions, challenging the Court’s account of the detentions as supported by an “obvious” legitimate explanation. Juxtaposing the lost story of Iqbal and the detentions against the Court’s decision ultimately sheds light on the ability of procedural decisions to propagate particular normative visions and understandings of substantive law without the full recognition of legal audiences. 
Nearly fifteen years after the September 11 attacks and the ensuing mass detentions, Iqbal demands attention to its substance — to the profound questions of race, law, and security that have become even more urgent in the face of new calls for the exclusion of individuals on racial and religious grounds.
Thursday, December 8, 2016
Now on the Courts Law section of JOTWELL is Robin Effron’s essay, Time to Say Goodbye to Forum Non Conveniens? Robin reviews Maggie Gardner’s recent article, Retiring Forum Non Conveniens, 92 N.Y.U. L. Rev. (forthcoming 2017).
Monday, November 28, 2016
Now on the Courts Law section of JOTWELL is Kevin Walsh’s essay, Equity, the Judicial Power, and the Problem of the National Injunction. Kevin reviews Sam Bray’s article, Multiple Chancellors: Reforming the National Injunction.
Monday, November 14, 2016
Ed Cheng has posted on SSRN a draft of his article, Detection and Correction of Legal Publication Bias. Here’s the abstract:
Judges, attorneys, and academics commonly use case law surveys to ascertain the law and to predict or make decisions. In some contexts, however, certain legal outcomes may be more likely to be published (and thus observed) than others, potentially distorting impressions from case surveys. In this paper, I propose a method for detecting and correcting legal publication bias based on ideas from multiple systems estimation (MSE), a technique traditionally used for estimating hidden populations. I apply the method to a simulated dataset of admissibility decisions to confirm its efficacy, then to a newly collected dataset on false confession experts, where the model estimates that the observed 16% admissibility rate may in reality be closer to 28%. The article thus identifies and draws attention to the potential for legal publication bias, and offers a practical statistical tool for detecting and correcting it.
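Cheng’s model is more elaborate than this, but the capture-recapture idea behind multiple systems estimation can be sketched with the classic two-source (Chapman/Lincoln-Petersen) estimator: if the same hidden population of rulings is observed in two overlapping sources, the overlap reveals how much remains unseen. All counts and the two-source setup below are invented for illustration and are not data from the article.

```python
def chapman_estimate(n1: int, n2: int, m: int) -> float:
    """Estimate a total population from two overlapping 'captures'.

    n1, n2: rulings observed in each of two sources (e.g., two databases)
    m:      rulings appearing in both sources
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical premise: admissibility rulings are published less often than
# exclusions, so the two sources overlap little for admitted rulings and the
# estimated hidden total is correspondingly large.
admitted_total = chapman_estimate(n1=10, n2=8, m=2)    # = 32.0
excluded_total = chapman_estimate(n1=60, n2=55, m=40)  # ~ 82.3

observed_admitted = 10 + 8 - 2   # distinct admitted rulings actually seen
observed_excluded = 60 + 55 - 40

observed_rate = observed_admitted / (observed_admitted + observed_excluded)
corrected_rate = admitted_total / (admitted_total + excluded_total)
print(f"observed {observed_rate:.0%}, corrected {corrected_rate:.0%}")
```

With these made-up counts the observed admissibility rate (about 18%) corrects upward to about 28%, mirroring the direction of the adjustment the abstract describes.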
Friday, October 28, 2016
Bob Bone has posted on SSRN a draft of his article, Tyson Foods and the Future of Statistical Adjudication, which will be published in the North Carolina Law Review. Here’s the abstract:
Statistical adjudication, the practice of using sampling and other statistical techniques to adjudicate large case aggregations, is highly controversial today. In all its forms, statistical adjudication decides cases on the basis of statistical extrapolation rather than case-specific facts. For example, a court adjudicating a large class action might try a random sample of cases, average the trial verdicts, and give the average to all the other cases in the aggregation. In Wal-Mart Stores, Inc. v. Dukes, the Supreme Court rejected a sampling proposal as inconsistent with the Rules Enabling Act, calling it “Trial by Formula.” In the wake of this decision, at least one commentator declared the death of statistical adjudication.
In an important decision last term, Tyson Foods, Inc. v. Bouaphakeo, the Court changed course and breathed new life into statistical adjudication. It upheld the use of sampling to establish liability and damages in a Fair Labor Standards Act case and indicated that the procedure might be available in other cases as well. The Court’s opinion is far from clear, however, and offers little guidance to lower court judges trying to determine when and how to use the procedure in future cases.
This Article explores the impact of Tyson Foods on the future of statistical adjudication. Part I defines statistical adjudication and distinguishes it from statistical evidence. Part II shows that Tyson Foods is a case of statistical adjudication, not statistical evidence. Part III takes a closer look at the Court’s opinion in an effort to tease out factors and principles to guide future use. Part IV explores reasons for the vague discomfort with the procedure, reasons that seem to be tied to nagging doubts about the legitimacy of the procedure. Critics worry that statistical adjudication is too strange a fit with adjudication, too substantive to be legitimately implemented as procedure, and too mechanical to count as a proper form of adjudicative reasoning. Part IV argues that statistical adjudication is not as strange as it might seem, that its outcome effects do not make it too substantive, and that while it substitutes a mechanical decision algorithm for the usual reasoning process, it does so in a way that can be justified as legitimate. It is time that we recognize statistical adjudication for what it is: a useful procedural tool that, when carefully designed and selectively deployed, is capable of adjudicating large case aggregations fairly and efficiently.
Thursday, October 27, 2016
Erin Delaney has posted on SSRN her article, Analyzing Avoidance: Judicial Strategy in Comparative Perspective, 66 Duke L.J. 1 (2016). Here’s the abstract:
Courts sometimes avoid deciding contentious issues. One prominent justification for this practice is that, by employing avoidance strategically, a court can postpone reaching decisions that might threaten its institutional viability. Avoidance creates delay, which can allow for productive dialogue with and among the political branches. That dialogue, in turn, may result in the democratic resolution of — or the evolution of popular societal consensus around — a contested question, relieving the court of its duty. Many scholars and judges assume that, by creating and deferring to this dialogue, a court can safeguard its institutional legitimacy and security.
Accepting this assumption arguendo, this Article seeks to evaluate avoidance as it relates to dialogue. It identifies two key factors in the avoidance decision that might affect dialogue with the political branches: first, the timing of avoidance (i.e., when in the life cycle of a case does a high court choose to avoid); and, second, a court’s candor about the decision (i.e., to what degree does a court openly acknowledge its choice to avoid). The Article draws on a series of avoidance strategies from apex courts around the world to tease out the relationships among timing, candor, and dialogue. As the first study to analyze avoidance from a comparative perspective, the Article generates a new framework for assessing avoidance by highlighting the impact of timing on the quality of dialogue, the possible unintended consequences of candor, and the critical trade-offs between avoidance and power.
Monday, October 24, 2016
Today on the Courts Law section of JOTWELL is Steve Vladeck’s essay, Bringing in the Jury. Steve reviews Suja Thomas’s recent book, The Missing American Jury: Restoring the Fundamental Constitutional Role of the Criminal, Civil, and Grand Juries (2016).