Wednesday, March 18, 2020
Ronen Perry has posted to SSRN The Law and Economics of Online Republication. The abstract provides:
Jerry publishes unlawful content about Newman on Facebook, Elaine shares Jerry’s post, the share automatically turns into a tweet because her Facebook and Twitter accounts are linked, and George immediately retweets it. Should Elaine and George be liable for these republications? The question is neither theoretical nor idiosyncratic. On occasion, it reaches the headlines, as when Jennifer Lawrence’s representatives announced she would sue every person involved in the dissemination, through various online platforms, of her illegally obtained nude pictures. Yet this is only the tip of the iceberg. Numerous potentially offensive items are reposted daily, their exposure expands in widening circles, and they sometimes “go viral.”
This Article is the first to provide a law and economics analysis of the question of liability for online republication. Its main thesis is that liability for republication generates a specter of multiple defendants which might dilute the originator’s liability and undermine its deterrent effect. The Article concludes that, subject to several exceptions and methodological caveats, only the originator should be liable. This seems to be the American rule, as enunciated in Batzel v. Smith and Barrett v. Rosenthal. It stands in stark contrast to the prevalent rules in other Western jurisdictions and has been challenged by scholars on various grounds since its very inception.
The Article unfolds in three Parts. Part I presents the legal framework. It first discusses the rules applicable to republication of self-created content, focusing on the emergence of the single publication rule and its natural extension to online republication. It then turns to republication of third-party content. American law makes a clear-cut distinction between offline republication which gives rise to a new cause of action against the republisher (subject to a few limited exceptions), and online republication which enjoys an almost absolute immunity under § 230 of the Communications Decency Act. Other Western jurisdictions employ more generous republisher liability regimes, which usually require endorsement, a knowing expansion of exposure or repetition.
Part II offers an economic justification for the American model. Law and economics literature has shown that attributing liability for constant indivisible harm to multiple injurers, where each could have single-handedly prevented that harm (“alternative care” settings), leads to dilution of liability. Online republication scenarios often involve multiple tortfeasors. However, they differ from previously analyzed phenomena because they are not alternative care situations, and because the harm—increased by the conduct of each tortfeasor—is not constant and indivisible. Part II argues that neither feature precludes the dilution argument. It explains that the impact of the multiplicity of injurers in the online republication context on liability and deterrence provides a general justification for the American rule. This rule’s relatively low administrative costs afford additional support.
Part III considers the possible limits of the theoretical argument. It maintains that exceptions to the exclusive originator liability rule should be recognized when the originator is unidentifiable or judgment-proof, and when either the republisher’s identity or the republication’s audience was unforeseeable. It also explains that the rule does not preclude liability for positive endorsement with a substantial addition, which constitutes a new original publication, or for the dissemination of illegally obtained content, which is an independent wrong. Lastly, Part III addresses possible challenges to the main argument’s underlying assumptions, namely that liability dilution is a real risk and that it is undesirable.
Monday, February 3, 2020
Andrew Selbst has posted to SSRN Negligence and AI's Human Users. The abstract provides:
Negligence law is often asked to adapt to new technologies. So it is with artificial intelligence (AI). But AI is different. Drawing on examples in medicine, financial advice, data security, and driving in semi-autonomous vehicles, this Article argues that AI poses serious challenges for negligence law. By inserting a layer of inscrutable, unintuitive, and statistically-derived code in between a human decisionmaker and the consequences of that decision, AI disrupts our typical understanding of responsibility for choices gone wrong. The Article argues that AI’s unique nature introduces four complications into negligence: 1) unforeseeability of specific errors that AI will make; 2) capacity limitations when humans interact with AI; 3) introducing AI-specific software vulnerabilities into decisions not previously mediated by software; and 4) distributional concerns based on AI’s statistical nature and potential for bias.
Tort scholars have mostly overlooked these challenges. This is understandable because they have been focused on autonomous robots, especially autonomous vehicles, which can easily kill, maim, or injure people. But this focus has neglected to consider the full range of what AI is. Outside of robots, AI technologies are not autonomous. Rather, they are primarily decision-assistance tools that aim to improve on the inefficiency, arbitrariness, and bias of human decisions. By focusing on a technology that eliminates users, tort scholars have concerned themselves with product liability and innovation, and as a result, have missed the implications for negligence law, the governing regime when harm comes from users of AI.
The Article also situates these observations in broader themes of negligence law: the relationship between bounded rationality and foreseeability, the need to update reasonableness conceptions based on new technology, and the difficulties of merging statistical facts with individual determinations, such as fault. This analysis suggests that though there might be a way to create systems of regulatory support to allow negligence law to operate as intended, an approach to oversight that is not based in individual fault is likely to be more fruitful.
Thursday, January 9, 2020
Frank Pasquale has posted to SSRN Data-Informed Duties in AI Development. The abstract provides:
Law should help direct—and not merely constrain—the development of artificial intelligence (AI). One path to influence is the development of standards of care both supplemented and informed by rigorous regulatory guidance. Such standards are particularly important given the potential for inaccurate and inappropriate data to contaminate machine learning. Firms relying on faulty data can be required to compensate those harmed by that data use—and should be subject to punitive damages when such use is repeated or willful. Regulatory standards for data collection, analysis, use, and stewardship can inform and complement generalist judges. Such regulation will not only provide guidance to industry to help it avoid preventable accidents. It will also assist a judiciary that is increasingly called upon to develop common law in response to legal disputes arising out of the deployment of AI.
Tuesday, July 9, 2019
3d Cir: Amazon Is a Seller of Goods Through Website, Even if Owned by 3rd Parties; CDA Not Applicable Except as to Failure to Warn
In Oberdorf v. Amazon.com, Inc., the Third Circuit held that Amazon was a "seller" for purposes of Pennsylvania state law when it sold items on its website through Amazon Marketplace. Amazon Marketplace connects buyers to third-party sellers on Amazon's website. Amazon does not own the goods and in many cases does not deliver them. After determining Amazon to be a seller, the court further held that section 230 of the Communications Decency Act of 1996 only shields Amazon from failure to warn claims. Section 230 states: "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." The Third Circuit panel unanimously held that this provision would immunize Amazon from failure to warn claims because the warnings or lack thereof were provided by a third party. No immunity would be available, however, for non-speech-related claims such as manufacturing or design defects.
The holding that Amazon is a seller is unusual, perhaps even novel. A recent Fourth Circuit case, based on Maryland law, came to the opposite conclusion. In the "flaming headlamp" case, reported on here, the Fourth Circuit affirmed the district court's finding that Amazon was not a seller under nearly identical circumstances. In a recent related case, Herrick v. Grindr, reported on here, the Second Circuit concluded that section 230 of the CDA did protect Grindr from claims brought when an angry ex-boyfriend allegedly created fake profiles that induced numerous men to come to plaintiff's home and work demanding sex.
Andrew Keshner of MarketWatch has a piece on the Third Circuit case, with a focus on the CDA holding here.
Friday, June 7, 2019
Nathan Cortez has posted to SSRN A Black Box for Patient Safety?. The abstract provides:
Technology now makes it possible to record surgical procedures with striking granularity. And new methods of artificial intelligence (A.I.) and machine learning allow data from surgeries to be used to identify and predict errors. These technologies are now being deployed, on a research basis, in hospitals around the world, including in U.S. hospitals. This Article evaluates whether such recordings – and whether subsequent software analyses of such recordings – are discoverable and admissible in U.S. courts in medical malpractice actions. I then argue for reformulating traditional "information policy" to accommodate the use of these new technologies without losing sight of patient safety concerns and patient legal rights.
Monday, May 20, 2019
UVa Law has a podcast, "Common Law," which is co-hosted by tortsprof Leslie Kendrick (she is Vice Dean). The most recent episode features Ken Abraham and alum Michael Raschid, chief legal officer and vice president of operations at Perrone Robotics, discussing the effect of autonomous vehicles on tort and insurance.
Monday, January 7, 2019
Couple breaks up. Upset former lover creates fake profiles on a dating app that lead to harassment of the ex, including over a dozen instances of people showing up at the person's home and workplace ready for sex. The victim files police reports and eventually obtains a restraining order against the company that created the dating app. The victim sues the company alleging, among other things, products liability. The trial court dismisses the action based on section 230 of the Communications Decency Act of 1996, protecting those providing interactive computer services from the statements of third parties. Today the Second Circuit hears an appeal of that case, Herrick v. Grindr. In the meantime, Dave Ingram of NBC has an interesting piece on the issue of whether apps qualify as products for purposes of products liability.
Tuesday, July 31, 2018
Robert Chesney & Danielle Keats Citron have posted to SSRN Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. The abstract provides:
Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection. Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors.
While deep-fake technology will bring with it certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well.
Our aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it. We survey a broad array of responses, including: the role of technological solutions; criminal penalties, civil liability, and regulatory action; military and covert-action responses; economic sanctions; and market developments. We cover the waterfront from immunities to immutable authentication trails, offering recommendations to improve law and policy and anticipating the pitfalls embedded in various solutions.
Monday, April 2, 2018
Ken Abraham & Bob Rabin have posted to SSRN Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era. The abstract provides:
The United States is on the verge of a new era in transportation, requiring a new legal regime. Over the coming decades, there will be a revolution in driving, as manually-driven cars are replaced by automated vehicles. There will then be a radically new world of auto accidents: most accidents will be caused by cars, not by drivers. Thus far, however, proposals for reform have failed to address with precision the distinctive issues that will be posed during the long transitional period in which automated vehicles share the roadway with conventional vehicles, or during the succeeding period that will be dominated by accidents between automated vehicles. A legal regime for this new era should more effectively and sensibly promote safety and provide compensation than the existing tort doctrines governing driver liability for negligence and manufacturer liability for product defects will be able to do. In a world of accidents dominated by automated vehicles, these doctrines will be anachronistic and obsolete. We present a proposal for a more effective system, adopting strict manufacturer responsibility for auto accidents. We call this system Manufacturer Enterprise Responsibility, or “MER.” In describing and developing our proposal for MER, we present the first detailed, extensively analyzed approach that would promote deterrence and compensation more effectively than continued reliance on tort in the coming world of accidents involving automated vehicles.
Friday, October 13, 2017
We have heard a lot about technology changing tort law in the form of autonomous vehicles. Now Giant Foods is experimenting with a roving robot in its grocery stores. "Marty" has a number of skills: he can check prices and help with stocking. His main job, however, is to scan the aisles for potential slip hazards on the floor. If the technology is successful, we may have safer stores and fewer tort cases. PennLive has the story. YouTube has video.
Wednesday, February 1, 2017
At Singularity Hub, Ryan Abbott, professor of law and medicine, discusses coming changes in technology and how they might affect tort law:
Abbott appears to be the first to suggest in a soon-to-be-published paper that tort law treat AI machines like people when it comes to liability issues. And, perhaps more radically, he suggests people be judged against the competency of a computer when AI proves to be consistently safer than a human being.
Safety is also the big reason why Abbott argues that in the not-too-distant future, human error in tort law will be measured against the unerring competency of machines.
“This means that defendants would no longer have their liability based on what a hypothetical, reasonable person would have done in their situation, but what a computer would have done,” Abbott writes. “While this will mean that the average person’s best efforts will no longer be sufficient to avoid liability, the rule would benefit the general welfare.”
The full article is here.
Updated: Alberto Bernabe comments at Torts Blog.
Wednesday, July 6, 2016
A man driving a Tesla Model S in Florida has become the first self-driving car fatality. Statements by Tesla and NHTSA concur that, while in Autopilot mode, the car failed to distinguish the white side of a turning tractor trailer from the bright May sky; the brakes were not applied. The man's family has retained an attorney. This case will begin sorting out all of the unanswered questions created by the new technology. The ABA Journal has details.
Updated: Analysis from The Guardian here.
Thursday, June 9, 2016
The Pittsburgh Post-Gazette has a Pennsylvania-focused story on 3-D printing. The disruptive nature of the technology for products liability has been obvious for several years. There are very few cases, but attorneys have started to ponder the issues. The story has several takeaway points. First, attorneys expect the early cases to focus on medical and auto parts. Second, the role of computer-aided design (CAD) software as a blueprint for designs will be important:
Products liability attorney Mihai M. Vrasmasu of Shook, Hardy & Bacon said that, when dealing with companies that use 3-D printing, liability issues can generally be broken down into three categories: when a manufacturer buys or licenses a design that is used to print the product, when a manufacturer modifies that file before printing the product, and when a manufacturer designs the file.
Finally, especially in the early period of uncertainty, it is crucial to use contracts to manage liability.
Thursday, January 21, 2016
On Friday, Canada's first law against "revenge porn," the non-consensual sharing of intimate images often by an ex-lover, went into effect in Manitoba. The law provides a civil remedy for the victim against the perpetrator. Explicit consent is required before such images may be shared. In the U.S., 9 states have civil remedies and 27 states have criminal provisions regarding revenge porn. VICE News has the story.
Tuesday, July 14, 2015
At JD Supra, Chris Jones of Sands Anderson in Virginia discusses some of the considerations, which include the expansion of products liability into automobile accident cases and the potential need for a post-sale duty to warn in jurisdictions that have not adopted it.
Tuesday, January 27, 2015
Patrick Hubbard (South Carolina) has just published Sophisticated Robots: Balancing Liability, Regulation, and Innovation in the Florida Law Review. The abstract provides:
Our lives are being transformed by large, mobile, “sophisticated robots” with increasingly higher levels of autonomy, intelligence, and interconnectivity among themselves. For example, driverless automobiles are likely to become commercially available within a decade. Many people who suffer physical injuries from these robots will seek legal redress for their injury, and regulatory schemes are likely to impose requirements on the field to reduce the number and severity of injuries.
This Article addresses the issue of whether the current liability and regulatory systems provide a fair, efficient method for balancing the concern for physical safety against the need to incentivize the innovation that is necessary to develop these robots. This Article provides context for analysis by reviewing innovation and robots’ increasing size, mobility, autonomy, intelligence, and interconnections in terms of safety—particularly in terms of physical interaction with humans—and by summarizing the current legal framework for addressing personal injuries in terms of doctrine, application, and underlying policies. This Article argues that the legal system’s method of addressing physical injury from robotic machines that interact closely with humans provides an appropriate balance of innovation and liability for personal injury. It critiques claims that the system is flawed and needs fundamental change and concludes that the legal system will continue to fairly and efficiently foster the innovation of reasonably safe sophisticated robots.
Tuesday, October 21, 2014
GA: Parents May Be Liable for Negligent Supervision in Failure to Have Child Take Down Fake Facebook Page
On October 10, the Court of Appeals of Georgia allowed a claim to go forward against the parents of a middle-school-aged child who created a fake Facebook page for a classmate and posted defamatory statements. In Georgia, parents have a duty to supervise their children with regard to conduct that poses an unreasonable risk of harming others. The court's decision was based on the fact that the parents did not compel their child to take down the fake Facebook page after they became aware of it. The page remained up for approximately 11 months after the parents learned of its existence. The case is Boston v. Athearn.
Thanks to Mark Weber for the tip.
Monday, September 8, 2014
The Texas Supreme Court has ruled that a court can order an author to delete a defamatory post, but cannot prohibit the author from reposting the statements because that would be an unlawful prior restraint of free speech. The deletion remedy is novel. The Texas Lawyer has the story.
Wednesday, February 12, 2014