Tuesday, July 9, 2019
3d Cir: Amazon Is a Seller of Goods Through Website, Even if Owned by 3rd Parties; CDA Not Applicable Except as to Failure to Warn
In Oberdorf v. Amazon.com, Inc., the Third Circuit held that Amazon was a "seller" for purposes of Pennsylvania state law when it sold items on its website through Amazon Marketplace. Amazon Marketplace connects buyers to third-party sellers on Amazon's website. Amazon does not own the goods and in many cases does not deliver them. After determining Amazon to be a seller, the court further held that section 230 of the Communications Decency Act of 1996 only shields Amazon from failure to warn claims. Section 230 states: "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." The Third Circuit panel unanimously held that this provision would immunize Amazon from failure to warn claims because the warnings or lack thereof were provided by a third party. No immunity would be available, however, for non-speech-related claims such as manufacturing or design defects.
The holding that Amazon is a seller is unusual, perhaps even novel. A recent Fourth Circuit case, based on Maryland law, came to the opposite conclusion. In the "flaming headlamp" case, reported on here, the Fourth Circuit affirmed the district court's finding that Amazon was not a seller under nearly identical circumstances. In a recent related case, Herrick v. Grindr, reported on here, the Second Circuit concluded that section 230 of the CDA did protect Grindr from claims brought when an angry ex-boyfriend allegedly created fake profiles that induced numerous men to come to plaintiff's home and work demanding sex.
Andrew Keshner of MarketWatch has a piece here on the Third Circuit case, with a focus on the CDA holding.
Friday, June 7, 2019
Nathan Cortez has posted A Black Box for Patient Safety? to SSRN. The abstract provides:
Technology now makes it possible to record surgical procedures with striking granularity. And new methods of artificial intelligence (A.I.) and machine learning allow data from surgeries to be used to identify and predict errors. These technologies are now being deployed, on a research basis, in hospitals around the world, including in U.S. hospitals. This Article evaluates whether such recordings – and whether subsequent software analyses of such recordings – are discoverable and admissible in U.S. courts in medical malpractice actions. I then argue for reformulating traditional "information policy" to accommodate the use of these new technologies without losing sight of patient safety concerns and patient legal rights.
Monday, May 20, 2019
UVa Law has a podcast, "Common Law," which is co-hosted by tortsprof Leslie Kendrick (she is Vice Dean). The most recent episode features Ken Abraham and alum Michael Raschid, chief legal officer and vice president of operations at Perrone Robotics, discussing the effect of autonomous vehicles on tort and insurance.
Monday, January 7, 2019
Couple breaks up. Upset former lover creates fake profiles on a dating app that lead to harassment of the ex, including over a dozen instances of people showing up at the person's home and workplace ready for sex. The victim files police reports and eventually obtains a restraining order against the company that created the dating app. The victim sues the company alleging, among other things, products liability. The trial court dismisses the action based on section 230 of the Communications Decency Act of 1996, which protects those providing interactive computer services from liability for the statements of third parties. Today the Second Circuit hears an appeal of that case, Herrick v. Grindr. In the meantime, Dave Ingram of NBC has an interesting piece on the issue of whether apps qualify as products for purposes of products liability.
Tuesday, July 31, 2018
Robert Chesney & Danielle Keats Citron have posted to SSRN Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. The abstract provides:
Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with “deep fake” technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did. Machine learning techniques are escalating the technology’s sophistication, making deep fakes ever more realistic and increasingly resistant to detection. Deep-fake technology has characteristics that enable rapid and widespread diffusion, putting it into the hands of both sophisticated and unsophisticated actors.
While deep-fake technology will bring with it certain benefits, it also will introduce many harms. The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases. Deep fakes will exacerbate this problem significantly. Individuals and businesses will face novel forms of exploitation, intimidation, and personal sabotage. The risks to our democracy and to national security are profound as well.
Our aim is to provide the first in-depth assessment of the causes and consequences of this disruptive technological change, and to explore the existing and potential tools for responding to it. We survey a broad array of responses, including: the role of technological solutions; criminal penalties, civil liability, and regulatory action; military and covert-action responses; economic sanctions; and market developments. We cover the waterfront from immunities to immutable authentication trails, offering recommendations to improve law and policy and anticipating the pitfalls embedded in various solutions.
Monday, April 2, 2018
Ken Abraham & Bob Rabin have posted to SSRN Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era. The abstract provides:
The United States is on the verge of a new era in transportation, requiring a new legal regime. Over the coming decades, there will be a revolution in driving, as manually-driven cars are replaced by automated vehicles. There will then be a radically new world of auto accidents: most accidents will be caused by cars, not by drivers. Thus far, however, proposals for reform have failed to address with precision the distinctive issues that will be posed during the long transitional period in which automated vehicles share the roadway with conventional vehicles, or during the succeeding period that will be dominated by accidents between automated vehicles. A legal regime for this new era should more effectively and sensibly promote safety and provide compensation than the existing tort doctrines governing driver liability for negligence and manufacturer liability for product defects will be able to do. In a world of accidents dominated by automated vehicles, these doctrines will be anachronistic and obsolete. We present a proposal for a more effective system, adopting strict manufacturer responsibility for auto accidents. We call this system Manufacturer Enterprise Responsibility, or “MER.” In describing and developing our proposal for MER, we present the first detailed, extensively analyzed approach that would promote deterrence and compensation more effectively than continued reliance on tort in the coming world of accidents involving automated vehicles.
Friday, October 13, 2017
We have heard a lot about technology changing tort law in the form of autonomous vehicles. Now Giant Foods is experimenting with a roving robot in its grocery stores. "Marty" has a number of skills: he can check prices and help with stocking. His main job, however, is to scan the aisles for potential slip hazards on the floor. If the technology is successful, we may have safer stores and fewer tort cases. PennLive has the story. YouTube has video.
Wednesday, February 1, 2017
At Singularity Hub, Ryan Abbott, professor of law and medicine, discusses coming changes in technology and how they might affect tort law:
Abbott appears to be the first to suggest in a soon-to-be-published paper that tort law treat AI machines like people when it comes to liability issues. And, perhaps more radically, he suggests people be judged against the competency of a computer when AI proves to be consistently safer than a human being.
Safety is also the big reason why Abbott argues that in the not-too-distant future, human error in tort law will be measured against the unerring competency of machines.
“This means that defendants would no longer have their liability based on what a hypothetical, reasonable person would have done in their situation, but what a computer would have done,” Abbott writes. “While this will mean that the average person’s best efforts will no longer be sufficient to avoid liability, the rule would benefit the general welfare.”
The full article is here.
Updated: Alberto Bernabe comments at Torts Blog.
Wednesday, July 6, 2016
A man driving a Tesla Model S in Florida has become the first self-driving car fatality. Statements by Tesla and NHTSA concur that, while in Autopilot mode, the car failed to distinguish the white side of a turning tractor trailer from the bright May sky; the brakes were not applied. The man's family has retained an attorney. This case will begin sorting out all of the unanswered questions created by the new technology. The ABA Journal has details.
Updated: Analysis from The Guardian here.
Thursday, June 9, 2016
The Pittsburgh Post-Gazette has a Pennsylvania-focused story on 3-D printing. The disruptive nature of the technology for products liability has been obvious for several years. There are very few cases, but attorneys have started to ponder the issues. The story has several takeaway points. First, attorneys expect the early cases to focus on medical and auto parts. Second, the role of computer-aided design (CAD) software as a blueprint for designs will be important:
Products liability attorney Mihai M. Vrasmasu of Shook, Hardy & Bacon said that, when dealing with companies that use 3-D printing, liability issues can generally be broken down into three categories: when a manufacturer buys or licenses a design that is used to print the product, when a manufacturer modifies that file before printing the product, and when a manufacturer designs the file.
Finally, especially in the early period of uncertainty, it is crucial to use contracts to manage liability.
Thursday, January 21, 2016
On Friday, Canada's first law against "revenge porn," the non-consensual sharing of intimate images often by an ex-lover, went into effect in Manitoba. The law provides a civil remedy for the victim against the perpetrator. Explicit consent is required before such images may be shared. In the U.S., 9 states have civil remedies and 27 states have criminal provisions regarding revenge porn. VICE News has the story.
Tuesday, July 14, 2015
At JD Supra, Chris Jones of Sands Anderson in Virginia discusses some of the tort considerations raised by autonomous vehicles, which include the expansion of products liability into automobile accident cases and the potential need for a post-sale duty to warn in jurisdictions that have not adopted it.
Tuesday, January 27, 2015
Patrick Hubbard (South Carolina) has just published Sophisticated Robots: Balancing Liability, Regulation, and Innovation in the Florida Law Review. The abstract provides:
Our lives are being transformed by large, mobile, “sophisticated robots” with increasingly higher levels of autonomy, intelligence, and interconnectivity among themselves. For example, driverless automobiles are likely to become commercially available within a decade. Many people who suffer physical injuries from these robots will seek legal redress for their injury, and regulatory schemes are likely to impose requirements on the field to reduce the number and severity of injuries.
This Article addresses the issue of whether the current liability and regulatory systems provide a fair, efficient method for balancing the concern for physical safety against the need to incentivize the innovation that is necessary to develop these robots. This Article provides context for analysis by reviewing innovation and robots’ increasing size, mobility, autonomy, intelligence, and interconnections in terms of safety—particularly in terms of physical interaction with humans—and by summarizing the current legal framework for addressing personal injuries in terms of doctrine, application, and underlying policies. This Article argues that the legal system’s method of addressing physical injury from robotic machines that interact closely with humans provides an appropriate balance of innovation and liability for personal injury. It critiques claims that the system is flawed and needs fundamental change and concludes that the legal system will continue to fairly and efficiently foster the innovation of reasonably safe sophisticated robots.
Tuesday, October 21, 2014
GA: Parents May Be Liable for Negligent Supervision in Failure to Have Child Take Down Fake Facebook Page
On October 10, the Court of Appeals of Georgia allowed a claim to go forward against the parents of a middle-school-aged child who created a fake Facebook page for a classmate and posted defamatory statements. In Georgia, parents have a duty to supervise their children with regard to conduct that poses an unreasonable risk of harming others. The court's decision was based on the fact that the parents did not compel their child to take down the fake Facebook page after they became aware of it. The page remained up for approximately 11 months after the parents learned of its existence. The case is Boston v. Athearn.
Thanks to Mark Weber for the tip.
Monday, September 8, 2014
The Texas Supreme Court has ruled that a court can order an author to delete a defamatory post, but cannot prohibit the author from reposting the statements because that would be an unlawful prior restraint of free speech. The deletion remedy is novel. The Texas Lawyer has the story.
Wednesday, November 13, 2013
A colleague asked me about this last week; I confess that I had not considered it. Now Kyle Colonna has posted his Note, entitled Autonomous Cars and Tort Liability, to SSRN. The abstract provides:
With the passing of time, cars are becoming more autonomous and independent of human intervention. However, with this shift in control from humans to technology, there also comes a shift in liability. While autonomous cars will eliminate many accidents caused by human error, many others will result due to technological malfunctions. In order to ensure that autonomous cars enter the marketplace in a timely fashion, the liability of autonomous car manufacturers requires mitigation. This Note examines the legal issues surrounding autonomous cars, including tort liability, and proposes a means by which the liability issues surrounding autonomous cars may be fashioned in order to effectuate a timely implementation of autonomous cars in the marketplace.
Wednesday, October 16, 2013
Brendan Kenny, who launched Twin Cities eDiscovery Forum a year ago, is back at it:
The SF Bay Area eDiscovery Forum is having its inaugural meeting on October 21st, 2013 from 8:00–9:00 a.m. at the offices of Hanson, Bridgett, LLP, 425 Market Street, 26th Floor, San Francisco. RSVP to Chelsea Doctors at CDoctors@HansonBridgett.com or by phone at 415-995-6465, by October 18, 2013.
The first meeting will discuss e-mediation.
And here is a link to the invitation: http://tinyurl.com/lekrvyz
Wednesday, September 5, 2012
In a federal case in Virginia, the number of "likes" an allegedly defamatory Facebook page received was admissible, but a punitive damages award was reduced. Peter Vieth of Virginia Lawyers Weekly has the story:
A federal judge has ruled that a dog trainer who claimed he was defamed by online accusations of animal abuse was entitled to tell a jury how many people “liked” the offending Facebook page.
Nevertheless, U.S. District Judge James Cacheris said the jury’s “grossly excessive” $60,000 punitive damages verdict in favor of the dog trainer should be cut by three quarters. Cacheris said the defendant can either accept the reduction of punitives to $15,000 or take a new trial.
The full story is here.