Media Law Prof Blog

Editor: Christine A. Corcos
Louisiana State Univ.

Friday, August 27, 2021

Blitz on Deepfakes and Other Non-Testimonial Falsehoods: When Is Belief Manipulation (Not) First Amendment Speech? @OCULAW

Marc Jonathan Blitz, Oklahoma City University, has published Deepfakes and Other Non-Testimonial Falsehoods: When Is Belief Manipulation (Not) First Amendment Speech? 23 Yale J.L. & Tech. 160 (2020). Here is the abstract.

With a kind of machine learning known as “deep learning,” a computer program can quickly teach itself to recreate a person’s image or voice, manipulate it—like a puppeteer controlling a puppet—and blend it seamlessly into an environment the person never inhabited. This kind of fake video or audio, a “deepfake,” can be very difficult to distinguish from genuine camera footage. Does this kind of deepfake video or audio constitute “speech” protected by the First Amendment even when it is intended to deceive? Those who have begun to address this question have answered with a tentative “yes.” In the twenty-first century, video and audio recordings are a means of expression: we frequently share video and audio clips on social media both as acts of creative expression and to convey information. Deepfakes are fake recordings, depicting events that never occurred, but this does not mean they lose First Amendment protection. The Supreme Court made clear in the 2012 case United States v. Alvarez that the First Amendment presumptively protects verbal lies, so the visual lies in deepfake videos should receive First Amendment protection as well. Intentionally deceptive deepfakes might still be restricted, but only when they cause harm beyond the false belief they generate. This article, however, argues that we need a more nuanced view of deepfakes’ First Amendment status. We fail to fully understand how deepfake deception works if we define it as nothing more than the visual analogue of a verbal lie. Audiences understand that verbal lies come from speakers, and if they distrust the speaker, they will likely distrust the speaker’s claims.
Video and audio recordings, by contrast, have often been viewed as consisting at least partly of evidence outside of a speaker’s authorship: such recordings have often been treated as more reliable evidence of an event’s occurrence, on the understanding that (at least in previous decades, and to some extent even today) a video couldn’t easily be made to show an event unless it actually took place. For example, even someone who doubted the defendant’s verbal claim in United States v. Alvarez that he had been awarded the Congressional Medal of Honor in 1987 might find it much harder to withhold belief in the reality of a video showing the defendant receiving a Medal of Honor from President Reagan in a 1987 ceremony. The video here would likely be understood by its audience not merely as an analogue of the defendant’s own claims or “testimony,” but rather as camera-generated evidence that was created apart from, and stands outside of, a person’s words. A deepfake of such a Medal of Honor ceremony is thus arguably more akin to a record in a fake government Web database or register of Medal of Honor winners, a kind of deception that the Justices in Alvarez indicated in dicta would not receive First Amendment protection. Like such a fake government Web site, a deepfake would successfully deceive only because, unlike the speaker’s words, it is mistakenly viewed as a record outside of the speaker’s authorship, deriving from some source more insulated against the speaker’s manipulation. Courts could conceivably respond to the rise of deepfake technology by finding that, just as each person must serve as their own “watchman for truth” in assessing others’ words, they must, now that deepfakes are a reality, do so also when they (skeptically) assess the veracity of each video recording they see (or, for that matter, any other evidence that might be fabricated, from fake physical objects to false GPS readings).
But just as courts in Fourth Amendment cases have refused to let advances in surveillance technology, or other digital technology, crowd out either the privacy interests protected by the Fourth Amendment or the security interests it must leave room for, so they should, in First Amendment cases on deepfakes and other technology, refuse to allow the First Amendment right of authorship respected in Alvarez to expand into a general First Amendment right to manipulate others’ environment (or their perceptions of it). This does not mean that courts can simply treat even intentionally deceptive deepfakes as wholly outside the coverage of the First Amendment. Because intentionally deceptive deepfakes can be very hard to distinguish from deepfakes that are exercises of creative fiction or other expression, restrictions on deepfakes should generally be subject to intermediate scrutiny under the First Amendment. Because even intentionally deceptive deepfakes might be targeted by the government on the basis of the political beliefs they support rather than their deceptiveness, such restrictions should also be subject to viewpoint neutrality requirements. Ultimately, the type of law most likely to survive such First Amendment constraints will often be one that mandates disclosure of the deepfake’s nature rather than one that prohibits its creation or dissemination.

Download the article from SSRN at the link.
