Friday, March 15, 2024

Did Biden Administration Officials Impermissibly Coerce Social Media to Moderate Content?

The Supreme Court will consider the issue on Monday, March 18. Here's my argument preview, from the ABA Preview of United States Supreme Court Cases, with permission:

FACTS

Two states and five individuals sued the government after social-media platforms removed or downgraded their posts. The plaintiffs claimed that 67 federal entities, officials, and employees coerced or significantly encouraged the platforms to censor their posts, which primarily related to COVID-19 and the 2020 election. In their Statement of the Case, starting at page 2 of their brief, the plaintiffs set out what they call “a broad pressure campaign designed to coerce social-media companies into suppressing speakers, viewpoints, and content disfavored by the government.” Here are some highlights, according to the plaintiffs. (The Fifth Circuit’s opinion contains a similar narrative.)

  • On January 23, 2021, in response to an anti-vaccine tweet by Robert F. Kennedy, the White House asked Twitter (now X) to “get moving on the process for having it removed ASAP.” The White House also asked, “And then if we can keep an eye out for tweets that fall in this same genre that would be great.” (The plaintiffs claim that this was part of a larger coordination effort between the Biden Administration’s “transition and campaign teams” and Twitter.)
  • Government entities, including the White House in “close cooperation” with the Surgeon General’s Office, asked social-media platforms for reports on their content-moderation policies. Government actors also suggested content-moderation policies to social-media platforms to monitor and restrict misinformation about COVID-19.
  • The White House told one platform that “[i]nternally, we have been considering our options on what to do about it.” It sent another platform a list of regulatory proposals for content moderation and said, “spirit of transparency—this is circulating around the [White House] and informing thinking.”
  • On May 5, 2021, the White House Press Secretary asked platforms to “stop amplifying untrustworthy content, disinformation, and misinformation, especially related to COVID-19, vaccinations, and elections.” On July 16, President Biden said that platforms were “killing people” by not moderating false content. Four days later, the White House Communications Director said that the Administration was considering “whether these companies should be held liable for publishing false information,” including “amending the Communications Decency Act, or Section 230 of the act,” which gives social-media platforms immunity from claims based on third-party content.
  • According to the plaintiffs, “[t]he platforms capitulated to virtually all White House demands going forward, and ‘began taking down content and deplatforming users they had not previously targeted.’” Moreover, “platforms responded by treating the CDC as the final authority on what could and could not be posted on their platforms.”
  • The FBI and CISA held regular meetings with platforms and pressured them to moderate content, including election content posted by Americans, sometimes under threat of legislation. In 2020, the FBI and CISA urged platforms to moderate “hacked materials,” a warning the platforms relied on to “promptly . . . suppress the New York Post’s Hunter Biden laptop story shortly before the 2020 election.”
  • In 2020, CISA launched the “Election Integrity Partnership” (later called the “Virality Project”) to facilitate cooperation between the government, research agencies, and social-media platforms on content-moderation policies. The plaintiffs say that this effort “involve[d] extremely tight federal-private collaboration, with dozens of points of contact and cooperation.”

The district court held that the actions of seven groups of defendants transformed social-media platforms’ content-moderation decisions into state action, and that the government actions violated the First Amendment. The court entered a sweeping preliminary injunction, ordering those defendants and hundreds of thousands of employees of defendant agencies not to engage in ten types of speech. The injunction also contained some carve-outs, allowing the government to inform platforms of content involving “criminal activity,” “national security threats,” and certain other matters.

The Fifth Circuit agreed that the government violated free speech, but it narrowed the injunction. (The government vigorously disputed many of the district court’s factual findings. The Fifth Circuit did not rely on many of those findings, but nevertheless held that the findings it credited were sufficient to support a narrowed injunction.) The Fifth Circuit’s injunction prohibited the government and its employees from taking any action, “formal or informal, directly or indirectly, to coerce or significantly encourage social-media companies to” moderate content “containing protected free speech.” The injunction said that this included “compelling the platforms to act, such as by intimating that some form of punishment will follow a failure to comply with any request, or supervising, directing, or otherwise meaningfully controlling the social-media companies’ decision-making processes.”

The Court stayed the injunction and agreed to hear the case.

CASE ANALYSIS

This case raises three issues. Let’s take them one at a time.

Standing

The government argues that the plaintiffs lack standing, because they failed to show that they suffered any cognizable injuries that are traceable to the government’s conduct or that could be redressed by judicial relief. The government says that the plaintiffs rely mostly on past instances when social-media platforms moderated their posts, and that these instances occurred before the challenged government actions. Moreover, the government claims that the Fifth Circuit’s injunction restricting future government action cannot redress those past injuries, and that the plaintiffs failed to establish a real threat of future injuries that the injunction could redress. The government contends that the plaintiffs cannot establish standing based on their “generalized desire to listen to other social-media users.” According to the government, no court has endorsed such a “limitless theory.” Finally, the government asserts that the states lack standing, because states have no First Amendment rights.

The plaintiffs counter that they suffered, and continue to suffer, harms from the government’s ongoing pressure campaign against social-media platforms. They say that the government pressured platforms to censor their posts, to censor specific topics and viewpoints on which they speak, to adopt moderation policies that apply against the plaintiffs, and to censor other speakers that the plaintiffs read and re-post. The plaintiffs contend that the government’s actions harmed the states, which have “sovereign interests in posting their own speech and in following the speech of their citizens on social media, especially political speech.” The plaintiffs assert that the government’s efforts are ongoing, and that their harms are therefore “virtually certain to recur during the pendency of the case,” so that the Fifth Circuit’s injunction redresses their harm.

First Amendment

As a general matter, private social-media platforms are not restricted by the First Amendment. That’s because the First Amendment only applies against the government, not private actors. As a result, an individual has no First Amendment claim against a private social-media platform, even if the platform moderated the individual’s post based on government information, persuasion, or criticism. Indeed, the government communicates with social-media platforms all the time in order to help those platforms make content-moderation decisions that protect national security, public health, and other public interests. And social-media platforms often moderate content in response to government information and even persuasion.

That said, a private social-media platform becomes a state actor subject to First Amendment restrictions when the government “compels” it “to take a particular action,” Manhattan Community Access Corp. v. Halleck, 139 S. Ct. 1921 (2019), or “significantly encourages” it to take action.

In assessing whether government actors merely informed or persuaded, on the one hand, or impermissibly coerced or significantly encouraged, on the other, courts look to the particular circumstances of the government’s actions. In particular, they look to whether, under the specific facts, the government threatened consequences or offered incentives that effectively compelled a private party to act in a certain way.

The government argues that officials’ actions fell well short of “coercive threats” or “significant encouragement.” It says that no government official offered inducements to social-media platforms, and that no official from the FBI, CDC, CISA, or the Surgeon General’s Office threatened any platform with adverse consequences or offered any positive inducements. Instead, the government says that officials “largely just provided the platforms with information.” As to White House officials, the government says that they only provided “general responses to press questions untethered from any specific content-moderation request.”

The government argues that the Fifth Circuit erred in concluding otherwise. In particular, the government says that the Fifth Circuit wrongly “deemed all of the platforms’ content-moderation activities to be state action by radically expanding the state-action doctrine.” (Emphasis in original.) For example, the government contends that the Fifth Circuit wrongly concluded that FBI communications were inherently coercive just because “the FBI is a law-enforcement agency.” Moreover, the government contends that the Fifth Circuit wrongly concluded that government “significant encouragement” only required minimal government “entanglement” with the platforms, when in fact “significant encouragement” requires much more.

The government argues that the Fifth Circuit’s approach would lead to “startling” results. It says, for example, that the Fifth Circuit’s approach would sharply restrict the ability of government officials to speak on important matters of public concern, including national security and public health. Moreover, the government contends that the Fifth Circuit’s approach would constrain the ability of private social-media platforms to moderate content on these issues, because they would be considered “state actors” subject to the First Amendment.

The plaintiffs counter that government officials’ behavior constitutes both significant encouragement and coercion. As to significant encouragement, they claim that officials’ conduct “involves deep government entanglement in private decisionmaking based on relentless pressure from federal officials, including ‘the most powerful office in the world.’” As to coercion, the plaintiffs contend that government officials “employ[ed] a battery of explicit and implicit threats and pressure to ‘bend’ platforms ‘to the government’s will.’”

But even if the government hasn’t coerced the platforms, the plaintiffs argue that the government is “engaged in joint action with the platforms” by “conspir[ing] with platforms through endless private meetings and communications reflecting extensive, direct federal involvement in specific decisions.” According to the plaintiffs, this “entwine[ment]” transforms the private platforms’ content-moderation decisions into state actions under the First Amendment.

Injunction

The government argues that the Fifth Circuit’s injunction is impermissibly vague and overbroad. It says that the lower court failed to “identify any facts demonstrating that respondents will likely suffer irreparable harm in the future” to support the injunction. Moreover, the government contends that the injunction includes individuals who are not parties to the case, and impermissibly “covers any governmental communication about moderation of content on any topic posted by any user on any platform.” Finally, the government asserts that the injunction would “harm the government and the public by chilling a host of legitimate Executive Branch communications.”

The plaintiffs counter that the Fifth Circuit’s injunction is properly tailored. They say that it only prevents the government “from coercing and significantly encouraging the suppression of protected speech,” and that “[e]xtending the injunction across platforms and speakers is imperative to grant complete relief.” Moreover, they contend that other equitable factors favor the injunction. In particular, they claim that the government will continue its behavior, violating the rights of “millions” of Americans, and that “the likelihood of ongoing and repeated injuries to the Plaintiffs is overwhelming.”

SIGNIFICANCE

This case tests when and how the government can work with private social-media companies to address third-party content that threatens public health, electoral integrity, and other critical public interests. Just to draw on a couple of examples from this case: How far can the government go in urging social-media platforms to moderate false information about COVID-19 vaccinations? How far can it go in urging platforms to moderate false information about the time or location of elections, or false information alleging election fraud?

On the one hand, social-media companies have long sought to address dangerous third-party content through content-moderation policies. And the government has long worked with these corporations (and other media) to inform and protect the public from those threats.

In recent times, the government has provided briefings, notices, and alerts to social-media corporations regarding third-party content that raises threats related to foreign and domestic terrorism, “covert foreign malign actor[s],” and (as here) public health and electoral security. Government officials have also often spoken publicly on a range of issues related to social media, including the public harms that can come from widespread false information on social-media platforms. The government can speak to private actors; it can even try to persuade them. Indeed, the government would be hard-pressed to govern without this power.

But on the other hand, heavy-handed government involvement with platforms and their content-moderation decisions—especially over politically charged topics—could raise the specter of government censorship.

In figuring out where to draw the line, look for the Court to probe the specific behavior of various government actors and the larger context of that behavior. In particular, look for the justices to focus on what government actors actually said or did, whether their statements or actions contained threats or inducements, and how the social-media platforms likely understood those statements or actions, among other similar questions. The Court may conclude that some government action crossed the line, and that other government action didn’t.

This case comes to the Court just weeks after the Court heard arguments in two other social-media cases, Moody v. NetChoice and NetChoice v. Paxton. Those cases test whether Florida’s and Texas’s laws restricting social-media platforms from moderating third-party content violate the First Amendment. To state the obvious: all of these cases are politically loaded. In the NetChoice cases, the states worry that social-media platforms censor politically conservative speech. In this case, the plaintiffs contend that the Biden Administration is causing them to censor that speech, and that it’s doing so for political reasons.

But as in the NetChoice cases, don’t assume that the justices (or at least all of them) will lean toward their conventionally accepted political preferences just because this case is politically loaded. That’s because any rule or approach that the Court applies in this case will apply equally if and when the political tables turn.
