
Moody Decision Confirms First Amendment Protects Online Platforms

The Supreme Court's long-awaited decision in Moody v. NetChoice, LLC, No. 22-277 (U.S. July 1, 2024), confirmed that social media and other online platforms are protected by the First Amendment when they publish third-party content, including in exercising their "editorial" discretion to select, organize, display, promote, demote, or block such content — even if that discretion is exercised through algorithms or other automated systems.

Although the court's formal holding in the case was procedural — addressing how lower courts must analyze facial First Amendment challenges — the court majority spent considerable time explaining that key First Amendment precedents protecting intermediaries that compile and publish others' speech apply with full force to online services like Facebook and YouTube.

The decision makes clear that the challenged Texas and Florida laws are unconstitutional as applied to platforms' moderation of third-party content, repeatedly rebukes the 5th Circuit's contrary approach, and bolsters online services' positions in ongoing and future cases involving other regulations and related private claims.

Case Background

In Moody, the court considered challenges to two 2021 laws, from Texas and Florida, restricting whether and how social media platforms select, edit, arrange, remove, present, or block third-party content. The Florida law prohibits a broad range of large interactive online services from "censor[ing]" users by deleting, altering, labeling, or deprioritizing their posts based on their content or author. Fla. Stat. §§ 501.2041(1)(b), 501.2041(1)(g). The Texas law similarly prevents large platforms from "censor[ing]" a user or a user's expression based on viewpoint, including taking any action to "block, ban, remove, deplatform, demonetize, de-boost, restrict, deny equal access or visibility to, or otherwise discriminate against expression." Tex. Civ. Prac. & Rem. Code Ann. §§ 143A.001(1), 143A.002(a), 143A.006. (Both laws also require covered platforms to explain their content moderation decisions, but that requirement was not a focus of the court's decision.)

NetChoice LLC and the Computer & Communications Industry Association brought facial First Amendment challenges on behalf of their members, which include, for example, companies that operate services like Facebook and YouTube. District courts in both states entered preliminary injunctions, holding that each law unconstitutionally interfered with platforms' First Amendment rights to make editorial judgments about how they select and arrange content. The 11th Circuit upheld the Florida injunction for essentially the same reasons, while the 5th Circuit reversed, reasoning that the original meaning of the First Amendment afforded no protection to the platforms' editorial choices and content moderation, and that even if it did, Texas could regulate those choices to prevent private "censorship" and promote "a diversity of ideas" online.

The Court's Decision

With Justice Kagan writing for a 6-3 majority that included Chief Justice Roberts and Justices Sotomayor, Kavanaugh, Barrett, and (for all but one part) Justice Jackson, the court left both injunctions in place, clarified how district courts should analyze facial First Amendment challenges, and remanded for proceedings consistent with that framework. But the court did not stop there. The majority found it "necessary to say more about how the First Amendment relates to the laws' content-moderation provisions, to ensure that the facial analysis proceeds on the right path." Moody v. NetChoice, LLC, --- U.S. ---, 2024 WL 3237685, at *9 (U.S. July 1, 2024). It stressed that "[t]hat need is especially stark for the Fifth Circuit," whose First Amendment analysis "was wrong" and required correction to "prevent" that court "from repeating its errors." Id.

Key Takeaways

Four aspects of the court's opinion in Moody stand out.

The First Amendment Protects Platforms' Editorial Discretion To Select Content

Applying a trio of cases — Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974), Pacific Gas & Electric Co. v. Public Utilities Commission of California, 475 U.S. 1 (1986), and Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 U.S. 557 (1995) — the court held that online platforms have a First Amendment right to select and curate third-party speech into published compilations — that is, to moderate user-generated content by deciding whether to publish, block, promote, or demote it in other users' feeds. Id. at *11, *14-15. Explaining that the government violates this right when it "interferes with such editorial choices," id. at *11, the court stated that both the Texas and Florida laws — which would compel platforms to carry content in their users' feeds that they now exclude — are likely unconstitutional. Id. at *16. The rule the court articulated would equally invalidate mandates not to publish content that is now published, because what matters is whether state action seeks to "change the speech of private actors" in any respect. Id.

The decision puts to rest arguments against applying the Tornillo line of cases to online platforms. Rejecting the contention that online platforms should be treated as passive conduits without First Amendment rights, the court distinguished PruneYard Shopping Center v. Robins, 447 U.S. 74 (1980), and Rumsfeld v. Forum for Academic & Institutional Rights, Inc., 547 U.S. 47 (2006). Unlike the complaining parties in those cases — a shopping center and law schools that were not "engaged in any expressive activity" — online platforms' content moderation is editorial activity that enjoys the First Amendment's protection. 2024 WL 3237685, at *11. The court clarified that it makes no difference that platforms "include[] most items and exclude[] just a few." Id. at *12. The First Amendment, the court affirmed, protects even a "focused editorial choice" from state action that seeks to alter a publisher's choice of content. Id. Finally, the court made clear that mere disagreement with "the mix of speech" a platform elects to publish is "not [a] valid, let alone substantial," interest — and thus cannot justify interference with a platform's publication of content under any standard of First Amendment review. Id. at *12, *15.

The First Amendment Protects How Platforms Display Curated Content, Including Through Algorithms

The court's analysis underscored that the First Amendment protects not just the selection of content but also how platforms choose to display and publish that content. See, e.g., id. at *5 (protection applies to a platform's choices about "how to display" content "the way it wants"); id. at *14 (protection applies to "choices about whether — and if so how" — to display content); id. at *15 (protection applies to "how the display will be ordered and organized"); id. at *16 (government may not interfere with "the way [online] platforms are selecting and moderating content"). Specifically, the court held that the First Amendment protects "the use of algorithms" to "personalize[]" and target particular content to particular users through a "continually updating stream" like a news feed, whether "based on a user's expressed interests and past activities" or on the platforms' own prioritization decisions. Id. at *13.

This is an important ruling. It makes clear that platforms do not forfeit their First Amendment rights when they automate the editorial judgments they use to moderate content. And it underscores that the choice to automate editorial decisions about how content is presented is itself an expressive choice the First Amendment protects. This casts significant constitutional doubt on pending and recently enacted regulations that would restrict or prohibit platforms from using algorithms to present personalized compilations of content to users. See, e.g., S7694A (N.Y. 2024) (enacting the Stop Addictive Feeds Exploitation (SAFE) for Kids Act); SB 976 (Cal.) (pending legislation to enact the Protecting Our Kids from Social Media Addiction Act, which would likewise restrict services that provide any "[a]ddictive feed" of content).

To be sure, the court left open the possibility that some types of automated content moderation and publication — including hypothetical AI-generated decision-making divorced from human instruction — may lack First Amendment protection if those methods "respond solely to how users act online … without any regard to independent content standards" established by humans. 2024 WL 3237685, at *13 n.5; see also id. at *18 (Barrett, J., concurring) (same caveat). But the court recognized that these hypotheticals could be resolved with evidence of a nexus to human decision-making and that, in any case, the present record sufficed to show that the algorithm-powered editorial practices regulated in these cases fell comfortably within the First Amendment's protection. Id. at *13.

The First Amendment Presents a Barrier to Private Suits That Would Alter the Presentation of Protected Content

The court's decision has important implications not only for government regulation but also for civil lawsuits. Several hundred private plaintiffs — joined by most state attorneys general and several dozen school districts — have, for example, sued major social media platforms, claiming that the way those platforms present content is "addictive" and has caused minors to suffer a variety of mental health problems. See generally In re Social Media Adolescent Addiction Litig., No. 22-md-3047-YGR (N.D. Cal.). Plaintiffs in these and other cases have also claimed that platforms' content recommendation algorithms constitute "defective products." The court's determination that the First Amendment protects the way platforms publish third parties' speech — including the "mix of speech" presented to users "through algorithms" that generate "continually updated" and "personalized" news feeds — indicates that suits seeking to punish or alter these protected choices should founder on the First Amendment. See 2024 WL 3237685, at *5, *11, *13-16.

Additional Hurdles for Facial First Amendment Challenges

Finally, the court's actual holding in Moody was procedural, addressing what a plaintiff must show to bring a facial First Amendment challenge to a law. This aspect of the decision may affect how such cases are pled and litigated in the future.

The Moody court held that it is not enough in such a lawsuit to show that a law is unconstitutional in its "heartland applications," even if those are the settings the law was designed to reach. Id. at *8-9, *17. Instead, the plaintiff must also show that "a substantial number of the law's applications are unconstitutional, judged in relation to the statute's plainly legitimate sweep," considering all of its possible applications. Id. at *8 (quotation omitted). This rule is in some tension with prior decisions like Brown v. Entertainment Merchants Association, 564 U.S. 786 (2011), which held a California law facially invalid under the First Amendment without any such analysis. See id. at 821, 839 (Thomas, J., dissenting) (criticizing the majority opinion on this basis). But the court's rule and its mandate to the lower courts effectively clarify that this formulation of the overbreadth doctrine — which had become an independent basis to invalidate a law under the First Amendment, separate from a tiers-of-scrutiny analysis — is now an element of any First Amendment claim for facial invalidity. 2024 WL 3237685, at *9. Though still not as onerous as the "no set of circumstances" standard that applies outside the First Amendment context under United States v. Salerno, 481 U.S. 739, 745 (1987), this test sets a higher bar than most courts have applied in recent First Amendment litigation. We expect plaintiffs will be able to satisfy it in many cases, but it will impose an additional burden on such litigation.

What's Next

The challenges to the Florida and Texas laws now return to the lower courts. On remand, the platforms will need to demonstrate the laws' facial overbreadth. Alternatively, the platforms may pursue relief as applied just to their moderation of content, relying on the court's conclusion that both the Texas and Florida laws are likely unconstitutional in those heartland applications. The latter approach would likely result in a narrower injunction, but one that would, as a practical matter, afford meaningful relief to covered services. First Amendment plaintiffs challenging other state and federal statutes should likewise give more consideration to framing their claims on an as-applied basis, including in the alternative.

Moody is above all a win for the First Amendment, and for those who have argued that the First Amendment applies in full force online. The Supreme Court's clear guidance to lower courts bolsters platforms' arguments in affirmative challenges to the regulation of their publication practices, as well as in private suits where platforms are defending claims targeting the alleged effects of the way they choose to edit and present protected speech to audiences.

Note: The authors filed an amicus brief on behalf of former Representative Christopher Cox and Senator Ron Wyden in Moody, supporting NetChoice and CCIA.
