Is the First Amendment obsolete in the age of TikTok? The constitutional law protecting free speech was developed when there were far fewer opportunities to reach a significant audience than there are today, and those opportunities had to be zealously guarded from government censors. But as the Princeton sociology professor Zeynep Tufekci and the Columbia law professor Tim Wu have argued, speech opportunities in the Internet age are plentiful, and drowning out speech or algorithmically manipulating what people hear and read may now be a greater threat to free expression than traditional suppression.1

The First Amendment protects speech only from interference by public authorities, but some of the most powerful forces controlling speech are private—the large social media platforms such as X, Facebook, and Instagram. Like the apple in Eden, social media has simultaneously brought us knowledge and introduced (or exacerbated) a host of problems, including hyperpolarization, depression, extremism, Russian interference in our elections, Donald Trump, and the insurrection of January 6.

For decades the major newspapers, television networks, and radio stations were the principal gatekeepers and moderators of our national dialogue. We got all the news that they deemed fit to print or broadcast—but not much else. Opportunities to reach a wide audience were rare and expensive.

Today anyone can effectively be a self-publisher with broad access to the public through multiple social media platforms. But when anyone can publish, without having to satisfy an editor or curator that what they say is factual, newsworthy, or ethical, the public conversation is at risk of being overrun by chaos, appeals to the lowest common denominator, expressions of bigotry and hatred, and false speech—both unintentional “misinformation” and intentional “disinformation.”

That social media is fundamentally broken is one of the few propositions that commands bipartisan agreement these days. Liberals are very troubled, as Social Media, Freedom of Speech and the Future of Our Democracy, a collection of essays edited by the First Amendment scholars Lee Bollinger and Geoffrey Stone, illustrates. In his contribution, Larry Kramer, former dean of Stanford Law School and former president of the Hewlett Foundation, claims that nothing less than the survival of our democracy is at stake:

Left unchecked, the presently evolving information environment must be expected to unmake our democratic constitutional systems. Perhaps not imminently, but with certainty over time.

George Washington University law professor Mary Anne Franks sees women and minorities as the primary victims:

When platforms are overrun by death threats, rape threats, harassment campaigns, and exposure of private information, many people—especially women, minorities, and other marginalized and vulnerable groups—go silent.

Conservatives are equally unhappy, but they see themselves as the real victims. Fighting back against the large platforms’ allegedly liberal bias, Republicans in Florida and Texas have enacted laws forbidding platforms to take down or even deemphasize any posts based on their viewpoint. Missouri, Louisiana, and several individual users who have had their posts taken down have sued the Biden administration, asserting that it is illegally pressuring platforms to suppress speech with which it disagrees.

As Justice Elena Kagan acknowledged last year during oral argument in another social media case, the justices “are not, like, the nine greatest experts on the Internet.” But their rulings in a handful of cases this term will determine the future of free speech on this critically important and deeply flawed medium. The Court will decide a pair of cases—NetChoice, LLC v. Paxton and Moody v. NetChoice, LLC—brought by a trade group representing social media platforms challenging the Texas and Florida laws that seek to regulate the content moderation choices of large platforms. In Murthy v. Missouri, the Court will hear the Biden administration’s appeal of a lower court order that barred certain federal agencies from coercing or even “significantly encouraging” platforms to take down content that government officials considered to be false and to pose a risk to public health or safety—such as vaccine misinformation—or to the electoral process.

And in two other cases, Lindke v. Freed and O’Connor-Ratcliff v. Garnier, the Court will decide whether government officials are bound by the First Amendment when they use “personal” social media pages to conduct government business; the Constitution plainly applies to official government sites, but these cases ask whether the same rules should extend to ostensibly personal sites used to discuss and announce government policies. When all is said and done, this Supreme Court term will likely be the most consequential yet for free speech on the Internet.

There is no doubt much to criticize about social media. But there is also an unfortunate tendency to see it as the source of all evil. Many of the criticisms leveled at social media are not unique to it. It is said to promote information bubbles, in which people are rarely exposed to views that challenge their presuppositions and biases. But the same is true of MSNBC and Fox News, as well as much talk radio and many print outlets, from The Nation to National Review. Social media companies’ algorithms are condemned for amplifying extreme views, because those views generate the most online engagement and profits. But all forms of for-profit media are driven by what sells, and what sells is often what is most controversial or sensational.

Misinformation and disinformation are often cited as Internet evils, whether the work of Russian agents, domestic malefactors, or unwitting dupes. But again, the problem is hardly restricted to social media. It’s because false and misleading information is so plentiful in all realms that the fact-checking industry finds full employment during political campaigns. Social media poses particular concerns because it is subject to so little editorial gatekeeping, in part because Congress has given platforms, unlike other publishers, immunity from liability for posting content created by others. But the traditional news media are no strangers to spreading mis- and disinformation, as the run-up to the Iraq war amply illustrated, and as ideologically biased cable news channels remind us daily. Social media did not invent disinformation or foreign influence in other countries’ elections—indeed, the United States has practiced it for decades.

It may be some consolation, moreover, that there is scant evidence that misinformation has been a significant cause of the much-lamented decline of our democracy. A study of Russian Internet propaganda during the 2016 presidential campaign, for example, found that exposure to the posts was extremely concentrated, with 1 percent of users accounting for 70 percent of total views. Most of those who saw the messages were Republicans, and among those few who saw the disinformation at all, the Russian posts were vastly outnumbered by domestic news sources. The study found little basis for concluding that exposure to the Russian propaganda affected polarization, attitudes, or voting behavior.2

An editor exercising news judgment surely tends to reduce error, and social media lacks that check. But the appeal of social media is precisely that individuals can publish without having to convince an editor that their views are worthy or correct. It is the absence of close moderation at the front end that makes social media posting available to everyone. Were platforms to review every post before it appears, they would no longer be nearly as accessible as they are. As of 2021, users worldwide generated some 500 million tweets, four million gigabytes of Facebook data, and 720,000 hours of YouTube videos every day. At that volume, careful editorial review is simply impossible, and algorithms designed to identify content that violates the platform’s content rules must operate very broadly, making errors inevitable. Cheap speech, one of the Internet’s main allures, is much like free speech: its many benefits come with inescapable costs.

In fact, no social media platform is literally open to all messages; they all engage in some content moderation, prohibiting certain messages, favoring others, and deemphasizing still others. If content were not moderated at all, the platforms would be useless. Your “feed” would be filled not with material that might interest you but with whatever was most recently or most frequently posted. Spam, irrelevant garbage, pornography, and hate speech would become regular features of your favorite platforms. (And you think it’s bad now?) Content moderation policies could certainly be improved, but it is far from clear how to do so—and empowering government to impose the rules is a treatment likely worse than the disease. Emily Bazelon encapsulates the dilemma:

When it comes to the regulation of speech, we are uncomfortable with government doing it; we are uncomfortable with social media or media titans doing it. But we are also uncomfortable with nobody doing it at all.

No one other than Elon Musk and Mark Zuckerberg likes the fact that they exercise so much power over what we see on X, Facebook, and Instagram, but what is the alternative? Would we be happier with government officials (Donald Trump, Ron DeSantis, Gavin Newsom, Betsy DeVos, Bernie Sanders) making those decisions? And if we want to see posts we are interested in and not see those that are a waste of time or worse, someone has to curate them. Content moderation by private social media platforms, then, is a little bit like democracy: the worst system of governance, apart from all the alternatives.

The question presented in the two NetChoice cases before the Supreme Court is whether the First Amendment permits governments to set the rules by which platforms choose what messages to accept, reject, amplify, or deemphasize. Both states’ laws purport to require large social media platforms to be more speech-friendly. Texas, for example, prohibits platforms with more than 50 million users from taking down, deemphasizing, or otherwise discriminating against a user’s post because of the viewpoint it expresses. The law’s defenders insist that the state is simply seeking to require the major social media outlets to behave as if they were bound by the First Amendment and not to censor on the basis of viewpoint. Since these platforms have become perhaps the most important public forums of our time, the argument goes, they should be governed by the rules that have long applied to government-owned public forums (such as parks, sidewalks, and streets).

Florida’s law is a little different, but it too seeks to replace the private platforms’ editorial judgment with the views of the legislature. It requires favored treatment for political candidates during campaigns, as well as for “journalistic enterprises,” and otherwise it mandates that any content moderation must be conducted “consistently” under publicly available standards, with detailed justifications whenever a post is taken down or minimized.

The first difficulty with these laws is that social media platforms are private entities. As such, the First Amendment not only does not require them to be neutral or consistent toward speech, it affirmatively guarantees their right to be nonneutral and inconsistent. When The New York Times decides what articles to publish, it discriminates on the basis of content and viewpoint. When a bookstore decides what books to stock, it does so on the basis of content and viewpoint. And when a film production company decides which documentaries to produce, it too necessarily makes judgments on the basis of content and viewpoint. Editing and curating require content discrimination, and the First Amendment protects those decisions. The government cannot tell newspapers what to publish, bookstores what to sell, or filmmakers what movies to make. For the same reason, the First Amendment protects the editorial judgments of social media platforms.

Texas and Florida argue that these platforms are not themselves speaking but merely providing a venue for others’ speech. But that is also true of a bookstore, a newspaper’s op-ed page, and a film production company. These entities may not themselves be speaking, but they curate what they offer, and that curation necessarily involves decisions about the content and viewpoint of the speech they publish or disseminate. Every social media platform has a content moderation policy that guides what can and cannot appear on its site and what content it promotes or demotes. And it’s difficult to see why those editorial judgments should not be just as protected as a bookstore’s.

The Supreme Court in 1974 struck down a Florida state law that required newspapers to afford a “right of reply” to political candidates who received negative coverage in their pages. The Court reasoned that the First Amendment protects “the exercise of editorial control and judgment,” including “the choice of material” and the “treatment of public issues and public officials—whether fair or unfair.” The same principle ought to apply to social media platforms.

Defenders of the Florida and Texas social media laws cite another case in which the Supreme Court permitted some government control over broadcast television. There, it upheld the “fairness doctrine,” an FCC regulation requiring broadcast stations to cover public issues in a fair manner. But the Court justified that exception to the general hands-off rule on the grounds that access to radio and television broadcast frequencies was a scarce resource distributed by the government. The Court has already rejected taking a similar approach to the Internet, because there is nothing scarce about online speech opportunities.

Florida and Texas also argue that large social media platforms should be treated like common carriers, such as phone companies and Federal Express. Those entities are required by law to serve all comers, without assessment of content. But it is possible to run a phone or delivery service without judging the content of the messages or packages delivered. It is not possible to operate a social media platform without content moderation. And under the First Amendment, such judgment must be left in private, not government, hands.

Even if government-mandated viewpoint neutrality were constitutional, it would be a terrible idea. It would mean, for example, that a platform that allowed posts encouraging suicide awareness would also have to allow posts encouraging suicide. It would mean that if a platform published antiracist posts or messages condemning antisemitism, it would also have to publish racist taunts and “Genocide to the Jews.” It would bar platforms from taking down hate speech, for that is by definition a form of viewpoint discrimination. And it’s far from clear how one could possibly implement viewpoint neutrality across billions of posts daily.

So government-imposed content moderation is not the answer. But this doesn’t mean that there is nothing we can do to address the problems posed by social media. At the root of many concerns is the fact that a few companies control access to the major platforms. If there were fifty Facebooks, we would be less worried about the content moderation policies of any particular one. Antitrust enforcement and laws that promote competition (such as interoperability mandates, which enable new entrants to the field) would address the concentration of economic power without regulating speech, and should face few First Amendment obstacles. The United States is currently suing Google, for example, for antitrust violations with respect to its search engine.

But whether antitrust efforts succeed or not, the principal responsibility for reform of content moderation policies will have to lie with the platforms themselves. Tim Wu has urged the development of professional norms for social media companies, much as the press developed its own set of norms for ethical journalism. Were federal law to impose ethics rules on newspapers, those rules would almost certainly violate the newspapers’ rights of free speech and free press. But nothing stops the trade from developing its own standards—and it has, not because the government compelled it to do so but because its legitimacy demanded them.

Platforms face similar legitimation pressures. As Stanford law professor Evelyn Douek writes, because platforms’ content decisions are immune from government regulation or legal challenge, companies could as a legal matter make decisions about whether to take down content or bar a user based on wholly arbitrary practices. Facebook and Twitter “could determine the future of former president Trump’s account by coin-flip and no court would uphold any challenge.” “So why,” she asks, when they did bar Trump, “did they instead elect to write long, tortured blog posts or invoke elaborate procedures in trying to rationalize their decisions? The answer is both intuitive and seemingly irrational for profit-driven companies: Legitimacy matters.” Facebook, for example, has created an Oversight Board, composed of independent experts who review challenges to its content moderation decisions and issue lengthy opinions, often finding fault and urging reform.

What principles should guide platforms’ content moderation decisions? Whatever else one might say, they are not those the First Amendment imposes on government. Like newspapers and bookstores, social media platforms can refuse to publish or distribute content simply because they find it offensive, distasteful, false, or unworthy for virtually any other reason. The government, by contrast, cannot regulate speech on those grounds. And private platforms need not be content- or viewpoint-neutral; indeed, they cannot function without constantly making such judgments. Nor are they bound to publish all speech that is protected by the First Amendment. Platforms routinely bar nudity, pornography, hate speech, and support of terrorism and other violence. Yet virtually all such speech is protected under the First Amendment from government prohibition.

At the same time, all the large platforms allow users to post messages to the site without prior approval or endorsement. And while the platforms must curate the content on their sites to keep them from being overrun by offensive and irrelevant material, the presumption seems to be that they should err on the side of tolerating rather than suppressing speech. In practice, then, a norm of free access has prevailed for the vast majority of speakers, even as the platforms strive to keep some types of harmful material off their sites.

So, no, the First Amendment is not obsolete. It protects social media platforms, and their billions of users, from government censorship, as the Supreme Court is likely to rule in the NetChoice cases, and possibly from informal efforts by government officials to suppress content, as the Court may rule in the Missouri case. It protects access by citizens to at least some government officials’ websites. In short, it limits what the government can do with respect to social media—even when the government claims it is doing so in the name of free speech values.

But in addition, the First Amendment’s spirit, if not its letter, guides the platforms themselves as they strive to provide an open forum, engaging in as little moderation as necessary to keep the sites useful. The fact that powerful private corporations control much of what we see and hear is nothing new. It’s just that new private entities have entered the field. Social media, like more traditional media, is affirmatively protected by the First Amendment, and as a practical matter the platforms are critical to ensuring that we actually have something approximating what the Supreme Court in New York Times v. Sullivan described as “uninhibited, robust, and wide-open” debate. It’s messy. It’s far from ideal. It will sometimes mean that people are exposed to communication that deeply offends them, and that some voices and messages will get more amplification than others. But that is the price of freedom—now as much as it was in James Madison’s day.


This article was originally published online February 23, 2024.