Dangerous waters at the Supreme Court today
On social media regulation, SCOTUS must avoid extremes
Right now, the Supreme Court is hearing arguments in two cases on whether state governments can restrict social media platforms’ ability to remove accounts and moderate content.
After Facebook, Twitter, YouTube, and other social media companies deplatformed then-President Donald Trump in January 2021, Florida and Texas passed restrictive laws in response. Three years later, those laws are now before the Supreme Court.
Whichever way the Court rules, it should tread cautiously, because how it fells, prunes, or upholds these statutes matters. Dangers lurk at either end of the spectrum: both the platforms’ and the states’ most extreme positions leave room for social media to be manipulated in ways that undermine our democracy.
To see why, imagine two scenarios. In the first, the owner of a major social media platform, keen to help their preferred political candidate, deplatforms the opposing candidate weeks before an election. In the second, in an effort to sway an election, a state attorney general files criminal charges against a major platform that has taken steps to stop the spread of false rumors that would boost the attorney general’s preferred candidate.
If the Court adopts the platforms’ approach wholesale, the government would be powerless to intervene in the first scenario, because the Constitution (in their view) bars nearly all regulations on the content they carry. On the flip side, if the Court sides fully with the Fifth Circuit or Florida, there is little to stop Orwellian speech-policing by government officials, because the Fifth Circuit held, and Florida argues, that platforms have no First Amendment right to moderate content.
There are three main issues the Court must navigate carefully to chart a course between these extreme positions.
First, the Court must be precise about the limits of large platforms’ First Amendment rights. Platforms undoubtedly have a First Amendment right to curate content. But the largest platforms’ discretion cannot be wholly unconstrained.
Large social media platforms are the new hubs of our digital democracy. Where citizens might once have encountered one another in person, they now meet and communicate online, and platforms should not have full control over who participates in those conversations and how. As University of Chicago Professor Genevieve Lakier and others point out, if the country’s largest platforms decided tomorrow to ban users based on their race, the government could and should be able to extend some version of public accommodations protections to them, like those that protect access to bus stations, shops, restaurants, theaters, and other public meeting places. Whatever newspaper-like First Amendment rights large platforms possess, the Court should make clear that those rights do not fully insulate the largest platforms from narrow access and safety regulations.
Second, the Court should reject the platforms’ contention that the government has no interest in protecting access to speech online. The Supreme Court has long recognized that the government has a legitimate interest in preventing non-government actors from repressing speech. That interest is particularly strong for speech by political candidates, and there is a long and continuing history of protecting access to it on broadcast radio and television stations.
The platforms argue that laws designed to protect access to speech do not apply online because, unlike broadcast stations and cable companies, social media companies have no physical control over access to expression. That argument ignores a fundamental shift in our information ecosystem: we have moved from a content-scarce, attention-rich world to a content-rich, attention-scarce world, one in which social media platforms’ greatest asset is their access to their audience. Does anyone really doubt that the largest social media platforms exercise as much “bottleneck control” over what speech we are exposed to as the radio station operators of the 1930s, the television broadcasters of the 1960s, or the cable companies of the 1990s, if not more? Today, when most people get their news online, it would make no sense for the government to be able to require radio broadcasters to carry candidate speech but not to require the same of the largest social media platforms. Surely the government should be able to ensure that candidates have access to those platforms, though such access rules will need to be narrowly tailored to this new medium.
Third, the Court should be mindful of the effects of permitting fifty states to establish nationwide, burdensome social media regulations. Just two weeks ago, the Court expressed grave concerns about the chaos that might ensue if each state were allowed to decide whether Mr. Trump is disqualified from office. Similar concerns loom large in Monday’s cases. Allowing any one state to dictate the terms of public debate on a major social media platform could create a race to the bottom, with the most interventionist state governing discourse for the whole country.
However the Court rules, it would be wise to heed Justice Robert Jackson’s words and temper any “doctrinaire logic with a little practical wisdom.” Deciding that social media platforms have no right to independently curate content would enable opportunistic officials to bend the nation’s largest channels of communication to their own ends. Ruling instead that the largest platforms have carte blanche over the largest forums for online conversation would give them too much power to manipulate political debate and risk insulating them from even narrow public accommodations and safety regulations. The First Amendment compels neither result, and a healthy, sustainable democracy demands a middle path.