One of the many ways in which President Donald Trump has bent the boundaries of conventional norms of presidential behavior is his use of Twitter as a vehicle for both presidential pronouncements and more troubling forms of extreme political rhetoric and race-baiting. While it is not the most immediately troubling violation of presidential norms, Trump’s use of Twitter poses an important test for the future of our digital public sphere. In a landmark case, the Second Circuit upheld a lower court ruling that the President violated the First Amendment when he blocked users who criticized his policies from his Twitter account.
Writing for the panel, Judge Barrington D. Parker agreed with plaintiffs that President Trump’s Twitter account was a “public forum” within the meaning of the First Amendment and that, by blocking critics from participating on his feed, he engaged in unconstitutional viewpoint discrimination. Judge Parker limited the holding to the facts of the case, noting that “[w]hether First Amendment concerns are triggered when a public official uses his account in ways that differ from those presented on this appeal” would depend on an official’s specific use and presentation of her account, thus avoiding a ruling on whether Twitter itself is a “public forum” for purposes of the First Amendment. But what the Knight Institute case opens up is a much broader discussion that academics have been advancing for a while: how to legally structure and regulate our increasingly digital public sphere. This debate will ultimately have to include, and extend beyond, the kinds of First Amendment expansions contemplated in the Knight case.
As the court reasoned in Knight Institute, the President deploys his Twitter handle the way conventional presidents have used a White House press briefing: as a means of conveying public and policy pronouncements. Even though the handle is formally the private expression of a private individual (Trump of course uses his own personal handle, not the official White House Twitter handle), the court found it persuasive that the account functionally served as a public forum in this context. What is most compelling about this line of reasoning is that it brings First Amendment concerns, values, and legal tools to bear on the new online systems that are increasingly the foundation of our speech infrastructure. But at the same time, there is a real concern that existing First Amendment tools may be of limited use in fully addressing the challenges posed by the online public sphere.
First, the move to treat Twitter as a public forum, even in the limited context of political leaders blocking followers, is fraught. The holding can cut in multiple directions. Since the decision, lawsuits have already been filed against Alexandria Ocasio-Cortez by two different New York politicians she blocked from her Twitter account. And the restriction on blocking followers could have other consequences, particularly for women and leaders of color online, who may be subjected to more aggressive forms of online harassment. Moreover, if Facebook, Twitter, and Google are deemed public forums for broader First Amendment purposes, it would become very difficult for them to engage in heavier forms of content curation and editorial oversight.
More broadly, there is a question, as Tim Wu has provocatively put it, whether the First Amendment itself is “obsolete” in the modern era. As Wu has argued, we are no longer in an era of state suppression of speech—the conditions that gave rise to our current doctrine. Rather, we are in a more complex hybrid situation of too much information, and of private actors whose for-profit business models, premised on maximizing user attention and interaction to sell ads, create a range of incentives and practices that are problematic for democratic deliberation. As Frank Pasquale has argued, this attention- and ad-based business model makes virality the “metric of online success.” In effect, this undermines democratic discourse, shifting the operation of our public square so that it rests on automated for-profit considerations rather than on what would best serve democratic debate. Zeynep Tufekci has similarly noted that this apparently “golden age” of free speech is in fact an era in which social media platforms, by virtue of their microtargeting algorithms and susceptibility to misinformation, have created a “phantom public sphere,” where weaponized misinformation and tightly coordinated minority communities can exercise outsized influence on public discourse. In this context, as Tufekci has argued, censorship and the harms to free speech take a different form, arising not from open state suppression but through “epidemics of disinformation,” as bots, trolls, and information leaks hijack the attention spans of traditional media to “undercut the credibility of valid information sources.”
Twitter, Facebook, Google, and YouTube thus function as critical speech intermediaries that effectively control the terms of our (online) speech—yet they manage this speech through a set of principles and practices that look very different from First Amendment norms. Cases like Knight Institute thus provide an important first step in reconciling the First Amendment with this new digital public sphere. But the development of a modern legal regime for this digital public sphere will likely require other tools to be brought into the mix: not just formal First Amendment doctrine, but regulatory and legislative tools as well. Just as the Federal Communications Commission and the legal regime built around radio and broadcast were critical to realizing a mid-century vision of communication and speech, so too do we need a modern-day regulatory regime for the digital public sphere.
A broader legal regime for online speech will have to reckon with the fact that social media platforms operate as our foundational modern communications infrastructure. To gain traction on this reality, a better analogy for platforms is not the First Amendment per se, but rather the public utility tradition of public law. Over a century ago, as the country grappled with the rise of industrialization and new forms of privately controlled infrastructure—from railroads to water to communications to transit—progressive reformers pioneered the regulation of these entities as public utilities. For these reformers, such private actors, by virtue of their control over critical public services, had accumulated outsized power and influence. But because of the importance of those goods and services, these actors had to be regulated and overseen by government to ensure that they served the public good. This idea of the public utility offers a compelling analogy for new modern utilities, including internet platforms. Like traditional utilities, internet platforms are the key infrastructure for the critical public services of news, information, and speech itself.
The public forum doctrine expanded in Knight Institute represents a kind of public utility regulation extended to platforms through the First Amendment. In the longer run, this public utility analogy for platforms suggests at least three additional forms of regulation that might be needed, through statutory and regulatory vehicles rather than constitutional ones.
First, platforms will have to develop some equivalent of neutrality and common carriage norms. As with classic public utilities, there need to be protections for universal access, prohibiting the blocking of disfavored users or content and hidden forms of prioritization. This idea of common carriage is most directly analogous to the holding of the Knight case.
Second, we should consider regulations that impose firewalls: restrictions on the powers of these private firms so that they are no longer incentivized to self-deal or commingle business models. Consider, for example, how information platforms like Google have incentives to put a thumb on the scale for their own products, or for information that serves paid clients—the same fear that drove the net neutrality debate. Shifting these incentives would help mitigate some of the pathologies of the attention- and ad-based platform business model noted above.
Finally, we might contemplate even bolder structural reforms that shift the incentives for these platforms to traffic in misinformation, virality, and other pathologies. The attention-based business model is at the heart of this problem, and proposals ranging from algorithmic accountability for platforms to a data tax are only in the early stages of development.
These policies will take more research and debate. The Knight case by itself does not solve the pathologies of the digital public sphere. But it is an important opening salvo in a longer legal fight over our digital public sphere—and over the urgent need to update and reimagine how our constitutional, statutory, and regulatory regimes do or do not serve the foundational democratic need for an equitable, inclusive, and responsive public sphere.