The Yale Law Journal
Volume 127 (2017-2018)
Forum

Real Talk About Fake News: Towards a Better Theory for Platform Governance

09 Oct 2017

Following the 2016 U.S. presidential election, “fake news” has dominated popular dialogue and is increasingly perceived as a unique threat to an informed democracy. Despite the common use of the term, it eludes common definition.1 One frequent refrain is that fake news—construed as propaganda, misinformation, or conspiracy theories—has always existed,2 and therefore requires no new consideration. In some ways this is true: tabloids have long hawked alien baby photos and Elvis sightings. When we agonize over the fake news phenomenon, though, we are not talking about these kinds of fabricated stories.

Instead, what we are really focusing on is why we have been suddenly inundated by false information—purposefully deployed—that spreads so quickly and persuades so effectively. This is a different conception of fake news, and it presents a question about how information operates at scale in the internet era. And yet, too often we analyze the problem of fake news by focusing on individual instances,3 not systemic features of the information economy. We compound the problem by telling ourselves idealistic, unrealistic stories about how truth emerges from online discussion. This theoretical incoherence tracks traditional First Amendment theories, but leaves both users and social media platforms ill-equipped to deal with rapidly evolving problems like fake news.

This rupture gives us an excellent opportunity to reexamine whether existing First Amendment theories adequately explain the digital public sphere. This Essay proceeds in three Parts: Part I briefly outlines how social media platforms have relied piecemeal on three discrete theories justifying the First Amendment—the marketplace of ideas, autonomy and liberty, and collectivist views—and why that reliance leaves platforms ill-equipped to tackle a problem like fake news. Part II then takes a descriptive look at several features that better describe the system of speech online, and how the manipulation of each feature affects the problem of misinformation. Finally, Part III concludes with the recommendation that we must build a realistic theory—based on observations as well as interdisciplinary insights—to explain the governance of private companies who maintain our public sphere in the internet era.

I. Moving Beyond the Marketplace

As a doctrinal matter, the First Amendment restricts government censorship, but as a social matter, it signifies even more.4 As colloquially invoked, the “First Amendment” channels a set of commonly held values that are foundational to our social practices around free speech. When, for example, individuals incorrectly identify criticism as “violating First Amendment rights,” they actually seek to articulate a set of values crucial to the public sphere, including the ability to express and share views in society.5 The First Amendment shapes how we imagine desirable and undesirable speech. So conceived, it becomes clear that our courts are not the only place where the First Amendment comes to life.

One implication of this understanding is that First Amendment theory casts a long shadow, which even private communications platforms6—like Facebook, Twitter, and YouTube—cannot escape. Internet law scholar Kate Klonick deftly illustrates how these three private platforms should be understood as self-regulating private entities, governing speech through content moderation policies:

A common theme exists in all three of these platforms’ histories: American lawyers trained and acculturated in First Amendment law oversaw the development of company content moderation policy. Though they might not have ‘directly imported First Amendment doctrine,’ the normative background in free speech had a direct impact on how they structured their policies.7

But First Amendment thinking comes in several flavors. Which of these visions of the First Amendment have platforms embraced?

A. Existing First Amendment Theories

Three First Amendment theories predominate: the marketplace of ideas, autonomy, and collectivist theories. However, as this Section demonstrates, none of these fully captures online speech.

One option is the talismanic “marketplace of ideas.” Recognized as the “theory of our Constitution,” the marketplace metaphor imagines that robust engagement with a panoply of ideas yields the discovery of truth—eventually.8 “More speech” should be the corrective to bad speech like falsehoods.9 This vision predictably tilts away from regulation, on the logic that intervention would harm the marketplace’s natural and dynamic progression.10 That progression involves ideas “competing” in the marketplace, a conception with two fundamental shortcomings, each relevant in an era of too much available information: What happens when individuals do not interact with contrary ideas because they are easy to avoid? And what happens when ideas are not heard at all because there are too many?

The marketplace also does not neatly address questions of power, newly relevant in the internet era. The marketplace metaphor sprang forth at a time when the power to reach the general population through “more speech” was confined to a fairly homogenous, powerful few. Individuals may have had their own fiefdoms of information—a pulpit, a pamphlet—but communicating to the masses was unattainable to most. Accordingly, the marketplace never needed to address power differentials when only the powerful had the technology to speak at scale. The internet, and particularly social media platforms, has radically improved the capabilities of many to speak, but the marketplace theory has not adjusted. For example, how might the marketplace theory address powerful speakers who drown out other voices, like Saudi Arabian “cyber troops” who flood Twitter posts critical of the regime with unrelated content and hashtags to obscure the offending post?11 As adopted by the platforms, the marketplace theory offers no answer. Put differently, the marketplace-as-platform theory only erects a building; there are no rules for how to behave once inside. This theory yields little helpful insight for a problem like fake news or other undesirable speech.

A second, related vision explains First Amendment values through the lens of individual liberty.12 What counts here is only the “fundamental rule” that “a speaker has the autonomy to choose the content of his own message” because speech is a necessary exercise of personal agency.13 All that matters is that one can express herself. Naturally, this theory also creates a strong presumption against centralized interference with speech.14 While certainly enticing—and conveniently neutral for social media platforms interested in building a large user base—this theory is piecemeal. Focusing only on the self-expressive rights of the singular speaker offers no consideration of whether that speech is actually heard. It posits no process through which truth emerges from cacophony. In fact, it is not clear that fake news, as an articulation of one’s self-expression, would even register as a problem under this theory.

Third, and far less fashionable, is the idea that the First Amendment exists to promote a system of political engagement.15 This “collectivist,” or republican, vision of the First Amendment considers more fully the rights of citizens to receive information as well as the rights of speakers to express themselves. Practically and historically, this has meant a focus on improving democratic deliberation: for example, requiring that broadcasters present controversial issues of public importance in a balanced way, or targeting media oligopolies that could bias the populace. This theory devotes proactive attention to the full system of speech.16

The republican theory, which accounts for both listeners and speakers, offers an appealingly complete approach. The decreased costs of creating, sharing, and broadcasting information online mean that everyone can be both a listener and a speaker, often simultaneously, and so a system-oriented focus seems appropriate. But the collectivist vision, like the marketplace and autonomy approaches, is still cramped in its own way. The internet—replete with scatological jokes and Prince cover songs—involves much more than political deliberation.17 And so any theory of speech that focuses only on political outcomes will fail because it cannot fully capture what actually happens on the internet.

B. Which First Amendment Vision Best Explains Online Speech?

Online speech platforms—bound neither by doctrine nor by any underlying theory—have, in practice, fused all three of these visions together.

At their inception, many platforms echoed a libertarian, content-neutral ethos in keeping with the marketplace and autonomy theories. For example, Twitter long ago declared itself to be the “free speech wing of the free speech party,” shying away from policing user content except in limited and extreme circumstances.18 Reddit similarly positions itself as a “free speech site with very few exceptions,” allowing communities to determine their own approaches to offensive content.19 Mark Zuckerberg’s argument that “Facebook is in the business of letting people share stuff they are interested in” presents an autonomy argument if ever there were one.20 In the wake of the 2015 Charlie Hebdo attacks in Paris, Zuckerberg specifically vowed not to bow to demands to censor Facebook,21 and he renewed that commitment in 2016, when he explained to American conservative leaders that Facebook was “a platform for all ideas.”22 Taken at face value, these positions suggest that platforms offer little recourse in response to undesirable speech like hate speech or fake news.

Platforms have also, however, long invoked the language of engagement, albeit not political engagement. They have long governed speech by reference to their communities and through user guidelines that prohibit certain undesirable, but not illegal, behavior.23 For example, Reddit, which otherwise claims a laissez-faire approach to moderation, collaborates on moderation with a number of specific communities—including r/The_Donald, a subcommunity that vehemently and virulently supports the forty-fifth President of the United States.24 YouTube prohibits the posting of pornography; at Facebook, community standards ban the posting of content that promotes self-injury or suicide.25 None of their content policies stem from altruism. If users dislike the culture of a platform, they will leave and the platform will lose. For exactly that reason, platforms have taken measures—of varying efficacy—to police spam and harassment,26 and in doing so to build a culture most amenable to mass engagement.

Ultimately, ambiguity serves neither the platforms nor their users. To users, hazy platform philosophy obscures any meaningful understanding of how platforms decide what is acceptable. Many wondered, in the wake of a recent leak, why Facebook’s elaborate internal content moderation rules justified deleting hate speech against white men but allowed hate speech against black children to remain online.27 To platforms, philosophical indeterminacy over speech theories means there are few guiding stars to help navigate high-profile and rapidly evolving problems like fake news.

Moreover, even taking existing First Amendment theories separately, the fake news phenomenon illustrates how each theory fails to account for conspicuous phenomena that affect online speech. The marketplace theory, for example, fails to account for how easily accessible speech from many actors might undermine the central presumption that competition among ideas yields truth; the autonomy theory ignores that individuals are both speakers and listeners online; and the republican theory, in focusing only on political exchanges, casts aside much of the internet. All fail to account for how speech flows at a global and systemic scale, possibly because such an exercise would have been arduous if not impossible before social media platforms turned ephemeral words into indexed data.

These previously ephemeral interactions are now observable at a level of granularity that can enable new theories about how speech works globally at the systemic level. What insights might emerge if we examined the operation of that system from a descriptive standpoint? In the next Section, I will identify several systemic features of online speech, with a particular focus on how they are manipulated to produce fake news.

II. What Does the System Tell Us About Fake News?

As the notable First Amendment and internet scholar Jack Balkin cautioned in 2004, “in studying the Internet, to ask ‘What is genuinely new here?’ is to ask the wrong question.”28 What matters instead is how digital technologies change the social conditions in which people speak, and whether that makes more salient what has already been present to some degree.29 By focusing on what online platforms make uniquely relevant, we can discern social conditions that influence online speech, both desirable and not.

Below, I offer five newly conspicuous features that shape the ecosystem of speech online. The manipulation of each of these features exacerbates the fake news problem, but, importantly, none is visible—or addressable—under the marketplace, autonomy, or collectivist views of the First Amendment.

A. Filters

An obvious feature of online speech is that there is far too much of it to consume. Letting a thousand flowers bloom has an unexpected consequence: the necessity of information filters.30

The networked, searchable nature of the internet yields two interrelated types of filters. The first is what one might call a “manual filter,” or an explicit filter, like search terms or Twitter hashtags. These can prompt misinformation: for example, if one searches “Obama birthplace,” one will receive very different results than if one searches “Obama birth certificate fake.” Manual filters can also include humans who curate what is accessible on social media, like content moderators.31

Less visible are implicit filters: for example, algorithms that either observe your behavior automatically or adjust based on how you manually filter. Such filters explain how platforms decide what content to serve an individual user, with an eye towards maximizing that user’s attention to the platform. Ev Williams, co-founder of Twitter, describes this process as follows: if you glance at a car crash, the internet interprets your glancing as a desire for car crashes and accordingly attempts to supply car crashes to you in the future.32 Engaging with a fake article about Hillary Clinton’s health, for example, will supply more such content to your feed through the algorithmic filter.

That suggested content, sourced through the implicit filter, might also become more extreme. Clicking on the Facebook page designated for the Republican National Convention, as BuzzFeed reporter Ryan Broderick learned, led the “Suggested Pages” feature to recommend white power memes, a Vladimir Putin fan page, and an article from a neo-Nazi website.33 It is this algorithmic pulling to the poles, rooted in a benign effort to keep users engaged, that unearths fake news otherwise relegated to the fringe.

B. Communities

Information filters, like the ones described above, have always existed in some form. We have always needed help in making sense of vast amounts of information. Before there were algorithms or hashtags, there were communities: office break rooms, schools, religious institutions, and media organizations are all types of community filters. What has changed, however, is that digital communities can easily transcend the barriers of physical geography. The internet is organized in part by communities of interest, and information can thus be consumed within and produced by communities of distant but like-minded members. Both sides of this coin matter, especially for fake news.

Those focused on information consumption have long observed that filters can feed insular “echo chambers,” further reinforced by algorithmic filtering.34 Even if you are the only person you personally know who believes that President Barack Obama was secretly Kenyan-born, you can easily find like-minded friends online.

Notably, individuals also easily produce information, shared in online communities built around affinity, political ideology, hobbies, and more. At its best, this capability helps to remedy the historic shortcomings of traditional media: as danah boyd points out, traditional media outlets often do not cover stories like the protests in Ferguson, Missouri, in 2014, the Dakota Access Pipeline protests, or the disappearance of young black women until far too late.35 At its worst, the capability to produce one’s own news can cultivate a distrust of vaccines or nurture rumors about a president’s true birthplace. Through developing their own narratives, these communities create their own methods to produce, arrange, discount, or ignore new facts.36 So, even though a television anchor might present you with a visual of Obama’s American birth certificate, your online community—composed of members you trust—can present to you alternative and potentially more persuasive perspectives on that certificate.37

Taken together, this creates a bottom-up dynamic for developing trust, rather than focusing trust in top-down, traditional institutions.38 In turn, that allows communities to make their own cloistered and potentially questionable decisions about how to determine truth—an ideal environment to normalize and reinforce false beliefs.

C. Amplification

The amplification principle explains how misinformation cycles through filters and permeates communities, all fueled by the cheap, ubiquitous, and anonymous reach of the internet. Amplification happens in two stages: first, when fringe ideas percolate in remote corners of the internet, and second, when those ideas seep into mainstream media.

Take, for example, the story of Seth Rich, a Democratic National Committee staffer found murdered in what Washington, D.C. police maintain was a botched robbery. With little fanfare, WikiLeaks alluded to a connection between his death and the possibility that he had leaked to the organization.39 Weeks later, however, a local television affiliate in D.C. reported that a private investigator was looking into whether the murder was related to Rich allegedly providing email hacks of the Democratic National Committee to WikiLeaks.40 Message boards on 4chan, 8chan, and Reddit grasped at these straws, launching their own vigilante investigations and further inquiries.41 This is the first stage of amplification.

The second stage begins when those with a louder bullhorn observe the sheer volume of discussion, and the topic—true or not—becomes newsworthy in its own right. In the case of Rich, this happened when a number of prominent and well-networked individuals on Twitter circulated the conspiracy to their hundreds of thousands of followers using the hashtag #SethRich. That drew the attention of Fox News and its pundits, whose followers number in the millions, and in turn Breitbart and Drudge Report, which seed hundreds of blogs and outlets.42

The amplification dynamic matters for fake news in two ways. First, it reveals how online information filters are particularly prone to manipulation—for example, by getting a hashtag to trend on Twitter, or by seeding posts on message boards—through engineering the perception that a particular story is worth amplifying. Second, the two-tier amplification dynamic uniquely fuels perceptions of what is true and what is false. Psychologists tell us that listeners perceive information not only logically, but through a number of “peripheral cues” that signal whether information should be trusted. Cues can include whether the speaker is reliable (why trust in the source of information matters),43 a listener’s prior beliefs (why one’s chosen communities matter),44 and, most notably, the familiarity of a given proposition (why one’s information sources matter).45 That last point is crucial here: individuals are more likely to view repeated statements as true. (Advertising subsists on this premise: of course you will purchase the detergent you have seen before.)

Imagine, then, how many times a listener might absorb tidbits of the Seth Rich story: on talk radio on the way to work, through water cooler chat with a Reddit-obsessed co-worker, scrolling through Facebook, a scan of one’s blogs, a group text, pundit shows promoting the conspiracy, or on the local television’s evening news debunking it. Manifesting on that many platforms will, psychological research informs us, command attention and persuade. Even when something is as demonstrably bankrupt as the Seth Rich conspiracy, the false headline will be rated as more accurate than unfamiliar but truthful news.46

D. Speed

The staggering pace of sharing, and how it influences amplification, is particularly critical for understanding the spread of fake news.

Platforms are designed for fast, frictionless sharing. This function accelerates the amplification cycle explained above, but also targets it for maximum persuasion at each step. For example—before it was effectively obliterated from the internet47—a popular neo-Nazi blog called The Daily Stormer hosted a weekly “Mimetic Monday,”48 where users posted dozens of image macros—the basis of memes—to be shared on Facebook, Twitter, Reddit, and other platforms.49 Witty and eye-catching, if frequently appalling, macros like these allow rapid experimentation with talking points and the planting of ideas. Such efforts were responsible for spreading misinformation about French President Emmanuel Macron before the 2017 election.50 This experimental factory is called “shitposting,”51 and the fast, frictionless sharing across platforms is the machinery that helps the factory distribute at scale. Before social media platforms, this type of experimentation would have been phenomenally slow, or required resource-intensive focus groups.

Memes are a convenient way to package this information for distribution: they are easily digestible, nuance-free, scroll-friendly, and replete with community-reinforcing inside jokes. Automated accounts known as “bots”—whether directed by governments52 or by individuals like the “Trumpbot overlord” named “Microchip”—are also often credited with circulating misinformation, because of how well they can trick algorithmic filters by exaggerating a story’s importance.53

Bots, however, are not the only ones to blame for rapid distribution. Almost sixty percent of readers share links on social media without even reading the underlying content.54 Sharing on platforms is not only an exercise in communicating rational thought, but also a way of signaling ideological and emotional affinity.

This explains, in part, why responses debunking fake news do not travel as quickly. For example, if one clicks on a story because one is already ideologically inclined to believe in it, there is less interest in the debunking—which likely means that the debunking would not even surface on one’s feed in the first instance. It also explains why certain false ideas are so persistent: they are designed, in an effective and real-time laboratory, to be precisely that way.

E. Profit Incentives

Social media platforms make fake news uniquely lucrative. Advertising exchanges compensate on the basis of clicks for any article, which creates the incentive to generate as much content as possible with as little effort as possible. Fake news, sensational and wholly fabricated, fits these straightforward economic incentives. This yields everything from Macedonian teenagers concocting stories about the American election55 to make-your-own-fake-news generators whose users falsely claimed that specific Indian restaurants in London had been caught selling human meat.56 These types of websites, particularly those that are hyperpartisan and thus primed to attract attention, have exploded in popularity: A BuzzFeed News study found that over one hundred new pro-Trump digital outlets were created in 2016.57

There are two noteworthy elements to this uptick. First, the mechanics of advertising on these platforms facilitate the distribution of fake content: there is no need for a printing press, delivery trucks, or access to airtime. Cheap distribution means more money, only strengthening the incentive. Second, platforms render the appearance of advertisements and actual news almost identical.58 This further blurs the line between what is financially motivated and what is not.

III. Toward a More Robust Theory

Thinking in terms of the full system of speech—that is, considering filters, communities, amplification, speed, and profit incentives—gives us a far more detailed portrait of how misinformation flourishes online. It also provides a blueprint for what platforms are doing to curb fake news, all of which would make little sense under the more traditional theories described in Part I.

For example, platforms have exercised their ubiquitous filtering capabilities to target fake news. Google recently retooled its search engine to try to prevent conspiracy and hoax sites from appearing in its top results,59 while YouTube decided that flagged videos containing controversial religious or supremacist content will be placed in a limited state in which they cannot be suggested or recommended to other users, monetized, or given comments or likes.60 And Facebook has partnered with fact-checkers to flag conspiracies, hoaxes, and fake news; flagged articles are less likely to surface on users’ news feeds.61 These tweaks, at least conceptually, should influence the algorithmic filters that yield information.

Similarly, Facebook has overtly recognized that speed and amplification can contribute to misinformation. It now deprioritizes links that are aggressively shared by suspected spammers, on the theory that these links “tend to include low quality content such as clickbait, sensationalism, and misinformation.”62 Facebook is also launching features that push users to think twice before sharing a story, by juxtaposing their link with other selected “Related Articles.”63 Twitter specifically targets bots, looking for those that may game its system to artificially raise the profile of misinformation and conspiracies.64

Recognizing the profit element, Google and Facebook have both barred fake news websites from using their respective advertising programs.65 Facebook has also eliminated the ability to spoof the domains of real publications, a tactic used to profit from readers who click through to the underlying sites, which are replete with ads.66 This may speak to profit-oriented fake news, but not to propaganda and misinformation that is fueled by nonfinancial incentives.67

These systemic features can also help us interrogate concepts whose definitions have long been assumed. Take, for example, the concept of censorship. Traditionally, and in the speaker-focused marketplace and autonomy theories, censorship evokes something very specific: blocking the articulation of speech. As prominent sociologist Zeynep Tufekci argues, however, censorship now operates via information glut—that is, drowning out speech instead of stopping it at the outset.68 As with the Saudi Arabian example referenced above, Tufekci points to the army of internet trolls deployed by the Chinese and Russian governments to distract from critical stories and to wear down dissenters through the manipulation of platforms.69 If platforms are the epicenters of this new censorship, misinformation is the method: the point of censorship by disinformation is to destroy attention as a key resource.70 What results, Tufekci explains, is a “frayed, incoherent, and polarized public sphere that can be hostile to dissent.”71 This all becomes visible when information filters are taken into account.

It would be easy to conclude that platforms—best positioned to address the aforementioned features—should alone shoulder the burden to prevent fake news. But asking private platforms to exercise unilateral, unchecked control to censor is precarious.72 Few factors would constrain possible abuses. For example, Jonathan Zittrain raises the possibility of Facebook manipulating its end-users by using political affiliation to alter voting outcomes—something that could be impervious to liability as protected political speech.73 No meaningful accountability mechanism exists for these platforms aside from public outcry, which relies on intermediaries to divine what platforms are actually doing. And yet, the other extreme—a content-neutral and hands-off approach—offers empty guidance in the face of organized fake news or other forms of manipulation.

Instead, we must collectively build a theory that accounts for these shifting sands, one that provides workable ideals rooted in reality. Scaffolding for that theory can be found in what Balkin has termed the “democratic culture” theory, which seeks to ensure that each individual can meaningfully participate in the production and distribution of culture.74 A focus on culture, not politics, does more than remedy the central gap of the collectivist view while maintaining its system-wide focus. It also helps us expand our focus beyond legal theory to relevant disciplines like social psychology, sociology, anthropology, and cognitive science. For example, once we understand amplification as a relevant concept, we should account for the psychology of how people actually come to believe what is true—not only through rational deliberation, but also by using familiarity and in-group dynamics as a proxy for truth. Building on this frame will require more meaningful information from the platforms themselves.

A clear theory is more important now than ever. For one, a functioning theory can bridge the widening gap between what a platform permits and what the public expects. Practically, an overarching theory can also help navigate evolving social norms. Platforms make policy decisions based on contemporary norms: for example, until recently choosing to target and take down accounts linked to foreign terrorists but not those linked to white nationalists and neo-Nazis, even though both types of organizations perpetuate fake news domestically.75 We have to understand that definitions of tricky and dynamic concepts, like fake news, are constructed, culturally contingent, and capable of evolution. Finally, and crucially, we need a theory to help direct and hold accountable the automated systems that increasingly govern speech online. These systems will embed cultural norms into their design, and enforce them through implicit filters we cannot see. Only with a cohesive theory can we begin to resolve the central conundrum confronting social media platforms: they are private companies that have built vast systems sustaining the global, networked public square, which is the root of both their extraordinary value and their damnation.

Nabiha Syed is an assistant general counsel at BuzzFeed, a visiting fellow at Yale Law School, and a non-resident fellow at Stanford Law School. All of my gratitude goes to Kate Klonick, Sabeel Rahman, Sushila Rao, Noorain Khan, Azmat Khan, Emily Graff, Alex Georgieff, Sara Yasin, Smitha Khorana, and the staff of the Yale Law Journal Forum for their incisive comments and endless patience, and to Nana Menya Ayensu, for everything always, but especially the coffee.

Preferred Citation: Nabiha Syed, Real Talk About Fake News: Towards a Better Theory for Platform Governance, 127 Yale L.J. F. 337 (2017), http://www.yalelawjournal.org/forum/real-talk-about-fake-news.