The Yale Law Journal
Volume 126, Number 4
February 2017

Privacy’s Trust Gap: A Review

Obfuscation: A User’s Guide for Privacy and Protest

By Finn Brunton and Helen Nissenbaum

Cambridge and London: The MIT Press, 2015

authors. Neil Richards is Thomas and Karole Green Professor of Law, Washington University School of Law; Affiliate Scholar, The Center for Internet and Society at Stanford Law School; and Affiliated Fellow, Yale Information Society Project. Woodrow Hartzog is Starnes Professor of Law, Samford University’s Cumberland School of Law; and Affiliate Scholar, The Center for Internet and Society at Stanford Law School. For helpful suggestions and conversations, we would like to thank Frederik Borgesius, Brannon Denning, and Evan Selinger. We are especially indebted to Nico van Eijk, whose invitation for us to participate in the 2015 Amsterdam Privacy Conference at the Institute for Information Law (IViR) at the University of Amsterdam was the genesis of this Book Review.


Introduction

It can be easy to get depressed about the state of privacy these days. In an age of networked digital information, many of us feel disempowered by the various governments, companies, and criminals trying to peer into our lives to collect our digital data trails.1 When so much is in flux, the way we think about an issue matters a great deal. Yet while new technologies abound, our ideas and thinking—as well as our laws—have lagged in grappling with the new problems raised by the digital revolution.

Reading between the lines in the debate over surveillance and data collection, it is easy to think that protecting privacy is all on you. Most privacy discussion is framed in individualistic terms. For example, we talk about an individual’s “right to privacy” and whether that individual right has any meaning any more. Policymakers fight for a person’s “individual control” over personal information. Companies promise to give consumers “personal choice” to empower their personal preferences about how their information is collected, used, and shared. It is as though we are all islands, each waiting to exercise our individual ability to protect our privacy against those who would surveil us, whether they are private companies or government agents.

Thinking of privacy as an issue of personal choice, preferences, and responsibility has powerful appeal. It resonates with American ideals of individualism, democracy, and consumerism. It flatters our sense of autonomy and accommodates our diverse notions of privacy and preferences for disclosure. For instance, you might not want to broadcast the details of your life on Instagram or Snapchat, but others might. Individualistic notions of privacy lead us to favor solutions that help us choose and put us in control of our own unique lives.

Yet there is a problem with this view of the digital world, and it is a problem of power. In the digital economy, the real power is not held by individual consumers and citizens using their smartphones and laptops to navigate the twists and turns of their lives, but by the large government and corporate entities who monitor them. The digital consumer is not like the classic American myth of the cowboy, a rugged and resilient island of autonomy set against the backdrop of the digital frontier. On the contrary, she is increasingly disempowered, marginalized, and subject to monitoring and sorting by powerful institutions about whose existence she may not know, and whose activities she may not be able to resist. In the digital world, we may heap responsibility on individual users of technology, but they lack options for protecting themselves.2 This is another form of the “digital divide”—it is not merely that some people have access to technology while others do not, but that most people are vastly less powerful than the government and corporate institutions that create and control digital technologies and the personal data on which those technologies run.

If the monitored are responsible for protecting themselves, one possible strategy is to obscure their tracks, thereby turning the digital realm into a big game of hide and seek. In their book Obfuscation: A User’s Guide for Privacy and Protest, Finn Brunton and Helen Nissenbaum put forward a manifesto for the digitally weak and powerless, whether ordinary consumers or traditionally marginalized groups who lack the knowledge or means to effectively protect their digital lives from monitoring.3 They tell us at the outset that “[w]e mean to start a revolution with this book. But not a big revolution—at least, not at first.”4 Brunton and Nissenbaum develop the idea of obfuscation, which they define as “the deliberate addition of ambiguous, confusing, or misleading information to interfere with surveillance and data collection.”5 This can take many forms, but for consumers, it might include swapping their phone SIM cards with those of their friends or using software that buries genuine search engine queries in a crowd of irrelevant ones.6 Brunton and Nissenbaum argue that obfuscation is necessary to counteract information power imbalances that occur “when data about us are collected in circumstances we may not understand, for purposes we may not understand, and are used in ways we may not understand.”7

Obfuscation is attractive because it offers to empower individuals. It is a chance for people to strike back against forces that have the ability and incentive to exploit informational power for surveillance and data collection, whether government law enforcement agencies or Internet tracking and marketing companies. It carves out spaces for privacy against the powerful—a digital treehouse, French Resistance hideout, or Dagobah swamp. Obfuscation is a “weapon of the weak,” offering the romantic promise of redressing some of the digital age’s power imbalances in favor of the plucky underdog.8 Obfuscation is appealing, even seductive, but we must ultimately put it in context. Obfuscation is a powerful idea, but as Brunton and Nissenbaum are careful to admit, it is only part of the larger privacy puzzle.9

Even with this caveat, obfuscation seems ill suited to be the stuff of revolutions, because privacy built on obfuscation can be at most a second-best kind of privacy. Instead of a first-best privacy in which rules and design ensure safe, sustainable processing of personal data, and personal control is properly treated as a scarce resource, obfuscation offers only the kind of privacy that requires the disempowered to grab it for themselves. As such, it falls into the all-too-common trap of thinking about privacy in primarily individualistic terms, leveraging the weak power of individuals rather than the strong power of law and society. It reinforces the standard narrative of privacy that emphasizes control, choice, and privacy self-management above all else—a narrative that is likely doomed to failure if we continue to accept it.10

This reinforcement of the default story can be a serious problem. How we think about legal problems matters a great deal, especially in areas like privacy where technologies, economics, and social norms are in flux. The frames and metaphors we use to describe issues like privacy are essential because they allow us to understand or confuse issues, problems, and potential solutions.11

Brunton and Nissenbaum are careful to position obfuscation as a realistic, affordable, and reliably “good enough” tactic to protect our privacy. This is a real and important contribution. A healthy dose of pragmatism regarding how to preserve our privacy is welcome in the modern climate, in which the utopian dreams of some global regulators can sometimes create irrational and ineffective obligations regarding data. Consider, for example, the implications of a broad reading of the European “Right to Be Forgotten,” which is sometimes described as creating an Internet that could be edited like Wikipedia by individuals who do not like the facts reported about them in newspapers.12 All too often, a privacy policy like this can make the perfect the enemy of the good by seeking to outright prevent or control data collection or surveillance, rather than to mitigate problems through regulations designed to serve human ends. More nuanced understandings of privacy are necessary to temper overambitious regulations that fetishize consent in ways that elevate form over function. Society’s adoption of a more pragmatic approach to privacy can also ease the pressure on regulators to adopt draconian privacy rules such as data localization laws, which can provide cover for countries seeking to preserve their own economic interests, while providing few real benefits for citizens.13 Brunton and Nissenbaum show that sometimes a bit of pragmatic privacy can be enough to do what is needed.

More fundamentally, however, pragmatism will not be enough if the conceptual foundation for protecting our privacy is deficient. In talking about the foundation for a privacy revolution, we can do better than making incremental improvements to the standard story of a highly individualistic, atomistic privacy. We must think about privacy instead as the rules which govern personal information and take into account more complex social contexts, the increasing importance of information relationships in the digital age, and our need to rely on (and share information with) other people and institutions to live our lives.14 Information relationships are relationships in which information is shared in trust and in which the rules governing the information sharing create value and deepen those relationships over time. If privacy is increasingly about these information relationships, it is also increasingly about the trust that is necessary for them to thrive, whether those relationships are with other humans, governments, or corporations. Trust is particularly important for the large tech companies with which we increasingly share vast amounts of often-intimate data. For instance, the battles that Apple and Microsoft have fought with the Federal Bureau of Investigation (FBI) and the Department of Justice (DOJ) are battles fought to earn and keep the trust of their customers.15 In this direction lies a better digital future for all of us—a digital society in which privacy rules promote trust and make life better for the humans who inhabit it. This is what we will call “first-best” privacy protection.

By contrast, obfuscation is a creature of distrust—a last resort of the weak, the marginalized, and the betrayed. Obfuscation is not merely motivated by distrust; it also creates additional distrust by hiding from the surveillance economy or intentionally feeding bad data into it. Obfuscation is a kind of pollution of the information economy, which can be useful in the short term but costly or even unsustainable in the long run. While obfuscation can be useful to those who have no other options, it can also sow distrust when injected into existing relationships. We believe that a better strategy for a sustainable digital future is to promote trustworthy information relationships—the building blocks of a digital society. In these relationships, obfuscation by one party against another can sow damaging distrust. By further undermining the potential of trust, obfuscation thus offers a kind of “second-best” privacy that will actually cause us to lose ground in the long run. If we fall prey to the pessimistic and isolationist aspects of obfuscation, we risk seeing our only option as a guerrilla war against privacy forces with which we might otherwise be able to work. It is a war we cannot win.

This Book Review proceeds in four parts. In Part I, we discuss the central arguments and contributions of Obfuscation through the lens of the standard individualistic conception of privacy. We welcome the book’s pragmatism and leveraging of practical, financial, and cognitive limitations to frustrate those who would engage in surveillance and data collection. However, we critique Obfuscation’s adoption of the individualistic conception of privacy. This account, which is the dominant story of privacy for both regulators and citizens, has been handicapped by a conceptual vocabulary that fails to fully reckon with the importance of relationships and trust. Modern privacy policy and discourse thus have a trust gap, failing to account for the importance of trust to our digital society, and failing to provide the incentives to create that trust. By accepting the dominant frame, and by encouraging distrust over trust, obfuscation theory not only falls into the trust gap, but deepens it.

Against the backdrop of privacy’s trust gap, we then offer both an internal and an external critique of Brunton and Nissenbaum’s obfuscation theory. We develop our internal critique in Part II, taking issue with Brunton and Nissenbaum’s description of obfuscation as a largely solitary and independent strategy. We argue that even within the parameters of obfuscation theory, people often have to depend upon others to obfuscate effectively. Unless people can trust designers, intermediaries, confederates, and lawmakers to help them obfuscate, the tactic will frequently fail. It is those who must trust others, the weak and vulnerable, who need obfuscation the most. Yet by feeding bad data into the system, obfuscation can have the perverse effect of further corroding social trust.

In Part III, we offer a broader, external critique of obfuscation. We caution against leveraging the wisdom of obfuscation into a premature guerrilla war for our privacy. Such a strategy has an undeniable romantic appeal, but we do not yet need to resort to a guerrilla war of individuals against the powerful institutions that seek our data. As lawyers, we believe that the first-best solution to the problems of social power that Obfuscation catalogs is not revolution, but regulation. Although it may not always be obvious, privacy is not doomed. Law and public policy can and should play a role in promoting trust and privacy. Contrary to popular and legal rhetoric about the “death of privacy,”16 there is substantial evidence that the campaign for privacy rights can be not only viable, but also effective. It would be a mistake to cede the high ground of legal reform and fend for ourselves by embracing self-help obfuscation at the expense of trust-based solutions like confidentiality, data ethics, transparency, and data security. Yet by ignoring the current evidence that privacy law can do helpful work and by rejecting the potential of law, Brunton and Nissenbaum essentially recommend that strategy.

In Part IV, we offer an alternative frame for thinking about privacy problems in the digital age. We propose that a conceptual revolution based upon trust is a better path forward than one based on obfuscation. Drawing upon both our prior work and that of the growing community of scholars working at the intersection of privacy and trust, we offer a blueprint for trust in our digital society. This consists of four foundations of trust—the commitment to be honest about data practices, the importance of discretion in data usage, the need for protection of personal data against outsiders, and the overriding principle of loyalty to the people whose data is being used, so that it is the data, and not the humans, that are exploited. We argue that we must recognize the importance of information relationships in our networked, data-driven society. Substantial incentives already exist for digital intermediaries to build trust. But when incentives and markets fail, the obligation for promoting trust must fall to law and policy. The first-best privacy future will remain one in which privacy is safeguarded by law as well as by private ordering and self-help.

i. obfuscation and the individualistic story of privacy

Obfuscation aims to spark a rebellion by the weak and powerless using whatever tools are available for resistance. Despite such an insurrectionist objective, Brunton and Nissenbaum surprisingly accept the terms of the privacy debate as they are. This capitulation is the source of the book’s greatest contributions as well as its most significant limitation. By accepting the status quo and leaving loftier challenges to privacy theory for another day, Brunton and Nissenbaum have the freedom to seek out realistic and practical privacy strategies that can be useful even if they are flawed or only modestly effective. But this clear-eyed realism still buys into the framework that dominates almost all our modern thinking about personal information—that privacy is largely an individual pursuit. Before we address the limitations of Obfuscation, let us first review its principal arguments and most noteworthy contributions.

A. The Appeal of Obfuscation

Obfuscation is one part saboteur’s user manual and one part exploration of the ethics of that sabotage. The privacy stories that dominate our news headlines show no sign of dying down, and they have left people bewildered and worried. This book offers one way out.

1. Obfuscation’s Argument

Brunton and Nissenbaum explicitly aim to start an obfuscation revolution.17 They seek to empower people with the potential of digital technologies to conceal, disrupt, and fight back against the exposure and manipulation of our data. They explain that “[t]he focus of our limited revolution is on mitigating and defeating present-day digital surveillance” using ready-to-hand components.18 They tout obfuscation as “a lexicon of ways to put some sand in the gears, to buy time, and to hide in the crowd of signals.”19 There is much to like in this proposal, in particular its rebuttal of simplistic notions of privacy and its offer of a weapon to those most in need of protection from the power imbalances of our digital age.

In their Introduction, the authors lay out their conceptualization of obfuscation as “the deliberate addition of ambiguous, confusing, or misleading information to interfere with surveillance and data collection.”20 The first and prototypical example of obfuscation in the book is the use of “chaff” by pilots during the Second World War to frustrate military radar and weapon targeting systems by giving off a confusing and overwhelming number of signals, all but one of them false.21 The goal of Obfuscation is to explicate that concept as a starting point for resistance and revolution by individuals or groups of individuals against the surveillance and data collection that exist within asymmetrical power relationships.

Part I of the book develops a common vocabulary of obfuscation to better understand how the technique can be generalized into a pattern.22 The authors argue that while the techniques of obfuscation can vary, they all share the attempt to thwart or frustrate the observation of others, frequently by feeding bad information to corporate and government watchers.23 Brunton and Nissenbaum use this Part to describe how we can “create many plausible, ambiguous, and misleading signals within which the information we want to conceal can be lost.”24 For example, they describe many people collectively deploying a single pseudonym to confound data collection, and poker players using “false tells” to trick trained observers.25 In the digital age, these techniques can include the use of software that adds hundreds of false Google search queries to each legitimate one, hiding the user’s true interests in a cloud of gibberish to thwart the building of a profile of that user.26
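
To make these mechanics concrete, consider a minimal sketch of the noise-query approach in Python. This is our own illustration of the general pattern rather than the actual implementation of TrackMeNot or any other tool the authors discuss; the decoy vocabulary, the noise ratio, and the stand-in send_query function are all hypothetical choices made for clarity.

```python
import random
import time

# A small pool of innocuous decoy topics. A real tool would draw from
# news feeds or word lists so the noise tracks plausible current interests.
DECOY_TERMS = [
    "weather radar map", "easy pasta recipes", "used bicycles for sale",
    "movie showtimes tonight", "history of jazz", "houseplant care tips",
    "local hiking trails", "world cup scores",
]

def send_query(query: str) -> None:
    # Stand-in for issuing a real search request over the network.
    print(f"searching: {query}")

def obfuscated_search(real_query: str, noise_ratio: int = 5) -> None:
    """Bury one genuine query among `noise_ratio` decoys, shuffled together
    and paced irregularly, so a profiler cannot tell which one is real."""
    queries = random.sample(DECOY_TERMS, k=noise_ratio) + [real_query]
    random.shuffle(queries)
    for q in queries:
        send_query(q)
        time.sleep(random.uniform(0.5, 3.0))  # human-like pacing

obfuscated_search("divorce lawyers near me")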

Part II of Obfuscation tackles the hard ethical and political problems posed by its theory, as well as questions about obfuscation’s purposes and the circumstances under which obfuscation is useful.27 Brunton and Nissenbaum draw on philosophical literature to argue that while some uses of obfuscation may be problematic, other uses (particularly by Internet users resisting surveillance) can survive a searching ethical inquiry.28 They maintain that obfuscation is not merely a useful “weapon of the weak,” but one that has significant potential to change the terms of the privacy debate by empowering individuals who are vulnerable to surveillors and data collectors that have enormous advantages in resources and ability.29

We argue that the central arguments and contributions of Obfuscation are best understood through the lens of the modern individualistic conceptualization of privacy, a lens that they implicitly adopt for individuals or groups of individuals acting together.30 This notion of privacy revolves around principles of autonomy and control. It conceives of us as each individually responsible for protecting our own little privacy islands. It is the dominant story of modern privacy law and policy31 and is reflected in the examples and rhetoric in Obfuscation. Brunton and Nissenbaum make the best of this framework by helpfully focusing on practical defenses that leverage transaction costs and surveillors’ practical, financial, and cognitive limitations to frustrate data collection and comprehension.

2. A Pragmatic Rebuttal to Overly Simplistic Notions of Privacy

Embedded in the heart of Obfuscation is pragmatism regarding the kind of privacy people can actually expect and a celebration of what we think can best be called a “good enough” privacy. The authors explicitly jettison ideal notions of privacy, recognizing that sometimes even temporary, incomplete privacy interventions can be enough to serve people’s needs.32 They note that in many situations, optimal systems for privacy, like encryption, are not possible, accessible, or desirable.33 People often need to be at least partially or temporarily visible, and their information must be somewhat comprehensible to others to interact in the world. The act of being online for any activity usually requires a certain kind of visibility.

Enter obfuscation. The authors argue that “[t]he strength of an obfuscation approach isn’t measured by a single objective standard (as safes are) but in relation to a goal and a context: to be strong enough.”34 Instead of making someone completely invisible, obfuscation can buy them time before detection. Instead of erasing one’s tracks, obfuscation can provide plausible deniability and disrupt the ability of surveillors and data collectors to profile or otherwise single out individuals. When people obfuscate, they raise the transaction cost of effective surveillance and data collection. This can temporarily delay surveillance and perhaps even discourage surveillance and data collection efforts altogether. Obfuscation does not promise protection—only the possibility of minimized risk and an outlet for protest.

The pragmatism of Obfuscation comes just at the right time in our modern privacy debate. Because there is no settled definition of privacy, our resulting policy has latched onto overly idealistic notions of privacy that do not scale well. Thinking about privacy as individual control over information has resulted in myriad boilerplate contracts, and even privacy-conscious persons lack the time to read through all of the privacy policies they encounter, much less exercise the choices those policies offer.35 Additionally, thinking about privacy in terms of an individual’s secrets fails to cover information we want to share with some, but not all.36 Secrecy and control are too simplistic and unforgiving. While obfuscation lends some support to the same individualistic framework that equates privacy with control, it adeptly rebuts the misguided and myopic notion of privacy as secrecy.

One of the most common fallacies employed in our modern privacy discourse is the belief that once information is shared with others, it ceases to be private, and many scholars, including Nissenbaum, have critiqued this “secrecy paradigm” or “public/private dichotomy.”37 Yet this false binary persists in our rhetoric, law, and policy. One judge wrote that because Internet users voluntarily share information with others, privacy on social media is “wishful thinking.”38

Simplistic and myopic notions of privacy are dangerous. They compel harsh laws that elevate form over function in the name of advancing an unrealistic and unattainable privacy ideal. This happens at the expense of engaging in the kind of calculus necessary to accurately identify significant privacy problems and to balance legal and technical responses against other concerns and values.39 Nissenbaum has elsewhere proposed a theory of privacy as “contextual integrity”40 to provide more nuance to the notion of privacy, and this book further advances that goal.41

The concept of obfuscation represents an important challenge to myopic notions of privacy. It demonstrates that even perceived “weak” notions of privacy can still be valuable. We see the most significant contribution of Obfuscation as a lucid embrace of the centrality of the practical, cognitive, and financial limitations of surveillors and data collectors. The authors recognize that sometimes people must be visible to others in order to function in a modern society.42 Yet even when people are necessarily visible, it is possible to preserve some sense of privacy by leveraging the structural protections and the limited abilities of those who would watch us or collect our data.43 Brunton and Nissenbaum spend much of the book demonstrating that even though obfuscation cannot provide absolute or robust privacy, in many circumstances it might be able to frustrate data collectors enough to give people the small amount of privacy they need.44

In focusing on structural and practical privacy protections, Obfuscation joins the growing chorus of voices exploring concepts like obscurity, friction, inefficiency, and structural privacy rights, which look to the relative ease or difficulty of conducting privacy-protective activities.45 These concepts are distinct yet ultimately all look to the transaction costs associated with finding, understanding, using, or sharing information.

In economic theory, transaction costs refer to the kinds of expenses necessary to engage in market exchanges.46 The concept is invaluable because our sense of privacy is in part a function of decisions made by corporations and government actors that have limited resources to spend on information collection and disclosure. The strategy of obfuscation is harmonious with the broader concept of obscurity, which is another way of using existing friction and other structural constraints to secure privacy.47 Obscurity is the idea that when information is hard or unlikely to be found or understood, it is, to some degree, safe.48 “Friction” is the idea that transaction costs can be used as a lever to make information more or less accessible, according to desired values of openness or privacy.49 Finally, structural constraints are regulators of privacy-corrosive behavior that prevent surveillance and data collection or use through technological or physical barriers in the world.50 Obfuscation can be used to obscure through “noise,” to add friction by forcing extra work upon collection systems and recipients, and to leverage structural protections like limits on automation capabilities. As such, it is a welcome contribution to the growing pragmatic approach to modern privacy, which is sensitive to the practical limitations of people and systems in identifying issues, assessing risk, and solving problems. One of Obfuscation’s greatest virtues is its recognition that pretty good privacy is often good enough.

3. The Need for Privacy Outside Trustworthy Relationships

Another strength of Obfuscation is that it equips people with another means of defense when safe relationships are not an option for sustainable data exchange. Although people need others to flourish, some data collectors are not trustworthy. The history of consumer protection is littered with scammers and others who would exploit us and our data.51 Electoral campaigns, advertisement networks, and other organizations that care more about short-term gains from data than long-term sustainable relationships also have little incentive to be trustworthy data stewards.52 Obfuscation, from this perspective, is a response to, and thus a product of, distrust.

Additionally, others that might collect people’s personal information or surveil them have no relationships with the objects of their surveillance whatsoever. For example, many surreptitious surveillors like Peeping Toms, nosy neighbors, and government intelligence agents have no relationship with those they surveil. Data brokers usually do not have a direct relationship with the subjects of the data either.53 Concepts like big data and open data presuppose downstream uses of data outside what we traditionally think of as information relationships.54 It makes little sense to talk about the importance of relationships in these contexts. And it is here where obfuscation can matter most.

Obfuscation is most useful as a weapon when our backs are against the wall. On an increasingly nationalistic and paranoid world stage, governments are dramatically increasing surveillance, particularly of minority and vulnerable populations.55 Obfuscation and more robust privacy strategies like encryption will be invaluable for resistance. States are the ultimate dominant actors in asymmetrical power relationships. When the vulnerable are out of options, obfuscation is far better than resignation. The authors point out that obfuscation can also be a tool for protest and, if nothing else, a way to express displeasure, whether it ends up effectively protecting people or not. One of the most prominent examples of obfuscation in the wake of then-candidate Donald Trump’s consideration of a Muslim registry was an organized effort by non-Muslims to register.56 Doing so would add noise to the database and express protest and solidarity at the same time.

As a tool of expression and of last resort, obfuscation has much to commend it. But obfuscation can only be asked to do so much work. As we explain in greater detail below, it would be unwise to saddle it with heavy lifting. Technology alone cannot save us. Like the notion of obfuscation itself, privacy-friendly technologies unsupported by law and policy can only temporarily stave off the corrosive power of overreaching government and corporate surveillance. Technology is necessary to help create an environment for human flourishing. But it is not sufficient. The sustainable path to fixing a broken world is through social movements, participation in the democratic political process, and the rule of law.

B. Privacy Islands

Most American conversations about privacy follow a particular form, one that is followed in venues ranging from office water cooler chats to testimony before federal agencies: privacy is under threat because our modern digital society runs on human data. For example, the smartphones that most Americans carry with them are constantly collecting information about their location, reading habits, and contacts. Personal data has enormous potential to make the world a better place and has already become “the new oil,” the fuel on which much economic activity runs.57 However, the subjective privacy preferences of individuals must be balanced against data-based innovation. But if people want to protect their data, the dominant theory argues that they should help themselves. They should choose to do business with companies who share their values, and they should read privacy policies and select privacy-protective options in online platforms. Control over data and surveillance is the paramount value and is often seen as the very definition of privacy.58 Law has a role in this world, but it is limited to effectuating that control or protecting consumers against data practices that are “creepy” or demonstrably (usually financially) harmful.59

This is the dominant rhetoric of privacy: a conflict between privacy and progress to be resolved through individual consumer choice. Like a lot of worldviews, this story does ideological and political work. The dominant view gave birth to the “notice and choice” regime that molded our current data protection regime.60 It provides a justification for the maligned “third party doctrine” in Fourth Amendment law, which puts the risk of disclosure to the government on the person who shares information with others.61 It encourages us to think about information as being either “public” and known to all or “private” and known only to one person, rather than thinking about the variations between these two extremes that happen when information is shared in a relationship.62 The dominant narrative even underlies the “nothing to hide” fallacy used to excuse surveillance, as if the only relevant factor for surveillance were the bad secrets you may keep.63 But at bottom, no matter how it is phrased, the responsibility for protecting our privacy under the dominant story ultimately rests upon each of us as individuals. Under this view, we are all privacy islands.

That individualism and isolationism are the dominant frame of privacy rhetoric and policy should come as no surprise. Privacy is only occasionally conceptualized as a group or even a social project.64 Although the Anglo-American common law has a long history of protecting information in confidential relationships, privacy law in America as a self-conscious endeavor dates to Samuel Warren and Louis Brandeis’s famous 1890 article The Right to Privacy.65 That article, written to vindicate the particular injury of unwanted attention by the press, called for the recognition of a tort of invasion of privacy—emotional or dignitary injuries to the “inviolate personalit[ies]” of individual plaintiffs.66 The article famously influenced the course of American privacy law, leading to the establishment of the four “privacy torts” by William Prosser67 and the embedding of privacy rights in the core of Fourth Amendment protections.68 But by using a tort model of injury to individual plaintiffs as the essence of privacy law, Warren and Brandeis also helped move both the law and American notions of privacy away from the recognition of confidentiality—privacy in relationships of trust—and toward atomistic, individualistic conceptions of privacy outside recognized relationships.69

This phenomenon becomes clearer when we look at the state of privacy law today. In modern American law, individual rights of privacy are at the center of virtually all privacy and surveillance laws, but such rights are largely agnostic about the relationship between the data subject and data collector.70 Private rights of action created by statute are a major form of privacy enforcement in areas as wide-ranging as wiretapping law, government records, and video privacy.71 Although there is growing public enforcement of consumer privacy rights in the commercial context through investigations by the Federal Trade Commission (FTC) under its Section 5 Unfair and Deceptive Trade Practices authority, that authority is also premised upon injury to consumers or competition from deception or unfair trade practices.72 Similarly, one of the major obstacles to privacy regulation through litigation is the requirement that privacy plaintiffs demonstrate an individually traceable “injury in fact” to satisfy constitutional standing or related doctrines.73 The imposition by courts of these requirements, rooted in notions of individual rights and injuries cognizable only in individual terms, has ossified privacy rights in areas as diverse as government surveillance of First Amendment-protected activities and privacy rights created by statute.74

Missing from the individual view of privacy and security law is the more nuanced understanding that in a connected society, privacy is not just an individual concern, but a major building block for society as a whole. This is privacy’s trust gap. Our dominant legal framework is frequently insufficient or incapable of comprehending the real and important injuries to the trust we need to flourish in our networked, digital society. If privacy is just a matter of individual concern, behaviors and forms of surveillance that breed suspicion raise no cognizable legal issues, even though they undermine our civil liberties or our willingness to connect to others in ways that produce social value. Privacy’s trust gap thus contributes to the sense of fatalism dominating our rhetoric and hindering our policy, particularly as the law conceives of us all as individuals on our own privacy islands, rather than emphasizing our interconnections. While Obfuscation offers a useful weapon to those with little other power to avoid data collection and surveillance, the weapon is not only rooted in individual actions under the privacy islands model, but also reinforces and widens the trust gap.

ii. obfuscation requires trustworthy allies

Obfuscation is offered as a weapon of the weak, those on the “wrong” end of asymmetrical power relationships. But it is precisely the weak and vulnerable who need help from other people, organizations, and technologies in defending themselves. Thus, they cannot function effectively as islands in the way that the dominant individualistic theory of privacy would require.

While some examples of obfuscation that Brunton and Nissenbaum provide are entirely within an individual’s control (such as speaking in general terms or planting false signals), few obfuscation attempts in the digital world, where most data is collected, are truly solitary affairs. Online attempts at obfuscation usually require the cooperation of (and thus, vulnerability to) at least one of two different kinds of parties: designers and confederates.

A. Obfuscation’s Call for a Lonely Revolution

Unfortunately, the proposed obfuscation revolution looks to be a lonely one. Obfuscation accepts the framework of privacy individualism, and both obfuscation and privacy are conceived in individualistic terms, largely through the lens of self-defense. Obfuscation is proposed as an individual pursuit, a tactic to be employed by people seeking to create or preserve some notion of privacy for themselves.

The authors provide numerous examples of individual obfuscations, like poker players giving “false tells” to avoid being predictable and attorneys playing loud audio files of polyphonic “babbling” to confound eavesdropping.75 They highlight how people can change out SIM cards to avoid being linked to one specific phone or use deliberately vague language to frustrate data analytics.76 As a revolution, obfuscation seems to merely ask that we each fight surveillance alone, even if sometimes we fight it alone, together.

What is missing from this account is the importance of other people and institutions in effectuating obfuscation. Brunton and Nissenbaum explicitly offer obfuscation to those on the weak end of asymmetrical power relationships. But they seemingly treat all asymmetrical power relationships as adversarial. That will not always be the case. For example, Apple aligned itself with its customers in fighting the FBI over the security of its phone.77 Microsoft aligned itself with its customers in resisting search warrants for information stored outside the United States.78

Adversarial attitudes within information relationships are not sustainable. If we all were to pollute the information economy, we would also counteract the usefulness of personal disclosure. Disclosing our health data to the right people can help us get and stay healthy. Disclosing our financial information can help us access credit and accumulate wealth. Agreeing to certain kinds of surveillance might gain us entry to places that require elevated levels of security. People and organizations need each other to participate in the modern world, which means they must work together.

Brunton and Nissenbaum recognize this reality in a section titled “The Fantasy of Opting Out.”79 Credit, health insurance, jobs, travel, and entertainment all require trust in other parties. Too often, the individualistic account of obfuscation glosses over the importance of these relationships. What would happen if people regularly obfuscated the health information they share with their doctors and the financial information they share with credit institutions? We would probably be less healthy and wealthy (and wise) because diagnosis and risk assessment would become very difficult for those seeking to work with us. While some surveillors and data collectors have little concern for people’s well-being, the costs of obfuscation vary wildly among different kinds of information relationships.

There is room for debate on the efficacy of obfuscation.80 But our critiques concern the individualistic conceptualization of obfuscation itself. First, as we have seen with other regimes designed to give people “control” over their personal information, “empowerment” is often the positive spin placed upon the structural reallocation of privacy risks.81 Under this story, when we are “empowered” to exercise control over how our information is collected and used, we bear the responsibility of bad choices, even when our good options are limited or nonexistent. These situations can include being bound by voluminous, incomprehensible, and constantly changing privacy policies.82 Or they can include liability for harm resulting from our inability to manage a bewildering number of privacy settings or passwords,83 or our failure to opt out of data collection by information brokers or online surveillance companies we may not have been aware even existed, a phenomenon Brunton and Nissenbaum themselves marvelously term “the fantasy of opting out.”84 In these contexts, “empowerment” and “control” can be (and have been) used by the powerful to get their way while avoiding substantive legal obligations with respect to personal data. And they leave individuals alone and isolated to solve this problem themselves or bear its costs.

We should be careful about an over-enthusiastic adoption of “privacy self-defense” because it places us in a defensive posture and allows third parties to escape responsibility for protecting and respecting people and their personal information. While Brunton and Nissenbaum explicitly emphasize at several points that obfuscation is only a small part of the privacy story and not a replacement for governance, markets, norms, and technological intervention,85 ideas like obfuscation can take on a life of their own in our dialogue, norms, and policy. We should thus proceed cautiously and temper the rhetoric of an obfuscation “revolution.”

Second, and more ominously, there is only so much that we can gain through a strategy of obfuscation. It is a defensive tactic to protect against overreaching by the already powerful. At best, it preserves the status quo, perhaps minimizing the exploitation of the powerless, but doing relatively little to upset the power differentials that define that status quo.

Additionally, as the authors themselves concede, obfuscation is available to the powerful as well as the relatively disempowered. It can be used by law enforcement and by corporate surveillance regimes as cloaks and countermeasures. Brunton and Nissenbaum give examples of each of these techniques, from the use of false signals by the police to trick automobile radar detectors into thinking there is a speed trap86 to the intentional drafting of privacy policies to obscure the real ways in which personal data are being exploited.87 Obfuscation can be a useful tactic, a “force multiplier” of sorts, but there is no guarantee that its deployment will benefit the disempowered rather than the already powerful.88 If obfuscation is simply about exploiting the practical limitations and resource costs of surveillance and data collection, the weak and vulnerable may be able to obfuscate effectively only temporarily or only against some parties. But if a motivated adversary is willing to invest the resources to counteract obfuscation, the rich and powerful will eventually win. The rich and powerful, no less than the powerless, will be motivated to find ways to use obfuscation to their own advantage. In a digital society, in which the control of code-based technologies generates ever-more useful social power, we fear that increased use of obfuscation across the board could worsen existing power imbalances rather than shrink them.

If obfuscation is truly to be a revolution, it cannot be an individual pursuit that presumes all asymmetrical information-based power relationships are adversarial. Even if obfuscation is justified against some adversaries, people are likely going to need to trust others with whom they are at a power disadvantage. This is particularly true in the digital world, where the guts and inner workings of code, structure, and process are opaque.

Under the standard story we tell ourselves about digital privacy and security, individuals must take an adversarial position toward all those who would collect, use, and share their personal information. While we are certainly vulnerable online, privacy’s trust gap means that this worldview can be wasteful and even destructive as the dominant story of privacy.

B. Obfuscation Requires Reliance on Designers

One common trait of many of the examples provided in Obfuscation is that they require the use of tools, most of which must be made by other people such as software developers or other designers. CacheCloak, for example, is a tool that obscures a mobile phone user’s location by surrounding it with other users’ paths.89 The injection of that data noise makes any single user’s location ambiguous.90 Similarly, the Tor network facilitates anonymous Internet use by distributing a user’s encrypted traffic through multiple “nodes” to obscure the origin of a data transmission.91 Vortex is a “proof-of-concept game” that “confuse[s] and misdirect[s] targeted advertising” through the use of “cookies and other identifying systems.”92 FaceCloak generates false information for Facebook’s profile fields and stores “real” or authentic data on a private server for authorized users.93 Finally, another tool, TrackMeNot, developed in part by Nissenbaum, blends genuine and artificial searches to foil the profiling of users through their search results.94
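
The hide-in-a-crowd pattern behind a tool like CacheCloak can be sketched briefly. CacheCloak itself works by weaving a user’s location into predicted, intersecting paths; the simpler dummy-location sketch below is our own hypothetical illustration of the same general idea, with the coordinates, the number of decoys, and the spread parameter invented for the example.

```python
import random

def cloaked_positions(true_lat: float, true_lon: float,
                      k: int = 5, spread: float = 0.01) -> list[tuple[float, float]]:
    """Hide the true coordinate among k - 1 nearby decoys.
    A location-based service receives k plausible positions and cannot
    single out which one the user actually occupies."""
    points = [(true_lat + random.uniform(-spread, spread),
               true_lon + random.uniform(-spread, spread))
              for _ in range(k - 1)]
    points.append((true_lat, true_lon))
    random.shuffle(points)
    return points

# Example: a user somewhere in central Amsterdam.
for lat, lon in cloaked_positions(52.3702, 4.8952):
    print(f"reported position: {lat:.4f}, {lon:.4f}")
```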

At the very least, people must be able to understand how the tool works and trust that it works as it is supposed to. Every user creates a mental model of how a technology will work, with expectations shaped by the representations of the developers, the user’s background knowledge, and the design of the technology itself.95 When a user’s mental model does not match the reality of how the technology works, they might think the technology protects them more than it does or accidentally misuse the technology in a way that exposes them to a range of privacy harms, from embarrassment to financial injury or even criminal penalties in the law enforcement context.

C. Obfuscation Requires Cooperation from Confederates

Many kinds of obfuscation also require ordinary people to put their trust in others just like them. Brunton and Nissenbaum provide several vivid examples. They recall the large group of Roman gladiators who called out “I am Spartacus” to protect the real Spartacus standing among them.96 Or similarly, in The Thomas Crown Affair, the protagonist pulls off a spectacular caper with the help of many identically dressed people engaging in a blur of exchanges involving identical suitcases.97 There are many real-world examples of obfuscation that require confederates as well. Brunton and Nissenbaum give the example of people who swap grocery store loyalty cards to obfuscate data collection about their shopping habits.98 People can also exchange SIM cards and debit cards to muddy data trails and prevent accurate triangulations of people’s whereabouts. But all of these examples share one critical factor—collective obfuscation usually requires us to trust our confederates. While this might not be a problem in contexts where enough people feel collectively and sufficiently repressed to fight back, this kind of solidarity is not always easy to locate.

Confederates must at least be reliable enough to engage faithfully in collective obfuscation. But often these confederates are entrusted with information, such as identifying information, incriminating information, location information, and more, that leaves the obfuscator vulnerable to a range of privacy injuries from embarrassment to criminal punishment. Spartacus and Thomas Crown were able to obfuscate effectively, but only by trusting in the solidarity of their confederates. The same gladiators who protected Spartacus from the Roman authorities could just as easily have identified him to those who would kill him by shouting, “Here is Spartacus.” In short, many obfuscatory techniques require trusting allies.

Considering the role of developers and confederates, it seems clear that obfuscation is frequently not an individual activity but one that requires the assistance of others. This complicates our story of privacy and obfuscation as individual pursuits, as it requires us to trust those who help us obfuscate, even as we sow the seeds of distrust by obfuscating in the first place. After all, if we do not want to live like hermits, we have to place our trust somewhere. Recognizing that other people are necessary complicates the standard privacy islands story, but as we will see in the next Part, obfuscation theory has a more serious trust gap of its own.

iii. obfuscation as second-best privacy

The insight that we have to place our trust somewhere reveals a larger problem with obfuscation theory. Obfuscation, as we have discussed, is a product of distrust, a last resort for those who cannot otherwise resist their exploitation by the information economy. Instead of building bridges, obfuscators are compelled to burn them. By polluting the data stream to render it unreliable, obfuscation thus reveals itself as not just a creature of distrust, but also a creator of further distrust. In this Part, we examine obfuscation theory from an external perspective and argue that it offers at best only a kind of second-best privacy—a privacy for those who have been disempowered and defeated rather than included as equals in the digital society.

There is a better alternative: a “first-best” form of privacy protection, which safeguards personal information via legal rules and social norms. Under this optimal solution, we could create the necessary incentives to protect sustainable, trusted information relationships between ordinary people and the corporations and governments with which they need to engage in order to participate fully in the digital society. This “first-best” privacy would be promoted through law rather than self-help, would be collective rather than individual, and would support building trust rather than undermining it.

A. Obfuscation Promotes Distrust

Obfuscation is a costly weapon. For it to work well, people must either deceive or damage a system or data set. Brunton and Nissenbaum defend obfuscation in these circumstances by arguing that “[d]ata pollution is unethical only when the integrity of the data flow or data set in question is ethically required.”99 Fair enough, at least with respect to the data set.

But data are usually collected in the context of information relationships. Websites, Internet service providers, merchants, carriers, and members of our social networks all collect our personal information as part of a service or social exchange. Sabotage through obfuscation will breed distrust within these relationships. This is where obfuscation’s trust gap diminishes its own utility as a weapon of the weak.

As the authors note, it is essentially impossible to opt out of information relationships in the modern age. Unless we want to embrace the hermit lifestyle and go completely off the grid, we must share our information with others to get the things we need in order to live as integrated members of our society. It is thus not ideal to intentionally poison the online relationships we cannot do without—the technologies that help us get jobs, find partners, seek health treatment, recommend books and films, purchase goods and services, socialize, and travel.

While obfuscation can occasionally be useful, too much waste, damage, and dishonesty will render toxic any otherwise useful information relationship. Consider social media. One obfuscation technique profiled by the authors is “Bayesian flooding,” a strategy in which Facebook users include so many false and implausible life events on their profiles that Facebook cannot accurately target advertisements to the user.100 While this might be effective, a Facebook profile full of lies would be largely useless to everyone, including the owner of the profile, and it might confuse one’s networked connections. Even if human audiences recognize the profile as a fake, the ostensible purpose of social media is to exchange legitimate communication with others; why even use Facebook in the first place? Similar problems can be found in mapping, gaming, and recommendation apps that require geolocation to serve their purpose. Though such actions might help people obfuscate as a form of protest and collective action, they are useful mainly when users are prepared to sacrifice the benefit of that information relationship.
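
A rough sense of how such flooding works can be conveyed in a few lines. The sketch below is our own hypothetical rendering, not the practice of any real tool or user; the event, place, and year vocabularies are invented for illustration.

```python
import random

# Invented vocabularies of profile "life events." A flooder posts random,
# mutually inconsistent combinations so ad-targeting models cannot separate
# the real signal from the noise the user has planted.
EVENTS = ["got engaged", "started a new job", "retired", "had a baby",
          "moved abroad", "bought a house"]
PLACES = ["Reykjavik", "Lagos", "Osaka", "Lima", "Tbilisi"]
YEARS = list(range(1995, 2017))

def flood_profile(n: int = 10) -> list[str]:
    """Generate n noisy life events: individually plausible, collectively
    contradictory, and useless for building an accurate profile."""
    return [f"{random.choice(EVENTS)} in {random.choice(PLACES)} "
            f"({random.choice(YEARS)})" for _ in range(n)]

for event in flood_profile():
    print(event)
```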

The result is that obfuscation is best suited as a strategy for those within relationships that are expendable or those with whom we have no relationship. But in the context of a nonexpendable relationship, obfuscation promotes distrust. As information relationships continue to become more important to our networked lives, this issue is likely to become more problematic.

B. Legal Reform Is Not Hopeless

A defender of obfuscation theory might respond at this point that while the tactic is neither practically nor ethically perfect, it can still be good enough. From the perspective of a powerless individual, obfuscation might well be the best option. But as we have already explained, looking at privacy questions from an individual perspective is limiting. Once we expand our frame from the individual perspective to a social perspective, other options become available.

One promising option is collective action through the legal system (e.g., class actions or government enforcement) or the political process (e.g., new laws). As lawyers, these options may seem obvious to us, but we believe that they should not be underestimated. Brunton and Nissenbaum explicitly consider the possibility of regulation as an alternative to a strategy of obfuscation, but they are dismissive of law’s potential to resolve the problems of the digital age, and they are suspicious of large corporate and government institutions that run on personal data.101 They argue:

Our laws probably will be the eventual site of the conversation in which we answer, as a society, hard questions about the harvesting and stockpiling of personal information. But they operate slowly, and whatever momentum propels agents of government and law in the direction of protecting privacy in the public interest, it is amply counterbalanced by opposing forces of corporations and other institutional actors, including government itself . . . . The rate of progress doesn’t inspire great optimism.102

We agree with Brunton and Nissenbaum’s observation that our law has taken a long time to wrestle with the problems created by the information revolution and the processing of personal data. We are sympathetic to the sincere frustration that Brunton and Nissenbaum undoubtedly feel about the lag between social change and legal regulation. Moreover, we are also ideologically sympathetic to their call to the barricades of obfuscatory self-help. But we disagree with the proposition that the inherent conservatism of legal change should lead us to abandon the law as a primary means of dealing with these problems.103

An obfuscation-first strategy for the information age is a concession that the battle is lost, and that the best hope for individuals is a kind of guerrilla war against the powerful. Depending upon one’s politics, such strategies can hold a powerful emotional appeal, and we feel the romantic appeal of the obfuscation strategy ourselves. But we harbor no hopeful illusions about such a last-resort strategy’s efficacy over time. Instead, we believe that the strategy of trust building will be more effective. Such a strategy works with government and corporate interests to show the long-term value that digital trust can create and plays those interests against each other where necessary to promote the interests of the humans who constitute our digital society.

Contrary to both popular and legal rhetoric about the “death of privacy” and the privacy pessimism Obfuscation exhibits, there is substantial evidence that the legal campaign for privacy rights can be effective.104 Consider the numerous instances from the past few years in which privacy law has advanced human interests over those of the government or corporations. Some of the most salient examples include the expiration of the PATRIOT Act,105 the passage of the California Electronic Communications Privacy Act (CalECPA)106 and state social media protection laws,107 the expansion of FTC enforcement of privacy and security rules,108 the effect of European privacy regulation on American data practices,109 and the efforts of state attorneys general to foster data security.110 In each of these recent cases, law has been successfully marshaled to protect people’s privacy (whether in their capacities as citizens, consumers, or employees) against powerful corporate or government entities.

These privacy-protective legal developments cover a wide range of regulatory possibilities. For example, the Supreme Court has started to adapt the Fourth Amendment to digital technologies, holding that the police must obtain a warrant before they use thermal imaging to search houses,111 deploy GPS trackers on cars,112 or search cell phones incident to an otherwise valid arrest.113 At the federal regulatory level, the FTC has spent the past two decades using its limited authority to police unfair and deceptive trade practices and to secure consent decrees against many of the largest Internet companies, as well as dozens of other companies. These agreements typically require the companies to cease the specified acts alleged to be unfair or deceptive and to create “comprehensive privacy and data security programs”; companies that violate the consent agreements are liable to the government for potentially devastating damages.114 Moreover, a federal court recently reaffirmed the FTC’s authority to regulate information security.115 At the state level, the California legislature recently passed CalECPA, a comprehensive digital privacy law that requires the police to get a warrant before accessing emails, cloud-stored documents, or cell phone metadata.116

The battle to secure privacy is by no means won, but as these cases illustrate, progress is being made. Seeking change and shelter through the legal and political process is a proven strategy, as earlier movements like the civil rights movement and the struggle for safe and equal workplaces demonstrate.117 We worry that a strategy of increased obfuscation could undermine this promising trend, corroding trust in our digital future and keeping our focus on the individual rather than the social dimensions of privacy, in a way that might make us all worse off.

We cannot know what the future will hold. As Justice Holmes wisely reminded us in his great dissent in Abrams v. United States, “[A]ll life is an experiment. Every year, if not every day, we have to wager our salvation upon some prophecy based upon imperfect knowledge.”118 Holmes was writing about an earlier revolution—the industrial one—and dealing with the prosecution of an activist who had quite literally called his fellow workers to the barricades to protect human interests against government and corporate ones.119 Yet Holmes’s wisdom is as relevant to the information revolution of the twenty-first century as it was to the industrial revolution of the twentieth: we do not know what the future will hold, but we must do the best we can with the limited knowledge we possess. We cannot know for certain whether obfuscation or trust is the better way to deal with the disruptive consequences of the information revolution, but we remain hopeful that it is trust. If we must have a conceptual revolution in how we think and talk about privacy and security, we should fight it for trust and relationships rather than for sabotage and individualism, pushing past the limitations of the privacy islands approach rather than waging a guerilla war against powerful interests. At bottom, we believe there are encouraging signs that the battle for privacy has not been lost and that a strategy promoting trust in information relationships is a better way forward than doubling down on obfuscation and distrust.

iv. the potential of trust

In this Review, we have argued that the major weakness of obfuscation is that it subscribes to the individualism that dominates modern privacy rhetoric and policy. There is, however, a better way to think about privacy, security, and the role of information in a digital age. Rather than thinking about the ways in which we are isolated like islands, we can think instead about the ways in which we are connected. These connections frequently occur through what we have been calling information relationships. Thinking about privacy in these terms will allow us to move past the privacy islands model and close privacy’s trust gap.

In this final Part, we sketch out how a trust-based model of privacy policy can work. First, we explain what we mean by trust, what its constituent parts are, and how it can serve as a conceptual foundation for privacy. Second, drawing on our trust theory of privacy, we show how looking at privacy problems from a trust perspective rather than a privacy islands perspective changes legal and policy questions.

A. A Theory of Privacy and Trust

Trust is an incredible force. Here and in other work, we use the term to mean a willingness to expose our vulnerabilities to others.120 In the privacy context, trust allows us to develop long-term, sustainable information relationships by sharing meaningful but often sensitive information and having sincere exchanges with the confidence that what we share will be used for our benefit and not come back to haunt or harm us.121

How should we promote trust in the context of personal information? Trustworthy data stewards share four characteristics: they are honest, discreet, protective, and loyal.122 Each of these values requires some elaboration. First, trustworthy stewards are honest because they explain to us the terms under which they hold and use our data. Honesty places the obligation of being understood on the steward, rather than requiring us to scrutinize the dense, vague, and protean language of privacy policies and terms of service. Second, they are discreet because they treat our data as presumptively confidential and do not disclose it in ways contrary to our interests or expectations. Third, trustworthy stewards are protective because they hold the data securely against third parties, doing everything within reason to protect us from hacks and data breaches. Fourth, and most fundamentally, those we trust are loyal because they put our interests ahead of their own short-term potential for gain. This means, among other things, that they do not engage in unreasonable self-dealing when collecting, using, or sharing our data.

We have argued at length elsewhere that these four principles can serve as the foundation for our modern notions of privacy, encouraging us to engage in online commerce, social relationships, and political discussion.123 The four foundations of trust are already familiar to us, both intuitively and in our policy. They are implicit in existing notions such as confidentiality, transparency, loyalty, and data security, and they can be seen in the law of protective relationships such as fiduciary relationships.124 But we have not typically used the idea of trust to unify these concepts as an alternative to individual-centric frames, nor have we used them to place substantive rather than procedural obligations on those who hold our data—our digital lives and identities—on their servers. Change is starting to happen here as well: a small but growing number of scholars are moving beyond the limitations of the privacy islands approach and exploring the promise of thinking about privacy problems in trust terms.125

The most sustainable solution to the problems raised by personal data is to promote trust between humans and the corporate and government institutions that hold and process data about them. Rather than doubling down on obfuscation, which fosters distrust, we should promote a privacy policy based upon trusted, sustainable, long-term relationships. Where market incentives exist for corporations to create these kinds of relationships, we should embrace those incentives; where markets fail or incentives conflict, we should use the full range of legal and policy tools to promote trust in our information relationships. For governments, we should use the range of public law tools—constitutional, statutory, and regulatory—to encourage our public officials to act in trustworthy ways. Given the important roles that corporations play in holding our personal data, we should also encourage corporations to fight the government to earn and maintain the trust of their customers, as Apple notably did in its battle with the FBI over the San Bernardino shooter’s iPhone.126 We believe that when it comes to the battles over personal information in a time of rapid technological, social, and commercial change, a trust-based equilibrium for personal information will be normatively superior to an obfuscation-based equilibrium. Trust, properly understood, holds the potential for a kind of first-best privacy.

B. Privacy Problems from a Trust Perspective

One of the chief virtues of a trust-based approach to privacy is that it allows us to better understand privacy problems and formulate privacy solutions. Legal and policy questions surrounding privacy are transformed when we move from a privacy islands perspective to a trust perspective.

Consider, for example, the problem of government surveillance. Secret government surveillance of journalists and activists, in addition to being difficult to prove in court, might not rise to the level of an individual injury under current law. This was the Supreme Court’s holding in Clapper v. Amnesty International USA, which rejected the plaintiffs’ individual claims of injury from surveillance as too speculative.127 From a privacy islands perspective, journalists who could not allege a legal injury might want to turn to obfuscation. A focus on relationships, however, enables us to perceive one of the real harms of secret, thoroughgoing, and unchecked surveillance: it threatens relationships and chills expressive freedoms, because the fear of surveillance fosters suspicion that thwarts the building of trust.128 A trust-based perspective also reveals the fallacy of the government’s reading of the Fourth Amendment, under which we lose constitutional protection for our data the moment we disclose it to a trusted “third party” like our cloud provider.129

Beyond government surveillance, a trust-based perspective can also help us better understand private law problems involving personal information. Consider the issue of data breaches. Focusing on individual harm in cases involving data security breaches, the aggregation of information by data brokers, and the downstream disclosure of information shared on social networking websites often gets law and policy no closer to keeping personal data secure, digital dossiers compliant with the fair information practices, and socially shared information obscure.130 By contrast, trust supplies the missing ingredient by treating betrayal of trust as actionable even in the absence of an otherwise quantifiable, visceral harm. Much as the doctrines of promissory estoppel and detrimental reliance protect those who change their position in reliance on a promise, privacy law should take breach of trust more seriously, because people make themselves more vulnerable in reliance on the trust they place in others.131

Thinking about privacy in terms of trust also helps us avoid many of the shortcomings and blind spots of the dominant individualistic view of privacy. That view takes as a given that individuals must take primary responsibility for their digital privacy and security. Thus, in most sectors of the economy, as long as companies offer notice of their privacy practices and a choice to opt out of those practices, they have complied with the law.132 This is the case even when the “notice” is vague legal text buried in a privacy policy that few consumers read, and the “choice” is nothing more than the choice not to use that company’s services.133 If we are all privacy islands left to fend for ourselves, this approach seems innocuous, even though the American approach is an outlier compared to the laws of virtually every other advanced economy.134 Moreover, fending for ourselves would lead logically to a strategy of obfuscation.

A focus on trust changes this calculus. Moving beyond fictive notions of trust and blunt concepts of consent as the essence of privacy law, a trust perspective would go further. It would ask whether the notice was sincere and reasonable (i.e., whether it was honest). It would ask whether the choice was meaningful and gave the data subject the opportunity for discretion in the way their data was held. More generally, it would ask whether the data were protected and whether the institution holding personal data acted in ways that were loyal to the data subject. In this context, loyalty might mean the company took the data subject’s substantive interests into account so that the data subject did not need to engage in pragmatic privacy self-help, whether of the notice-and-choice or of the obfuscatory varieties. The main effect of the shift in perspective is to keep the party entrusted with personal information from shifting the risk of loss back onto the trusting party. It thus places obligations on the powerful entities best able to protect against loss, rather than blaming the powerless individuals who are often at their mercy.

Consider, for example, how this shift in perspective might work in the context of a social network like Facebook. Large technology companies competing with each other for long-term relationships with human customers already have substantial market incentives to promote trust. However, one criticism Facebook has confronted is that because its human users do not pay any money to use the service, its real customers are the paying advertisers on whose behalf Facebook users are served up for marketing purposes.135 Meaningful legal incentives to be honest (better notice of data practices and data breaches), discreet (never selling data to third parties, at least by default), and protective (greater liability for data breaches) could replace the mixed feelings many people have about large technology companies with genuine trust. But the real virtue of trust theory is the duty of loyalty: putting the interests of the human user ahead of the short- and medium-term interests of the company, so that both the user and the company benefit over the long term. A meaningful duty of loyalty could eliminate the ambiguity over Facebook’s duties to its human users and promote further investment in the platform by both sides.136

We harbor no illusions that trust is a panacea for all problems of information policy. It is hard to trust those whose interests are opposed to ours, or parties of whose existence we are unaware. Another limitation is the problem of misplaced trust: a party that pretends to be trustworthy but then betrays those who trust it can sow massive amounts of distrust. As we conceive of it, trust works best in relationships in which there is the potential for mutual gain and multiple opportunities to deepen the relationship. We can have such relationships with our cloud provider, our social network, or our spouse; one-time transactions standing alone are more prone to distrust, being the economic (or literal) equivalent of a “one-night stand.” To guard against this problem, companies in the “sharing economy” like Airbnb and other online intermediaries have built trust-promoting structures like peer rating systems into their platforms.137 But we believe that trust retains enormous potential, particularly for consumers in the digital economy who must live their lives in connection with large technology companies, companies that share the consumers’ economic interest in a long-term, sustainable information relationship that can be beneficial (and profitable) to both parties.

Similarly, there is much insight to be gained when we look at the world from the perspective of trust. For example, trust allows us to see other problems with the dominant view, such as the zero-sum logic implicit in individualistic notions of privacy. When the individual is pitted against the world, we should not be surprised when the individual’s gain is seen as the company’s or government’s loss.138 If we think about privacy as the antithesis of profitability or national security, we should not be surprised when companies choose to maximize profits or governments engage in widespread surveillance. But privacy is frequently something corporations can use for their long-term benefit, and something free societies can cherish while also being secure.139 Looking at privacy in negative, individualistic terms causes us to lose sight of this important insight.

Perhaps most interestingly, even obfuscation is transformed into something more useful when we look at privacy through the lens of trust. As an isolated concept, obfuscation is destructive. Brunton and Nissenbaum speak of “sand in the gears” and of frustrating adversaries.140 The entire point of sabotage is wreckage. However, the wisdom motivating obfuscation—that the limits of data collectors’ resources can be exploited to protect our privacy—can be leveraged for constructive purposes as well. Consider instead the broader notion of obscurity as privacy. Obscurity is a concept focused on the creation or preservation of transaction costs to finding and accessing personal information. Obscurity protections are based on the notion that making (or keeping) information “hard but possible” to find or use is often good enough for many purposes.141 Whereas the examples in Obfuscation largely taint data, obscurity looks more generally to minimize the risk of identification, which could include limitations on searchability, storage, or other increases in transaction costs that keep the data safe but useful.142 While it is hard to imagine companies saying “please obfuscate against us” or “please taint our data,” they might be open to preserving the obscurity of data subjects within relationships of trust. They might not need to know (or store) many kinds of identified sensitive data in order to carry out their functions, and they might deliberately collect less data than they could in order to protect and be loyal to their trusting customers.

C. Promoting Trust in a Digital Society

At a practical level, how do we promote trust in the relationships that constitute our digital society? Our proposal for first-best, trust-promoting privacy rules is threefold. First, we should encourage the further development of existing trust norms as a business practice in the technology industry. Second, we should develop legal rules that provide additional incentives for trustworthiness and punish companies that act in trust-corrosive or disloyal ways. Third, we should place meaningful checks on government surveillance and government access to information held by companies on behalf of their customers.

The first part of our strategy involves encouraging trust norms. There are already numerous market incentives for companies to promote these kinds of relationships. Consider in this regard Apple’s high-profile standoff with the FBI over the security of iPhones and its lobbying in favor of strong encryption before Congress.143 Or consider Microsoft Corporation’s lawsuits against the federal government, seeking to prevent extraterritorial use of search warrants and the use of search warrants accompanied by gag orders.144 Consider also the trend among large technology companies to issue “transparency reports,” data-rich compendia of government requests and orders to access the data of their users.145 Or consider the vast sums that big technology and cloud companies expend in order to protect the data they store on behalf of their customers.146 These are recent examples, but this phenomenon is hardly new. In 2006, for example, Google successfully convinced the government to narrow a subpoena of its search engine records in order to protect the trust of its users and the confidentiality of their search results.147 These cases illustrate that it will often be in the immediate and long-term interest of companies to protect the privacy of their users. More generally, as Internet business models mature, it will be in the interest of companies to be honest, discreet, protective, and loyal to their customers, in order to develop long-term relationships that create real value for the companies as well as their users.

We harbor no illusions, however, that all companies will take this long view. Bad actors exist throughout our economy and seek short-term gain at the expense of their users, a phenomenon we have elsewhere called “data strip mining.”148 There will also be instances in which companies have financial or other incentives to betray the trust of their users or to act disloyally toward them or others. The shadowy and largely unregulated data broker industry is one salient example, but there are many others. For these cases, in which the market fails to adequately promote or protect trust, we offer the second prong of our privacy strategy, which is overtly regulatory: the law should step in to regulate in trust-protective ways, requiring companies to be honest, discreet, protective, and loyal toward those whose data they hold, process, and exploit for gain. In this way, the law could recognize a kind of constructive information relationship between a person and a large commercial entity that trades in large quantities of information about them. Such examples are further afield than our core case of a close information relationship, however.

We should also continue to use law to provide additional incentives for trustworthiness and to punish companies that act in trust-corrosive or disloyal ways. For example, while the FTC has used its longstanding jurisdiction over unfair and deceptive trade practices to become the de facto American privacy and data protection regulator, it could go further.149 The FTC already promotes the honesty norm to some extent through its deception work, and we could imagine the agency treating indiscreet or disloyal trade practices as within its unfairness authority. It could also continue its trend of holding data collectors, rather than data subjects, responsible for ensuring that consumer expectations match reality.150 Congress could empower the FTC with additional authority in this area, and it could join the rest of the industrialized West by passing a baseline privacy law for commercial data—one that places real incentives on companies to treat personal data in trustworthy ways. Such a law could take the form of the traditional, top-down regulation that every industrialized Western democracy but the United States has adopted, or of the “Digital Millennium Privacy Act” proposed by Jack Balkin and Jonathan Zittrain, under which companies could agree to act as trust-promoting “information fiduciaries” in exchange for immunity from uncertain liability.151

An entirely different approach would be for courts to reinvigorate the languishing tort of breach of confidentiality to reinforce expectations of nondisclosure and protection within intimate relationships and other relationships involving significant personal disclosure. In other work, we have explored the ways in which American tort law has failed to fully embrace a duty of confidentiality protecting information disclosed in relationships.152 By encouraging discretion and loyalty in these relationships, tort law too could be leveraged to promote digital trust.

The third part of our proposed strategy deals with government surveillance. All citizens of digital societies are in information relationships with their governments, which collect data on them from cradle to grave and beyond. The dangers of government databases were a stimulus for the passage of federal and state laws placing limits on government data usage, like the federal Privacy Act of 1974 and its state law equivalents.153 But as noted above, modern technology has marched on, and new dangers of government recordkeeping and surveillance have emerged, whether through direct surveillance or through obtaining personal data from the companies that hold it on behalf of their customers.154 The revelations of Edward Snowden shattered the trust of many ordinary and law-abiding Internet users in the privacy and confidentiality of their communications and online activities.155 Law can (and should) be used to rebuild trust between citizens and their government, whether by explicitly extending Fourth Amendment protection to digital communications and evidence,156 or by using statutory law to achieve similar goals.157 There are a variety of ways to rebuild trust between governments and citizens, and our argument does not depend upon any particular form; we believe more generally that the information relationships each of us has with our governments will benefit from the trust that clear legal rules can create. In particular, the governments that democratic citizens create to govern their societies must themselves be honest, discreet, protective, and loyal.

There will no doubt be difficult cases to regulate, but we believe that trust-promoting regulation is a superior alternative to doubling down on obfuscation. More fundamentally, if we think about personal information in terms of trust, we start to ask better regulatory questions, ones that focus on the kind of sustainable digital future we want to build, rather than on fictive notions of consent or illusions of consumer choice. This, we believe, is the policy path forward to a first-best kind of privacy.

Conclusion

In Obfuscation, Brunton and Nissenbaum have done us an enormous service by identifying obfuscation as a strategy, describing its potential, and engaging deeply with many of its ethical pitfalls. They also powerfully remind us that, when it comes to protecting our data, pretty good protection is frequently good enough. Our understanding of the strategies of privacy and security is substantially richer as a result of their work.

Obfuscation may well be one of the most appealing strategies of the digital age, but we must resist the full force of its siren call to “revolution.” While the practices of obfuscation will certainly have their uses, a full embrace of obfuscation will lead to distrust at a time when there is good evidence that a strategy of market and regulatory trust building can produce meaningful benefits in our struggles over personal information in the digital society. The deployment of legal and policy tools to promote trust in sustainable information relationships certainly has less romantic appeal than obfuscation, but there is good reason to believe that trust, not obfuscation, is the way toward a better digital future.