The Yale Law Journal

Volume 117, 2007-2008
Forum

Defusing a Google Bomb

09 Sep 2007

AutoAdmit has its problems—racism, sexism, and bigotry quickly come to mind—but we would not care nearly as much about its more vicious content were it not for Google. In this essay, I sketch a framework for a statutory solution to the Google bomb problem derived from the notice-and-takedown provisions of the Digital Millennium Copyright Act (DMCA). The purpose of this framework is to eliminate defamatory anonymous speech from Google search results. It would require search engines to remove a Web page from their indexes when an individual notifies them that the page contains defamatory content, while allowing those who post the content to respond with counternotices or other legal action.

When confronting the evil of anonymous internet defamation, the first step is to realize that the sites hosting such content are not always the uniformly grotesque villains we might like them to be. AutoAdmit is just one example, but it demonstrates that such sites may have both good and bad content. For example, a Google search for “help with clerkships” returns a useful, obscenity-light AutoAdmit thread discussing the competitiveness of various clerkships as its first hit. I do not need to call attention here to AutoAdmit’s less worthy contributions; it is enough to say that a solution that blindly requires removal of all portions of an often-offensive site from search engine indexes goes too far.

If message board defamers were not anonymous, victims of defamation could easily sue the offending posters, which would no doubt reduce the amount of defamatory speech on the board. But anonymity, too, has an important value in this space. For example, law students likely depend on anonymity when discussing summer job offers and rejections, as do students providing evaluations of their professors. Therefore, we should not try to solve the problem by eliminating anonymous speech on message boards altogether.

We could require message board operators to delete allegedly defamatory content when they are alerted to its existence, but what if the content is both truthful and valuable? If a law student learned that a professor evaluated exams by throwing a stack down a stairwell, she might post that tidbit on a law student message board. This information would be valuable to other law students on the message board, and the professor should not be able to demand removal by claiming defamation when the professor truly uses the stairwell grading method. The student’s anonymous post might harm the professor, but the message board operators are not in a position to evaluate its truth. Deleting the post might go too far.

A viable middle ground would be to remove potentially defamatory posts from search engine indexes. While such posts—some of which are no doubt true and therefore not defamatory—would remain within the confines of the message board community, the most significant harms they cause could be avoided. I refer, for example, to the horror stories of top law students not receiving job offers because Google searches return links to defamatory content from anonymous message boards. This is the kind of harm that companies such as ReputationDefender—which cites the prevalence of Google checks by employers on its homepage—aim to combat.

The DMCA’s notice-and-takedown provisions provide a useful, analogous statutory framework that can help us solve the Google bomb problem. Under §512(c), service providers can escape liability for copyright-infringing content on their servers if they promptly remove the content when given proper notice. Content owners provide notice by asserting a good faith belief under penalty of perjury that the hosted content is infringing, and those who post the content can assert that the content is noninfringing through a similar counternotification procedure. Under §512(f), anyone who “knowingly materially misrepresents” that the content is infringing or was removed by misidentification is liable for injuries caused by the misrepresentation.

We can develop a similar framework for removing anonymous, defamatory statements from search engine indexes. Note that this would only apply to anonymous statements, since we can deal with self-identifying posters more directly. The first step is to hold search engines liable where they fail to respond to the notice-and-takedown system. Then, we can allow those who claim to have been defamed by anonymous posters to send notices to search engines requesting removal of the Web page hosting the statements from the search engine indexes. Posters who claim that the statements are true can—if they are willing to give up their anonymity—send a counternotice to search engines requesting that the page be returned to the indexes.

This framework would deter false notices and counternotices in two ways. First, following the DMCA, both notices and counternotices would be submitted under penalty of perjury. An anonymous poster of defamatory statements could therefore counternotify only at the risk of liability for both perjury and defamation, making a counternotice unlikely. Second, a provision like §512(f) would provide a cause of action for posters, or for those defamed by posters, where a notice or counternotice contains material misrepresentations. This provision would be an even stronger deterrent than its DMCA counterpart if it adopted a less rigorous standard than §512(f)’s “knowingly materially misrepresents,” but it would provide at least some deterrence regardless of the standard chosen.

There are several advantages to this framework. First, it does not require that anyone evaluate the truthfulness of anonymous speech, for which neither search engines nor sites hosting anonymous speech are particularly well-equipped. Instead, it places a priority on reciprocity—when one’s reputation is allegedly injured by an anonymous poster, the poster must put his own reputation behind the statement or see it removed from search engine indexes. Second, by focusing on search engines rather than the message boards themselves, this framework places the burden of the system on those best able to bear it. While the notices would likely be numerous, search engines would have advantages of size, experience, and resources over message boards. And the framework still would provide meaningful incentives to the boards themselves. If threads are consistently removed from search engine indexes for their allegedly defamatory content, message board operators may respond with more effective moderation so as to protect potential advertising revenue.

There is reason to think that the notice-and-takedown system would be much more successful (and equitable) in the anonymous Internet defamation context than under the DMCA. The DMCA typically pits a large corporate copyright holder against, for example, a private individual posting video clips to YouTube. In such cases, there is so vast a disparity in resources and knowledge of the relevant law that notifications are common even where material is clearly noninfringing, and counternotifications are almost unheard of. This is part of what makes Professor Wendy Seltzer’s notice-and-takedown dance with the NFL so unprecedented—the NFL has picked on someone who knows the law. In the anonymous Internet defamation context, however, the players involved are much more likely to have comparable resources, as the posters and those whom they post about are both likely to be private individuals. That said, there are many practical concerns—too many to canvass in this brief sketch—that would need to be addressed before the framework could be implemented.

One apparent problem with this system occurs where someone gives a false notice of defamation and the poster’s anonymity is too important to give up for a counternotice. One can imagine that whistleblowers in workplace settings or those who give insider information about public figures, such as celebrities or politicians, could be in this position. But we already impose stricter standards in defamation actions brought by public figures, so there is no reason not to impose correspondingly more demanding notice requirements on public figures as well. Private individuals are the ones we are primarily concerned with here. The whistleblower problem is harder, but if the information is important enough, nonanonymous Internet sources will likely pick up on the story once it is posted to the anonymous forum, so important information would be disseminated despite lying outside search engine indexes. These nonanonymous sources would be subject to standard defamation actions, so companies claiming defamation would not be without relief.

The statutory solution I have sketched above requires much more development before we can deploy it against the problem of anonymous Internet defamation. But even in its abstract form, the framework has some obvious advantages. We would not destroy the “town square” of message boards like AutoAdmit, but we would limit the reach of their defamatory speech to their readership. While the framework could not clear away all of the fallout from anonymous Internet defamation, it would begin to defuse the problem of the Google bomb.

Steven J. Horowitz is entering his second year at Harvard Law School. His recent scholarship, which appears in Volume 20 of the Harvard Journal of Law & Technology and is forthcoming in the Ohio Northern University Law Review, focuses on virtual property rights in virtual worlds.

Preferred citation: Steven J. Horowitz, Defusing a Google Bomb, 117 Yale L.J. Pocket Part 36 (2007), http://yalelawjournal.org/forum/defusing-a-google-bomb.