Censorship Online and Hate Speech: The Case of Reddit

Cédrick Mulcair

Cédrick Mulcair is involved in his community's Federal Executive Council as former Vice-President Youth, now part-time assistant. His experience spans close to a decade of work, including serving as a Community Outreach assistant to Member of Parliament Djaouida Sellah. He is completing a double major in Political Science and History at McGill University and hopes to transition into the Faculty of Law to pursue studies in international law and the promotion of human rights through doctrines such as the responsibility to protect (R2P).

Unpacking the Responsibility of Social Media Platforms

Law and Order

 

Short of having lived under a rock since 2016, a modern internet user would find it difficult to claim ignorance of the alt-right movement. The term was coined in 2010 by self-declared white nationalist Richard Spencer, but has since evolved to encapsulate a radical offshoot of the conservative right loosely defined by its racist, misogynist, and violent means of recapturing a degree of “white identity.”1 One must realize, however, that the term has also come to cover a large variety of political dissidents, not all of whom share a fixation on white identity as central to their movement. It has increasingly been used to describe not only those with an explicitly racist agenda, but also online users who adhere to openly sexist views or attempt to generate controversy as a backlash against changes in conservative and liberal politics.

 

Although this loose movement has existed for quite some time, it would be difficult not to see its recent presence in the media and in public spaces as a response to the Trump administration in the United States and the careless attitude it has unfortunately demonstrated toward the consequences of its positioning. The August 12, 2017 car attack in Charlottesville, Virginia was the most notorious of these incidents, and a prime example of the tone-deafness of the Trump administration. The event, which garnered considerable press attention, involved James Alex Fields Jr. driving into a group of counter-protesters during the “Unite the Right” rally. In the year since, the President of the United States has done a great deal of backtracking to dissociate himself from both white supremacists and hate groups generally, claiming that their racism is “evil” and that they “…are repugnant to everything we hold dear as Americans.” Still, reports such as those of the Salt Lake Tribune continue to highlight the change felt by migrant populations since 2016. As recently as January 10, 2019, Congressman Steve King (R-Iowa) was questioning how terms such as “white nationalist” became problematic in the United States. Later that month, James Harris Jackson was found guilty of murder in the first degree in furtherance of an act of terrorism. District Attorney Cyrus Vance was quoted after the conviction as saying that “White nationalism will not be normalized in New York. If you come here to kill […] in the name of white nationalism, you will be investigated, prosecuted, and incapacitated like the terrorist that you are.”

 

Because of this change in politics, a great deal of attention has been paid to how the alt-right and other users espousing hateful comments have come to organize and communicate online. The truth is that there is still little consensus as to what defines the alt-right and, thus, how it should be monitored effectively. The quest to monitor and prevent white nationalist radicalization has taken most researchers, as well as law enforcement authorities, down the rabbit hole that is the internet. Whether on forums, in the comment sections of YouTube videos, or in online gaming communities, overt racism, sexism, homophobia, and other forms of public incitement of hatred and targeting have acquired a particular character defined by the anonymity of contributing members.

 

Long considered a space for free expression, the internet and the websites hosted on it have never been closely associated with regulation. The United States Federal Communications Commission (FCC) has long grappled with the implications of net neutrality, but rarely with content itself. So far, in the United States, that responsibility barely exists. Indeed, FCC Chairman Ajit Pai has declared that a “free and open internet,” to him, is one where every citizen can “go where [they] want, and say and do what [they] want, without having to ask anyone’s permission.” Yet this foundational open-internet principle increasingly seems to butt heads with the political notion that some statements should be deemed unacceptable, and that there must be a reasonable degree of supervision over online comments and the way citizens interact online. How are the two to be reconciled, if at all?

 

To understand how the internet self-regulates, one option is to examine the measures put in place by individual websites to counter radicalization. For the purpose of this piece, I will focus on one case study in community censorship: Reddit. By keeping an eye on the means by which it has come to deal with radical opinions online, we may gain some insight into the evolving interplay between a user’s free speech rights in the U.S. and the monitoring structures of social media platforms.

 

The increasing attention being drawn to Reddit by mainstream media is in and of itself worth examining. Why the attention to mainstream websites, and not to more problematic ones like Gab, or the politically incorrect (/pol/) board on 4chan? Craig Timberg recently wrote a fantastic piece on this matter for the Washington Post. The answer lies largely in the perception outside users have of the website, as well as its overall traffic. While Gab has described itself as an alternative to Twitter that effectively puts “free speech first,” it has remained quite fringe since its 2016 inception. A 2018 research paper on its constituent members cited 336,000 users. While this number is quite high, it does not compare to the more than 1.5 million monthly users of Reddit’s numerous communities. Rather than focus on known echo chambers, it is necessary to consider how mainstream social media sites and hosting companies have come to grapple with online radicalization and anonymous hate speech.

 

 

The Case Study: Reddit

Reddit is an American social news website founded in June 2005. Once registered on the site, a user may post a variety of links, images, and text to interest-based communities known as subreddits. Rather than targeting hate speech directly, Reddit has long argued that it delineates between speech and belief.2, 3, 4 To that effect, the website monitors content through two mechanisms. The first is Reddit’s content policy—or “Reddit rules”—which provides an overarching policy on content within all subreddits. While Reddit itself describes the platform as offering “a lot of leeway in what content is acceptable,” there are some key exceptions to the rule. Prohibited content includes, in a non-exhaustive manner: (i) involuntary pornography, (ii) content that threatens, harasses, or bullies, and (iii) personal and confidential information.

 

While this policy is enforced by a Trust and Safety (TS) team, it serves two primary purposes: to police which subreddits are allowed to be hosted on Reddit at all, and to act as a guideline for subreddit moderators as they assess and judge the content posted to their independent communities. Most subreddits, such as /r/Games, mirror the content policy in their own subreddit rules.

 

 

Reddit CEO Steve Huffman, via Wikimedia Commons

 

 

Yet even websites that take an active stance toward content removal know better than to take the act lightly. While the decision to take down a subreddit is not a new occurrence, it is in no way an easy one to make. A high-profile example of such a removal occurred following the Charlottesville events, when /r/Physical_Removal—a subreddit previously flagged by Reddit’s TS team, which advocated for the removal of immigrant populations from the US—was taken off the website after clearance from Reddit’s legal department. The bigger problem is that in the last few years there has been an increasing amount of political content on the platform that challenges the traditional limits of Reddit’s content policy. While some subreddits are controversial in that they promote a permissive climate, others are much clearer in their intent to promote harassment or violence against particular populations and opinions, such as the infamous /r/incels, which was removed from the site in 2017 after repeated violations of Reddit’s rules, including doxing. This second category is obviously much easier for Reddit’s TS team to target.

 

The Charlottesville atrocities were mentioned earlier because they played an important role in the way Reddit’s administration has come to see its site and the communities hosted on it. Reddit CEO Steve Huffman studied at the University of Virginia in Charlottesville and was understandably shocked by what he saw that weekend. His initial reaction, quoted in Lagorio-Chafkin’s piece, was “fuck all these people. Ban them all.” Yet this response to the demonstrated violence and white supremacy, and to their influence on specific subreddits, came into conflict with both the site’s anonymity and its own free speech ideology. How were they to be reconciled, beyond the banning of overtly nationalist, racist, xenophobic, or misogynist subreddits? The question was only exacerbated as new hate attacks, such as the one perpetrated in Toronto by Alek Minassian, demonstrated an idolization of Elliot Rodger, a figure lionized in incel communities. Minassian’s Facebook post declaring that “The Incel Rebellion has already begun,” made before he committed the act, is evidence of the radicalization that can occur through online communities and of its ramifications in the real world.

 

In the past few years, sites like Reddit, due to their inherent structure, have felt the pull to purposefully detoxify themselves of communities they consider too hateful to allow. Reddit has ramped up enforcement of its own rules in the last two years, yet only time will tell whether this mechanism proves effective or simply drives dissenters to an alternative platform. The problem has already been cited in connection with Gab. Even though the website was founded by CEO Andrew Torba as a means to promote “non-politically affiliated anti-censorship platforms,” it too has come to struggle with domain registrars cracking down on allowable content and enforcing their own abuse policies.

 

As Reddit has taken active steps to enforce its own content policy while also reinforcing its community-driven filtering, other sites such as Gab have taken active steps toward non-enforcement in an effort to align these issues with ongoing political debate in the United States. For that reason, the governing law of the internet host is particularly relevant to examining the scope and character of hate speech online.

 

 

 

The Name of the Game: United States Governing Law

 

Understanding the situation of private corporations that must monitor online forums requires a deeper look into the governing jurisdiction hosting 43 percent of the world’s top 1 million websites as of 2012: the United States.

 

It seems that the legal recourse of a user against a private website, especially a message board, is rather limited because of the Communications Decency Act (CDA). This has the dual effect of denying recourse both to those against whom acts of hate are committed online and to those whose accounts are terminated on the grounds of terms violations. One prominent and recent case illustrating this protection is Buza v. ProBoards, Inc. The California civil lawsuit pitted popular internet message board ProBoards, Inc. against a former user who believed that his account termination, grounded in violations of the terms of use, contravened his constitutional right to free speech. ProBoards was ultimately found to have acted lawfully in the user’s removal. The judge cited the CDA, which immunized ProBoards from liability for the removal of objectionable content and established no private right for the citizen to bring an action in state or federal court.

 

EFF via Wikimedia Commons

Though the CDA was initially devised in 1996 to protect American citizens from indecent content on the internet, it has since undergone many mutations. Nowadays, it serves as an important tool to immunize and empower websites and Internet Service Providers to moderate content in a reasonable manner. Section 230 of the CDA states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This essentially means that websites such as Reddit and ProBoards are immunized from liability for users’ posts, and also protected in their reasonable censorship of user-generated content through internal content policies. Indeed, in Zeran v America Online Incorporated, the court confirmed that section 230 was enacted to “remove disincentives for the development and utilization of blocking and filtering technologies.” The fact that many social media sites are hosted in the U.S. provides them with a safe haven to host, as the Electronic Frontier Foundation characterizes it, “controversial or political speech.”

 

The broad scope of free speech protections in the United States has blurred the line between what is and is not deemed acceptable. This has only made it more necessary for websites to develop coherent and consistent content policies. The marketplace of ideas is a constitutionally grounded notion that U.S. courts have consistently protected. One of the most high-profile examples of this protection came in 2011, when the Supreme Court of the United States held 8-1, in Snyder v Phelps, that the Westboro Baptist Church’s signs and comments during a veteran’s funeral were protected because the church was speaking on “matters of public concern” while on public property. As the Court made clear, a tort claim for emotional distress cannot stand when pitted against the First Amendment’s protection of such speech. Snyder confirms the U.S. interpretation that speech is afforded the highest level of protection when it concerns matters of public concern.

 

The Snyder decision highlights the hard line the Supreme Court takes concerning the manner and content of speech. The Court reminds the reader in Snyder that “this Nation has chosen to protect even hurtful speech.”5 Such is the shield afforded to Westboro, and an important precedent for how free speech is handled to this day in the United States.

 

Taking this analysis and applying it to the communities generated online reinforces the approach taken by many websites; namely, that relying on the shield provided by the CDA is the best means of enacting content censorship online. Yet the CDA itself must reckon with the use of the internet to facilitate crime, and with the blanket safe haven it grants. Despite the judgment in ProBoards, there is still reason to believe the CDA can be adapted to impose liability on hosts. A 2008 decision by the 9th US Circuit Court of Appeals, Fair Housing Council v Roommates.com, noted that the immunity granted by the CDA did not apply when the website operator “materially contributed” to the unlawful content it hosts. That reasoning was followed in 2014 in Jones v Dirty World Entertainment Recordings LLC and in 2016 in People v Bollaert. Yet these work-arounds still expose the rest of the world to the U.S. standard on freedom of speech and hate speech. For the latter, there is a legal inability to hold users accountable for posted content unless it meets the Brandenburg test. This test, set out in Brandenburg v Ohio, establishes one of the only recognized limits on speech: it requires that speech be directed to inciting imminent lawless action and be likely to produce it. The test holds up to this day, but the immediacy requirement, at least instinctively, translates poorly to online forums where live content—such as user-generated content on YouTube or other popular sites such as Twitch.tv—can pose a risk yet be easily missed by authorities.

 

Thus, while other jurisdictions may have demonstrated more willingness to monitor content posted by their citizens online, the American hosting of websites remains a hindrance to applying a shared standard for hateful content. Criminal prosecutions of the kind pursued in states such as Germany and Canada would not reasonably be mirrored by U.S. jurisdictions absent serious political motive. While some multilateral conventions, such as the International Covenant on Civil and Political Rights, have attempted to define the bounds of hate speech, consensus is still non-existent. The discord is only exacerbated when the content in question is posted online. While an additional protocol to the Convention on Cybercrime attempted to prohibit racist and xenophobic content online, the U.S. took a clear stance against its applicability to American citizens.

 

Considering the challenge posed and the existing laws, websites like Reddit are doing the best that they can. While the internet as a space is the topic of much discussion, its interactions with the public and the kind of forum it should and should not provide remain unclear. The nature of the service Reddit provides has forced it to balance the rights of a plethora of users, whether through the quarantine of extremely offensive communities or the outright removal of a select few.6 For now at least, this will have to do, and Reddit can count on the CDA to support its right to reasonable censorship.

 

The problem with the proper control of hate speech online is that the criminal law which prohibits it can only reasonably be expected to be enforced against citizens of certain jurisdictions. Further, the possibility of international cooperation on the matter is still at a preliminary stage. Without greater international cooperation, or consensus on the constitutional limits to free expression, predictable enforcement is unlikely. While certain websites like Reddit, or even Facebook and YouTube, have taken steps to counteract offensive content, there is ongoing discussion as to the effectiveness of these bans. Whether in the case of Alex Jones, who claimed a spike in his following after being banned from social media platforms, or of Reddit and the /r/altright subreddit, there is an understanding that anonymous online populations, without any real enforcement from domestic and foreign governments, write with impunity. Differentiating between the real policies and steps taken by hosts against hate speech, on the one hand, and the politics surrounding such opinions, on the other, remains an ongoing issue.

 

There is a freedom inherent to the internet which should not be brought into question. Its transnational and essential characteristics have allowed it to generate a true wealth of digital economies and information sharing. That said, as increasing attention is drawn to the reliability of both information and sources, social media sites are bound to reinforce the guidelines and terms of service they have internet users accept. The CDA protects such a reasonable degree of censorship. How the prevalence of politics online comes to affect what is allowed online will largely revolve around individual corporations’ continued willingness to take a stand against content they deem unacceptable, and the CDA’s continued hard line in protecting those corporations from constitutional claims. As Snyder highlights, it is difficult to envision the U.S. returning to a judgment on the offensiveness of content, particularly on the internet. That ship sailed long ago, without captain or chart.

 

 

  • 1. See ADL’s “Alt Right: A Primer about the New White Supremacy” and Taylor Hosking’s piece for The Atlantic, “The Rise of the Alt-Right.”
  • 2. Ashley Feinberg, “Conde Nast Sibling Reddit Says Banning Hate Speech is Just Too Hard” (7 September 2018: 16:46), online: Huffington Post <huffingtonpost.ca/entry/reddit-ceo-ban-hate-speech-hard_us_5b437fa9e4b07aea75429355>.
  • 3. Will Sommer, “What is QAnon? The Craziest Theory of the Trump Era, Explained” (6 June 2018: 10:03) online: The Daily Beast <thedailybeast.com/what-is-qanon-the-craziest-theory-of-the-trump-era-explained>.
  • 4. Jane Coaston, “#QAnon, the Scarily Popular Pro-Trump Conspiracy Theory, Explained” (2 August 2018: 12:31 EDT), online: Vox <vox.com/policy-and-politics/2018/8/1/17253444/qanon-trump-conspiracy-theory-reddit>.
  • 5. Ibid.
  • 6. Coldewey, supra note 44.