Online Hate Speech: What the Faith Community Can Do

During the 2016 election, many of us were dismayed to see a surge in hate speech around the country, particularly online. We have moved from a time when social media was, at its worst, a distraction, to today, when social media is “weaponized.” How is hate speech spreading online, and what can we do to stop it? Recently the United Church of Christ’s Justice & Witness Ministries and its media justice ministry, OC Inc., hosted a joint event to explore these issues.

UCC’s Media Justice Ministry

The United Church of Christ is a denomination whose roots go back to the founding of this nation, and during the 1960s it led the way in holding media accountable to the communities they serve. After groundbreaking work establishing ordinary people’s right to petition the Federal Communications Commission, the ministry continues today working for technology rights and media justice, from net neutrality in our Faithful Internet campaign to affordable broadband.

At the same time that the UCC has stood up for the rights of those misrepresented and maligned in the media, we also have first-hand experience of being shut out: in the 1990s our advertisements were refused carriage on broadcast television, limiting our ability to disseminate our message. Thus we understand the dangers both when accountability is lacking and when gatekeepers censor content.

Online Hate Speech

At the UCC event, OC Inc.’s policy adviser Cheryl Leanza and the National Hispanic Media Coalition’s Carmen Scurato outlined the real threats online and how this material is funded.

We began our session by highlighting several chilling examples of hate speech and its consequences. The Southern Poverty Law Center has released a disturbing video describing how the diary of Dylann Roof, who murdered nine worshipers at Mother Emanuel Church in Charleston, SC, recorded his own radicalization through Google research. His attempt to learn more about crime after hearing about Trayvon Martin’s killing led him to misinformation from a website peddling false facts, which came up in his search for “black on white crime.”

A group of anti-immigration organizations attempted to exploit the hashtag #UndocumentedUnafraid to identify undocumented people and report them to the authorities.

A recent report from the Anti-Defamation League documented 2.6 million tweets containing anti-Semitic language on Twitter between August 2015 and July 2016, reaching an estimated 10 billion impressions. The ADL also found a significant uptick in anti-Semitic tweets from January 2016 to July 2016, including over 19,000 tweets directed at Jewish journalists who criticized Donald Trump as a candidate.

In some cases, the problems faced online come not only from receiving hate speech, but also from possible racial bias or double standards in removing speech. Seventy organizations, including the American Civil Liberties Union, Center for Media Justice, and Color of Change, wrote to Facebook describing instances where Black Lives Matter activists’ discussions of racism were taken down for allegedly violating Facebook community standards, while harassment and threats directed at activists based on their race, religion, and sexual orientation were not removed and continued to thrive on Facebook.

Efforts to spread hate are often sophisticated and coordinated. Hateful comments online are frequently focused on women and people of color. Users can feel so overwhelmed by hate groups that they are forced to stop participating in online discourse. New studies suggest that women and minorities are more likely to self-censor. Hate speech can, therefore, reduce speech by the people most likely to be marginalized by existing power structures.

How Is Online Hate Funded?

Hate speech online can also bring revenue to the organizations that sponsor it, because the online world is funded, to a significant degree, by advertising. The platforms that host advertising share revenue with websites, YouTube channels, and Facebook pages, and also profit themselves.

Hate sites or purposeful fake news purveyors can sign up to use advertising platforms, enabling them to earn money. Many online mechanisms are available to place advertising and earn money for websites, YouTube channels, and Facebook pages. Most of the time, the companies or individuals placing these advertisements don’t select where the ads will appear. Instead they target broad audiences, such as “women aged 40–65” or “men 18–35,” or more specifically people with an interest in particular products, like children’s clothes or high-end sneakers. Online companies collect data to identify characteristics of viewers and users, and intermediaries then match advertisements to those users. For example, Google AdSense places advertisements on websites and YouTube channels, and Facebook places advertisements on its pages. For this reason, advertisements sometimes show up in unexpected places. However, virtually all major platforms permit advertisers to prevent their advertisements from appearing on particular kinds of sites.
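The mechanics described above can be pictured as a simple matching step: an advertiser specifies an audience and, optionally, categories of sites to exclude, and the intermediary places the ad on any site whose audience matches and whose category is not excluded. The sketch below is purely illustrative; the site names, audience labels, and category taxonomy are hypothetical, and real ad platforms are far more complex.

```python
# Illustrative sketch of audience-based ad placement with category exclusions.
# All names, labels, and categories here are hypothetical.

def eligible_placements(ad, sites):
    """Return sites where the ad may run: the audience matches and
    the site's category is not on the advertiser's exclusion list."""
    return [
        site for site in sites
        if ad["target_audience"] in site["audiences"]
        and site["category"] not in ad["excluded_categories"]
    ]

ad = {
    "target_audience": "men 18-35",
    # Without this opt-out, the ad could land on either site below.
    "excluded_categories": {"hate", "fake news"},
}

sites = [
    {"name": "sneaker-blog.example", "audiences": {"men 18-35"}, "category": "lifestyle"},
    {"name": "extremist.example", "audiences": {"men 18-35"}, "category": "hate"},
]

print([site["name"] for site in eligible_placements(ad, sites)])
```

Note that the audience matches both sites; only the advertiser's explicit category exclusion keeps the ad off the second one, which is why ads from advertisers who skip that step end up in unexpected places.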

According to the Southern Poverty Law Center, another common tool is online payment sites such as PayPal, or Amazon.com’s affiliate program, which pays a commission to sites that refer people to Amazon or lets groups sell their racist wares on the site. While PayPal has banned some of the oldest and most well-known racist organizations from using its platform because they violate PayPal’s policies, many other sites continue to use PayPal with impunity.

Successful Campaigns

A number of groups have been working for years against online hate, new efforts are springing up, and new collaborations are forming. Because online revenues support hate groups, many advocates have focused on stemming the financial streams online.

According to Vox and Slate, the anonymous group Sleeping Giants was formed after the election by several people who were “really shocked to see that [Breitbart’s] content, disguised as real news, was so inflammatory.” They ask their followers to take screenshots of brand-name advertisements on the site and post them to Twitter. Embarrassed, advertisers ask for their ads to be removed. A similar large-scale scandal emerged in Europe after a journalistic investigation of advertisements on Google’s YouTube, where government and charitable ads appeared next to racist and anti-Semitic content.

Color of Change, the well-known online organizer supporting African American interests, has focused on corporate accountability. For example, they have successfully persuaded advertisers to leave shows hosted by Glenn Beck and Bill O’Reilly. They are working on a campaign to stop Mastercard and Visa from permitting their products to be used by hate groups.

Similarly, civil rights advocates have successfully persuaded companies like Facebook and Google to stop carrying advertisements for predatory payday lending. And after a ProPublica investigation, Facebook recently adopted policies that will limit advertisers’ ability to use racially discriminatory targeting for housing, employment, and financial products, each of which is prohibited by law.

Tracking efforts are also under way. Communities Against Hate is a national initiative to collect data and respond to incidents of violence, threats, and property damage motivated by hate around the United States. Harassment can be reported online or by calling, toll-free, 1-844-9-NOHATE.

The National Hispanic Media Coalition has founded the Coalition Against Hate. Their mission is to:

  • #breakhate by elevating the voices of those impacted and holding purveyors of hate speech accountable.
  • Encourage media platforms to abandon hate speech as a profit model and bring civil discourse back to the public square.
  • Expose purveyors of hate speech and those that use multiple media platforms to amplify their hateful rhetoric.

Technology itself might play a role. For example, Eric Schmidt has called on the tech community to create “spell-checkers, but for hate and harassment,” and Facebook uses artificial intelligence to report offensive visual content.

The Faith Communities’ Role

As people of faith we stand up together against hate in our communities, and we can do no less online. At the workshop, the assembled groups — representing a wide array of faiths, including the United Church of Christ — developed a list of possible actions.

  • Develop resolutions for local churches, interfaith coalitions, and as denominational policy.
  • Use our financial muscle as shareholders to petition companies to improve their policies.
  • Expand opportunities for faith communities to learn more about online hate speech.
  • Use platform tools provided by Facebook, Google (AdSense website advertising, AdWords search advertising, YouTube), and Twitter to report hate. Report examples to Media Matters and individual harassment to Communities Against Hate.
  • Join actions hosted by Color of Change, Media Matters, the Coalition Against Hate and others.
  • Review our own online advertising policies and channels to be sure we are not inadvertently supporting hate.
  • Create multigenerational forums that can be used in religion school curriculum — young people and older people can both teach and learn.

Join Us: Come to the UCC’s Synod!

The United Church of Christ community can learn more about online hate speech at our workshop in Baltimore this summer. Register for Synod and join us on Saturday July 1, during Workshop Session 2, from 3pm to 4pm in Baltimore Convention Center Room 342.

Learn More

UCC’s media justice and technology rights ministry looks forward to more work with its interfaith partners and within the denomination to take action against online hate. If you are interested in learning more, join our mailing list.
