Last week, the Southern Poverty Law Center announced a lawsuit it filed on behalf of a woman systematically harassed by neo-Nazi Andrew Anglin and the followers of his hate-media website Daily Stormer. In #GamerGate-like fashion, Anglin’s followers sustained a prolonged campaign of threats and harassment, both public and private, against a woman (and her family) who allegedly had a real-estate deal fall through with the mother of “alt-right” leader Richard Spencer.
The SPLC’s involvement is a major development. Online threats are difficult to prosecute for a variety of reasons, not least of which is the unwillingness of law enforcement to take them seriously, or simply the ignorance of law enforcement and the judiciary about how to pursue charges against perpetrators. Having a civil rights group stand behind a victim and pursue damages in court is a major step forward, and if the suit is successful, more victims will likely come forward, leading to more suits, and possibly even criminal prosecution.
But how do such communities of hate begin? What makes them sustainable? There is a massive network of content platforms and advertising architecture that makes hate media both possible and profitable. Major mainstream companies like Google and Facebook sit at the center of that network, and mainstream news media companies provide the bulk of the financial resources that sustain it. Understanding how hate media ― and its close cousin, disinformation ― spread requires understanding how that network works. And once we understand it, we can put pressure on the right people and make a major dent in the ability of hate groups to organize, publicize, and mobilize online.
Behind the storm: the adtech network
Who funds Daily Stormer? According to the site itself, it is 100% reader-funded (though it does contain ads from sponsors). But that’s not the entire story. When you visit a website, you’re usually not receiving content only from that site. Typically, the site calls in data from around the web — ads, fonts, embedded media, recommended reads, comments, trackers, etc., alongside page content. And for sites that seek to make money, those relationships with third-party sources are key.
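To make that mechanism concrete, here is a minimal sketch, using only Python’s standard library and a made-up page (with example.com standing in as the first party), of how even the static HTML of a page reveals some of these third-party relationships:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ThirdPartyFinder(HTMLParser):
    """Collect external hostnames referenced by src/href attributes."""

    def __init__(self, first_party):
        super().__init__()
        self.first_party = first_party
        self.domains = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            # Only absolute URLs here; real pages also use protocol-relative "//" URLs.
            if name in ("src", "href") and value and value.startswith("http"):
                host = urlparse(value).hostname or ""
                if host and host != self.first_party and not host.endswith("." + self.first_party):
                    self.domains.add(host)

# A toy page standing in for a real article; example.com is the first party.
page = """
<html><body>
  <script src="https://www.google-analytics.com/analytics.js"></script>
  <iframe src="https://www.youtube.com/embed/abc123"></iframe>
  <img src="https://example.com/logo.png">
</body></html>
"""

finder = ThirdPartyFinder("example.com")
finder.feed(page)
print(sorted(finder.domains))  # the two third-party hosts; the first-party image is excluded
```

Static parsing like this only catches resources declared directly in the HTML; many trackers are injected by JavaScript at load time, which is why an intercepting proxy sees far more.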
My research colleague, Bill Fitzgerald, included Daily Stormer in a set of 25 popular websites from across the political spectrum, which he analyzed for third-party data connections. Using an intercepting proxy while visiting three pages on each site, Bill found that Daily Stormer makes connections to 12 different domains in the background. While that may seem like a lot, especially for a site with little advertising, it’s a drop in the bucket compared to Breitbart’s 87 domains or ZeroHedge’s 184 domains!
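Intercepting proxies (and browser dev tools) can export a browsing session in the standard HAR (HTTP Archive) JSON format. A sketch of the tallying step, using a made-up three-request capture in place of a real session, might look like this:

```python
import json
from urllib.parse import urlparse

def third_party_domains(har_path, first_party):
    """Return the third-party hostnames contacted in a captured session,
    reading the standard HAR layout: log -> entries -> request -> url."""
    with open(har_path) as f:
        har = json.load(f)
    hosts = set()
    for entry in har["log"]["entries"]:
        host = urlparse(entry["request"]["url"]).hostname or ""
        if host and host != first_party and not host.endswith("." + first_party):
            hosts.add(host)
    return hosts

# Minimal HAR-shaped capture for demonstration (hypothetical URLs).
capture = {"log": {"entries": [
    {"request": {"url": "https://firstparty.example/article"}},
    {"request": {"url": "https://securepubads.g.doubleclick.net/tag/js/gpt.js"}},
    {"request": {"url": "https://www.google-analytics.com/collect"}},
]}}
with open("capture.har", "w") as f:
    json.dump(capture, f)

print(third_party_domains("capture.har", "firstparty.example"))
```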
Here are the 12 domains called in the background when visiting pages on Daily Stormer:
Notice anything? All of these domains belong to Google, Facebook, or Twitter! Some of them serve up embedded content (youtube.com, fbcdn.net, and twimg.com); others handle adtech and data collection (doubleclick.net, google-analytics.com, and facebook.com). All of them are essential to Daily Stormer’s “news” content, which builds itself around embedded YouTube videos, posts from social media platforms like Facebook and Twitter, and extended quotes from news published on other sites. It would not be a stretch to say that sites like Daily Stormer are dependent on content from those platforms. In fact, some sites, like TruthFeed, are built almost exclusively on external content.
Daily Stormer is far from unique in its reliance on Google, Facebook, and Twitter for content and revenue. Bill provides a complete list of the third-party services called in the background of the 25 sites he analyzed. Here are the top 15:
- doubleclick.net — used in 23 sites
- google.com — used in 22 sites
- googleapis.com — used in 22 sites
- gstatic.com — used in 21 sites
- google-analytics.com — used in 21 sites
- googlesyndication.com — used in 21 sites
- scorecardresearch.com — used in 20 sites
- facebook.com — used in 19 sites
- googletagservices.com — used in 19 sites
- adnxs.com — used in 18 sites
- demdex.net — used in 18 sites
- yahoo.com — used in 18 sites
- twitter.com — used in 17 sites
- facebook.net — used in 17 sites
- adsrvr.org — used in 17 sites
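Counts like those above fall out directly once each site’s set of background domains is in hand. A sketch with a hypothetical three-site capture (the real analysis covered 25 sites):

```python
from collections import Counter

# Hypothetical per-site results: each site maps to the set of
# third-party domains observed during its three-page visit.
site_calls = {
    "site-a.example": {"doubleclick.net", "google.com", "facebook.com"},
    "site-b.example": {"doubleclick.net", "google.com"},
    "site-c.example": {"doubleclick.net", "twitter.com"},
}

usage = Counter()
for domains in site_calls.values():
    usage.update(domains)  # sets, so each site counts a domain at most once

for domain, count in usage.most_common():
    print(f"{domain}: used in {count} sites")
```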
Seven of the top 15 services (each used in at least 19 of the 25 sites analyzed) are owned by Google, including each of the top six. Only three of the top 15 are owned by someone other than Google, Facebook, Twitter, or Microsoft. Together, these form a large share of the third-party services called in the background of websites like the New York Times, Wall Street Journal, Huffington Post, RT, Breitbart, Alternet, and YouTube.
The adtech “Death Star”
To get a handle on how concentrated these services are in a small number of providers, here is a graph of this “adtech death star” (to use a term now popular among adtech researchers, such as David Carroll and Jonathan Albright). All 25 sites visited are included as nodes with arrows coming out, pointing towards the adtech domains they called during Bill’s three-page visits.
Every single site on our list is connected in this network. In fact, there are so many third-party calls to overlapping services that the graph is hardly readable. At the center of the graph, however, we find the services used by just about every site: the adtech nexus of Google and Facebook.
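A graph like this is just an edge list under the hood. The sketch below emits Graphviz DOT text (one arrow per site-to-domain call, with toy data in place of the full 25-site capture), which tools like Graphviz or Gephi can render:

```python
def to_dot(site_calls):
    """Emit a Graphviz DOT digraph: one arrow per site -> adtech-domain call."""
    lines = ["digraph adtech {"]
    for site, domains in sorted(site_calls.items()):
        for domain in sorted(domains):
            lines.append(f'  "{site}" -> "{domain}";')
    lines.append("}")
    return "\n".join(lines)

# Toy data; the real graph has 25 site nodes and many more domain nodes.
graph = to_dot({
    "site-a.example": {"doubleclick.net", "facebook.com"},
    "site-b.example": {"doubleclick.net", "google-analytics.com"},
})
print(graph)
```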
Let’s unpack the implications of this adtech death star. First, popular adtech services provide funding to hate media sites just like they do to mainstream media sites. That means that if Google, Facebook, Twitter, and Microsoft were to cut off access for hate media, hate organizations would have a harder time raising money to support their sites.
Second, services like Google and Facebook use data collected from each of these sites ― including the hate media sites ― to “improve” the services they provide on others ― like the content served in Google search results, Facebook home feeds, and ads across the internet. Wonder why, for months, a search for “Did the Holocaust happen?” on Google returned Holocaust-denial sites as the top search results? This is part of the answer.
It’s not just about ads, though. Since sites like Breitbart and Daily Stormer base their content around embedded tweets, Facebook posts, YouTube videos, and often longer-than-fair-use excerpts of mainstream news articles, cutting off access to that content will make it harder to generate a “news” site that attracts visitors. And for sites like Daily Stormer, the “news” serves as the gateway to the forums, where the more extreme hate speech is found, and where much of the organizing takes place. That means that, if content platforms disallow hate media sites from embedding their content, there will be fewer mobilized extremists threatening and harassing citizens online and in person.
But there’s still another side here. If Google, Facebook, and company have content and adtech networks that effectively fund hate media, Google, Facebook, and company also have significant business income from hate media. Google may not intend to monetize hate speech and extremism, but Google makes money off those ads, just like the extremist sites do. And Google makes money off of ads on those RT videos on YouTube that pop up on extreme hate sites, and which form a major share of the misinformation, disinformation, and hate content shared on extremist social media. This gives these companies a financial reason to resist deleting hate media from their platforms and revoking access to hate media sites wanting to use their services.
But it also presents a strategy to combat hate online. In recent weeks, a number of major companies ― even the UK government ― pulled ads from YouTube and other Google services when they were made aware of their ads appearing on hate media sites. Google did not remove the hate media from their platforms, but they did remove advertising from certain YouTube accounts and provide advertisers with more control over the kinds of sites their ads appear on. This means that if we want to stop, or significantly slow, the spread of hate media online, we need to expose the advertising networks that sustain hate media, and inform companies when their ads appear on hate media sites.
However, as Bill Fitzgerald makes clear, displayed ads aren’t everything. Background third-party connections matter, too. In fact, they’re often more important because they aren’t as visible, and therefore are not held accountable. Hosting services matter. The fact that YouTube is one of the most prominent sources for extremist content on social media, including both hate media and misinformation, is a huge problem. We need to let Google know that it’s their problem to a significant degree and pressure them to fix it. The same goes for Apple, which makes it easier for hate media podcasts to reach a broader audience.
Halting hate media
Imagine a world where hate sites couldn’t do any of these things…
- embed content from YouTube, Facebook, or Twitter
- violate mainstream media outlets’ copyrights without facing legal action
- make money off of ads from Google, Facebook, or other mainstream companies
- list podcasts in the iTunes database
- post content to social media
- have site content appear on Facebook or Twitter via their Open Graph/Twitter Cards services (with pictures, highlighted headlines, etc., all of which boost traffic) … or even at all
- appear in Google search results
This isn’t hard to achieve! But it’s largely dependent on companies like Google, Facebook, Twitter, Apple, and Microsoft to make it happen. Again, they may not have intended their platforms and tools to facilitate hate media, but hate media depends on those tools to operate at the scale we’ve seen in recent years. That means the companies that control those platforms and tools are in the best position to combat hate media and disinformation on a large scale.
As civil rights groups like the SPLC bring the fight to the hate media perpetrators themselves, media platforms, adtech providers, and advertisers need to recognize the role they play and start to combat hate media themselves. It won’t solve the problem overnight, but it will make it significantly harder for hate groups to operate, recruit, and mobilize. And combined with efforts on the legal front, we just might win.