Fake news won’t go away by itself

Ross Peter Nelson
4 min read · Jan 10, 2017


Fake news is stinking up the internet

In the aftermath of the US election, there have been a number of calls for high-tech companies to work harder to suppress so-called fake news. Just this weekend, German authorities warned the Breitbart organization against spreading “hate and propaganda” after Breitbart erroneously reported that a church had been set ablaze by a Muslim mob. In reality, a small fire broke out at the church due to stray fireworks on New Year’s Eve.

It’s impossible to know just how much fake news contributed to the outcome of the elections. However, it is possible to measure how often the fake stories were viewed. An investigation by reporter Craig Silverman showed that the top 20 fake news stories on Facebook outperformed the top 20 legitimate stories as the election entered its final months. Though Facebook protests that it is not a media outlet but a social platform, research shows that 62% of Americans go to the site for news.

While there are certainly things that Facebook and other internet sites can do to lessen the impact of stories that consist of disinformation or propaganda, relying on technology is not going to solve the problem. Indeed, a propensity to this sort of thing is practically wired into the internet as we know it, due to a confluence of economics and human psychology.

In the early days of the internet, the web was lauded for democratizing publishing. One no longer needed a printing press to get their message out. You could post a web page and anyone in the world could read it. While there is almost no cost at small scales, once you start serving large amounts of data to millions of people, you have to pay for rooms full of computers and for the network infrastructure to transmit that data.

If you were providing a service that was in demand, and Google’s search engine is an excellent example, you needed a way to pay for all those resources. One way, of course, is a subscription model: the user pays for the service. The method that Google chose (along with countless others) was advertising. A third party pays for your service in exchange for a bit of your time and attention. The advertiser is not acting altruistically; it also has an agenda, typically a product to sell, so its goals do not necessarily align with those of the service provider or the consumer, but both are willing to make the trade-off.

In particular, the advertiser wants as many people as possible to see their message. To increase exposure and improve ad revenue, content sites began to use sensationalist headlines or images as “clickbait” to lure more readers in. Sometimes the underlying stories were legitimate, and the practice seemed like little more than a variant on the “if it bleeds, it leads” mentality of TV news. Others were of the “Elvis ate my baby” sort, the tabloid era’s fake news.

Political partisanship generated the next level of escalation, and that’s where human psychology ties in. We all like to think we’re smart. In fact, confirmation is a dopamine trigger: the brain rewards itself with a chemical congratulation when it’s told it was right all along. So if you believe that politician X is evil, and a news story reinforces that perception, you get a little dose of a powerful drug. This is what makes people seek out political bubbles.

In the 2016 election, people discovered that it was possible to earn significant incomes — as much as $30,000/month — from advertising revenue generated by fake news. Many of these people didn’t have a political axe to grind; in fact, one outlet tracked down a 17-year-old from Macedonia who had no political position, just a desire to make money. He found that generating fake news was an easy way to do so.

Since advertising generates the revenue, the only way to combat fake news is to make it unprofitable. This is complicated by the way advertising is sold. Companies do not typically make their arrangements directly, the way a beer company might buy an ad during the Super Bowl. Instead, they go through third-party brokers. When food giant Kellogg’s discovered ads for their products were appearing on Breitbart, they instructed their media suppliers to pull the ads. Most corporations don’t want to be associated with racism, homophobia, and outright lies, but if they don’t know the connection is there, they may be supporting it by default.

While it is ultimately on these corporations to make the right choice, the average citizen can help. If you run across a headline that looks suspicious or an offensive site, take a screenshot (Command-Shift-4 on the Mac, Alt-PrintScreen on PC, Wake+Home on iPhone/iPad, Wake+Volume Down on Android), and email the image to the advertiser(s), telling them you are disappointed that they are financing such a site.

This technique will be most effective if you target sites that are blatantly fake, or support racist and intolerant viewpoints, rather than sites with which you have a political disagreement. If you’re a Twitter user, the Sleeping Giants are actively messaging corporations in the same way, and you can work with them as they outline in their FAQ. They also have a Facebook page.

As long as people can profit from fake news, they will. It’s up to us to see that they don’t. Notifying advertisers is a reasonable short-term solution. Longer term, we need to think about how the internet can better support publishing mechanisms that aren’t so easily abused.
