I recently attended an all-day workshop on fake news at one of the top law schools in the US. By the end of the day, this roomful of impressive experts, among them representatives from Google, Microsoft, the New York Times, and Buzzfeed, couldn’t even agree on a definition of “fake news.” Worse still, every suggestion for controlling or regulating this indefinable kind of news was immediately shot down.
So what should we think about new efforts by Google and Facebook to use algorithms to protect the public from fake news?
In my view, we should be both unimpressed and frightened.
We should be unimpressed because fake news isn’t really the problem most people think it is, and we should be frightened because of the dangers in allowing big technology companies to decide which news stories are legitimate.
Although the fake news stories about Hillary Clinton that proliferated rapidly on the internet last November probably cost her many votes, fake news is not, generally speaking, a serious threat to democracy for two reasons:
Fake News Is Like Billboards
First, fake news is competitive. It is exactly like billboards and TV commercials — many of which also contain false claims. For every fake news story you post about me, I can post two about you. I can also post true stories about me to try to correct the record or even true stories exposing the fake news stories you have been posting. Sound familiar? Mud-slinging is endemic to politics and always will be, and no algorithm will ever stop it.
But isn’t the rapid proliferation of fake news on the internet an inherently new kind of threat? Absolutely not. As I said, fake news is competitive, so rapid proliferation works for all parties; it doesn’t inherently favor one party over another, even though it may have favored Republicans recently. It’s true that the Clinton campaign was unprepared to counter the rash of fake news stories being generated by Macedonian teenagers just before the 2016 election, but I’m guessing no serious campaign organization will make that mistake again.
The speed of proliferation is not in and of itself a problem. We mistakenly have come to believe that the rapid proliferation of ideas or information is a new phenomenon, made possible only in recent years because of the growth of the internet. We somehow have forgotten that long before the internet was invented, news stories and false rumors often spread through populations at lightning speed: news about the stock market crash of 1929, about V-J Day in 1945, about the assassination of John F. Kennedy in 1963. On March 19, 1935, a baseless rumor about a beaten child quickly spread throughout the Harlem area of New York City, resulting in widespread rioting, multiple deaths, and millions of dollars in property damage. People were gossipy, gullible social beings long before social media platforms were invented.
You Can See Fake News
Second, fake news stories are visible sources of influence. When, through Facebook’s newsfeed or Google’s search engine, you come across a story claiming that Hillary Clinton is a Martian, you know you are being influenced. You can see the story in front of you, just as you can see a physical newspaper or a billboard or TV commercial. Visible sources of influence impact people quite predictably: people pay attention to information that supports their biases and beliefs, and they ignore or reject the rest.
Far more dangerous are new types of influence that people can’t see. In recent years, I have discovered and studied two such sources of online influence — the Search Engine Manipulation Effect (SEME) and the Search Suggestion Effect (SSE) — which are entirely invisible to most people and which are unprecedented in human history. Biased search rankings can have a dramatic impact on people’s opinions, purchases, and voting preferences, and so can those instant search suggestions you see when you start typing a search term into Google’s search bar. These types of influence are nothing like billboards or fake news stories because virtually no one can detect the bias, and when people can’t see sources of influence, they mistakenly conclude they are making up their own minds. Worse still, the very few people who can spot the bias in search results tend to shift their views even farther in the direction of the bias; being able to spot a bias doesn’t necessarily protect you from it.
Fake news stories are troubling; they might be impacting hundreds of thousands or even millions of people every day, although a recent study conducted by Jacob L. Nelson of Northwestern University suggests that the number of people affected by fake news stories (as he defines that term) is “tiny” compared with the number of people affected by legitimate news stories — “about 10 times smaller on average.”
Whatever that proportion is, let’s put this issue into perspective: Favoritism in search results and search suggestions is likely affecting billions of people every day without their knowledge. As sources of influence, news stories in general and fake news stories in particular are relatively trivial in their impact.
If you doubt that, consider one simple manipulation that Facebook has at its disposal: sending targeted messages to people in just one demographic group. If, in 2016, Facebook had chosen to send “Register to vote!” reminders to people in just one political party, and if, on Election Day, the company had sent “Go out and vote!” reminders only to people of that same party, millions of votes would have been shifted to that party’s candidates with no one the wiser. Even newspaper magnate William Randolph Hearst could never have dreamed of power of this magnitude: targeted messaging on a massive scale, controlled by the executives of a single company that has no competitors.
The Scary Stuff
Because fake news stories are both visible and competitive, I don’t find them very frightening, and Nelson’s data are also reassuring. What does scare me is the idea that rapacious corporations like Google and Facebook are going to decide what fake news is, then decide which news stories meet their criteria, and then either label those stories as suspect or, in the extreme case, make those stories disappear.
As I learned at that law school conference, it is unlikely that any group of reasonable people is ever going to agree on a definition of fake news, so I’m not going to bother offering one here. But to get a sense of how difficult this problem is, please consider the following questions:
Is a news story fake just because it gets some things wrong? How much does it need to get wrong for us to call it “fake”? Twenty percent of the facts it reports? Fifty percent? Isn’t the accurate part of the story still accurate?
Is an accurate story released by a bogus news site — say, a website that pretends to be part of a news organization that doesn’t exist — legitimate or fake? Remember, this story is accurate; it’s just the news organization that’s fake. What should we do: run the story or bury it?
Is an accurate story released by a news site (like Sputnik) that is associated with an adversarial foreign government (like Russia) legitimate or fake? I ask this in part because the last time I published an article in Sputnik, mainstream American news organizations immediately brushed my article aside, even though I was accurately reporting on new research I had been conducting on Google’s autocomplete and even though Sputnik published my article exactly as I had given it to them; they didn’t change a word.
Is a news story fake just because it is slanted toward one political perspective? If so, couldn’t many stories released by both Fox News and The New York Times be considered fake? Couldn’t all stories released by Breitbart automatically be suppressed?
Do satires qualify as fake news stories? Is, for example, the story I published last year about Google donating its search engine to the American public fake news? Should it be expunged? How can an algorithm distinguish fake news from satire when many people have trouble doing so?
At the law school conference, attendees also talked about motives. Those teenagers in Macedonia might have been agents of the Kremlin, but their main motive in inventing bizarre stories about Hillary Clinton actually seems to have been to make easy money. Just how seriously should we take such people? Should they be prosecuted for trying to influence a foreign election or praised for finding a cool new way to get rich?
Whether you use people or an algorithm (which is just a set of rules written by people) to try to answer such questions, you turn a problem that people can generally get their heads around — the proliferation of fake news stories — into a nightmarish set of problems that make the head spin:
The false positive problem. If you or your algorithm correctly identify and quash a fake news story (and I say that lightly, because we can’t even define fake news), this is called a “true positive” — the correct identification of what you’re looking for. But what if you mistakenly identify a valid news story as fake? This is called a “false positive,” and from a public policy perspective, it is a disaster, just like a false positive in a test for cancer. You have now told the public — the whole world, maybe — that a valid news story is invalid; perhaps you even deleted the story from feeds so that virtually no one in the world can see it.
Having been a programmer most of my life, I can guarantee you that any algorithmic system meant to quash fake news will inevitably produce false positives. When do the dangers of obliterating real news stories outweigh the dangers of allowing fake news stories to exist?
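The base-rate logic behind this tradeoff can be sketched in a few lines of code. This is a toy illustration with hypothetical numbers, not a model of any real filtering system; the 10:1 ratio of legitimate to fake stories loosely echoes Nelson’s finding, and the detection rates are invented for the example:

```python
def filter_outcomes(n_fake, n_legit, true_positive_rate, false_positive_rate):
    """Return (fake stories correctly flagged, legitimate stories wrongly quashed)."""
    caught = n_fake * true_positive_rate
    wrongly_quashed = n_legit * false_positive_rate
    return caught, wrongly_quashed

# Hypothetical: 100,000 fake stories, 1,000,000 legitimate ones, and a
# filter that catches 90% of fakes at the cost of a 5% false-positive rate.
caught, quashed = filter_outcomes(100_000, 1_000_000, 0.90, 0.05)
print(caught)   # 90000.0 fake stories correctly flagged
print(quashed)  # 50000.0 legitimate stories wrongly suppressed
```

Because legitimate stories vastly outnumber fake ones, even a seemingly modest false-positive rate suppresses tens of thousands of valid stories — in this example, more than half as many real stories quashed as fake ones caught.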
The power problem. In case you haven’t noticed, companies like Google and Facebook already have far too much power. Do we want to give them yet another kind of power — the power to decide which news stories are valid and which are not?
The only good news here is that if these companies proceed with aggressive programs to suppress fake news stories, they will move closer toward the crosshairs of regulators. Until now, companies like Google and Facebook have claimed protection under section 230 of the U.S. Communications Decency Act (CDA 230), which states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). In other words, Google and Facebook aren’t responsible for anything they show you because they don’t originate the content they show you; they’re just middlemen, not “publishers.”
But the more obviously these companies act like publishers — picking and choosing which news stories are valid — the less protection they will have under CDA 230. As I noted in “The New Censorship,” Google is now the world’s biggest censor; acting as a Super Editor to make decisions about news stories will make its role as censor more obvious to regulators, legislators, and judges.
The technological tug-of-war. More than a decade ago, Google laid down the law and said, more or less, “How dare you try to trick our search algorithm into listing your crappy website higher in our search rankings? We will crush you like a bug.” Did SEO (Search Engine Optimization) experts — the people whose job it is to push your business higher in search rankings — respond by lying down and dying? Nope. In fact, by early 2016, the SEO industry was doing an estimated $65 billion a year in business. Gaming Google is very profitable, it seems. Google keeps adjusting its search algorithm to protect itself from being gamed, and the gamers respond with better algorithms to game it.
I mention this because the same thing will happen with algorithms that try to suppress fake news stories. People who want to spread such stories will simply program around the algorithms.
The Trump effect. Remember how Donald Trump made legitimate mainstream news organizations into “enemies of the people”? No matter how companies like Google or Facebook try to stigmatize fake news stories, leaders like Trump will have the final say about how seriously people take such stories. If a demagogue tells his followers that the only news reports worth believing are the ones from the IdiotsVille News Service, the main effect of the algorithm that labels those reports fake will be to bring more attention to them, while the demagogue righteously rails on about “censorship” and “oppression.”
Fake news is troublesome, for sure — always has been and always will be. But allowing big, unregulated technology companies to manage fake news — in other words, to manage all our news — is potentially far more harmful than fake news itself.
EPSTEIN (@DrREpstein) is Senior Research Psychologist at the American Institute for Behavioral Research and Technology in Vista, California. He holds a PhD from Harvard University and has published fifteen books on artificial intelligence and other topics. He is also the former editor-in-chief of Psychology Today. An earlier version of this article was written for Sputnik News and appeared here. Sputnik agreed to publish the manuscript that was submitted to them without editorial changes in order to assure its accuracy and integrity.