I went to jail for posting fake news on Facebook

No I didn’t.

While you are here, let’s talk about fake news. I will ask three questions and try to answer them.

  • Does Facebook have the responsibility to filter fake news for its users?
  • Do we have the right technology to filter fake news?
  • Does filtering fake news promote high quality journalism?

On Facebook’s responsibility

[Source: Imagflip.com]

When you advocate that Facebook should deal with the fake news problem, in whatever way, the hidden assumption is that Facebook is responsible for the fake news it feeds its users.

Let’s dig deeper into this assumption, with something tasty.

Blame the pizza delivery person

Say your friend ordered a big pizza to cheer you up after the bitter election. She cares about you; she just ordered the wrong topping, and you hate olives on pizza. The delivery person comes to your door and shows you the pizza. You realize it is delicious except for the olives. Would you ask the delivery person to pick the olives off for you? Of course not, that’d be crazy.

We don’t let the pizza delivery person adulterate our pizza. We do not blame UPS for delivering low-quality or counterfeit goods we ordered from some sketchy eStore. The logic is obvious and intuitive in the physical world: we don’t blame the carrier for the quality of our atoms.

Atoms are hard to copy, modify, filter and alter, so we don’t hold the messenger accountable for what’s in the package. Bytes, on the other hand, are quite easy to copy, modify, filter and alter. It is the feasibility of filtering bytes that lets us hold Facebook accountable, but feasibility alone should not be the reason we ask.

Algorithmic censorship

To hold Facebook accountable, we ask Facebook to stop being a neutral content distributor and become a content filter. As glorious as that sounds, we are essentially trying to grant Facebook the right to censor what people perceive in the Facebook world.

In Mark Zuckerberg’s post, he outlined a plan to combat fake news, and the first step is to use an algorithm to decide what’s fake and what’s not before users report it themselves. In other words, we are letting an algorithm decide what is fake and what is not.

In the good old days, Richard Nixon had to call The New York Times, a publisher, to try to censor information he didn’t want the public to know. And of course, when he made that request, the Times held the executive branch accountable through a landmark Supreme Court case. It’s so much easier now: the algorithm will do it in the pipe, no matter how hard publishers or subscribers push back, and there will be no evidence to sue over when the algorithm is biased. Facebook will ultimately control what people see and hear, because it controls all the goods in the marketplace of ideas. If that is not scary, you probably have not heard of North Korea.

Algorithmic neutrality is a myth

You may rightly argue that the algorithm is only intended to filter out fake news, not good journalism. Facebook will not control what we see; it will just, you know, remove weeds from a walled garden.

Algorithms are fundamentally biased

Forget about unbiased algorithms; they don’t exist. No matter how well intentioned the algorithm or its designer is, it is going to be biased. Those of us who work in the AI field know this all too well.

You don’t have to know computer science to understand why, because it is really common sense. Algorithms are rules extracted from experience, and the data we use to train an algorithm are inherently biased. Just as people with different upbringings hold different opinions, an algorithm trained on a set of data will inherit the bias in that data. The subtlety in that data, that experience, is beyond anybody’s ability to fully understand. As a result, the algorithm is born with bias, bias that implicitly censors one view over another.
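To make this concrete, here is a minimal sketch in Python. The posts, labels and topic are invented for illustration, and the classifier is a stock scikit-learn model, not anything Facebook actually runs; the point is only that a model trained on skewed labels reproduces that skew.

```python
# A minimal sketch with invented data (not Facebook's system): a stock text
# classifier inherits whatever skew its training labels carry.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Imagine the people who labeled the training set flagged posts about
# protests as fake far more often than posts about markets.
train_posts = [
    "protest erupts downtown",          # labeled fake
    "protest blocks the highway",       # labeled fake
    "protest turns violent overnight",  # labeled fake
    "markets rally after earnings",     # labeled genuine
    "markets dip on weak earnings",     # labeled genuine
    "markets close flat this week",     # labeled genuine
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = fake, 0 = genuine

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_posts)
model = LogisticRegression().fit(X, labels)

# A perfectly factual post that happens to mention a protest is pushed
# toward the "fake" label, purely because of the skew in the training data.
test = vectorizer.transform(["peaceful protest ends without incident"])
print(model.predict(test))        # likely [1], i.e. flagged as fake
print(model.predict_proba(test))  # probability mass leans toward "fake"
```

Nothing about the test sentence is false; it simply resembles the material that the labelers happened to distrust.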

Algorithms are fundamentally superficial

The algorithm would not know the tension in the Arab world in Tunisia in 2011, and it would categorize a fruit seller being slapped by a policewoman as fake news. It would not know the racial tension in Ferguson in 2014, and it would fact-check “Hands up, don’t shoot”, deem it factually inaccurate (some people would report it too), and thus bury the news. The Arab Spring and the Black Lives Matter movement would never have happened if we only looked at the reported facts in the news and not at what drives the news. The human world is complicated, while algorithms are shallow by design. An algorithm that tells fake news from genuine news does exactly that and nothing more. The real force behind the news is dampened, and the news becomes a collection of trivia.

Algorithms can be exploited

Deciding what’s fake is fundamentally a binary classification problem. If you don’t trust an algorithm to pick the right president for you (mostly because the algorithm cannot factor in every concern and consideration you have), you shouldn’t trust an algorithm to pick what is fake news and what is genuine news for you. It is your judgement to make, not the algorithm’s.

By training an algorithm to filter content for us, we are creating the ultimate echo chamber. It is safe to predict that once fake news filtering is implemented, users will flag information they do not like as misinformation, and thus teach the algorithm to filter unwanted content, not fake content.
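As a second toy sketch, again with made-up headlines and report counts rather than any real pipeline, consider what a filter driven directly by user reports ends up learning:

```python
# A toy feedback loop (hypothetical data, not Facebook's actual pipeline):
# users report stories they dislike, and a filter trained on those reports
# learns to suppress "disliked" content, not fake content.
from collections import Counter

posts = [
    {"headline": "City council passes tax cut", "factually_true": True},
    {"headline": "Tax cut boosts local hiring", "factually_true": True},
    {"headline": "City council debates tax hike", "factually_true": True},
    {"headline": "Tax hike proposal moves forward", "factually_true": True},
]

# Suppose most users on this feed dislike tax-hike stories and hit
# "report as fake" on them, regardless of whether they are true.
reports = Counter()
for post in posts:
    if "hike" in post["headline"].lower():
        reports[post["headline"]] += 3  # heavily reported by annoyed users

# A naive filter keyed on report counts: anything reported often enough is
# treated as fake, even though every story in this toy feed is factually true.
REPORT_THRESHOLD = 2
for post in posts:
    filtered = reports[post["headline"]] >= REPORT_THRESHOLD
    print(post["headline"], "->", "filtered" if filtered else "shown")
```

Every headline in this toy feed is true, yet the tax-hike stories get filtered simply because enough users disliked them.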

Once Facebook has a way to filter fake news, it will become the best friend of totalitarian governments. In China, where I grew up, the government actively censors the Internet. Now it becomes even easier: just call Mark and tell him that this and that are fake. Facebook becomes a mediator, and I can guarantee you that the party with more resources will get to decide what is fake and what is not. Facebook becomes a propaganda lever for governments, for special interest groups, for whoever has more resources. That is not the intention, but it is the destiny of such an algorithm.

On journalism and the marketplace of ideas

One amazing thing I learnt from economics is that the invisible hand acts in unexpected ways. I am predicting here that Facebook’s fake news policy will kill good journalism instead of saving it.

It is well known that high-quality journalism is a good but rare thing, and that we as a society should support it. Now the question is: is fake news hurting high-quality journalism? If so, should we eliminate fake news?

Fake news definitely draws traffic away from sites like The New York Times and The Washington Post, since reader attention is a fixed pool. However, the very existence of fake news is what makes the value of good journalism visible. Eliminating it has unintended consequences.

Average content drives out good, paid content

Eliminating fake news, if that is at all possible, would leave two goods floating in the marketplace of ideas: average journalism and good journalism. Now, as a consumer, after Facebook’s filtering I lose the signal of what’s good and what’s bad, and with it the signal of what is average and what is good, because I have already outsourced the judgement to Facebook. We hoped to distinguish good from bad, and as a result we can no longer distinguish average from good. So how do I pick? I go with what’s cheap.

This is a commonly observed phenomenon: the majority of people buy the store brand when choosing between a store brand and a premium brand (supposedly of better quality, but consumers cannot tell the difference). Average-quality journalism (your friends’ posts, blog articles, tweets) is the store brand of Facebook. With fake news floating around, you know the value of good journalism, and you don’t necessarily trust whatever old John posts.

Now, with the filtering, the average quality seems better, and everything seems to be “true”. In this world, I certainly would not pay $9.99 per month for The New York Times if I can get a somewhat true, filtered news source through Facebook. The value proposition of the Times is diminished once fake and low-quality news is out of the marketplace.

In addition, filtering reduces readership for high-quality journalism. The New York Times relies on a healthy funnel from Facebook and Google to drive traffic to its site so it can win subscribers. In the short term after a fake news policy is implemented, it may get a few percent more traffic because fake news is out of the marketplace. In the long term, however, the vacuum will be filled by average news sources that barely pass Facebook’s standard. They are cheaper to produce, cheaper to maintain and, of course, cheaper for the customers. No doubt customers will pick them instead.

Facebook should focus on exposing the signals that reflect the quality of the news, not on silently filtering fake news and hiding the quality signal. That way, it reduces the asymmetry of information and lets users know what is good and what is not.

Final words

The only newspaper ever known to the human race that never said anything fake is, of course, Pravda, or “Truth”, from the USSR. We can blame Facebook for fake news, but I don’t think we really want a world where Facebook uses its algorithm to turn the NYT, The Washington Post and every other news agency into one version: the Pravda.

There is an easy and human approach to this: just unfriend the sucker who spreads fake news. Problem solved. Don’t trust the machine, especially the machine that feeds you information. Fast food fabricated by machines gives you empty calories; a fast-food-style news feed fabricated by machines gives you an empty and biased mind.

I sincerely feel that forcing Facebook to deal with fake news is dangerous. The consequences go way beyond “I’d never need to see the crazy propaganda your high school classmate posts”. Any intellectual with integrity, liberal or conservative, should oppose Facebook’s fake news policy.
