The Anatomy of Fake News

Prashant Singh
Published in Appy Thoughts
8 min read · Mar 3, 2017

Following a passionate (heated?) discussion at MediaNama’s event on Fake News last week, many people asked me to publish my views on the subject. In this post, I would like to address a few aspects of this issue, on which everyone from Twitter pundits to Donald Trump has a point of view:

Is Fake News a new/recent/nascent phenomenon?

L.M. Glackens: The Yellow Press

I don’t think so. The idea of Fake News is probably as old as the idea of news itself, and the same applies to the idea of using fake news for electoral benefit. The first recorded incident of using fake news goes back to Rome in 1522, when Pietro Aretino tried to manipulate the pontifical election by writing wicked sonnets about all the candidates except his favorite one. The New York Review of Books recently published a fascinating post on the history of fake news (link). Eric Burns, in his eye-opening book Infamous Scribblers, provides a fascinating account of how, until the recent past, all mainstream media was involved in politically motivated journalism and the manipulation of public opinion. These tactics involved everything from partial truths to misinformation to outright lies. Have a look at the attached screenshot for a sample.

So Fake News is not new. What is new is that we are now talking about it, and our understanding of it is very binary. We love to think that a news item is either true or false. I believe it’s slightly more complicated than that, and the truth is more nuanced. We need a more detailed framework to understand it.

To explain this, I request you to indulge me a little and allow me to wear my product manager hat for a while.

A news item has two components: the Authenticity of the Information and the Agenda of the Publication. Let us assume that Authenticity of Information is shown on the X axis (as shown below), varying from an outright lie, to a factual and verifiable truth, to a subjective opinion.

In the same fashion, let’s assume that the Agenda of the Publication is shown on the Y axis. Here the pendulum swings from a purely commercial objective (advertisements/page views), to journalistic responsibility, to partisan or ideological servitude for political purposes.

If we put these two things together, we get a framework for thinking about media publications: what drives them and what kind of journalism they practice.

Every publication, every news source, and every news story can be placed in one of the four quadrants here.

Each of these quadrants also has its own optimal method for content generation and distribution.
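To make the two-axis framework concrete, here is a minimal sketch in Python. The numeric scales and the quadrant labels are my own invention for illustration, not part of the original framework:

```python
# Toy encoding of the two-axis framework (scales and labels are illustrative).
# X (authenticity): -1.0 = outright lie ... +1.0 = verifiable truth.
# Y (agenda): -1.0 = purely commercial ... +1.0 = partisan/ideological,
#             with journalistic responsibility sitting near the middle.

def quadrant(authenticity: float, agenda: float) -> str:
    """Place a news item in one of the four quadrants of the framework."""
    if authenticity >= 0 and agenda >= 0:
        return "true but partisan"
    if authenticity >= 0 and agenda < 0:
        return "true, traffic-driven"
    if authenticity < 0 and agenda >= 0:
        return "political fake news"
    return "clickbait fake news"

print(quadrant(-0.8, -0.9))  # clickbait fake news
print(quadrant(0.9, 0.2))    # true but partisan
```

Any publication, news source, or story can then be compared by where its two scores place it.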

As you can see, in three out of four quadrants the dominant distribution model is centralized. Fortunately, for much of history, consumers were also on a centralized channel. The first wave of the Internet pushed media companies to move from paper to the website, but this was more of a format and medium change. Whether a news desk, a newspaper, or a news site, the basic mechanism of information dissemination was still a centralized broadcast. The challenge for media companies was mostly one of content adaptation and discovery. It is no wonder that two of the biggest companies of the first wave were Netscape and Google.

Things got more interesting with the second wave of the Internet. Blogging and self-publishing started a trend that was fueled by the advent of the social web and the smartphone. Instead of traversing the Internet from webpage to webpage, people started traversing from one person to another. Behaviors like back-linking and bookmarking were replaced by Follow, Subscribe, Share, Rate, and Like. Eyeballs shifted to Facebook, Twitter, and WhatsApp. Distribution and creation of content became democratized and, more importantly, decentralized. The genie was out of the bottle. Decentralized content creation means a decentralized and diffused responsibility for ensuring authenticity. The stage was set for the emergence of Fake News.

One interesting aspect of this whole discussion is the role and responsibility of content discovery and distribution platforms like Google, Facebook, WhatsApp, and Twitter in ensuring the authenticity of news hosted on them. The idealistic, knee-jerk response is: the platform is not responsible for the content hosted on it, thanks to the Safe Harbor provision. However, the underlying, unstated assumption in that regulation is that these platforms are common carriers, i.e., content-neutral pipes like phone and postal services. So, just as phone companies and postal services aren’t liable for what people send using their services, the same should apply to Facebook.

But of late, this analogy is not holding up. One reason is that in practice, Facebook’s newsfeed algorithm and Google’s ranking algorithm select and prioritize content from an available pool so that the user sees what’s most relevant to them. In this sense, they are not exactly content neutral. The algorithm embodies and reflects the biases of the people who designed it. The other distinction is that common carriers, like phone and postal services, don’t directly profit from the contents of letters and packages. Google, Facebook, and the rest profit directly from the content, not just from providing the service. So the case for not holding these platforms responsible is not as straightforward as we think.

However, what scares me more is the possibility that these platforms become arbiters of truth and authenticity. Who will decide the moral compass and the guidelines on what’s allowed or appropriate? A government with a political agenda, or a corporation with a commercial (and political?) agenda? Neither option looks good to me.

A [Possible] Solution

When I think about how we can possibly solve this problem, I can’t help but notice one thing: the ranking and sorting algorithms of underlying platforms like Facebook. Have you noticed that, somehow, we were able to keep this thing contained in the pre-social-web version of the Internet? Google was responsible for discovery, and I am sure false stories sometimes made it to the top. However, since its ranking algorithm depended mostly on an externality (the number of backlinks) and not on the content itself, it was difficult and uneconomical for someone to invest in gaming the system for something with a shelf life as short as news. On the other hand, Newsfeed discovery, Shares, and WhatsApp Forwards are optimized for viral, low-shelf-life content. There have been cases where the Newsfeed was used for purposes other than serving the most relevant content to the user. Remember Zynga and FarmVille? Remember how frequently you used to see those notifications in your stream, and how they have vanished now? I am sure part of this decline in visibility is the fall in overall usage and improvements to the algorithm, but I am also sure that one reason is that Zynga is no longer a big advertiser with Facebook. Some informed people make a similar case about Google. For any effective enforcement of authentic news distribution, we need two things:

(1) The primary criterion driving content discovery should sit outside the perimeter of the platform. Google’s backlink is an external attribute, but most of the “signals” in the social web, like “share”, “like”, “fav”, and “follow”, are baked into their respective platforms. Thus, to exercise them, the user has to be on the platform and exposed to it. This puts more weight on the algorithm. We need a better understanding of the Newsfeed algorithm: not control, but visibility and understanding. I haven’t spent much time studying it, but I believe a combination of text mining, epidemiological algorithms, and Bayesian filtering can solve this problem to a great extent.
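As a toy illustration of the kind of Bayesian filtering alluded to above, here is a minimal naive Bayes text classifier over word counts. This is a sketch, not anyone’s production method; the training headlines and labels are entirely made up:

```python
import math
from collections import Counter

# Invented training data: headlines hand-labeled "fake" or "real".
training = [
    ("shocking secret doctors dont want you to know", "fake"),
    ("you wont believe what happened next", "fake"),
    ("miracle cure revealed shocking truth", "fake"),
    ("parliament passes budget after long debate", "real"),
    ("central bank holds interest rates steady", "real"),
    ("court rules on land dispute after appeal", "real"),
]

def train(examples):
    """Count word frequencies per label for naive Bayes."""
    word_counts = {"fake": Counter(), "real": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest log-probability (Laplace smoothing)."""
    vocab = {w for counts in word_counts.values() for w in counts}
    scores = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        # Prior: fraction of training examples with this label.
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            # Likelihood with add-one smoothing for unseen words.
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(training)
print(classify("shocking miracle cure you wont believe", word_counts, label_counts))  # fake
print(classify("parliament debate on interest rates", word_counts, label_counts))     # real
```

A real system would need far richer features (sources, propagation patterns, the “epidemiological” spread signals mentioned above), but the principle of scoring content against learned evidence is the same.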

(2) The platform should have strict guidelines about which “signals” are used as input for content targeting and which are used for advertisement targeting, and it should probably disclose them too. Right now these boundaries are very porous. More than that, since these signals can conflict, the platform should disclose its policy on how such conflicts are handled. For example: would you show a story on the effect of irresponsible tourism on the Everest base camp trail to a user who is a camper, likes the page of an online travel agent, subscribes to a YouTube channel on camping, and has clicked on Travel Deals links in the past? Surprisingly, most of the privacy debate today centers on advertisement targeting, not content targeting. This needs to change. Again, I repeat, we need more understanding and less control. The platform knows best; I am only suggesting that it should disclose more, not that anyone should dictate what it can or cannot use as a signal.

Epilogue:

Source: Daily Sheeple

Any discussion about news and its dissemination mechanism eventually boils down to the question of why. Why do you want to understand it? Who will take responsibility for ensuring this knowledge is not used for things like censorship or clamping down on free speech? These are uncomfortable questions, but the risks are real. Oftentimes, the liberal ideological escape route is to shift the burden of verification onto the end consumer. I believe this is an evasion of responsibility. I am responsible for what I believe and how I act on that information, but my surrounding environment is also responsible for what I am exposed to, and for the stimulation and conditioning provided to me. So if there is an agenda driving that exposure, I need to be informed about it.

A slightly provocative way of putting the same argument: while freedom of speech might give you the right to shout “Fire! Fire!” in a crowded cinema hall, if someone does that and there is a stampede, we should still hold them responsible for doing so.

Disclaimer: This post was originally published at MediaNama (link). Views represented here are my personal views and not those of my employer, my landlord, or my dog, and they might not be mine tomorrow.


I am a co-founder of Signals (http://thesignals.net). I love mobile apps, traveling, and capturing experiences. I blog at http://knowprashant.blogspot.in