What Internet Platforms are Doing About “Fake News” and Why it Matters

Mark MacCarthy
5 min read · Apr 5, 2017


This first appeared in SIIA’s Digital Discourse Blog.

At last week’s RightsCon in Brussels, much of the talk was about “fake news” and what to do about it. I was on one of several panels devoted to the topic and found the conversation enlightening. Here’s what I said and some of my reactions from the panel.

The panel’s title was “Resisting Content Regulation in the Post-Truth World: How to Fix Fake News and the Algorithmic Curation of Social Media.” So, unsurprisingly, the panelists largely agreed that the government should stay out of the way. I met no resistance when I said that freedom of expression means that governments should not determine what is and what is not fake news; that’s a path to censorship, and we don’t want to go there.

I also got buy-in on my second big point, which was that Internet platforms are playing, and ought to play, a crucial role in controlling the spread of fake news.

This role has two distinct components. Platforms have to determine when a particular item is fake news and what to do once they have made that determination. They also have to take separate steps to ensure the integrity and security of their systems against those who would abuse them for profit or political manipulation.

And I did not experience much pushback against my third point: that free expression also means that governments should not mandate, regulate, or oversee these needed platform programs against fake news.

So, what should platforms be doing? What are they doing?

They have to have policies and procedures in place reasonably designed to detect fake news. Is a particular report obviously and deliberately false? Is it meant to deceive for profit or political disinformation?

Platforms need to give their users an easy way to flag something they think is fake news, and then they need a way to decide whether it really is fake.

Platforms are not able to make this judgment themselves, and they shouldn’t try. But they can work with authoritative third-party validators who have the expertise and experience to assess a report.

One person in the audience pointed out that a collaborative journalism project called CrossCheck has been operating in France in advance of the upcoming Presidential election to label specific reports as “misleading” when the facts do not bear them out. Internet platforms can rely on these judgments, rather than try to determine the fakeness of fake news themselves.

Once the fact checkers have made a determination, the platform has to take appropriate steps, such as labeling the story as disputed and directing users to where they can learn more.
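A minimal sketch of that flag-review-label workflow might look like the following Python; the threshold, field names, and data model here are illustrative assumptions, not a description of any platform’s actual system.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    UNREVIEWED = "unreviewed"
    DISPUTED = "disputed"
    NOT_DISPUTED = "not disputed"


@dataclass
class Story:
    url: str
    flag_count: int = 0
    verdict: Verdict = Verdict.UNREVIEWED
    fact_check_url: str | None = None  # the "learn more" link shown to users


# Hypothetical threshold: enough independent user flags to justify
# sending a story to a third-party fact checker.
REVIEW_THRESHOLD = 5


def flag_story(story: Story, review_queue: list[Story]) -> None:
    """Record a user flag; queue the story for external review exactly once."""
    story.flag_count += 1
    if story.flag_count == REVIEW_THRESHOLD and story.verdict is Verdict.UNREVIEWED:
        review_queue.append(story)


def apply_fact_check(story: Story, disputed: bool, fact_check_url: str) -> None:
    """Record a third-party validator's verdict: the platform labels the
    story and links to the fact check; it does not remove the story."""
    story.verdict = Verdict.DISPUTED if disputed else Verdict.NOT_DISPUTED
    story.fact_check_url = fact_check_url
```

The design choice worth noticing is that the external validator’s verdict drives a label and a link, never an automatic takedown.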

It is crucial to distinguish these steps from the procedures needed to preserve site integrity. This bucket of issues raises questions such as the following (a rough code sketch follows the list):

  • How is this piece of fake news getting into the system?
  • Is a fraudulent account involved?
  • Are they who they purport to be?
  • Who is behind it?
  • What’s their motivation? Are they trying to make money? Are they part of a systematic attempt to spread disinformation?
  • Can we shut down these accounts?
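
Most of these are investigative questions for human trust-and-safety teams, but some of the underlying inputs can be surfaced automatically. Here is a rough sketch; every field name and threshold is an invented assumption, not any real platform’s detection logic.

```python
from datetime import datetime, timezone


def integrity_flags(account: dict) -> list[str]:
    """Return hypothetical red flags for an account.

    The fields and thresholds are illustrative assumptions; real
    platform defenses are far more sophisticated and proprietary.
    """
    flags = []
    age_days = (datetime.now(timezone.utc) - account["created_at"]).days
    if age_days < 7:
        flags.append("very new account")
    if account["posts_per_day"] > 200:
        flags.append("posting at an inhuman rate")
    if account["duplicate_post_ratio"] > 0.8:
        flags.append("mostly copy-pasted content")  # possible coordinated campaign
    if not account["has_verified_contact"]:
        flags.append("no verified contact information")
    # A human investigator, not this function, decides whether the
    # account is fraudulent and whether to shut it down.
    return flags
```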

In addition to understanding these different processes and procedures, it is also important to frame the problem clearly. A key point is that the term “fake news” is regularly used for distinct phenomena. There is not a shared definition.

And it is often a term of abuse rather than an objective concept. Much use of the term is expedient, driven by agendas other than identifying and dealing with deceptive, manipulative news items.

According to some conceptions, satirical newspapers would be fake news; others use the term to refer to slanted opinion, to political opponents, or to shoddy, careless journalism.

But platforms need a notion of “fake news” that can be used to put real systems in place to deal with a recognized problem. From the platforms’ point of view, the key element in any actionable conception is whether the item is deliberately false and published with intent to deceive. The deception might be in the service of profit making or of political manipulation, but the defining feature is that fraudulent intent.

It is worth noting that many of the platforms involved have found that the vast majority of fake news is put there to make money.

Shady operators have discovered that people will click on links that comport with their pre-existing political beliefs and that it can be very profitable to flood platforms with patently false news stories that cater to this inclination.

In any case, public conversation on what to do about fake news is difficult when people do not agree on the meaning of the key term.

However, platforms do not and should not treat political commentary, satire, entertainment, bad journalism, or opinions that some simply don’t like as fake news.

When a journalist is trying to get the story right, and gets it wrong, that’s not fake news. And it is misleading to think of it in the same way as a group of Macedonian teenagers trying to make money by deceiving people.

Some think the problem can be automated. These companies write great code. Why can’t they code for fake news?

The answer is that the problem needs human judgment, experience, and expertise. No algorithm will know by itself whether the British Prime Minister has resigned today or whether she has submitted Britain’s Brexit letter to the European Union. And with human judgment there will be mistakes — false positives and bad stuff that slips through undetected.

That said, there are things platforms should be able to do on the technical side — signals or flags in news stories that might select some for human review and make the job more efficient. The role of algorithms is to augment and enhance human judgment, not to replace it. Algorithms should not determine which news stories are true or false. Their role is not to take down news stories automatically, but to flag news stories for human review.
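To make that division of labor concrete, here is a toy sketch of how a handful of signals might be combined into a score that merely orders the human review queue; the signal names and weights are invented placeholders.

```python
# Hypothetical signals a platform might compute for each story; the
# names and weights are illustrative assumptions, not any real
# ranking system.
SIGNAL_WEIGHTS = {
    "user_flags": 0.4,              # share of users who reported the story
    "source_low_reputation": 0.3,   # the domain has a poor track record
    "sharing_velocity": 0.2,        # unusually sharp spike in shares
    "headline_body_mismatch": 0.1,  # headline diverges from article text
}


def review_priority(signals: dict[str, float]) -> float:
    """Combine normalized signals (each in [0, 1]) into one score.

    The score only orders the human review queue; it never takes a
    story down or decides truth by itself.
    """
    return sum(weight * signals.get(name, 0.0)
               for name, weight in SIGNAL_WEIGHTS.items())
```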

Some news organizations have developed tools that their reporters use to identify fake news. Reuters, for example, has developed Reuters News Tracer, which its reporters use to assess whether a tweet is likely to be false or misleading. For busy journalists, such an augmented news feed is invaluable. But it is a tool for journalists, who have a depth of experience in newsgathering and assessment and who make their livings using these skills.

In contrast, platforms should use algorithms to identify suspected fake news and then pass it on to news organizations or other reliable partners to determine whether it is fake news. Platforms should not use algorithms to determine for themselves what fake news is.

This whole set of problems is not brand new, but it is more urgent as it becomes clearer and clearer that a flood of fake news at crucial times can impact elections. Internet platforms recognize that the need to control fake news is urgent. The good news is that they are stepping up to this responsibility with comprehensive and evolving programs to identify and respond to fake news on their systems.


Mark MacCarthy

Senior Fellow and Adjunct Professor, Communication, Culture & Technology Program, Georgetown University