You can’t sell news for what it costs to make

My solution might also help defang the fake news threat.

Frederic Filloux
The Walkley Magazine
Aug 6, 2017


Cartoons by Cathy Wilcox, drawn for UNESCO for World Press Freedom Day 2017.

Political science scholars agree on one thing: Nearly every upcoming electoral process is likely to be infected by the fake news plague. “Misinformation”, a broader concept that encompasses intentional deception, low-quality information and hyperpartisan news, is seen as a serious threat to democracies.

How did we end up here?

The first full-scale deployment of fake news on an industrial scale has proven extremely effective. The election of Donald Trump, preceded by a massive wave of false reports reverberating in the social echo chamber, marks the ground zero of the phenomenon. While not a decisive factor in Trump’s victory, it is widely credited with having boosted the extreme right in the US — and later in Europe.

The second factor is the ease with which these techniques can be implemented. All the tools, as well as the talent to use them, are for sale. The Dark Web harbours vast and inexpensive resources to take advantage of the social loudspeaker. For a few hundred bucks, anyone can buy thousands of social media accounts that are old enough to be credible, or millions of email addresses.

Hot tip: Frédéric Filloux is coming to Melbourne and Sydney for Storyology, the Walkleys’ August 2017 journalism festival.

Also, by using Mechanical Turk or similar cheap crowdsourcing services widely available on the open web, anyone can hire legions of “writers” who will help to propagate any message or ideology on a massive scale.

That trade is likely to grow and flourish with the emergence of what experts call “weaponised artificial intelligence propaganda”, a black magic that leverages microtargeting where fake news stories (or hyperpartisan ones) will be tailored down to the individual level and distributed by a swarm of bots.

What we see unfolding right before our eyes is nothing less than Moore’s Law applied to the distribution of misinformation: an exponential growth of available technology coupled with a rapid collapse of costs.

These weapons of mass influence will serve both sides. The next political campaigns in the US will be characterised by a systematic individualisation of legitimate messages, which will be accompanied by an avalanche of tainted news.

Weirdly enough, all the major players have been taken aback by the tsunami. Facebook was slow to acknowledge the problem. (Remember Mark Zuckerberg’s ingenuous denial right after the American election?)

Since then, large digital platforms and most news distributors, aggregators and publishers have tried to contain the problem by adopting various strategies. For instance, they teamed with fact-checking outlets that verify the authenticity of a piece of news. Both Facebook and Google have set up alert systems in which users are encouraged to flag any suspicious elements.

These procedures are qualitatively promising but quantitatively insignificant. Manual fact-checking works at a small scale: a politician says something dubious at a rally or in a TV debate, and fact-checkers rush to verify the statement. Fine. But in most cases, it can take hours to debunk a well-engineered fake news report. As for the flagging systems, some tests have shown that the cure is, in fact, worse than the disease: labelling information as suspect actually stokes curiosity about it and increases its social reverberation.

Other initiatives, such as the Trust Project developed at Santa Clara University, aim at creating standards of quality to be adopted by newsrooms during the production phase. Again: a great idea implemented by competent people, but it will affect only a small fraction of the firehose of information.

Commendable as these initiatives might be, they are far from sufficient.

Considering that Facebook handles some 100 million links every day, and that any mobile aggregator collects and redistributes tens of thousands of stories daily, dealing with fake news requires scale and speed.

The News Quality Scoring Project that I’m working on at Stanford University is focused on the broader issue of news quality and the restoration of its economic value.

By Cathy Wilcox.

Incidentally, it could be an effective process for tracking down fake news.

The NQS Project is aimed at solving the incoherence of the news economy, in which there is no correlation between the production cost of a piece of journalism and its economic value.

In the physical world, an item’s production cost is always passed to the consumer. An iPhone 7 costs about US$225 to produce and sells for about US$670. At the other end of the spectrum, an ultra-cheap smartphone such as the Android One costs about US$40 to produce and sells for US$60. Production costs are reflected in the retail price, and high quality commands high margins (the luxury market is built on the same idea).

In the digital news business, neither production costs nor quality actually matter. The price charged to advertisers is the same regardless of the quality of the editorial hosting the ad module. To borrow from the phone manufacturing example, the cost of a story varies widely: a one-month investigation by a pair of reporters, reviewed by editors and fact-checkers, can cost up to US$50,000. That number climbs much higher if the piece originates from a news bureau in Kabul that carries a million dollars a year in fixed costs, for instance. Conversely, a 500-word news piece that takes a junior staffer and a subeditor half a day to report, write and edit costs a few hundred dollars to put together.

These two pieces of journalism will be sold to advertisers for the same amount — that is, a few dollars for every thousand views, also known as Cost per Mille, or CPM.

Currently, even the largest media outlets have no mechanism to pass on to their advertisers the higher production costs often associated with higher quality. Whether it is the Boston Globe spending about US$1.5 million for the Spotlight investigation, or the Wall Street Journal investing more than a million dollars to uncover the Theranos scandal, the advertising modules next to it carry the same price tag.
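To make the mismatch concrete, here is a back-of-the-envelope calculation sketched in Python. The cost figures echo the examples above; the US$5 CPM and the one-million-view audience are illustrative assumptions, not numbers from the project.

```python
# Illustrative only: cost figures echo the examples above; the CPM and
# audience numbers are assumptions made for the sake of the arithmetic.
investigation_cost = 50_000   # month-long, two-reporter investigation (US$)
quick_item_cost = 400         # half-day, 500-word news piece (US$)

cpm = 5.0                     # assumed ad rate: US$5 per thousand views
page_views = 1_000_000        # assumed audience, identical for both stories

ad_revenue = page_views / 1_000 * cpm
print(f"Either story earns roughly US${ad_revenue:,.0f} in ad revenue")
print(f"Investigation margin: {ad_revenue - investigation_cost:+,.0f}")
print(f"Quick-item margin:    {ad_revenue - quick_item_cost:+,.0f}")
# The US$400 story nets about +4,600; the US$50,000 one loses about 45,000.
```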

The primary objective of the News Quality Scoring project is to address this imbalance, using the following mechanism.

An aggregator deals with a stream of thousands of stories per day. On the business side, its advertising inventory, whether sold directly or via third parties, is made up of different kinds of ads, ranging from high-value brands such as Rolex or Lexus to low-paying advertisers, like the toe fungus ads used to fill unsold spaces. Now suppose the aggregator operates a system that assigns each story flowing through it a machine-readable quality score ranging from 1 to 5. We can then have a virtuous system in which an ad server dynamically delivers ads based on the quality score of the editorial context: Rolex or Lexus will show up next to the best-scoring stories, the ones that result from significant journalistic effort.
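As an illustration of that matching step, here is a minimal sketch in Python. The inventory, the brand names, the CPM figures and the quality thresholds are all hypothetical; a real ad server would run this logic inside an auction, but the principle is the same: the editorial quality score becomes a targeting signal.

```python
from dataclasses import dataclass

@dataclass
class Ad:
    brand: str
    cpm: float         # price the advertiser pays per thousand views (US$)
    min_quality: int   # lowest editorial quality score the brand accepts

# Hypothetical inventory: premium brands demand high-scoring contexts,
# remnant ads accept anything.
INVENTORY = [
    Ad("premium-watch", cpm=40.0, min_quality=4),
    Ad("luxury-car", cpm=35.0, min_quality=4),
    Ad("mid-tier-retail", cpm=8.0, min_quality=2),
    Ad("remnant-filler", cpm=0.5, min_quality=1),
]

def pick_ad(story_quality_score: int) -> Ad:
    """Return the highest-paying ad whose quality floor the story satisfies."""
    eligible = [ad for ad in INVENTORY if story_quality_score >= ad.min_quality]
    return max(eligible, key=lambda ad: ad.cpm)

# A story scored 5 attracts the premium placement; a score of 1 gets the filler.
print(pick_ad(5).brand)   # premium-watch
print(pick_ad(1).brand)   # remnant-filler
```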

By Cathy Wilcox.

The four main stakeholders will benefit from it. The advertiser will find itself in a much more rewarding environment. To ensure this premium placement, the media buying agency will charge much more per ad module, taking a higher fee and funnelling more money to the publisher. As for the readers, they will see more great content, more often.

An often-raised question involves the motivation of the advertising community to move towards such a system. Why would they do that? Four arguments:

  1. The “flight to quality”, to borrow a term from stock trading, is unavoidable as all the ads’ KPIs are flashing red: users succumb to “ad fatigue”, which ranges from lower engagement to outright rejection, with the massive adoption of ad blockers as the most visible symptom.
  2. Quality content draws higher-quality demographics (educated, affluent audiences).
  3. Quality leads to higher viewability of ads. The more engaging the content, the more the ads are viewed — a critical issue for ad efficiency.
  4. Quality editorial is a better fit for high-value content advertising. Numerous news outlets have created their most profitable ads thanks to the proximity of great journalistic content.

All of the above unfolds in a challenging environment for advertising as platforms (mostly Facebook and Google) capture most of the market growth.

Being able to sort the wheat from the chaff in large news streams could also benefit any subscription system. It would enable a publisher to package a set of high-scoring news stories and augment them with archive material of comparable quality, delivering premium contextualisation, all in a completely automated way.

Technically speaking, the News Quality Scoring project doesn’t reinvent the wheel. It is built on proven technologies such as machine learning and natural language processing.

Challenges are, however, numerous.

First, we need to come up with reliable filters for quality. Next, a large corpus of “good” and “bad” news items must be assembled to constitute the training sets fed to a neural network (we are talking about processing at least a million stories to end up with a few thousand properly labelled examples).
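For a sense of what such a supervised pipeline involves, here is a minimal sketch using scikit-learn: labelled stories go in, a quality score comes out. The project itself relies on a neural network trained on a far larger corpus; the two snippets and their labels below are placeholders, not real training data.

```python
# Minimal sketch of a text-quality classifier. A linear model stands in
# for the project's neural network purely for brevity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In practice the training set would be thousands of stories, each
# hand-labelled with a quality score from 1 to 5.
train_texts = [
    "A months-long investigation with named sources and court documents...",
    "SHOCKING: you won't believe what this celebrity did next...",
]
train_scores = [5, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_scores)

# Score an unseen story; with real training data this would return 1-5.
print(model.predict(["An in-depth report citing court records and interviews"]))
```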

But there is no doubt that it’s worth the effort on multiple dimensions: contributing to building a sustainable business model for the information sector, making better news available to the reader, and by doing so, preserving a critical component of democracy.

This is from Issue 89 (August 2017) of the Walkley Magazine.

So, how else can we bolster the real news when the fake stuff keeps going viral? Come see Frédéric Filloux talk with other international leaders in this space at Storyology in Melbourne (Aug. 28) and Sydney (Aug. 30). And see what else we have on in Brisbane, Melbourne and Sydney Aug. 24–31.
