Fueling a Flight to Quality

But first, how do we define quality — and crap?

Jeff Jarvis
Whither news?
May 2, 2017


Storyful and Moat — together with CUNY and our new News Integrity Initiative* — have announced a collaboration to help advertisers and platforms avoid associating with and supporting so-called fake news. This, I hope, is a first, small step toward fueling a flight to quality in news and media. Add to this:

  • A momentous announcement by Ben Gomes, Google’s VP of engineering for Search, that its algorithms will now favor “quality,” “authority,” and “the most reliable sources” — more on that below.
  • The consumer revolts led online by Sleeping Giants and #grabyourwallet’s Shannon Coulter that kicked Bill O’Reilly off the air and are cutting off the advertising air supply to Breitbart.
  • The advertiser revolt led by The Guardian, the BBC, and ad agency Havas against offensive content on YouTube, getting Google to quickly respond.

These things — small steps, each — give me a glimmer of hope for supporting news integrity. I will even go so far as to say — below — that I hope this can mark the start of renewing support to challenged institutions — like science and journalism — and rediscovering the market value of facts.

The Storyful-Moat partnership, called the Open Brand Safety framework, first attacks the low-hanging and rotten fruit: the sites that are known to produce the worst fraud, hate, and propaganda. I’ve been talking with both companies for some time because supporting quality is an extension of what they already do. Storyful verifies social content that makes news; its exhaust is knowing which sites can’t be verified because they lie. Moat tells advertisers when they should not waste money on ads that are not seen or clicked on by humans. Its CTO, Dan Fichter, came to me weeks ago saying they could add a warning about content that is crap (my word) — if someone could help them define crap. That is where this partnership comes in.

My hope is that we build a system around many signals of both vice and virtue so that ad agencies, ad networks, advertisers, and platforms can weigh them according to their own standards and goals. In other words, I don’t want blacklists or whitelists; I don’t want one company deciding truth for all. I want more data so that the companies that promote and support content — and by extension users — can make better decisions.

The hard work will be devising, generating, and using signals of quality and crapness, allowing for many different definitions of each. The best starting point for a discussion of definitions comes from the First Draft Coalition’s Claire Wardle.

One set of signals is obvious: sites whose content is consistently debunked as fraudulent. Storyful knows; so do PolitiFact, BuzzFeed’s Craig Silverman, and Snopes. There are other signals of caution, for example a site’s age: an advertiser might want to think twice before placing its brand on a two-week-old Denver Guardian vs. the almost-200-year-old Guardian. Facebook and Google have their own signals around suspicious virality.
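To make the idea concrete, here is a minimal sketch, in Python, of how different buyers might weigh caution signals like these according to their own standards rather than relying on a shared blacklist. The signal names, scales, thresholds, and weights are all hypothetical, invented only to illustrate the point that many parties can weigh the same data differently.

```python
from dataclasses import dataclass

@dataclass
class SourceSignals:
    """Hypothetical per-site signals; real signal names and scales would differ."""
    domain: str
    debunk_rate: float       # share of fact-checked items found fraudulent (0 to 1)
    domain_age_days: int     # how long the site has existed
    virality_anomaly: float  # 0-to-1 score for suspicious, possibly manufactured spikes

def caution_score(signals: SourceSignals, weights: dict) -> float:
    """Combine signals into one caution score using a buyer's own weights.

    Each advertiser, agency, network, or platform would pick its own weights
    and thresholds; there is no shared blacklist and no single arbiter.
    """
    age_penalty = 1.0 if signals.domain_age_days < 90 else 0.0  # e.g. a two-week-old site
    return (weights["debunk"] * signals.debunk_rate
            + weights["age"] * age_penalty
            + weights["virality"] * signals.virality_anomaly)

# Two buyers weigh the same data differently and reach their own decisions.
site = SourceSignals("example-news.com", debunk_rate=0.7, domain_age_days=14, virality_anomaly=0.9)
cautious = {"debunk": 0.5, "age": 0.3, "virality": 0.2}
lenient = {"debunk": 0.8, "age": 0.1, "virality": 0.1}
print(caution_score(site, cautious))  # ~0.83
print(caution_score(site, lenient))   # ~0.75
```

The point of the sketch is the design, not the math: the shared layer is data about sources, and the judgment stays with whoever is spending the money or ranking the content.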

But even more important, we need to generate positive signals of credibility and quality. The Trust Project endeavors to do that by getting news organizations to display and uphold standards of ethics, fact-checking, diversity, and so on. Media organizations also need to add metadata around original reporting, showing their work.
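As an illustration only, the kind of metadata that “showing their work” implies might look something like the sketch below. The field names are invented for this example; they are not the Trust Project’s actual indicators or any existing standard.

```python
# Illustrative only: structured metadata a newsroom could attach to a story to
# "show its work." Field names are hypothetical, not an existing standard.
story_metadata = {
    "headline": "City council approves new transit budget",
    "reporting_type": "original",             # original | aggregated | opinion
    "bylines": [{"name": "A. Reporter", "role": "staff reporter"}],
    "sources": [
        {"type": "document", "url": "https://example.gov/budget-2017.pdf"},
        {"type": "interview", "description": "Council member, on the record"},
    ],
    "fact_checks": [
        {"claim": "The budget totals $40 million", "status": "verified",
         "evidence": "https://example.gov/budget-2017.pdf"},
    ],
    "corrections": [],                         # appended whenever an error is fixed
    "standards_url": "https://example-news.com/ethics-policy",
}
```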

In talking about all this at an event we held at CUNY to kick off the News Integrity Initiative, I came to see that human effort will be required. Trust cannot be automated. I think there will be a need for auditing of media organizations’ compliance with pledges — an Audit Bureau of Circulations of good behavior — and for appeal (“I know we screwed up once but we’re good now”) and review (“Yes, we’re only two weeks old but so was the Washington Post once”).

Who will pay for that work? In the end, it will be the advertisers. But it is very much an open question whether they will pay more for the safety of associating with credible sources and for the societal benefit of putting their money behind quality. With the abundance the net creates, advertisers have relished paying ever-lower prices. With the targeting opportunities technology and programmatic ad marketplaces afford, they have put more emphasis on data points about users than the environment in which their ads and brands appear. Will public pressure from the likes of Sleeping Giants and #grabyourwallet change that and make advertisers and their agencies and networks go to the trouble and expense of seeking quality? We don’t know yet.

I want to emphasize again that I do not want to see single arbiters of trust, quality, authority, or credibility — not the platforms, not journalistic organizations, not any self-appointed judge — nor single lists of the good and bad. I do want to see more metadata about sources of information so that everyone in the media ecosystem — from creator to advertiser to platform to citizen — can make better, more informed decisions about credibility.

With that added metadata in hand, these companies must weigh it according to their own standards and needs in their own judgments and algorithms. That is what Google does every second. That is why Google News creator Krishna Bharat’s post about how to detect fake news in real-time is so useful. The platforms, he writes, “are best positioned to see a disinformation outbreak forming. Their engineering teams have the technical chops to detect it and the knobs needed to respond to it.”

And that is also why I see Ben Gomes’ blog post as so important. Google’s head of Search engineering writes:

Last month, we updated our Search Quality Rater Guidelines to provide more detailed examples of low-quality webpages for raters to appropriately flag, which can include misleading information, unexpected offensive results, hoaxes and unsupported conspiracy theories….

We combine hundreds of signals to determine which results we show for a given query — from the freshness of the content, to the number of times your search queries appear on the page. We’ve adjusted our signals to help surface more authoritative pages and demote low-quality content…

I count this as a very big deal. Google and Facebook — like news media before them — contend that they are mirrors to the world. Their mirrors might well be straight and true but they must acknowledge that the world is cracked and warped to try to manipulate them. For months now, I have argued to the platforms — and will argue the same to news media — that they must be more transparent about efforts to manipulate them … and thus the public.

Example: A few months ago, if you searched on Google for “climate change,” you’d get what I would call good results. But if your query was “is climate change real?” you’d get some dodgy results, in my view. In the latter, Google was at least in part anticipating, as it is wont to do, the desires or expectations of the user under the rubric of relevance (as in, “people who asked whether climate change is real clicked on this”). But what if a third-grader also asks that question? Search ranking was also influenced by the volume of chatter around that question, without necessarily giving full regard to whether and how that chatter was manufactured to manipulate — that is, the huge traffic and engagement around climate-change deniers and the skimpy discussion around peer-reviewed scientific papers on the topic. But today, if you try both searches, you’ll find similar good results. That tells me that Google has made a decision to compensate for manufactured controversy and in the end favor the institution of science. That’s big.

On This Week in Google, Leo Laporte and I had a long discussion about whether Google should play that role. I said that Google, Facebook, et al are left with no choice but to compensate for manipulation and thus decide quality; Leo played the devil’s advocate, saying no company can make that decision; our cohost Stacey Higginbotham called time at 40 minutes.

Facebook’s Mark Zuckerberg has made a similar decision to Google’s. He wrote in February: “It is our responsibility to amplify the good effects and mitigate the bad — to continue increasing diversity while strengthening our common understanding so our community can create the greatest positive impact on the world.” What’s good or bad, positive or not? As explained in an important white paper on mitigating manipulation, that is a decision Facebook will start to make as it expands its security focus “from traditional abusive behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people.” That includes not just fake news but the fake accounts that amplify it: fake people.

I know there are some who would argue that I’m giving the lie to my frequent contention that Google and Facebook are not media companies and that by defending their need to rule on quality, I am having them make editorial decisions. No, what they’re really defining is fakery: (1) That which is devised to deceive or manipulate. (2) That which intentionally runs counter to fact and accepted knowledge. Accepted by whom? By science, by academics, by journalism, even by government — that is, by institutions. Thus this requires a bias in favor of institutions at a time when every institution in society is being challenged because — thanks to the net — it can be. Though I often challenge institutions myself, I don’t do so in the way Trumpists and Brexiters do, trying to dismantle them for the sake of destruction.

In the process of identifying and disadvantaging fake news, Krishna Bharat urges the platforms to be transparent about “all news that has been identified as false and slowed down or blocked” so there is a check on their authority. He further argues: “I would expect them to target fake news narrowly to only encompass factual claims that are demonstrably wrong. They should avoid policing opinion or claims that cannot be checked. Platforms like to avoid controversy and a narrow, crisp definition will keep them out of the woods.”

Maybe. In these circumstances, defending credibility, authority, quality, science, journalism, academics, and even expertise — that is, facts — becomes a political act. Politics is precisely where Google and Facebook, advertisers and agencies do not want to be. But they are given little choice. For if they do not reject lies, fraud, propaganda, hate, and terrorism, they will end up supporting them with their presence, promotion, and dollars. On the other hand, if they do reject crap, they will end up supporting quality. They each have learned they face an economic necessity to do this: advertisers so they are not shamed by association, platforms so they do not create user experiences that descend into cesspools. Things got so bad, they have to do good. See that glimmer of hope I see?

None of this will be easy. Much of it will be contentious. We who can must help. That means that media should add metadata to content, linking to original sources; showing work so it can be checked; upholding standards of verification; openly collaborating on fact-checking and debunking (as First Draft is doing across newsrooms in France); and enabling independent verification of their work. That means that the advertising industry must recognize its responsibility not only to the reputation of its own brands but to the health of the information and media ecosystem on which it depends. That means Facebook, Google — and, yes, Twitter — should stand on the side of sense and civility against manufactured nonsense and manipulated incivility. That means media and platforms should work together to reinvent the advertising industry, moving past the poison of reach and clickbait to a market built on value and quality. And that means that we as citizens and consumers should support those who support quality and must take responsibility for not spreading lies and propaganda, no matter how fun it seems at the time.

What we are really seeing is the need to gather around some consensus of fact, authority, and credibility if not also quality. We used to do that through the processes of education, journalism, and democratic deliberation. If we cannot do that as a society, if we cannot demand that our fellow citizens — starting with the President of the United States — respect fact, then we might as well pack it in on democracy, education, and journalism. I don’t think we’re ready for that. Please tell me we’re not. What ideas do you have?

* Disclosure: The News Integrity Initiative, operated independently at CUNY’s Tow-Knight Center, which I direct, received funding and support from the Craig Newmark Philanthropic Fund; Facebook; the Ford, Knight, and Tow foundations; Mozilla; Betaworks; AppNexus; and the Democracy Fund.
