“These are cool ideas, Joy, but how do you know if they’re actually building trust?” I get some version of that question in almost every conversation I have about Trusting News, including at my South by Southwest talk last week. (Slides from that talk are here.)
I’m always glad to get the question. Our industry needs to invest in a culture of measurement and experimentation.
Some things we’re doing are fairly easy to measure. Metrics for other things elude us. Let’s take a look at the Trusting News approach to measurement. And if you want to weigh in, please do — the more brains the better on this issue.
A quick recap of our project’s structure:
- Our partner newsrooms selected two to five strategies (from among seven, which are described in these links) that were a good fit for their mission, goals and audience. They also selected which platforms, mediums or methods of communication they were willing to use to experiment. Anywhere journalists communicate with their audience, we want to inject trust-building messages. Newsletters? Social media? On-air stories? In person? Those selections served as the basis for their newsroom’s plan.
- Here’s a reminder of which newsrooms signed up for this round of testing. (They haven’t all been able to participate fully, but most have.) Here’s a look at the form we asked each newsroom to fill out, indicating their preferences and priorities.
- Partner newsrooms agreed to four months of experiments. For each strategy they selected, they committed to testing it once a week, using at least one of their chosen communication methods. So they might test a way to demonstrate their balanced approach in a newsletter one week and with a different story on air the next week. They might showcase a specific journalist’s credibility during an in-person community presentation one week and in a comment thread the next week.
- As they test, they keep a log. We ask them to describe and link to the experiment and tell us how users interacted with the content quantitatively (using traditional metrics), how users interacted qualitatively (the nature of their response) and what the journalists’ perceptions were about how it went. Here’s a sample log. You’ll note that this one is empty. We are not publishing each newsroom’s log because we want staffers to feel free to make candid observations about their audiences and about their internal conversations and priorities. We will, however, publish examples and select metrics from the newsrooms’ experiments when we share what we learned overall.
Our measurement approach is centered around platform and medium. We’ll of course collect different data for newsletters than for social, and we need to think creatively about what we can know with print and broadcast experiments. Our list of what to track for each is here. (Huge thanks to Melody Kramer and Andrew Losowsky for giving feedback on an earlier version.)
A few ideas that are easy to track:
Conversation is often easy to assess. When newsrooms host Facebook conversations (through live video or comments) about how journalism works, we can see how many people watch and how many participate. More importantly, we can see what kinds of questions the audience has and how they respond to journalists’ explanations. We can also track how many ideas result and how that influences coverage. We wrote about these experiments here.
Website links are trackable. When a newsroom links to a landing page on earning trust and invites people to offer feedback, we can see how many people click that link, what percentage of those people actually give feedback and how many useful ideas result. Here’s an example from The Virginian-Pilot. Most people won’t respond, though. (See below.) And some people will appreciate seeing the invitation even if they don’t feel motivated to click.
Social media reactions to the framing of stories can be measured. When a newsroom uses a social media post to explain its ethical policies and its approach to journalism, we can track how click-thrus, shares and reactions (both qualitative and quantitative) compare to what’s typical. Here’s an example from the Coloradoan, on when to report on suicide.
Good ideas shared by communities are immediately tangible and actionable for the newsroom. And asking for those ideas can be done more effectively when we also share our mission or purpose. Here’s an example from The Gazette in Cedar Rapids, Iowa.
Newsletters can ask for feedback. We have a few newsroom partners that rely heavily on newsletter distribution, and the style of writing often found in newsletters can be a natural fit for building connection, sharing process information and asking for input. The Christian Science Monitor has been experimenting with sharing the motivation behind stories and also asking for ideas. And when they do, they can watch the email replies roll in.
A few ideas that are harder to track:
Sometimes, lower numbers are better. I’ve heard from two of our partner newsrooms that framing a Facebook post to build trust has led to less blowback from users, which manifests as fewer comments. The journalists know what kind of criticism they usually get for certain types of stories, and they can tell when they’ve successfully fended off that criticism. But that’s hard to quantify. Cal Lundmark is social media editor at The State in South Carolina, where they’re testing better labeling to differentiate types of content. She said adding the word “opinion” to headlines has cut down on complaints about unfair journalism by more clearly telling users what they’re getting. In this Facebook post, for example, “most of the comments seem to engage with the content of the column and debate the merits of the argument rather than bash us for biased ‘reporting,’ ” Lundmark told me in our project Slack workspace.
Most people won’t take time to tell us what they think. We know from previous user research that demonstrating our commitment to earning trust actually builds trust. But when we ask people specifically what they think of our trust-building efforts, they’re unlikely to respond. One newsroom has gotten especially creative in asking for feedback. The Gazette in Cedar Rapids, Iowa, is using Google Analytics click events to create buttons inviting readers to weigh in on story process, ethics and background. (Scroll to the bottom of this story for an example.) But they typically get just a handful of responses per use.
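The Gazette’s click-event approach can be sketched roughly like this. This is a minimal, hypothetical example assuming a site that uses gtag.js; the button label, event name and parameters are my illustration, not The Gazette’s actual implementation. The Analytics queue is stubbed so the sketch is self-contained:

```javascript
// In the browser, gtag.js defines dataLayer and gtag(); stubbed here
// so the sketch runs on its own.
const dataLayer = [];
function gtag() { dataLayer.push(arguments); }

// Hypothetical handler for a "Tell us what you think of our process" button.
// Each click sends a named event that Google Analytics can count, so the
// newsroom can see how many readers engaged with the trust-building invitation.
function onFeedbackClick(topic) {
  gtag('event', 'trust_feedback_click', {
    event_category: 'trust_building',
    event_label: topic, // e.g. 'story_process', 'ethics', 'background'
  });
}

// Simulate a reader clicking the "story process" button.
onFeedbackClick('story_process');
```

In Analytics, those events show up as counts per label, which is how a newsroom can compare interest in process notes versus ethics explainers, even when only a handful of readers click.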
Legacy products don’t lend themselves to measuring action. Journalists can ask for feedback all they want in print and on air, but getting users to take a specific action based on what they read or hear offline is a big hurdle. Those are often where the largest audiences are, especially for radio and TV newsrooms, so adding trust-building language to those products is important. But it’s hard to point to the immediate effect. (See brand sentiment, below.)
Some change requires too much newsroom coordination. For example, newsletters offer easy A/B testing. Newsrooms could send one version with traditional language and another that introduces a story using trust-building messaging. Unfortunately, this has been a hard sell for our newsroom partners. The newsletter teams might have other priorities, with multiple things already getting tested. Or maybe an experimental mindset just isn’t part of the newsletter process. We have our fingers crossed that this will still happen. We also hope that participation in this project influences newsroom cultures and helps them become more experimental, flexible and audience-focused.
In-person interactions matter a lot. Reporters are out talking to community members all day, and a lot of those conversations involve feedback for (let’s be honest … criticism of) journalism. We put together ideas for how to inject trust-building messages into those conversations, and we hope some of our partner newsrooms are using them. (Heck, we hope YOU use them!) But tracking the impact of those interactions (or even logging that they happened) is next to impossible.
Brand sentiment is complicated and is affected by a lot of moving pieces. Over time, as we expose news consumers to messages designed to demonstrate credibility and fairness, our hope is that trust in those brands increases. But most people consume content without publicly engaging with it. If they hear a TV reporter explain the process behind his reporting, as WUSA’s Eric Flack did on a recent stop-and-frisk story, unless they feel strongly enough to reach out with feedback, we can’t easily know what they thought of it or if it enhanced their view of the station overall.
Our collection of data:
When this round of newsroom testing is done (by early May), we’ll combine results from all the partner newsrooms and look for patterns. When we see qualitative and/or quantitative evidence that news consumers are responding positively to these strategies, we’ll keep working on those ideas, even when the metrics aren’t perfect. We’ll retool them, add to them and test again.
We also have plans to work with the Center for Media Engagement to do some experiments around some strategies. That will give us the chance to see how people respond in a controlled environment to different versions of a story. I can’t wait to see if those results match what our newsrooms are finding in real life.
Building trust is a long game. Our goal is to empower journalists to actively earn trust, not just expect it. We want newsrooms to invest in demonstrating credibility day by day and story by story, not just hope that their good work will speak for itself.
Trusting News, staffed by Joy Mayer and Lynn Walsh, is designed to demystify the issue of trust in journalism. We research how people decide what news is credible, then turn that knowledge into actionable strategies for journalists. We’re funded by the Reynolds Journalism Institute, the Knight Foundation and Democracy Fund. Follow along here on Medium and at #TrustingNews on Twitter.