Reinventing Success Metrics for News Through Collaboration, Flexibility and a Focus on the User

Insights on how working with the Guardian Mobile Innovation Lab to analyze the success of its storytelling experiments differed from traditional industry analytics projects.

Lynette Chen
The Guardian Mobile Innovation Lab
7 min read · Apr 18, 2018


Photo by janjf93 / Creative Commons

Editor’s note: Lynette Chen is a senior digital analyst at MaassMedia, an analytics agency that was a key partner to the Guardian Mobile Innovation Lab and helped us decipher the results of our experiments.

Teaming up with the Guardian Mobile Innovation Lab to provide analytics consulting for their mobile storytelling experiments was an exciting and special opportunity for us at MaassMedia. It gave us the chance to focus on insights centered on the user experience and to adapt traditional news analytics thinking to a brand new space. MaassMedia helped measure the success of each lab experiment by creating new KPIs (key performance indicators) suited to each project, as well as providing deep-dive analyses afterwards to help the team understand how users responded. Insights from those analyses were then used to improve the user experience for each subsequent experiment.

Example of a storytelling experiment: an evolving story called a “Smarticle”

The work we did with the lab was different from our work with many other publishers because we adapted our analytics approach to new and unconventional mobile storytelling formats. That adaptability was made possible by the lab’s unique team culture, which focused on truly understanding user needs and on working efficiently and optimizing constantly. Below we reflect on some of the specific elements that made working with the lab unique.

Putting a positive user experience over monetization as a primary measure

The mobile lab conducted their storytelling experiments without the constraint of monetization, which affected the way we approached our analysis. The goal of each experiment was not acquisition or conversion, but a positive user experience. As a result, our analyses took a more exploratory form.

Our primary goal became to understand how users interacted with the experiment and what they thought about it. For many other publishers, measurement revolves around insights that encourage users to perform direct revenue-driving actions on site, such as clicking on advertisements or subscribing, which can limit our ability to focus on the quality of the user’s experience during analysis. Throughout our work with the lab, we approached our analysis as a means of learning about the audience, rather than interpreting experiments as successes or failures. It was more important to understand why an experimental format was useful or interesting to a user, so the team could know what to do, or not do, in the next iteration of the experiment.

KPIs revolved around a positive user experience

The KPIs used in our analyses were also different because of the shift in focus from monetization to the user experience itself. Traditional web metrics wouldn’t suffice, so we needed to define a success framework focused on key user interactions, one that took the full experience into account. As a result, we developed the net interaction rate:

The net interaction rate signals how positive or negative the user’s experience was based on the different types of interactions they could have with the lab’s new story formats.

While the concept of a net interaction rate is not new to the analytics industry, what made it unique in this case was that the components of the equation were defined individually for each experiment. A crucial part of our process for analyzing success was the team’s thoughtful discussions about the types of interactions that were positive or negative based on the format’s features.

Subscribers could take a quiz inside an alert.

For example, if a user dismissed an experimental notification they had subscribed to during the Rio Olympics, was that a negative signal? Similarly, if a user shared the results of an Olympics quiz they took within a notification, was that a positive interaction, and something we wanted users to do more of? Each experiment introduced new features that needed to be accounted for in the net interaction rate formula.

Traditionally, with other publishing analytics KPIs, formulas are defined, locked in place and used consistently across analyses. However, if we had followed that traditional method and not modified the formula for each experiment, it would not have accounted for the evolving nature of what success meant for each of the lab’s experiments.
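
To make the idea concrete, here is a minimal sketch in Python of how a net interaction rate might be computed when the positive and negative interactions are declared separately for each experiment. The interaction names, the experiment keys and the exact shape of the formula are illustrative assumptions, not the lab’s actual definitions.

```python
# Hypothetical sketch: a net interaction rate whose positive and negative
# components are defined per experiment. Event names and the formula's exact
# shape are illustrative assumptions, not the lab's actual definitions.
from collections import Counter

# Per-experiment definitions of what counts as a positive or negative signal.
EXPERIMENT_DEFINITIONS = {
    "olympics_quiz_alert": {
        "positive": {"answered_quiz", "shared_result", "tapped_through"},
        "negative": {"dismissed_alert", "unsubscribed"},
    },
    "smarticle": {
        "positive": {"expanded_update", "followed_external_link"},
        "negative": {"closed_immediately"},
    },
}

def net_interaction_rate(experiment: str, events: list[str]) -> float:
    """Return (positive - negative) interactions as a share of all tracked
    interactions for the given experiment."""
    definition = EXPERIMENT_DEFINITIONS[experiment]
    counts = Counter(events)
    positive = sum(n for e, n in counts.items() if e in definition["positive"])
    negative = sum(n for e, n in counts.items() if e in definition["negative"])
    total = sum(counts.values())
    return (positive - negative) / total if total else 0.0

# Example: a small batch of tracked events from the Olympics quiz alert.
events = ["answered_quiz", "shared_result", "dismissed_alert", "answered_quiz"]
print(net_interaction_rate("olympics_quiz_alert", events))  # 0.5
```

Keeping the per-experiment definitions in one place like this also makes it straightforward to rerun the calculation if an assumption about a particular interaction later changes.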

Developing the criteria for a positive or negative user experience was a team-based activity

As mentioned previously, it was crucial for the entire team to participate in discussions about what counted as a positive or negative interaction for users in each experiment, in order to determine the formula for the net interaction rate. By defining upfront, together, the criteria for what a good or bad experience looked like, we were able to share a common language when interpreting the results of each experiment.

Team members from complementary disciplines (e.g. editorial, design, development) also attended meetings to recap the results of each experiment. Each person’s expertise and perspective provided additional context and drove useful insights that the analytics team might not have developed on its own. Having every team member attend also made it easier to ensure that the learnings were actionable and could be executed on the fly.

For example, we noticed in a meeting that experiment results were being impacted by an issue with the way we tracked clicks on external links from a Smarticle. Since the developer who built Smarticles was in the room, he was able to make a technical fix immediately to correct the tracking for the following iteration of the experiment.

The team’s focus on big picture goals kept things moving forward

The mobile lab’s fast-paced and collaborative ‘incubator’ style helped them get experimental formats in front of users within a few weeks instead of months, which allowed us to analyze formats and iterate quickly based on real feedback from users. In addition to this willingness to collaborate, everyone on the team shared the same mindset of focusing on the big-picture goals and insights to be gained from each experiment, rather than letting smaller details slow down progress.

For many other organizations, securing buy-in from multiple stakeholders and teams at each phase of work can significantly slow momentum. Despite the mobile lab’s quick pace, they always applied learnings from one experiment to the next. This strong culture of testing and optimization allowed us to see significant improvement in how users felt about each format type over time as the functionality was continuously refined.

Our tracking needed to be flexible to account for experiments that ran on fast news cycles

The lab’s big-picture mindset, combined with the urgency of the news cycle, required us to be flexible in our analytics processes. The fast-paced environment sometimes forced us to make changes while experiments were live. Typically, with other clients, we ensure that all tracking for data collection is in place before a project launches. However, because the lab prioritized launching experiments as news broke, there were times when we worked closely with the lab’s developers to perform QA (quality assurance) on the tracking after an experiment had launched and to make implementation changes on the fly.

In the same vein, to move forward with interpreting results, we often had to make a few assumptions in our analysis. If there was a question that could not be immediately answered by the existing data, we would incorporate a survey question to better understand the user behavior in a following experiment. If any of our assumptions changed based on our learnings, we would then recalculate our KPIs and adjust the analysis as needed. Like the nature of the experiments, the analytics process was also iterative.


The Guardian US Mobile Innovation Lab was set up as a small multidisciplinary team housed within the Guardian’s New York newsroom from 2015–2018. Its mission was to explore storytelling and the delivery of news on small screens, and share what they learned. It operated with the generous support of the John S and James L Knight Foundation.
