The future of editorial analytics in under 15 slides

Those of you who attended the FIPP Conference in Berlin this month will have heard the words ‘editorial analytics’, seemingly stuck on repeat. Both John Wilpers (editor of Innovations in Magazine Media World Report) and I spoke at length about the problems editors face with analytics, which can effectively be summed up in these points:

  • Editors attempt to use ‘single metrics’ (page views, scroll depth, time spent on page) to measure the efficacy of their content.
  • Single metrics largely stem from Urchin, an analytics package developed in 1997 that went on to become Google Analytics. Over half of the world’s websites use GA, giving it an 83.4% share of the analytics market. It was developed by marketers, for marketers.
  • Taken alone, single metrics tell the story of browsers firing and technical occurrences taking place. They don’t tell you about engagement.
  • Most editors are reluctant to admit that they don’t really understand what their analytics are telling them, or what questions they should be asking of them, so they go along with what they think is right.
  • Therefore, the digital publishing industry, and the way in which we attempt to monetise it, is built on metrics that we don’t fully understand.

In short, it’s a rather crazy state of affairs that we find ourselves in.

To give you a little more background on the problems the industry faces and the potential solutions we’re edging towards, we’ve mashed together the presentation on analytics that John Wilpers gave in Berlin and the presentation on editorial analytics that I gave the same day. If you’d like to hear more about the future of editorial analytics, feel free to drop Dejan Nikolic or me a line on Twitter. For more on scored metrics or a free demonstration of the Content Insights tool, get in touch with John Reichertz, Mario Krivokapic or David Brauchli.

1. Nobody knows which way to look

As John Wilpers pointed out in his presentation at the FIPP conference, it really is the Wild West out there when it comes to media tech. Everyone is offering something, and there’s a growing sense that the industry ought to standardise around a set of tools made with editors and their needs in mind, rather than simply taking off-the-peg solutions that seem to work because the rest of the internet is using them. Mr Wilpers is a media consultant to the magazine industry, and his annual report on what’s new and what’s necessary is essential reading.

Many thanks to @johnwilpers for naming us as one of the most important innovations in digital media this year. #DIsummit #Media
— Content Insights (@InsightsPeople) March 20, 2017

2. The tools we use tend to massage our egos

As Federica Cherubini points out, the problem in the newsroom today is that many journalists see analytics as “big screens with numbers that go up and down”. This suggests two problems: (1) that many journalists lack a decent grasp of what the numbers are meant to show (or have no inclination to try to understand them), and (2) that the “numbers that go up and down” aren’t really telling anybody anything.

Worse still, charts like these play on your dopamine levels. You think you see success in high numbers, so you start chasing articles of that nature again and again. Even when you start seeing diminishing returns, you keep giving it a shot, simply because it worked once and you figure it will again… eventually. As an editorial strategy, it can best be described as a race to the bottom.

3. The tools we have deliver the wrong metrics

Again, the numbers we’re talking about here are ‘single metrics’: metrics that, taken alone, mean very little. Many people in our industry are becoming aware that certain single metrics are a problem (they turn their noses up at page views, for instance), but fewer of them can explain why. That’s understandable, as most people aren’t data analysts and the industry as a whole is only now waking up to the problem. So here’s a brief explanation of why we’re so down on single metrics.

Taken alone, none of these tell you very much at all. A page view simply shows that a browser fired; time spent on page shows that the browser stayed open; scroll depth shows that the browser was used to reach a certain point on a page. They don’t tell you what happened after any of those technical occurrences took place. Did the user read anything? Did they simply scroll, with a flick of their finger, through 67% of your article and then wander off to cook dinner for an hour?

Taken in those terms, you can see that single metrics are giving you very little insight into reader behaviour. And yet we persist in telling ourselves that these are the right ways to measure engagement, and that advertisers are justified in using them as a standard by which to pay for advertising space. Crazy!

4. Think we’re on the right track with real-time metrics? Think again…

Similarly, we have a problem with the latest gadgets on offer: a case of out of the frying pan and into the fire. Real-time metrics appear to be a good thing mainly because they’re fairly new and everyone else seems to be using them. And it’s here that we’re starting to see an ‘Emperor’s New Clothes’ dynamic take hold. We use them without question, and only now are people beginning to ask the pertinent questions: essentially, “so what?” or “what is this actually useful for?”

In our view, real-time analytics encourage editors to chase page views even more rabidly than they were doing already. To follow the diagram:

  1. An editor looks at the trending news (often using exactly the same tools that their competitors are using, thus homogenising things from the get-go)
  2. They commission five or six stories in order to cover all angles
  3. The story races to the top of the real-time screens, so…
  4. …the editor orders more of the same
  5. As the story starts to drop in popularity, the editor milks it for all the clicks they can manage
  6. The editor rinses and repeats ad infinitum

The only point in this cycle where real-time analytics actually plays a useful part is point 5. This is where a front-page editor can really see what’s working now and make layout adjustments accordingly. In all the other points, the editor is essentially responding to what the reader appears to want.

Consequently, the publication gives readers what it thinks they want rather than living up to the editorial standards that it built its reputation on. It also publishes almost exactly what everyone else is publishing, simply because the single metrics that real-time tools rely on seem to suggest readership engagement.

It’s here that you can see the initial traces of the dreaded ‘echo chamber’ that we heard so much about in 2016, and it’s here that you can see readership trust in news brands slipping away (why trust a newspaper you once valued when it appears to be resorting to sensationalist claptrap in a transparent attempt to appeal to everyone and thus make money from advertising?). You’re on a slippery slope if you’re basing your editorial strategy on real-time analytics.

5. The industry is starting to wake up to modern editorial analytics

At the WAN-IFRA conference in London last autumn, Kritsanarat Khunkham of Die Welt spoke about the development of an in-house ‘Article Score’ that looked at a variety of metrics and their relationships to each other in order to measure the success of a piece of content. As John Wilpers points out in the slide above, they looked at six metrics and decided success was best ‘calculated’ by adding them up.
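
To make the additive approach concrete, here is a minimal sketch in Python. The six metric names, the normalisation caps and the values are purely hypothetical; all we know from the presentation is that a handful of metrics were combined by adding them up.

```python
# A minimal sketch of an additive "Article Score", assuming six hypothetical
# metrics and normalisation caps. The actual metrics and weighting used by
# Die Welt are not spelled out here; this only illustrates "adding them up".

def additive_article_score(metrics: dict, caps: dict) -> float:
    """Normalise each metric against a cap, clamp it to 1, then sum the terms."""
    score = 0.0
    for name, cap in caps.items():
        value = metrics.get(name, 0.0)
        score += min(value / cap, 1.0)  # each term contributes at most 1 point
    return score

# Hypothetical caps: the value at which a metric counts as "maxed out".
caps = {
    "page_views": 50_000,
    "unique_visitors": 30_000,
    "avg_time_on_page_s": 180,
    "scroll_depth_pct": 100,
    "social_shares": 500,
    "comments": 100,
}

# Hypothetical single metrics for one article.
article = {
    "page_views": 12_400,
    "unique_visitors": 9_800,
    "avg_time_on_page_s": 95,
    "scroll_depth_pct": 62,
    "social_shares": 41,
    "comments": 7,
}

print(round(additive_article_score(article, caps), 2))  # ~1.87, out of a possible 6.0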

It was a good start, and it’s wonderful that the industry is starting to think in these terms. Similarly, the ad industry (or ‘the hand that feeds us’) is thinking along these lines, with Marc Pritchard of Procter & Gamble recently calling on his colleagues to “grow up and start using industry-standard verification metrics”.

And so here we find ourselves. The above slides say it all, really (thanks, John Wilpers!). Built by editors, for editors, our CPI (Content Performance Indicator) has been tweaked and refined for the best part of half a decade now. We’re not adding things together; instead, we’re looking at the relationships and ratios that connect around 30 different single metrics, really attempting to understand engagement and how the content is performing “relative to the goals of the website or publication.”

We call this ‘scored metrics’, and we believe it’s the best way forward for editors who want to understand genuine reader engagement. You can see just one example of how we make use of scored metrics in the slide above: read depth. This takes the notion of scroll depth and makes it genuinely useful. By using scored metrics, we’re able to see how engaged the reader was throughout the article, and potentially make fixes to the article to hold their attention (or simply understand that page views on certain subjects equate to no real engagement whatsoever).
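
To illustrate the general idea (without claiming this is how the CPI actually works), here is a toy sketch of a read-depth-style scored metric. The segment model, the expected reading time and the thresholds are all assumptions for the sake of the example.

```python
# A toy sketch of a "scored metric": read depth derived from scroll behaviour
# plus per-segment dwell times. The segment model, expected reading time and
# thresholds are assumptions for illustration; this is not the actual CPI formula.

from typing import List

def read_depth_score(dwell_seconds: List[float],
                     total_segments: int,
                     expected_read_seconds: float) -> float:
    """
    The article is split into equal segments; dwell_seconds holds how long the
    reader spent in each segment they actually reached. Each segment's dwell
    time is compared to the time an attentive reader would need, giving a
    0..1 engagement score across the whole article.
    """
    if total_segments <= 0 or expected_read_seconds <= 0:
        return 0.0
    per_segment_expected = expected_read_seconds / total_segments
    engaged = sum(min(d / per_segment_expected, 1.0) for d in dwell_seconds)
    return engaged / total_segments

# The same 67% scroll depth can hide two very different behaviours:
flick_through  = [2, 1, 1, 0.5]      # reached segment 4 of 6, barely paused
actual_reading = [40, 35, 30, 28]    # reached segment 4 of 6, lingered throughout

expected = 240  # assumed seconds an attentive reader needs for the full article
print(round(read_depth_score(flick_through, 6, expected), 2))   # ~0.02: scrolled, didn't read
print(round(read_depth_score(actual_reading, 6, expected), 2))  # ~0.55: genuinely engaged
```

The point of the toy example is the contrast: two visits with identical scroll depth produce very different scores once the relationship between time and position on the page is taken into account, which is exactly the sort of insight single metrics hide.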

To sum up, the future of editorial analytics is bright, provided we agree to follow a brief but important manifesto. As far as we’re concerned (and we believe John Wilpers would back us up on this), the following points would be a very good place to start.

Slides 1, 2, 3, 4, 6, 7, 9, 10, 11 & 12 taken from the FIPP report on innovation in magazines, 2017. All others taken from Content Insights presentation on the future of editorial analytics.

Originally published at on March 27, 2017.