Looking for a good time (on page)

Christopher Brennan
Published in Deepnews.ai
Apr 7, 2021 · 5 min read

This is one of the occasional posts we do about topics that interest us. If you are seeing this and you aren’t signed up to our email lists, you can receive blog posts like this, as well as our Friday Digest, by signing up here.

Where does the time go? It’s a tough question to answer, though over the course of the last year the answer has been easier: time is spent online, our eyes flitting from one page to another.

We normally don’t think about it that much, but on our screens our time is also getting quantified into seconds. Those seconds are then aggregated into the average “time on page” for whatever article, like this one, we and thousands of other people are reading. But what are those numbers actually good for?
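
As a rough sketch of what that aggregation can look like under the hood (the events, article names and numbers below are invented for illustration, not taken from any particular analytics tool), imagine per-visit durations being rolled up into an average per article:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical page-view events: (article_id, seconds the visit lasted)
events = [
    ("looking-for-a-good-time", 45),
    ("looking-for-a-good-time", 210),
    ("another-article", 12),
    ("looking-for-a-good-time", 95),
]

# Group the recorded durations by article
durations = defaultdict(list)
for article_id, seconds in events:
    durations[article_id].append(seconds)

# Average "time on page" per article, in seconds
avg_time_on_page = {article: mean(secs) for article, secs in durations.items()}
print(avg_time_on_page)
```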

This blog has in the past written a lot about page views, or clicks. That is mostly because I find they are the easiest thing to compare against the Deepnews score. On large parts of the internet the idea of “relevance” is often mistakenly conflated with the idea of engagement, whereas Deepnews is trying to do something different and provide a measurement of an article that is based on the text itself.

But beyond clicks there are other measurements, such as time on page, that newsrooms and analytics companies use. Some have even stated explicitly that they are trying to optimize for time spent, which was the subject of research from Jonathan Hendrickx at Vrije Universiteit Brussel in Belgium.

Hendrickx spent a large amount of time with journalists from Mediahuis, a major newspaper publisher behind titles such as De Standaard and Het Nieuwsblad as well as more local papers. As he explains in his paper, a lack of time spent was particularly vexing for Mediahuis given that some of the advertising (and revenue) was sitting further down the page, unseen.

The paper goes on to show that reporters’ efforts to purposefully increase time on page (through things like adding videos) were not overwhelmingly successful. However, another aspect that Hendrickx looks at is how the journalists reacted, which varied from the more accepting local reporters to the more prestigious De Standaard, where older veterans were much more reluctant to change anything about their practices.

“I think important at any company is, if you want to change anything, to involve the people that you want to enact this change. Don’t go for a one size fits all solution,” Hendrickx told me.

The split between writers who are producing articles and their coworkers in audience development who want to measure those articles is something I have seen time after time. There is a certain unfortunate sentiment among many journalists, even younger ones, that numbers are distasteful and they would have majored in something like statistics during their undergrad years if they had wanted to deal with them.

Part of the reason why I think that Hendrickx’s work is interesting for Deepnews is that Mediahuis seemed to have been treating time on page not just as a business metric but also as a sort of editorial metric.

The Deepnews quality score is an editorial metric. It assigns a text a score from 1 to 5, the sort of judgment that, if it came from a human, would come from an editor. If a newsroom chose to try to produce as many 5/5 Deepnews scores as possible, that would be a business decision but also a decision about its editorial priorities.

A combination of editorial metrics with more traditional engagement metrics (including things like subscription numbers) can also help different outlets become the sort of publications they want to be. Having the right metrics as a guide can help a newsroom produce distinctive coverage for the particular community that supports it with subscription or donation dollars.

This may sound obvious, but it is a massive departure from the early and mid-2010s, when news outlets were all essentially chasing one metric: clicks. This led to a bunch of newsrooms all becoming interested in the same sort of stories, throwing them up online with little original reporting or added value and hoping that their article got more views than all the other, near-identical versions.

That model has largely failed, and in some ways the path away from clicks may help bridge the gap between the editorial side and the analytics side of newspapers. It may lead towards a place where the two sides work more closely together to talk about which of the many metrics they want to target as they differentiate themselves from everybody else. We may already be seeing this as some newsrooms look more closely at things like time on page vs. clicks.

Clicks can still be very useful for some outlets and for gauging immediate interest, Hendrickx says, though “Time-spent, then, is relevant as it enables knowledge on so-called tipping points where people stop consuming an article based on, for instance, average reading paces. This can then again inform journalists and editors on users’ behavior towards (quality) news content.”
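
To make the tipping point idea a bit more concrete, here is a small illustrative sketch. The reading pace and the articles below are assumptions of mine, not figures from Hendrickx’s research; the idea is simply to compare how long an article should take to read with how long readers actually stay.

```python
# Illustrative only: flag articles where readers appear to drop off
# long before they could have finished the text.
READING_PACE_WPM = 240  # assumed average reading pace, in words per minute

def completion_ratio(word_count: int, median_seconds_on_page: float) -> float:
    """Share of the expected read time that readers actually spend on the page."""
    expected_seconds = word_count / READING_PACE_WPM * 60
    return median_seconds_on_page / expected_seconds

# Hypothetical articles: (title, word count, median seconds on page)
articles = [
    ("Long investigation", 2400, 180),
    ("Short news brief", 300, 70),
]

for title, words, seconds in articles:
    ratio = completion_ratio(words, seconds)
    print(f"{title}: readers stay for roughly {ratio:.0%} of the expected read time")
```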

With the advances in machine learning where algorithms “read” the actual text, we can also easily get even more information about the articles that journalists are creating and use it together with other metrics to make something like time on page more useful. For example, in our study of 17,000 articles in the US, higher Deepnews scores predicted higher time on page. But the possibilities for AI to understand text extend far beyond just quality. Researchers we have spoken to for this blog have mentioned possible metrics such as sentiment, political bias, the logic of a text or how deliberative it is.
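
As a sketch of what pairing an editorial score with an engagement metric can look like in practice (the records below are made up, and this is not the methodology behind the 17,000-article study mentioned above), one could group articles by score and compare their average time on page:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-article records: (quality score from 1 to 5, seconds on page)
records = [
    (5, 310), (5, 280), (4, 240),
    (3, 150), (3, 170), (2, 95), (1, 60),
]

# Average time on page within each quality-score band
by_score = defaultdict(list)
for score, seconds in records:
    by_score[score].append(seconds)

for score in sorted(by_score):
    print(f"score {score}: average {mean(by_score[score]):.0f}s on page")
```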

This sort of combination will help us not only find out what sort of things we can do to increase time on page, as in the case of Mediahuis, but also give readers the sort of coverage they sign up for instead of just what makes them engage. In that case, everyone will be having a better time.
