Watch out! A cliff!

Or “How we kept 10,000 readers reading a wonky explainer on the minimum wage”

Josh Kalven
Newsbound

--

This is a story about publishing a piece of digital content, studying the analytics, seeing that something is tripping readers up, and fixing it on the fly.

The piece in question was “Scraping By,” a 1,000-word explainer written and designed by my studio Newsbound and commissioned by The Lowdown, KQED’s news education blog.

“Scraping By” is a stack: a click-through reading experience in which you advance paragraph-by-paragraph, with photographs, data visualizations, or illustrations accompanying each chunk of text.

The format obviously has a lot in common with slide decks and slideshows. But as with the similar work coming out of Tapestry, Vox, and even the New York Times, the fact that it is built for readers — not presenters — sets it apart.

We started building our stack technology two years ago while exploring how best to explain complex topics on the web. Unlike our video or infographic prototypes, our stack-based explainers got rave reviews during user tests (to our surprise, actually). They struck a nice balance between A) giving the reader control over the pacing, B) obscuring the volume of content so as not to overwhelm, and C) remaining highly visual.

From a production perspective, the format also offers up a few advantages: Once published, a stack is significantly easier to update and redeploy than a video or infographic (most of the text is HTML-based). And because the content is so linear, tracking the reader’s clicks or taps gives us a truly meaningful feedback loop.
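To make that feedback loop concrete, here's a minimal sketch of what per-frame tracking can look like in the browser. The event shape and the endpoint name are illustrative assumptions, not the actual instrumentation behind our stacks:

    // Hypothetical sketch of per-frame progress tracking in the browser.
    // The event shape and the "/stack-progress" endpoint are illustrative
    // assumptions, not the actual instrumentation behind our stacks.

    interface StackProgressEvent {
      stackId: string;     // which explainer, e.g. "scraping-by"
      frameIndex: number;  // zero-based frame the reader just reached
      timestamp: number;   // milliseconds since epoch
    }

    function recordFrameReached(stackId: string, frameIndex: number): void {
      const event: StackProgressEvent = {
        stackId,
        frameIndex,
        timestamp: Date.now(),
      };
      // sendBeacon keeps delivering even if the reader closes the tab,
      // so the later frames don't get undercounted.
      navigator.sendBeacon("/stack-progress", JSON.stringify(event));
    }

    // Wired into the stack's navigation handler, e.g.:
    // nextButton.addEventListener("click", () =>
    //   recordFrameReached("scraping-by", currentFrameIndex + 1));

Because the content is strictly linear, the highest frame each reader reaches tells us exactly how far they got.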

Indeed, with every new piece we produce, we obsess over the engagement curve and completion rate (usually around 60%), experimenting along the way to learn more about what keeps readers on board, what keeps them focused.

Which brings us to “Scraping By.”

This stack went live on March 13 and within two weeks over 350,000 people had launched it (thanks, Upworthy!).

But we quickly noticed a problem. Take a look at the engagement curve as of March 13:

How to read this chart: The bar all the way on the left represents the number of readers who launched the piece. The bar all the way on the right represents the share of readers who made it to the end. The bars in between represent the share of readers still sticking with the piece at evenly-spaced intervals along the way (about four frames per interval in this case).

Do you see it? Between the third and fourth bar we lost an abnormal number of readers — 13 percent of the starting audience (compared to six percent during both the previous interval and the three that followed).

When we look at the progress stats for a given stack, we usually see a relatively smooth curve. In some cases it’s steeper in the first quarter of the piece. Sometimes there’s a cliff right towards the end, when readers sense that it’s wrapping up. But rarely do we see a drop-off like this in the middle.
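For those curious about the mechanics, here's a rough sketch of how a curve like this can be computed from per-frame reach counts, and how an abnormal drop can be flagged. The interval size and the threshold are illustrative, not the exact values we use:

    // Hedged sketch: turning per-frame reach counts into an engagement curve
    // and flagging unusually steep drops between adjacent bars.

    // readersAtFrame[i] = number of readers who reached frame i.
    function engagementCurve(readersAtFrame: number[], framesPerBar = 4): number[] {
      const launched = readersAtFrame[0];
      const lastFrame = readersAtFrame.length - 1;
      const bars: number[] = [];
      for (let i = 0; i < lastFrame; i += framesPerBar) {
        bars.push(readersAtFrame[i] / launched); // share of starting audience
      }
      bars.push(readersAtFrame[lastFrame] / launched); // final bar = completion rate
      return bars;
    }

    // Returns the indexes of bars where the drop from the previous bar
    // exceeds the threshold (as a share of the starting audience).
    function findCliffs(bars: number[], threshold = 0.1): number[] {
      const cliffs: number[] = [];
      for (let i = 1; i < bars.length; i++) {
        if (bars[i - 1] - bars[i] > threshold) {
          cliffs.push(i);
        }
      }
      return cliffs;
    }

Run against the numbers above, a ten-point threshold would flag the 13-point drop between the third and fourth bars while letting the six-point intervals around it pass.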

So we went back and examined the actual content. The section where we lost 13% of the audience is represented by the handful of frames in the orange box below.

One thing we quickly noticed was that #10 and #12 were carrying substantially more text than most of the other frames in the piece. This led to a hypothesis: these two crowded frames were breaking the flow for some readers and sending them packing.

To test the hypothesis, we fixed the density issue, breaking #10 up into two clicks and cutting a sentence from #12. It took just a few minutes to make this structural change and redeploy the stack.

Not long after we made the change, another surge of traffic hit the piece, bringing 300,000 more launches with it. Here’s how the engagement curve changed after we made the fix.

Lightening up those frames led an extra three percent of the audience to read through that portion of the explainer. In turn, most of those readers stuck with the piece to the end, increasing the overall completion rate.

Three percent might not sound like much, but it means an immense amount to us. We're in the business of creating evergreen explanatory content. We've designed our whole reading experience to provide a stress-free orientation for people who are motivated to learn about a particular topic. We built this particular explainer to deliver a focused 4-5 minute read on the complexities of the minimum wage. And if we can keep an extra three percent of the audience tuned in to the end (nearly 10,000 readers in this case), that's a huge win.

Why is this important?

Because most of the folks producing long-form digital content today have very little insight into how well their content is actually engaging its audience. You can pat yourself on the back for a great pageview tally or an astronomical number of YouTube plays, but those numbers don’t tell you anything about whether your article or video actually held the user’s sustained attention. And they definitely don’t alert you if one particularly dense or confusing paragraph in your article is leading otherwise-interested readers to head for the hills.

In a pageview-driven ecosystem, even if you possess the data to see that a particular paragraph is causing problems, you’re probably on to the next thing by the time the data arrives. There is little incentive for you to care.

However, if outlets like Upworthy and the Financial Times are successful in prioritizing “attention minutes” over pageviews, then the type of emergency surgery we performed on “Scraping By” — novel and rare today — is going to be increasingly relevant.

When I look at the results of this fix, I can’t help but ask: What other invisible hurdles are we unknowingly putting in our readers’ way?
