Web Traffic and Stickiness: What Works?

By Matthew Hindman, Joan Shorenstein Fellow (Fall 2014) and Associate Professor in the School of Media and Public Affairs at The George Washington University

The following is an excerpt from Matthew Hindman’s new paper, Stickier News: What Newspapers Don’t Know about Web Traffic Has Hurt Them Badly — But There is a Better Way. Read the full paper.

Growing local news audiences online boils down to two questions. First, how can we make news stickier compared to all of the other content — from Facebook to email to pornography to shopping to YouTube — that competes for users’ attention? Second, how can local news sites make themselves stickier compared to the large national news brands that soak up 85 percent of the news audience?

The good news is that newspaper organizations do not have to start from scratch. Almost two decades of research have documented the factors that allow some sites to build habits of readership. Newspapers need to adopt the same tools, techniques, and strategies that allowed web giants like Google to get so big in the first place.

Perhaps the single most consistent finding, across the entire web traffic literature, is that faster load times lead to higher traffic. Dozens of studies have replicated this result across different sites and diverse categories of content. Even tiny user delays, on the order of 100 milliseconds, have been shown to reduce traffic.

News sites today still load more slowly than any other type of content. When Google CEO Eric Schmidt visited the Newspaper Association of America convention in 2009, his first complaint about digital newspapers was that “the sites are slow. They literally are not fast. They’re actually slower than reading the paper.”

In recent years, though, a few newspapers have gotten the message. Upon buying the Washington Post, Amazon.com CEO Jeff Bezos insisted on reducing load times by 40 percent. Since 2013, The New York Times has revamped its entire Web architecture, everything from hardware to server configuration to its massive code base, to meet new speed targets. The Guardian has cut page loads from 12.1 seconds to 3.2 seconds, and now aims to load core page elements — layout, headline, and article text — in no more than a second, even for mobile users. These are welcome changes, but they need to be replicated at hundreds of other organizations. The fact that large newspapers got there first underscores the size disadvantages that small newspapers face.

Beyond speed, site design and layout have a large effect on site traffic and on purchase decisions. Some of this effect might stem from simple aesthetic considerations. But there are other factors, too, that make design especially important in building traffic.

Several lines of research show that site design and layout are used as proxies for site quality and trustworthiness. Design also has a big impact on users’ ability to navigate a site. Sites that are easier to navigate generate more return traffic and higher sales.

Site design seems to have effects on e-commerce revenue that are even stronger than its effects on raw traffic — something that should give newspapers pause. The paywall push means that most newspapers are now e-commerce sites, as they scramble to sign up digital subscribers. Amateurish and dated web designs are disastrous for readers’ perceptions of quality.

Another key finding in the literature is the crucial importance of personalized content recommendation systems. Automated, algorithmic recommendations are the cornerstone of most large digital firms. Companies like Amazon and Netflix depend on content recommendation systems for a large portion of their revenue, and an even bigger chunk of their profits.

Lists of “most popular” or “most emailed” articles are increasingly common on news and media websites, and they can raise traffic numbers if given a prominent spot on the page. But a large body of research shows that recommendation systems can do much better. Google News’ personalized news recommendation system increased traffic on its homepage dramatically. Similarly, when Fortune tested a content recommendation system, page views spiked by 30 percent.
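To make the mechanism concrete, the sketch below is a toy item-based recommender in Python, not a reconstruction of Google’s or Fortune’s actual systems. It scores articles by the cosine similarity of their reader overlap, the “readers who viewed this also viewed” logic popularized by Amazon; the view matrix and article names are invented for illustration.

```python
import numpy as np

# Toy reader-by-article view matrix (rows: readers, columns: articles).
# A 1 means the reader viewed the article; all values are invented.
views = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1],
])
articles = ["school board", "stadium deal", "crime blotter", "election", "weather"]

def similar_articles(article_idx, top_k=2):
    """Rank other articles by cosine similarity of reader overlap."""
    norms = np.linalg.norm(views, axis=0)              # per-article norms
    overlaps = views.T @ views[:, article_idx]         # shared readers
    sims = overlaps / (norms * norms[article_idx] + 1e-9)
    sims[article_idx] = -1.0                           # exclude the article itself
    top = np.argsort(sims)[::-1][:top_k]
    return [(articles[i], round(float(sims[i]), 2)) for i in top]

# Recommendations for readers of the "school board" story.
print(similar_articles(0))
```

Production systems layer on personalization, recency weighting, and editorial constraints, but the co-visitation core is the same.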

To be sure, recommendation systems are challenging to get right. Newspapers have limited staff expertise in these areas, and they often have trouble paying for the high salaries this specialized knowledge commands. But recommendation systems deserve more investment: few technical changes can provide such a big boost to news traffic.

Technical issues like site speed and content recommendation are both important and underappreciated. But building local news audiences depends not just on site features, but on creating more, and more compelling, digital content. Here, too, the results are clear: sites with more content, more frequently updated, are much better at building traffic. Large news volume is a necessary, though not sufficient, condition for strong audience growth.

It is impossible to build an audience with a mostly static site. By definition, static sites provide no reason to come back. As one executive at The Atlantic remarked to the author, “if a user returns to your site, and finds that nothing has changed, you have just taught them to come back less frequently.”

The importance of fresh content is at the heart of recent discussion of so-called “hamster wheel journalism.” The evolutionary pressure for more content more often has led to an enormous focus on immediacy, and breakneck production of short news articles. In a widely discussed Columbia Journalism Review article, Dean Starkman decried these trends:

The Hamster Wheel isn’t speed; it’s motion for motion’s sake. The Hamster Wheel is volume without thought. It is news panic, a lack of discipline, an inability to say no. It is copy produced to meet arbitrary productivity metrics.

Certainly Starkman is right that these tactics sometimes challenge traditional news values (more on that below). But these approaches are not just “mindless volume”; rather, they are the considered outcome of much research on what builds readership. The reason these techniques have taken over is that the news organizations that adopted them have grown faster than their competitors.

All else equal, news organizations generate more traffic with lots of little stories than with fewer medium-sized ones. Data from Chartbeat shows that fewer than 10 percent of users scroll to the end of a typical news article — most users, in fact, scroll only to the halfway point. This suggests that reporters often spend lots of time writing words that barely get read. Increasingly, these findings are shaping newsroom policies. On May 6, 2014, both the Associated Press and Reuters (apparently coincidentally) issued separate memos asking reporters to keep most news stories under 500 words. In addition to saving reporters’ and editors’ time, the AP’s memo decried a “sea of bloated mid-level copy,” declaring that “our digital customers know readers do not have the attention span for most long stories and are in fact turned off when they are too long.”

To be clear, research does not suggest that newspaper sites can maximize their traffic by eliminating all of their long articles. Research on recommender systems, among other lines of evidence, suggests that the best solution for most newspapers is diversity in article content and format, including article length. Longer feature articles dominate the “most read” lists at most digital newspapers. But local newspaper sites cannot build up a consistent daily audience just with lengthy features. A constant stream of short pieces is the first step to ensuring site stickiness.

Newspapers can also make significant gains simply by making better use of the content they already produce. In particular, headline testing and improved lede-writing can produce substantial jumps in traffic.

One of the most striking differences between successful online media startups like Upworthy, Buzzfeed, or the Huffington Post and traditional newsrooms is just how much time their editors spend writing headlines. Upworthy, a site that often promotes news and public affairs content, requires its staff to write 25 headlines for every story. Interviews with Buzzfeed staff emphasize the same point: a majority of writers’ time is spent writing the headline and the lede, even for stories with original reporting. Practices at the Huffington Post are similar.

Headline testing comes with perils for newspapers. Going too far down the clickbait path, with catchy headlines that misrepresent the article, can diminish the newspaper’s brand and squander readers’ trust. Still, the headline is by far the most read part of the article, and the greatest opportunity to alter reader behavior. Again and again, online aggregators have taken other organizations’ reporting and garnered a tsunami of traffic by adding an A/B tested headline and a quantifiably catchier lede.
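At bottom, a headline test is a comparison of two click-through rates. The Python sketch below, with invented counts, runs a standard two-proportion z-test to ask whether one headline’s higher click rate is distinguishable from noise.

```python
from math import sqrt
from statistics import NormalDist

# Invented example: impressions and clicks for two headline variants.
clicks_a, n_a = 120, 10_000   # headline A: 1.2% click-through
clicks_b, n_b = 150, 10_000   # headline B: 1.5% click-through

p_a, p_b = clicks_a / n_a, clicks_b / n_b
p_pool = (clicks_a + clicks_b) / (n_a + n_b)            # pooled rate under the null
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error of the gap
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided p-value

print(f"CTR A = {p_a:.2%}, CTR B = {p_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")
# Here z is about 1.8 and p about 0.07: suggestive, but 10,000 impressions
# per headline is not yet enough to call a 0.3-point gap conclusive.
```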

Recent shifts in news organizations suggest greater investment in this area, and a growing acknowledgement of the importance of headlines. Among other recent investments of the Bezos era, the Washington Post has created a new team of 16 people focused on rewriting headlines to boost traffic. Headline writing is not an either/or choice between the tepid titles of many newspapers and Upworthy-style “you won’t believe what happened next” headlines. Newspapers can write more compelling headlines while still respecting their values and their brand identity.

In the same vein, optimizing news sites for social media can also boost readership. Many news sites find that Facebook is their single largest source of traffic, with sites like Buzzfeed and the Huffington Post as high-end outliers. Moreover, referral data often underestimates the role of Facebook: much of so-called “dark social” traffic has turned out to be mobile Facebook users, though sources like Chartbeat have recently gotten better at correctly attributing the traffic source. Capturing even a trickle from the Facebook firehose can produce wild traffic spikes.

Optimizing for social media is about more than adding “like” and “tweet” buttons to the website, or requiring reporters to tweet, or even Facebook-friendly headline testing of the sort described above. Most mid-size and larger local papers now have at least one person focused on social media, which is a start. But the features of a good social media story need to be considered at every stage of the news process, from story pitch to final publication. Increasingly, digital news sites have deployed dedicated social media teams to coordinate this process, and to push a set of promising stories in the hope that they will go viral. At the Huffington Post, for example, different sections and “verticals” are required to pitch stories to the social media team several times a day.

Facebook-referred traffic is actually even more biased towards large, national news outlets than Web traffic as a whole. Newspapers need not (and should not) turn their sites wholly over to social content, but they do need a consistent stream of suitable articles. Even modest improvements would have an outsize impact on closing the gap between local papers and national outlets.

To be sure, there are limits to the gains social media can provide. Facebook visitors are mostly flybys, looking at a single page and spending less than a minute. Facebook users are difficult to keep for that second or third page view, let alone convert to paid subscribers. News organizations overly dependent on Facebook visitors are quietly ceding a great deal of control.

Moreover, even substantial investments in social media can evaporate without notice when Facebook or Twitter changes its rules. One prominent example is the Washington Post’s Social Reader. The app promised to “share what you read with your friends,” adding recently read articles to subscribers’ news feeds. Social Reader’s developers got substantial technical help and encouragement from Facebook’s own staff in building the app, and at its height the app had more than 17 million users. Yet in late spring 2012, without any warning, Facebook redesigned its site and altered its algorithms. Traffic plummeted almost overnight. By December 2012, the Post had killed the app. The Guardian’s similar social reader app, also created with help from Facebook, suffered the same fate.

Lastly, multimedia content attracts more traffic than plain-vanilla text articles. This includes interactive elements and graphics, which have long been associated with high levels of reader engagement. But video content and even simple slide shows typically outperform text alone. Some digital news sites already aggressively exploit this finding: the Huffington Post, for example, has invested heavily in slideshows, and Buzzfeed in scrollable image galleries. The Huffington Post is so committed to the strategy that, as of this writing, it requires that slideshows accompany most of its articles.

In this regard, newspapers are missing an easy layup. In the print paper, reporters are strictly limited in the number of photos they can publish, but no such limits exist online. Digital newspaper articles are often text-only, when they would earn more time and attention from users with a handful of photos or even a gallery.

Yet for content that requires higher levels of investment, this finding is more equivocal. The New York Times’ story “Snow Fall,” about a deadly Washington state avalanche, is an oft-cited example of how digital news organizations can tell stories in new and sometimes dazzling ways. But “Snow Fall” required an enormous investment of journalistic resources. It took John Branch six months to report the story, plus the work of a photographer, a researcher, three video people, and an 11-person (!) graphics and design team. Because the Times’ content management system could not support the story’s rich content, the entire page format had to be built from scratch. Some of this functionality might eventually be built into the newspaper’s standard digital platform, making future projects easier. Still, the bottom line remains: multimedia content might generate more traffic, but it also requires more resources to produce. For many pieces of rich content the opportunity cost is simply too high.

The Infrastructure of Growth

The tactics discussed above are not a comprehensive list of everything newspapers could do to grow their digital audience, but they are a start. The median local newspaper could be improved in every single one of these areas. If money were no object, the prescription would be simple: do everything, and do it now.

Of course, for newspapers money is exactly the issue, and everything-at-once is not a viable strategy. Newspapers need to think marginally, to identify the changes that provide the most stickiness for the least additional cost.

Some strategies are so important that they should be implemented immediately. For any editors reading this: If your site is slow, you are bleeding traffic day after day after day. If your site does not work seamlessly on mobile or tablet devices, drop everything and fix it. If your homepage does not have at least some visible new content every hour, you are throwing away traffic. Fix these problems first.
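Even a crude script can flag the first of these problems. The Python sketch below, using the requests library and a placeholder URL, times repeated fetches of a homepage and reports the median server response time; it measures only the HTML response, not images, scripts, or rendering, so real user load times will be longer still.

```python
import statistics
import requests  # third-party: pip install requests

# Placeholder URL; substitute your own homepage.
URL = "https://www.example.com/"

def median_response_time(url: str, samples: int = 10) -> float:
    """Fetch the page repeatedly and return the median response time in seconds.

    Times only the HTML response (request sent to headers received), not the
    full page render, so treat the result as a floor on real load times.
    """
    times = []
    for _ in range(samples):
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        times.append(resp.elapsed.total_seconds())
    return statistics.median(times)

if __name__ == "__main__":
    print(f"Median server response: {median_response_time(URL):.3f} s")
```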

Beyond these easy gains, though, the problems of increasing stickiness get harder and the trade-offs trickier. For these more difficult questions testing is crucial. Newspapers have to perform live experiments on their websites, in order to learn what they need to know. There is no substitute for data.

Online field experiments are the single most important strategy that has allowed today’s web giants to get big in the first place. Google researchers report that they “evaluate almost every change that potentially affects what our users experience.” Increasingly, sites have moved beyond testing just two variants of a Web page, as A/B testing implies, to multivariate testing (MVT) in which many variables are tested simultaneously. Ron Kohavi, formerly of Amazon and now head of experiments at Microsoft, credits online experiments with adding hundreds of millions of dollars to Microsoft’s bottom line. Less appreciated but just as important, similar sums were saved by catching damaging features before they went live. Large firms such as Google, Microsoft, and Facebook have more than a thousand experiments running at any given time.

Though A/B testing began to be employed at sites like Amazon.com and Yahoo in the 1990s, most newspapers still lack the infrastructure or expertise to perform online experiments. First, newspapers must reliably track individual users, both over time and across devices. This is not trivial. If users cannot be reliably separated into treatment and control groups no experiment can work. Newsroom subscriptions to services like Omniture and Chartbeat are one way to solve the problem of tracking users.
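One common approach, sketched below in Python with invented identifiers, is deterministic hashing: the same user ID always lands in the same bucket, so treatment and control assignments stay stable across visits (though stitching identities across devices still requires a login or a similar persistent identifier).

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, n_buckets: int = 2) -> int:
    """Deterministically map a user to an experiment bucket.

    Hashing the experiment name together with the user ID keeps each user's
    assignment stable across visits, while keeping assignments independent
    across different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

# Hypothetical usage: bucket 0 = control, bucket 1 = treatment.
print(assign_bucket("reader-48213", "homepage-layout-test"))
```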

Second, newspapers need to be able to serve altered versions of their webpages. Most newspapers currently do not have this ability — but this should be easy to fix. Cloud computing platforms such as Amazon Web Services or Google App Engine/Compute Engine are easy to use and astonishingly cheap — though of course newspapers need to make sure that load times and responsiveness are equal across different servers. Many vendors now provide A/B testing as a service with just a few extra lines of code on the target webpage. New open source multivariate testing platforms, such as Facebook’s recently released PlanOut, are even more sophisticated, and cost nothing other than developers’ time.
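Following the pattern in PlanOut’s own documentation, an experiment is a small Python class; the headline strings and user ID below are invented for illustration.

```python
from planout.experiment import SimpleExperiment
from planout.ops.random import UniformChoice

class HeadlineExperiment(SimpleExperiment):
    def assign(self, params, userid):
        # Each user ID is deterministically assigned one headline variant;
        # SimpleExperiment logs every exposure for later analysis.
        params.headline = UniformChoice(
            choices=["Council approves stadium deal",
                     "What the stadium deal means for your tax bill"],
            unit=userid)

exp = HeadlineExperiment(userid="reader-48213")
print(exp.get("headline"))
```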

Increasingly, then, newspapers have no excuse not to perform online experiments. Many news organizations are already doing substantial online testing. Large online-only news outlets, news divisions that are part of larger digital firms (e.g. Yahoo!), and a few prestige news brands have invested heavily in measurement. Yet even amongst this group there remains too little understanding of what exactly news sites should be optimizing for. This uncertainty can also be seen in missives about the journalism crisis, which are filled with vague, contentless calls for “innovation.” Newspapers have been told to “experiment, experiment, experiment” without specifying what hypotheses these experiments are supposed to test.

Often discussions of A/B testing in the newsroom have dealt with the total traffic gained or lost. But this reflects old-media thinking, the notion that audiences are mostly stable, and that any changes to the site bring a near-immediate boost or drop to that total. To be most effective, A/B testing has to begin from the understanding that web traffic is dynamic. Newspapers are looking not for changes in their total traffic, but rather changes in their growth rate. Positive changes that make people more likely to come back, or more likely to view that extra article, compound over months and years. Tests need to run for weeks, or even a couple of months, in order to accurately gauge their impact.
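The arithmetic of compounding is why this framing matters. As a hypothetical illustration, a change that lifts weekly traffic growth by even half a percentage point compounds to roughly 30 percent more traffic after a year:

```python
# Hypothetical illustration of compounding growth.
weeks = 52
weekly_lift = 0.005   # a 0.5% weekly lift in returning traffic

multiplier = (1 + weekly_lift) ** weeks
print(f"After {weeks} weeks: {multiplier:.2f}x baseline traffic")
# -> about 1.30x, i.e. roughly 30 percent more traffic than a flat site,
#    from a change far too small to show up in any single week's numbers.
```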

Moreover, A/B testing makes it all too easy to optimize for the wrong thing. Consider the case of one large national newspaper, which embarked on a program to test headlines. To their surprise, they found that headlines chosen for maximum clicks actually lowered total news traffic. Dramatic headlines attracted a larger fly-by social media audience, but turned off those readers most inclined to visit the second or third article once they were on the paper’s homepage. This example emphasizes, again, the need to focus on robust metrics that are harder to game and less likely to lead analysts astray.

Cooperation

A/B testing is indispensable, but it is expensive in terms of staffing and newsroom resources. One strategy to defray these costs is broader industry-wide cooperation.

Online testing is particularly challenging for smaller organizations. Per reader, experiments are more costly with a smaller audience. The Times or the Guardian can spread the costs of testing infrastructure and analytics staff across many hundreds of thousands of readers, while a midsized metro daily cannot. Even worse, the math of testing itself creates a challenge. Big firms like Google and Yahoo have been able to test thousands upon thousands of potential improvements. Often these changes are small or seemingly trivial, such as a site’s color scheme, or a margin’s width in pixels. Yet the aggregate effect of this testing is profound. Nearly every element of their Web pages, every piece of their user experience, has been tested and optimized.

Newspapers, especially smaller-circulation newspapers, will never be able to detect such tiny effects. Web traffic is highly variable. Some of this variation in traffic is systematic, such as higher traffic on weekdays versus weekends, or a boost in traffic surrounding an election, or a particular story that goes viral. But most of these ups and downs are just random noise.

This noise means that two groups of randomly selected readers will never show exactly the same level of traffic growth over time. The treatment group will always be at least a little higher or lower than the control group. The challenge is to discern whether the gap between treatment and control is a genuine treatment effect or just the result of chance. Big sites like Google and Yahoo can be confident that even very small gaps between treatment and control represent real differences. If Google and Yahoo have 10,000 times more traffic than a typical midsized newspaper, they can detect effects roughly 100 times smaller.
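That claim follows from the square-root scaling of sampling error; as a back-of-the-envelope sketch:

```latex
% With n users per group, the standard error of the treatment-control
% difference scales as 1/\sqrt{n}, and the minimum detectable effect
% (MDE) scales with the standard error:
\mathrm{MDE}(n) \propto \frac{1}{\sqrt{n}},
\qquad
\frac{\mathrm{MDE}(10{,}000\,n)}{\mathrm{MDE}(n)}
  = \frac{1}{\sqrt{10{,}000}}
  = \frac{1}{100}.
```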

Because of the statistical challenges of detecting small effects, and limited analytic resources, newspapers need to join forces, sharing research and expertise with other news organizations. The advantages of cooperation are many. Newspapers can pursue a far broader research agenda and limit redundant effort. Analytics expertise is one of the scarcest resources in journalism, and sharing lets those scarce skills be leveraged across many organizations. Joint work provides greater statistical power — especially important with smaller audiences and long testing windows — and it ensures that important results replicate.
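A standard power calculation makes the pooling argument concrete. The Python sketch below, using statsmodels and invented rates, estimates how many users per arm are needed to detect a one-point lift in return-visit rate; the requirement is daunting for a small paper alone but comfortable for a pooled test.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Invented example: detect a lift in return-visit rate from 20% to 21%
# at the conventional alpha = 0.05 with 80% power.
effect = proportion_effectsize(0.21, 0.20)   # Cohen's h, about 0.025
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided")

print(f"Users needed per arm: {n_per_arm:,.0f}")
# -> roughly 12,500 users per arm, before any allowance for the long
#    testing windows that growth-rate experiments require.
```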

Of course, much informal sharing already takes place. Ideas and research are shared on Twitter and blogs, at industry conferences, through email, and in one-on-one conversations. Newspapers such as the New York Times and the Guardian have been laudably forthcoming about their research findings and technical platforms (see above). The American Press Institute, the Knight Foundation, and the Pew Research Center, among several other industry groups and academic centers, have fostered sharing of research across news organizations.

Still, none of this is a substitute for more organized efforts. Newspapers need a forum through which they can outline a common research agenda, share results, receive peer feedback, and synthesize findings. Failed experiments need to be highlighted, too, in order to avoid the file drawer problem. Such a research group could be organized through a professional association, such as the American Society of News Editors or the Online News Association. Alternatively, foundations could provide the organizing role.

In many industries firms are understandably reluctant to share core business information, or to collaborate in building common infrastructure. But newspapers are in an unusual situation. Newspapers rarely compete directly with each other. The Seattle Times is not a rival to the Tampa Bay Times, though both are now facing off against sites like CNN.com and Yahoo and Buzzfeed. Moreover, as reporters and editors themselves loudly declare, journalism is not just another business. Journalism’s commitment to openness is part of what makes it worth saving. Harnessing that public-spirited ethos, and being willing to share with their peers, is essential if newspapers are to adapt to the digital age.

Read the full paper, Stickier News: What Newspapers Don’t Know about Web Traffic Has Hurt Them Badly — But There is a Better Way.
