Death to Assumptions, Long Live User Testing

How The Atlantic incorporates customer feedback into its product development cycle

“You are not your average customer.” I tell this to new product managers, and it’s a refrain that we repeat often on our team. We know our site inside and out, and can easily be blinded by our own organizational preferences and biases. In an effort to see more clearly, testing has become critical to our product development and user experience work. We need to observe how real customers, not Atlantic staff, experience our site.

At The Atlantic, we do two types of testing: on-page multivariate testing and usability testing. We began robust A/B testing in early 2015, and have recently launched a new effort to make usability testing part of our regular product development cycle. We are a relatively small team made up of two designers, four product managers and seven developers responsible for all digital display and distribution of TheAtlantic.com and CityLab.com. Our experience shows that it doesn't take a large team or a big budget to generate valuable insights.

Some people ask what usability testing adds that we can't learn from analytics. One of the main benefits is hearing about our customers' thought processes and perceptions of the site. It's helpful to hear people describe how they pick stories they want to read. Likewise, it's important to understand which parts of our site are frustrating to use. For example, analytics can tell us how many newsletter subscribers we have. The number alone tells us whether people are subscribing, but without usability testing we don't know if the process is frustrating or confusing. What's more, without talking with customers we don't know how many people might have been interested in newsletters but didn't know they existed or where to sign up.

Our Experience with Usability Testing

Our team has experimented with usability testing in the past — particularly before important launches. We wanted to create a process that would be affordable, informative and easy to deploy regularly for our small team.

Before joining The Atlantic, I worked in new product development for a major financial services company. We did extensive user testing and I spent many hours behind one-way glass watching real customers interact with products in development. At The Atlantic we wanted to incorporate the same level of testing rigor without the research facility budget or personnel.

Here’s how we did it and what we learned along the way.

Set the Scope

For the pilot round of our new approach to usability testing we focused on our core homepage audience. We have a very stable, loyal and engaged group of people who visit the homepage regularly. The homepage was originally designed to meet their needs with curated top stories, topic-specific modules and frequent updates. We wanted to understand when, why and how this group visits the homepage; what they love or hate; and whether we have the right mix of content for them. Our goal was to improve the customer experience, find ways to increase the number of articles read per visit and encourage people to return more often.

Recruit Participants

We recruited by posting a request for volunteers on our homepage. We narrowed the list down to folks who lived in the DC area, visited the homepage multiple times per week, and agreed to let us film the conversation for internal use. Then we invited a mix of ages and genders, including both magazine subscribers and non-subscribers, into the office to meet us in person. As a thank-you, we offered participants Atlantic swag.

Write the Script

To start the process we clearly outlined our goals and what we wanted to learn, and from there devised questions and tasks for the participants. When we wanted to understand how easily customers could accomplish basic objectives like changing their print mailing address or finding an author they’d heard about on TV, we wrote tasks and observed how participants approached them. For more behavioral insights, like when and how people watch video, we simply asked customers about their habits.

Practice Makes Perfect

Then it was time to practice interviewing so we could learn how to listen, probe for more information and avoid guiding the participant to an answer we wanted to hear. It's awkward sitting in silence next to someone while they try to figure out how to use your site. Practicing on a safe person, coworker or otherwise, is time well spent so the real thing goes smoothly.

The Big Day

Our product and dev team took turns moderating interviews in a conference room, while a rotating cast of observers watched and took notes in another room. From our final script, we created a note-taking template for the observers to quickly track key takeaways and interview details.

We used a speakerphone and a screen share to let the backroom listen and watch the sessions, which were also recorded for future reference. After each session and at the end of both days, the moderators and observers regrouped to compare notes on what we had learned, the trends that were emerging, and whether we needed to adjust the script.

What We Learned

The feedback we heard was full of encouraging reinforcement about what we're doing right. We learned that many regular homepage readers use the page as an index, in a way we hadn't expected. We also had several a-ha moments where we realized our site had confusing UI or unclear naming conventions.

We summarized all of our findings into an internal cross-functional presentation. The recommendations included immediate changes to the site that we've already made, such as adding more links for servicing print accounts; ideas to A/B test, like new modules on our homepage; and challenges for our designers to work on resolving, including upcoming improvements to our site navigation.

Refining the Process

We also evaluated our approach to usability testing to identify what worked well and what we could improve. We found our first round of testing, which had ten interviews over two days, offered diminishing returns after the seventh or eighth interview as we stopped hearing new insights. In the future, we’ll cap the number at seven. We also learned we budgeted too much time between conversations and that 30 minutes is sufficient. Fewer sessions, and a denser schedule, means we can fit all the interviews in a single day.

The hardest thing to predict was the pace and length of the script. We thought we had plenty of questions planned, with extra topics in case a session moved especially quickly, but we still ended a few conversations slightly early. Time with readers is precious, so it felt like a shame not to use every second. In other conversations, we had to truncate sections of the script in order to make it through all the topics. We'll tweak our script in the future and include more backup questions to make full use of our allotted time.

We have since done a second round of usability testing, focused on the navigation and article page, and the changes we made to our process were definite improvements. We'll keep optimizing as we go, but we're pleased with the process we've put in place.