How user testing has changed during coronavirus

Grace Lauren
Content at Scope
Jun 11, 2020 · 6 min read

User testing has always been an important part of improving our online advice and support. The onset of coronavirus put new demands on our work. We were getting a large number of user needs through our helpline, online community and research channels, and many of these were urgent. Our programme lead Stephanie has written a blog post about how we adapted to this challenge. During this time we expanded our testing methods to provide a more rounded view of our users’ experience.

Testing in more ways means that there’s more work for us to do, but we get better feedback on content and navigation. We can also provide information faster if we test content after publishing it.

We had been working towards live testing for a while, but the urgent need for information during coronavirus pushed us to start.

Before coronavirus

Before coronavirus, we tested content before we published it. This meant our content designers could use feedback to improve the page before it went live.

We sent participants a piece of content in a Word document and then had a call with them. Most of our testing was done remotely before coronavirus, so we were used to using Skype or Google Hangouts. During the call, we asked them to read the content aloud section by section. We would then ask for their thoughts on each of the following points:

  • Lived experience: Is what we are saying true for our readers? Does it reflect their reality?
  • Title: Does it reflect the information provided?
  • Language: Is there any jargon?
  • Tone: Is there anything that makes readers feel patronised or anxious? Do we sound friendly, matter of fact or blunt?
  • Structure: Is the information presented in a helpful order?
  • Missing information: Is there anything else we need to include?
  • Acceptance criteria: Does the reader know what we set out to tell them?

We recorded feedback in a Google Doc and shared it with the team.

Screenshot of a Google Doc with comments from a testing participant, who says a sentence is hard to read.

This approach gives good content-focused feedback. It:

  • catches false assumptions about what language people will understand
  • alerts us to sentences that are difficult to read
  • highlights information gaps.

But there are limitations. We do not see how participants naturally engage with our content.

We give participants a document and ask them to read it from start to finish. This gives us detailed feedback on the words on the page, but doesn’t show us how people would consume it online.

What participants see is not the final version of the content. Content often changes based on feedback before publication, meaning live versions may include untested additions.

At the end of a test we know what people think of the content, but we don’t know whether they will be able to find it on the website.

Finally, we know participants can read the content, but we don’t always know if they understand it.

How testing has changed during coronavirus

We decided to test new content after we published it so we could make it live faster.

Because content was live when we tested it, we were able to expand what and how we test. We still wanted to maintain the detailed content feedback gained from our existing testing methods, but we also wanted to take advantage of the freedom to test usability too.

We decided to continue testing each page twice.

1. With our existing methods

2. Using usability and comprehension testing to check that our content is easy to navigate, see and understand.

Navigation

With the participant sharing their screen, we start each session on the Scope homepage and ask them to find the page we are testing themselves. This helps us to see if our navigation menu is working for people. We can also check whether people notice our coronavirus banner at the top of the homepage. They usually do!

Analytics and heatmaps to check content visibility

Because pages are now tested live, we can combine Google Analytics and Hotjar data with our qualitative testing feedback. We record heatmaps of each page after publication. These show us how larger numbers of people scroll and click on different areas. It helps us restructure pages where we see significant drops in scroll numbers. It also tells us if important links are being seen and clicked by our users.

Screenshot of a heatmap showing lots of clicks on a link to food box deliveries.

We have ‘related content’ at the bottom of each page and want to check if people see it. So far testing suggests that when people are looking for something specific, they will find the related content section. But heatmap scroll data suggests that generally only a small percentage of viewers get that far down the page.

Understanding

We’ve started using scenarios to check if people understand content. This means we can see if we are meeting our acceptance criteria.

For example, for our information for parents about school closures and EHCPs, one of the acceptance criteria was:

“I know what school closures mean for my child”

We used this scenario to check understanding:

“Imagine your child had an education, health and care plan (EHCP). Could you still send them to school?”

When participants give us incorrect or ambiguous answers, our content designers make the content clearer.

Challenges

More time

The biggest challenge was time. It takes more time to:

  • use data to assess and report on page performance
  • make comprehension tests for each piece of content

It only works for some pages

After we started using this new approach, we noticed that some pages are not suited to comprehension testing.

For short, simple or signposting pieces we focus on usability and content, and less on comprehension.

Choosing which pages needed comprehension testing

We wrote a quick guide to help us decide whether a content item is a candidate for comprehension testing. A page is a candidate if any of these conditions apply:

Situations with complex solutions

This includes anything with multiple steps or calls to action.

Multiple ‘if’ conditions

This often happens when someone needs to be in a certain situation to be eligible for something: if you are x, you can apply for y.

It gets more complicated when there are different answers for different variations of a problem: if you are x1, you must do y1, but if you are x2, you must do y2.

High risk situations

We want to be sure that people can avoid a negative situation. For example, losing support or anything that could have a financial impact.

This often means testing the warnings that we put in our content.

When there are prerequisites for a task

This means that someone needs to have already done something for the information to be true for them. We need to check readers understand this and can find the information they need.

Next steps

We plan to continue live testing

Based on our experience with coronavirus content, we are trialling a new approach. When we go back to our non-coronavirus content, we plan to continue live testing.

We will test higher risk pages before we publish them, and live test all content after we have published it.

We will use Google Analytics and Hotjar data when the page has had time to gather enough page views for analytics to be helpful.

We will do a review in 6 months

People can rate content as helpful or unhelpful and leave comments.

Screenshot of the “was this page helpful?” pop-up on the Scope website.

After 6 months, we will review feedback to see if not testing ‘low risk’ pages before we publish them has any negative consequences.
