Testing content and user research

Jonathan Richardson
Published in UsabilityGeek
May 6, 2020

As a user researcher, I’ve often been asked to test content. Content is one of the simplest things for UX teams to change — ownership of the content (and often the images) is something teams typically have, whereas changing the backend and overall design of a site can be slow and convoluted.

Another reason is that changing content can have a massive impact on readers. In particular, focusing on the user needs for a page, keeping the language plain and reviewing the structure will typically improve the user experience.

Yet there is no standard way to test content, so this post is about some approaches that have worked for me.

Content testing principles

A workshop review of content

As with all user research, we do not test the user, we test the content.

We also have to be clear about what we’ve changed, why we made these changes and how they fit into our hypotheses. We should also test each part in isolation, if possible.

This way it’s clearer which changes have had which effect, such as:

  • the structure — reorder and delete paragraphs, and only change wording to ensure the flow still makes sense
  • sub-headings and navigation — keep the original wording and structure but add or revise sub-headings to aid navigation (it’s surprising how much content is still published without sub-headings)
  • the words — new wording, with the caveat that complete rewrites will include sub-heading and structure changes

To test the results of these changes we also need a measure or goal.

Testing if content meets user needs isn’t always the goal

The obvious goal may be to see if a page meets user needs: can the user achieve a need-based goal (“does the user find out how to get a passport?”)?

Yet serving a user need doesn’t always mean the content is good. I’ve worked on pages where the page technically met the user need but the information was buried, or conversely the answer came first but the rest of the page was guff.

So you can technically say you met the user need… but you’ve done it in an inelegant way. After all, you can drive from London to Edinburgh solely in second gear; you achieve your goal, but your car is not going to thank you for it.

User votes on a qualitative task

Measuring ‘better’

I’m guessing that if you’re updating content it’s because you want to make it ‘better’. But measuring ‘better’ is hard, particularly if you’re not able to do it at scale and use live web analytics (which has typically been the case in my experience).

Even when using web analytics it can be tricky. For example, is a high bounce rate a good or a bad thing? If the page answers everything the user wants and there’s no need to keep them on the site, then they should bounce away.

On public service sites such as GOV.UK this was acceptable (and better for the user); on a site that wants to keep users around to sell to them or to advertise, not so much. Either way, you need to know your analytics and know what counts as better.

But what if you’re testing prototype content, or don’t have analytics software?

Non-analytics content testing methods

I recommend setting a simple pass/fail benchmark and measuring whether users achieve it. This is my preferred method, because task completion does not rely on users self-reporting results but comes from your own observation.

But what about pages that don’t lend themselves to tasks, such as pages that contain information?

For that, I agree a standard question with the team and ask users whether they believe the content lets them answer or achieve it, with a simple yes or no response.

You risk that users will give you the answer they think you want, that they don’t understand the question, or that you’ve worded the question badly. But I believe it’s better to do this than nothing, and gathering data from a wide range of users should minimise these effects.

For example, on a recent content test I asked users whether two pages of similar content “gave them enough information to complete an application in one go?” and “would they send this page to a friend who needed help on this?”.

I don’t believe these are perfect questions, but if we test with enough users we can get a broad picture.

Benchmarking content

Benchmarking is comparing responses to your standard question for the existing content with responses for a new version of that content.

Keep the question consistent and agree in advance what is ‘better’ and what kind of difference you are expecting.
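
To make the comparison concrete, here is a minimal sketch in Python of tallying the yes/no answers for two versions of a page against the same benchmark question. The counts and the ‘better’ threshold are made-up assumptions for illustration, not results from a real test.

```python
# Compare yes/no answers to the same benchmark question for two versions
# of a page. The counts below are made-up examples, not real results.

def pass_rate(yes: int, no: int) -> float:
    """Share of users who said the content met the benchmark."""
    total = yes + no
    return yes / total if total else 0.0

# Benchmark question: "Does this page give you enough information
# to complete an application in one go?"
existing = {"yes": 6, "no": 6}   # current live content
rewrite = {"yes": 10, "no": 2}   # new draft content

existing_rate = pass_rate(**existing)
rewrite_rate = pass_rate(**rewrite)

# Agree the threshold for 'better' with the team before testing.
MIN_IMPROVEMENT = 0.15  # e.g. at least 15 percentage points higher

print(f"Existing content: {existing_rate:.0%} pass")
print(f"Rewritten content: {rewrite_rate:.0%} pass")
if rewrite_rate - existing_rate >= MIN_IMPROVEMENT:
    print("The rewrite meets the agreed 'better' threshold.")
else:
    print("No clear improvement against the benchmark.")
```

With a handful of participants these percentages are only a rough signal of direction, not proof.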

But while this gives you a quantitative comparison of sorts, qualitative responses are extremely valuable. This is why I ask users to highlight content, and I use this to identify which passages work well and which need improvement.

The highlighter test

Content highlighter test. Pete Gale via GOV.UK

I’ve borrowed Pete Gale’s Content Highlighter test in workshops to get multiple users to highlight a single page.

I’ve found what works best is for each user to mark up their own copy first and then transfer their marks onto a communal printout, which minimises them being led by others’ responses. I also ask them to leave sticky notes with their comments, and I follow up any questions I have with them.

In Gale’s method he uses ‘decreases confidence’ and ‘increases confidence’. I’ve experimented with green for ‘this is good, I like this, it’s useful’ and red (or orange, or pink, I’ve yet to find a red highlighter) for ‘this is confusing, makes me pause, I don’t like it’.

The main thing is that your team agrees on a consistent colour scheme to rate content.

Capturing comments

Both the highlighter test and votes require context to help us understand why the user chose what they did.

In a moderated session you can simply ask them afterwards. In a workshop, encouraging comments on sticky notes, or observing what participants do and asking them about it, works well.

In an unmoderated session, such as when you use a tool like Google Forms to get feedback, include a question asking them why, then follow up in interviews.

Combining these methods

These approaches can be used in moderated and unmoderated user research sessions.

Moderated methods with interviews in the session:

  • the highlighter test
  • workshop highlighter exercise — suitable for large groups; ask them to highlight text on a large printout and leave notes on sticky notes
  • workshop pass/fail vote — ask users to vote with a dot on whether a piece of content does or does not answer your benchmark question

Unmoderated methods — ideally you will follow up with an interview or questions:

  • Shared document comments — share a Google Doc or similar in comment-only mode and ask users to comment on the text, noting both what works and what doesn’t
  • Forms for text — use Google Forms or other survey tools to send users paragraphs of text with your questions underneath, e.g. how did this make you feel, was this useful, which content do you prefer?
  • Forms for voting — ask your benchmark question, get users to vote yes or no and then ask why (a rough sketch of tallying these responses follows below)
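
If you collect these unmoderated responses through a form, a small script can pull the votes and ‘why’ comments together. The sketch below is Python; the CSV file name and column headings are hypothetical and will depend on how your survey tool exports responses.

```python
# Summarise unmoderated yes/no votes and 'why' comments from a survey export.
# The file name and column headings below are hypothetical; adjust them to
# match your own form's CSV export.
import csv
from collections import Counter, defaultdict

votes = Counter()             # (version, answer) -> count
comments = defaultdict(list)  # version -> list of 'why' comments
versions = set()

with open("content_test_responses.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        version = row["Version"].strip()
        answer = row["Benchmark answer"].strip().lower()  # "yes" or "no"
        versions.add(version)
        votes[(version, answer)] += 1
        why = row["Why?"].strip()
        if why:
            comments[version].append(why)

for version in sorted(versions):
    print(f"{version}: {votes[(version, 'yes')]} yes / {votes[(version, 'no')]} no")
    for why in comments[version]:
        print(f"  - {why}")
```

The counts answer your benchmark question; the ‘why’ comments give you the context to follow up in interviews.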

Want to learn more?

If you’d like to become an expert in UX Design, Design Thinking, UI Design, or another related design topic, then consider taking an online UX course from the Interaction Design Foundation. For example, Design Thinking, Become a UX Designer from Scratch, Conducting Usability Testing or User Research — Methods and Best Practices. Good luck on your learning journey!

Other ways?

If you have other suggestions on how to test content, please leave me a note below.

Jonathan Richardson
User researcher and writer with a focus on the journalistic and anthropological approach