Making content design decisions using research

Jack Garfinkel
Content at Scope
4 min read · Dec 1, 2022

“We know this is true…right?”

I like listening when the team makes decisions. When people write or say things like this, it’s usually a good sign:

“The last time we did X, in testing people said Y…
Search data suggests that…
From analytics, it seems that…
From card sorting most people chose X as a label rather than Y…”

Curious pigeon pokes its head around a doorframe. It is inquisitive, and vaguely threatening. It may or may not be thinking about stealing your walkman.

When I hear people say (or I want to say) “I think that…”, that’s a time to start asking questions. Questions like:

  • Why do we think this?
  • What do other people think?
  • Do we have any data that confirms or contradicts this? If not, how hard would it be to get?

If the conversation is moving fast, that’s also a good reason to slow things down.

Working in the open

It’s important that the way we make team decisions is open and fair.

Guesses without evidence are OK. Sometimes that’s the only option we have when we’re one or more steps beyond what we know, which we often are. But it’s important to be clear about:

  • what we don’t know (no evidence)
  • what we think we know (some evidence)

Talking about what we know, and why, helps the team to remember too, especially if we pair it with good team docs.

Often that means trying to centre the people closest to the data on how people use and experience our content. That’s usually our user researchers, although we’re not rigid about team roles.

We like designing with data and knowing things

We like to know things. These things bubble up slowly: 4 or 5 tests across a couple of different bits of content, or web analytics building up over months or years.

We can cite sources. The sources aren’t perfect, but citing them helps us to design in the open. We can agree on why we’re making decisions.

  • People in testing said ‘condition’ felt OK, but ‘impairment’ did not, especially if we used it more than once.
  • At the start of the pandemic, people were searching for ‘coronavirus’ more than ‘COVID’, unless it was “Long COVID”.

They’re not absolute truths. Some of them change over time: ‘COVID’ is more common as a search term now.

But they help us to make better guesses when we’re designing new content or trying to improve existing content.

That means making time to:

  • get evidence, even if you have great researchers (we do!)
  • agree what’s enough evidence, for now at least
  • help everyone to remember and use it, even in a small team

An example

We write content for disabled people. This usually means trying to support them to move through complex, arbitrary and hostile systems. This includes the UK’s benefits system.

I’ve been writing content to help people understand what will happen when their life changes. Things like:

  • moving house
  • starting a new job
  • living with a partner

The answers about what can happen are complicated: so many variables can affect the outcome. For example:

  • Are you moving to a new local authority?
  • Do you rent privately because that was your only choice?
  • Was your child over 18 when they moved back in with you?

We try to give the information that will help people to avoid risks and to plan. But when more than 3 things can affect the answer, text can start to feel like a complicated way to give it.

At that point, we usually link to a benefits calculator (Turn2us).

We had a piece go through our improvement workflow. We noticed it had more numbers in it than usual.

Our researcher told us that we had clear feedback that people wanted more detail, more numbers.

We booked in some time to talk more about it. It became clear that people on the content team did not share an understanding of what level of detail we should use.

What people want

People want to know what might change. Will they be able to survive if their benefits change or stop? They want to know how much money they will have tomorrow. Next week. Next month.

Disabled people testing our content have told us they want to see more amounts in the content that we write. It doesn’t feel nice when it seems like information is being kept from you.

So, we’re thinking again about how we do this, and testing alternatives to see what’s easiest for people to use and understand.

What we’ll do next

We’ll start adding more detail and compare results from testing. We’ve also explored the problem space enough to realise that there are different kinds of numbers in benefits content.

There are:

  • simple amounts: £10
  • multiple amounts: £10 if X, £20 if Y
  • rates: 3% or “for every £100 you earn, £3 will be…”
  • tapered rates: “3% if you earn under £10,000, 5% if you earn over £10,000”
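To make the difference between a flat rate and a tapered rate concrete, here’s a minimal sketch in Python. The £10,000 threshold and the 3%/5% rates come from the illustrative bullet above — they’re made-up numbers for this example, not real benefit rules.

```python
# Sketch of a "tapered rate": the percentage applied depends on which
# band the earnings fall into. Threshold and rates are illustrative only.

def tapered_deduction(earnings, threshold=10_000, low_rate=0.03, high_rate=0.05):
    """Return the deduction for a given level of earnings."""
    rate = low_rate if earnings < threshold else high_rate
    return earnings * rate

print(tapered_deduction(8_000))   # 3% of £8,000
print(tapered_deduction(12_000))  # 5% of £12,000
```

A “simple amount” is just a constant, and a flat “rate” is one multiplication; the taper is what forces the reader to first work out which band they’re in — which is part of why this detail is hard to convey in plain text.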

We’re going to start experimenting. We’ll use different levels of detail and look at the feedback we get in testing. We’d like to do this in the same piece and compare results. I hope that in a year, we’ll be able to talk more about what we know, and less about what we think.


Content designer at Content Design London, making accessible content for charities, government and businesses.