Using data to find out what content to improve

Jack Garfinkel
Content at Scope
Jul 29, 2021 · 5 min read

In the Scope content design team, we organise our work into ‘to do’ lists. Lists like:

  • early research insights
  • grouped insights to plan and explore with subject experts
  • fact-check due

My favourite one is “content items to improve”. Things go in here when they get:

  • low on-page ratings from people asked whether the content is useful (both as a percentage of all ratings for the page, and as a basic number)
  • visits where people land on the page but don’t read much of it
  • visits where people don’t click on a link
  • impressions in search results that don’t lead to clicks
  • appearances in search results for search terms that don’t seem relevant to the content on the page

On their own, these data points don’t tell us much. But used together, they show us that something has probably gone wrong. They don’t tell us what’s gone wrong, but they’re a good place to start.
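The “used together” idea can be sketched in code. This is a hypothetical illustration, not Scope’s real data model or thresholds: the field names and cut-off numbers are assumptions.

```python
# Hypothetical sketch: flag a page when several weak signals co-occur.
# Field names and thresholds are made up for illustration.
from dataclasses import dataclass

@dataclass
class PageSignals:
    negative_rating_pct: float   # % of on-page ratings that were 'no'
    negative_rating_count: int   # absolute number of 'no' ratings
    shallow_visit_pct: float     # % of visits not scrolling past 25%
    no_click_pct: float          # % of visits with no link click
    search_ctr: float            # clicks / impressions in search results

def signals_firing(p: PageSignals) -> int:
    """Count how many warning signals are present for a page."""
    checks = [
        p.negative_rating_pct > 50,
        p.negative_rating_count > 100,
        p.shallow_visit_pct > 60,
        p.no_click_pct > 70,
        p.search_ctr < 0.02,
    ]
    return sum(checks)

def needs_improving(p: PageSignals) -> bool:
    # One signal alone doesn't tell us much; several together
    # suggest something has probably gone wrong.
    return signals_firing(p) >= 3
```

A page tripping three or more checks would go on the ‘content items to improve’ list; a page tripping one would not.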

Superficial metrics tell you nothing

Your boss asks you to make some content. If they then ask you for data to show whether that content is a success, you’ll probably find data showing that it is. Your job may depend on it, after all.

Pageviews only tell you if your content is being looked at. They do not, for example, tell you if the reader was angry and disappointed. Pageviews are ‘trophy stats’.

[Image: black dog with eyes closed, paws over their nose, seemingly in despair]
Douglas the user research dog doesn’t react well to superficial metrics

When we started doing content design at Scope, we didn’t have a plan for gathering data. But we were asking a good question:

“Which bits of content should we improve first?”

Improving a bit of content is something we’re proud of. We report it to the organisation as a success, just as we do with new content.

When content works

The happy path for our content is usually something like:

  1. searching for the solution to a problem, mostly on Google
  2. our page appearing near the top of a list of search results
  3. people choosing our page from that list and viewing it
  4. people using the content to take the next step or solve their problem

That last one, success on the page, is the hardest to measure!

On the page

Success metrics for sales funnels or sign-ups are simple:

  1. Land on page
  2. Click link to form
  3. Fill in form
  4. Submit form

But our content doesn’t look like that.

We’re trying to help disabled people and their families solve problems with information.

To audit the whole site, we need things which are easy to measure. They don’t need to tell the whole story, not yet. We just need to know what’s not working.

We use Google Analytics to see which journeys involve:

  • viewing a page, but not scrolling past 25% (not reading our content)
  • viewing a page, but not clicking on a link (not taking an external step or action)
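Those two checks can be sketched from a per-visit export. This is a minimal illustration under assumptions: the page paths, field names and numbers are made up, and a real analytics export would look different.

```python
# Minimal sketch: aggregate per-visit rows into the two per-page checks.
# Page paths and numbers are invented examples.
from collections import defaultdict

visits = [
    # (page, max_scroll_pct, link_clicks)
    ("/advice/pip", 10, 0),
    ("/advice/pip", 20, 0),
    ("/advice/esa", 90, 1),
    ("/advice/esa", 75, 0),
]

stats = defaultdict(lambda: {"visits": 0, "shallow": 0, "no_click": 0})
for page, scroll, clicks in visits:
    s = stats[page]
    s["visits"] += 1
    if scroll <= 25:       # didn't read the content
        s["shallow"] += 1
    if clicks == 0:        # didn't take an external step or action
        s["no_click"] += 1

for page, s in stats.items():
    shallow_pct = 100 * s["shallow"] / s["visits"]
    no_click_pct = 100 * s["no_click"] / s["visits"]
    print(f"{page}: {shallow_pct:.0f}% shallow, {no_click_pct:.0f}% no click")
```

Pages where both percentages are high would be candidates for the improvement list.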

Page ratings

We can measure how many people have viewed the page, but that’s not enough to count as a success for us.

We ask readers to rate our content on the page. We ask them:

  • if the page was helpful or not
  • if they would do something differently after reading the content

If a page gets a high number of ‘no’ ratings, that’s another indicator. We measure this in:

  • relative terms, as a percentage of all ratings for that page: “52% of ratings were negative”
  • absolute terms, as a basic number: “This page got 203 negative ratings”

If a page scores in the top 5 for both, that’s another indicator.
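The top-5 check could look something like this sketch. The page paths and most of the numbers are made up (the 203 is borrowed from the example above), and the data shape is an assumption.

```python
# Hypothetical sketch: find pages in the top 5 for negative ratings both
# relatively (% negative) and absolutely (count). Data is invented.
ratings = {
    # page: (negative_ratings, total_ratings)
    "/benefits/pip": (203, 390),
    "/money/grants": (150, 900),
    "/work/rights": (80, 120),
    "/care/assessments": (40, 50),
    "/home/adaptations": (10, 12),
    "/benefits/esa": (120, 200),
    "/travel/blue-badge": (30, 400),
}

def top5(metric):
    """Return the 5 pages scoring highest on the given metric."""
    return set(sorted(ratings, key=metric, reverse=True)[:5])

by_pct = top5(lambda p: ratings[p][0] / ratings[p][1])   # relative terms
by_count = top5(lambda p: ratings[p][0])                 # absolute terms

flagged = by_pct & by_count   # in the top 5 on both measures
```

Using the intersection means a tiny page with three ratings, two of them negative, doesn’t crowd out a heavily used page with hundreds of negative ratings, and vice versa.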

Potential impact on the reader

When we’re looking at what should be at the top of the ‘content to improve’ list, we also think about the difference the content could make to the life of the person who needs it.

This is hard to define. We’ve tried it a few different ways, but they were too complicated for us to use and understand consistently.

Now, we put anything about money and benefits at the top of our to-do list. We also prioritise:

  • things that could be a threat to health and wellbeing (usually food or social care)
  • education

Search data

Most journeys begin on a search engine. This is something our content has in common with commercial content.

There are tools to help you see if pages are appearing in or being selected from search results. For example:

Google Trends is fine for checking search volume for terms you already know.

A tool like Keywords Everywhere will help you to expand your keywords lists: “people also searched for…”. It’s not free, but it’s great value.

Search data has been great for helping us to:

  • validate people’s needs that have come up in user research interviews
  • see if there are related search terms we’re not covering on that page, which might mean we need to update the user story or acceptance criteria
  • look at whether people search for things together, which can help us think about which user stories to group together for a content item

The ‘to do’ list is only the start

Search data can also show us that the scope of the piece is wrong if we’re not covering subjects we’re told are related to the problem we’re trying to solve.

Measuring across the whole site means measuring things that are easy to measure. When it works, we learn that something is going wrong, but not what.

[Image: dog sitting upright and alert, front legs propped on a large tyre, looking into the camera with their tongue hanging out]
Douglas feels that this is better

How to improve content

To plan what we’re going to do with content that seems to be failing, we need to find out more. One of our fabulous user researchers is going to write about that in their next blog post.

We usually do 1 or more of these things with content:

  • change the words
  • expand it, so we talk about more things
  • merge it with another content item
  • delete it



Content designer at Content Design London, making accessible content for charities, government and businesses.