Centrelink’s robodebt crisis — can user testing help?

Centrelink and the Australian Federal Government have received widespread criticism in recent months for the ‘robodebt’ debt recovery campaign.

As a user researcher, I’m committed to helping people make positive change for their users and themselves. I took the liberty of doing some basic user testing of their website, hoping it would uncover some constructive and actionable feedback for the service.

First, a quick recap: from July 2016, roughly 230,000 Centrelink customers began receiving automated debt notices under the campaign, which matches the welfare agency’s records against income data from the Australian Taxation Office.

I don’t intend to add to the criticism of Centrelink or the Government — my goal here is to try to show how some basic and low cost user testing can help customers achieve their goals, as well as easing the load on Centrelink at this difficult time.

This campaign represented a significant volume of communication to customers. It’s a reasonable assumption that there would be a spike in searches and website hits as people who received notices looked for more information.

Put yourself in the shoes of a welfare recipient, receiving a letter in the post that apparently shows a record of a debt against you. There are two basic responses to the debt notice, either:

  1. you accept the debt notice at face value and want to settle it by arranging a payment plan or a one-off payment, or
  2. you don’t understand or disagree with the debt notice, and need more information or want to dispute it.

In UX jargon, these requirements form the start of user journeys.

How does the current Centrelink home page accommodate these user journeys?

https://www.humanservices.gov.au/customer/dhs/centrelink as of 9th March, 2017

Start with a test

Remote user testing is a good way to start this research.

It’s low cost and takes only an hour or so to set up and run for a basic task, like testing the start of the user journeys we’ve just identified.

There are a couple of types of remote test we could use for this, but one of the simplest is the click test.

A click test presents a background scenario and a task. Participants are asked to read the scenario, look at the test website/app/content/whatever, and then click the area of the design they think would help them complete the task.

It’s a simple test which you can use to test comprehension, content, or layout.
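
To make that concrete, here’s a rough sketch (in TypeScript) of how the raw output of a click test, a list of click coordinates, can be tallied into per-region percentages. The regions and clicks below are invented for illustration; a tool like UsabilityHub does this aggregation for you and presents it as a heatmap.

    // Tally raw click coordinates from a click test into named page regions.
    // The regions and clicks below are invented, not real test data.
    interface Region { name: string; x: number; y: number; w: number; h: number }

    const regions: Region[] = [
      { name: 'carousel panel', x: 0, y: 120, w: 960, h: 300 },
      { name: 'main nav',       x: 0, y: 0,   w: 960, h: 120 },
    ];

    const clicks = [{ x: 480, y: 250 }, { x: 120, y: 60 }, { x: 700, y: 200 }];

    const inRegion = (c: { x: number; y: number }, r: Region) =>
      c.x >= r.x && c.x < r.x + r.w && c.y >= r.y && c.y < r.y + r.h;

    for (const r of regions) {
      const hits = clicks.filter(c => inRegion(c, r)).length;
      console.log(`${r.name}: ${hits}/${clicks.length} clicks ` +
                  `(${Math.round((100 * hits) / clicks.length)}%)`);
    }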

I used 25 participants in each test: five times the number required to find at least 80% of usability issues in a testing session, and above the number usually recommended for acceptable quantitative results.
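
That ‘five users find most problems’ claim comes from the Nielsen–Landauer model, where the share of problems found by n participants is 1 − (1 − λ)^n, with λ (the chance one participant hits a given problem) averaging around 0.31 in their data. A quick calculation shows where the numbers land:

    // Share of usability problems uncovered by n participants, using the
    // Nielsen-Landauer model with lambda ~ 0.31 (their reported average).
    const found = (n: number, lambda = 0.31): number => 1 - (1 - lambda) ** n;

    console.log(found(5).toFixed(2));  // 0.84 -> five users find ~84% of problems
    console.log(found(25).toFixed(2)); // 1.00 -> twenty-five users find nearly all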

By the way, testing isn’t as expensive as most people think. I used UsabilityHub, a startup in my city. Their prices start at US$2.50 per test response, which puts each 25-response test at US$62.50. Very reasonable, considering that recruiting just one participant through a market research panel for a moderated testing session currently costs around AU$110.

Let’s kick off with a quick click test of the Centrelink homepage, with the current robodebt scenario as the background.

Test objectives

It’s important to note we are not evaluating:

  • Centrelink as an organisation,
  • any people who work there or within the Government, or
  • the test participants themselves.

We’re measuring the performance of the design in relation to a specific scenario and task.

User tests give us this data in an aggregated and repeatable way. Any user testing, no matter how brief or simple, helps the responsible organisation to make decisions that support their key user journeys.

Here’s the scenario and task presented in the first user test:

Imagine you have received a letter from the government in your country.
The letter says that you were overpaid by US$3,454 when on unemployment benefits 3 years ago, and the amount must be paid back by you.
You are surprised by this and want to dispute the debt.
A search on Google lands you at this page [the Centrelink homepage].
Where would you click to find out how to challenge the debt?

Results and analysis

Here’s a visualisation of the results of this test. The coloured areas represent clicks from test participants, in response to the scenario.

The key insights from this test are:

  1. The responses are spread widely across the page: between them, the 25 participants clicked 12 different links. This shows there is no clear action for the scenario on the current page.
  2. The carousel panel about the campaign was not a popular choice, attracting only 2 clicks (8% of participants). Considering this element is designed to address our user journey, this is a red flag.

Testing a possible solution

So what could help?

I immediately thought of a design trope: an alert banner.

It’s common for consumer and social sites to have ‘alert’ spaces. Think of ‘breaking news’ bars on news sites, status change notifications on Twitter or Quora, or natural disaster warnings.

Weather warning alert bar, bom.gov.au as of March 3rd, 2017

Properly implemented, a simple static alert field can be configured via the site’s CMS, or triggered automatically when certain conditions are met, e.g. when communications go out to more than x% of the customer base, or when an editor posts content in an ‘alert’ category.
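
As a sketch of the ‘triggered automatically’ idea: the threshold, customer count, and campaign data below are all hypothetical, but the logic is just a ratio check the CMS could run whenever a bulk communication is scheduled.

    // Decide whether a campaign warrants a site-wide alert bar.
    // All figures here are hypothetical; a real CMS would supply them
    // from its own content and mailing records.
    interface Campaign { name: string; noticesSent: number; infoUrl: string }

    const CUSTOMER_BASE = 7_000_000; // hypothetical total customer count
    const ALERT_THRESHOLD = 0.02;    // alert when >2% of customers are contacted

    function alertFor(c: Campaign): string | null {
      if (c.noticesSent / CUSTOMER_BASE <= ALERT_THRESHOLD) return null;
      return `Received a letter about the ${c.name}? Find out more: ${c.infoUrl}`;
    }

    // ~230,000 notices against a 7M customer base is ~3.3%, so this triggers.
    console.log(alertFor({
      name: 'income compliance review',
      noticesSent: 230_000,
      infoUrl: '/debt-letters',
    }));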

Another advantage: a static alert bar can be a fully compliant WCAG 2.0 AAA solution, as long as the code follows basic best-practice semantics, which for something this simple is not a strenuous requirement.
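
By way of illustration, here’s one way the markup could be generated (the link targets and wording here are invented, not Centrelink’s). The key points are a named landmark, so assistive technology users can jump to or skip the bar, and sufficient text contrast:

    // A minimal static alert bar with accessible semantics.
    // URLs and copy are placeholders, not Centrelink's real content.
    const alertBar = document.createElement('section');
    // A <section> with an accessible name is exposed as a 'region' landmark.
    alertBar.setAttribute('aria-label', 'Service alert');
    alertBar.innerHTML = `
      <p>
        Received a letter about a debt?
        <a href="/debt-letters">Read what it means</a> or
        <a href="/dispute-a-debt">find out how to dispute it</a>.
      </p>
    `;
    // For WCAG 2.0 AAA, text needs a contrast ratio of at least 7:1
    // against the bar's background (success criterion 1.4.6).
    document.body.prepend(alertBar);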

A challenge of the alert trope is deciding what actually constitutes an alert. That said, mass communications like the robodebt campaign are predictably contentious simply because of the sheer number of notices sent out.

The response to these potential ‘crisis’ moments should be designed around user needs. These are the moments when vague opinions solidify. A great customer service experience turns a crisis moment into a positive story for the customer.

So, we hacked in the alert bar, and fired up the user testing again! We used exactly the same scenario text and task to test the new version. Here are the results:

21 of 25 participants (84%) clicked within the alert area. The responses cluster tightly around the links in the alert box. This is a great result compared with the scattered clicks of the first test.

Chart: % of ‘correct’ clicks, original page vs. alert bar version

The next steps of this concept would focus on the pages after this interaction — and the specific actions a customer could carry out to reach the end of their journey.


From these very basic test insights, it’s clear there are opportunities to improve the existing Centrelink landing page design, which would help users affected by any similar campaigns in the future, as well as easing the burden on Centrelink customer service staff.

Welfare payments are a sensitive issue. Basic design changes (like adding an alert space) are comparatively easy to implement, and can stimulate real improvements in customer service metrics like Net Promoter Score and Customer Effort Score.

Metrics aside, simple design changes reassure your customers in times of need. And it doesn’t take a lot of time or money to find out what could make a serious difference to the quality of service your organisation provides online.


This article is part of a regular content series for the publication User Testing Monthly, by UX studio 25th.co.

If you found it interesting, sign up to our newsletter and get notified when we post new content!

Thanks for reading :) 
H. x
