Small changes can mean big wins

When we’re working on improving the hirer experience at SEEK, we deal with big, hairy problems and with market segments that have very diverse jobs-to-be-done. As well as being exciting and challenging, this sometimes makes it hard to see the wood for the trees — we need to be careful not to underestimate how small things can affect our numbers in a big way.

When we wanted to make a small visual design pattern tweak in our job ad posting flow recently, the temptation was to roll it in with another test, as we had other important A/B tests waiting in the wings. (Not a bad problem to have.) The initial thinking was ‘well, that couldn’t influence things too much, could it?’ It’s up there with asking a developer ‘that couldn’t be too hard, right?’ In most cases, it’s foolishly hopeful.

When we reviewed what could change, we found it could affect important metrics. As it was a highlight function in a crucial part of our funnel, it could influence page and funnel conversion, as well as shifts in product mix. With that in mind, despite it being a small change intended to make our design patterns more consistent, we decided it was best to test it separately.

How do you choose what to test?

There are a lot of approaches out there that prescribe different ways of running A/B tests — some promote radical redesigns, especially in landing page design, but most go for a more iterative approach.

You can certainly make great gains going for radical redesigns — if you get it right. If you don’t, a Product Manager, an Analyst and a UXer can spend a lot of time scratching their heads, hypothesising about what drove a certain result. At that point you are effectively back to opinion-based design, which can paralyse a team or, worse, lead to false conclusions being drawn, which in turn can lead to bad decisions.

At SEEK we’ve found an iterative approach has worked better for us. By isolating the elements and trying not to test too much at once, we are incrementally extracting more value, or failing fast and moving on. In both cases we are learning something tangible and concrete which can be built upon to drive other valuable conversion improvements.

“Companies that embrace a data-driven culture with rapid test and learn can move forward steadily, taking the emotion and opinion out of the argument. As Grace Hopper (one of the original software developers) liked to say: ‘One accurate measurement is worth more than a thousand expert opinions.’”

Marty Cagan

The element of surprise

Some tests show us that we don’t always get it right, despite many iterations with input from many smart people, including our users in qualitative research.

It’s rare not to be surprised by an A/B test. However, in this case the test’s objective was consistency rather than conversion, which made the result all the more surprising. The change was a visual style tweak, designed to avoid confusion between design styles rather than to drive a conversion uplift. While we hypothesised a neutral or slightly better result, we got a substantial uptick in conversion.

Combining this test with other test results, we can now build a full picture of what has worked and what hasn’t, and it has influenced us to become more iterative when testing elements — for example not testing too many visual and copy changes at once.

The Digital Analytics team have gone to extreme lengths to remind impatient PMs of the importance of statistical significance

The benefits of patience

Given the backlog of tests, there was a temptation to switch the test off as soon as we saw a directional ‘do no harm’ result. However, when we reviewed the numbers and saw the potential uplift, we decided to let it run its course to 95% confidence. With a longer time in front of different users, we could categorically say that the new version was better, and not second-guess future results.
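As a rough illustration, the kind of check behind “run its course to 95% confidence” can be sketched as a two-proportion z-test on conversion counts. All of the numbers below are hypothetical, not SEEK’s data, and real analytics tooling would handle this for you.

```python
# A minimal sketch of a two-proportion z-test for an A/B conversion test.
# All counts are hypothetical illustrations, not SEEK's data.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) comparing conversion rates conv/n."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z_test(conv_a=420, n_a=5000, conv_b=480, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}, significant at 95%: {p < 0.05}")
```

Stopping the moment the variant merely looks directionally better inflates the false-positive rate; waiting until p falls below 0.05 at the planned sample size is what lets you state the result categorically.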

So, through isolating a test, using an iterative approach, and being patient with letting the test run its course, we learnt:

1. Good visual design means good conversion
The control had a bright pink border promoting a specific ad product, while the new experience used a more understated grey highlight. Our results showed that our audience had a significantly higher propensity to convert through the process of writing a job ad when exposed to the more muted grey design.

2. Look deep at the numbers
While we saw a slight decrease in upsells in the new experience, the increased volume of activity going through the funnel more than offset the loss and meant a far better result overall.

3. Go slow to go fast
While this test initially seemed like it was slowing us down, it’s banked a sizeable uplift and changed our perception of what can influence conversion positively.
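The trade-off in point 2 is easy to see with a small worked example. The rates and prices below are invented purely to illustrate the arithmetic of a lower upsell rate being outweighed by more hirers completing the funnel.

```python
# Hypothetical rates and prices (not SEEK's) illustrating the upsell trade-off.
def revenue_per_1000_visitors(completion_rate, upsell_rate,
                              base_price=100.0, upsell_price=150.0):
    """Revenue per 1,000 funnel entrants, given completion and upsell rates."""
    completions = 1000 * completion_rate
    upsells = completions * upsell_rate
    return upsells * upsell_price + (completions - upsells) * base_price

# Control: fewer completions, higher upsell rate.
control = revenue_per_1000_visitors(completion_rate=0.20, upsell_rate=0.15)
# Variant: more completions, lower upsell rate.
variant = revenue_per_1000_visitors(completion_rate=0.23, upsell_rate=0.12)
print(control, variant)  # the variant wins despite a lower upsell rate
```

Looking only at the upsell rate would have called this test a loss; looking at the whole funnel shows the opposite.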

This is an important learning for us. While there is a temptation to visually call out what we want users to do, this can sometimes have an adverse effect. We now focus on being more subtle in our recommendations across the experience, which should have a positive effect on the customer experience overall.

First published on the Brainmates blog.