What to do when user research fails

Angus Morrison
9 min read · Apr 12, 2018


What happens when user research gives you the wrong answer? Not inconvenient answers that burst bubbles and send products in unexpected or expensive directions. Quality user research should be full of those. Rather, the sort of answer you wholeheartedly integrate into your product, only to discover that it’s harming conversion. What then?

This happened to me at Lending Works, a peer-to-peer lending start-up that asked A. G. Morrison to audit their site, conduct user research and produce a conversion rate optimisation plan. Here’s what went wrong, and how we raised sign-up rates more than 50% in the process of fixing it.

The aim

The choice of stock images for “up and to the right” is about as limited as you’d expect.

Lending Works wanted to improve the rate at which visitors to the site would sign up and fund their investment accounts. A pretty standard goal for any aspiring fintech unicorn.

My immediate concern as a newly hired UX Consultant was to sit down with representative users, have them spin through the journey from landing page to funded account, and learn two things:

  • How well the available information about the product met their expectations as prospective users (which would, in turn, tell us what to improve);
  • Whether any fundamental usability issues were harming conversion.

We recruited five prime participants — people who weren’t familiar with Lending Works, but had used competitors or similar investment services.

Guinea pigs. Geddit?

Though Lending Works’ demographic skews heavily towards retired men, we decided that age and gender were unlikely to influence the problem we expected users to solve with this product: how to get a good return on their money. Our target participants were thus people with money to invest, not 60-year-old men with money to invest.

It was also important to recruit committed desktop users. The large majority of Lending Works’ lenders interact with the site on desktop, and the behavioural differences between desktop and mobile use are sufficient to factor into any robust user journey study.

The study

The study itself couldn’t have gone better. All but one of our recruits (there’s always one) were fully engaged, eager participants. Taking care not to lead them to convenient answers or to let them discuss preference instead of experience, we extracted a wealth of actionable information.

I was giddy. I’d arranged for the CEO, CTO, Executive Director and Product Manager to sit in on the sessions, and it has never been so easy to achieve total user testing buy-in from stakeholders. Seeing real prospects try and fail to use our product in person was a revelation.

Many of our observations confirmed usability problems we knew existed at the bottom of the funnel, but the recurring points of confusion at the top of the funnel piqued our interest.

This is what it looks like when your participants find problems faster than your hand can keep up.

Prompted only to describe the information they expected to see on the peer-to-peer lending landing page, participants uniformly responded that key information was missing:

  • Who their money would be lent to;
  • The minimum investment they could make in our platform;
  • How they could access their money if circumstances changed (this information was already available, but buried at the bottom of the page).

Similarly, participants expressed frustration at a slow-moving animated timeline of the sign-up process. Forcing your users to wait for information they could absorb at a glance is never sensible, and there was further confusion over whether the timeline was supposed to be interactive.

This timeline went down like a cup of cold, poorly designed sick.

These findings suggested a simple change: restructure the peer-to-peer lending landing page to include the missing information and prioritise information that was too hard to find. We felt we could reasonably ditch the timeline. It had done nothing but frustrate our subjects and it was eating valuable page space.

The “solution”

I produced a simple page that seemed to achieve these goals. Long-form copy was heavily edited and broken into digestible chunks replete with subheadings and bold highlights. Scannability was the aim, and we’d had success with the same technique elsewhere on the site.

I made sure the nature of the investment was clear with a chunk of text and a simple infographic: “You’ll invest only in personal loans to creditworthy individuals who pass our rigorous approval process.”

A simple, digestible subsection that directly addressed our users’ concerns.

The minimum investment was called out too, with additional reassurance as to the interest rate investors could expect on small vs. large investments: “Invest as much as you like, as often as you like, starting from just £10. Whatever you invest, you’ll always get our best 3- or 5-year interest rate.”

Finally, the section on accessing money became more prominent, and the slow-loading timeline was removed.

What an improvement it appeared to be! Better yet, it was an improvement backed by the authority of real users who had horribly failed to use this product.

The test

Spoiler alert.

We ran a 24-day A/B test using Google Optimize. Specifically, we ran a redirect test, diverting half the traffic from the original page to the new page.
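Google Optimize handles the traffic split and the redirect for you, but conceptually a redirect test boils down to something like the sketch below: hash a stable visitor identifier into one of two buckets so each visitor consistently sees the same page for the life of the test. The URL paths and cookie value here are hypothetical, not Lending Works’, and real tools layer weighting, targeting and reporting on top.

```python
import hashlib


def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor into the control or variant page.

    Hashing a stable identifier (e.g. a first-party cookie value) means a
    visitor sees the same page on every visit for the life of the test.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    # 50/50 split: buckets 0-49 get the redesigned page, 50-99 the original.
    return "/lend/new" if bucket < 50 else "/lend"


# The same visitor always lands in the same arm.
assert assign_variant("visitor-123") == assign_variant("visitor-123")
```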

At first glance, the results were wonderful: click-through rate from the landing page was up over 20%, with 95% probability. Our confidence in the results of the user testing felt rewarded.

Other measures of funnel health told the opposite story. Anyone who saw the new page was over 20% less likely to sign up.

Visitors were clicking through to the sign-up form more often, but abandoning with vigour. It was a disaster.
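Before hunting for explanations, it’s worth confirming that numbers like these are statistically real rather than noise. Optimize reports Bayesian probabilities, so the sketch below isn’t what it computes under the hood; it’s a plain frequentist two-proportion z-test, run on hypothetical traffic figures rather than Lending Works’ real data, just to show how you might sanity-check that a 20% rise in click-through and a 20% drop in sign-ups are both too large to be chance.

```python
from math import erf, sqrt


def two_sided_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both pages convert equally.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via erf; return the two-sided tail probability.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))


# Hypothetical traffic, not Lending Works' data: 10,000 visitors per arm.
# Click-through: 30% on the original page vs 36% on the redesign (+20% relative).
print(two_sided_p_value(3000, 10000, 3600, 10000))
# Sign-ups: 4.0% vs 3.1% (roughly 20% lower), also far too large to be noise.
print(two_sided_p_value(400, 10000, 310, 10000))
```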

The detective work

Accurate reconstruction of scenes inside the office.

So, you’ve been confronted with a contradictory result that your pricey usability study says shouldn’t have happened. Where do you start your investigation?

Learn from the past

Scour your archives, your JIRA history and your poorly maintained Excel spreadsheets for any previous projects in the same vein as the present disaster. What was the outcome? Does it differ notably from your current project? If so, ask yourself why.

Fortunately, I’d fired off two copy-driven A/B tests before the first user testing session. Wasted test capacity is the kind of evil that kills puppies.

Early wireframe of the new peer-to-peer top block.

The first test restructured the top block of the peer-to-peer lending landing page — the same page now in question. By breaking the long-form content into bullets and running a benefit-driven headline, we’d increased click-through rate to the sign-up form by half. However, any change to sign-up rate itself was judged too small to be measured in a reasonable timeframe. As it wasn’t an obvious loss, we rolled out the change.
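Why could we call the click-through result quickly but not the sign-up result? Because the sample size you need scales with the base rate and the size of the lift you’re trying to detect. A rough power calculation, sketched below with the standard normal-approximation formula and entirely hypothetical base rates (not Lending Works’ real figures), shows how a big lift on a common event is cheap to measure while a small lift on a rare event is painfully expensive.

```python
from math import ceil, sqrt


def visitors_per_arm(base_rate: float, relative_lift: float,
                     z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Visitors needed per arm to detect a relative lift in a conversion rate
    at ~95% confidence and ~80% power (normal approximation)."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)


# Hypothetical base rates, not Lending Works' figures.
print(visitors_per_arm(0.30, 0.50))  # 50% lift on a 30% click-through rate: quick to detect
print(visitors_per_arm(0.03, 0.05))  # 5% lift on a 3% sign-up rate: orders of magnitude more traffic
```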

The second test was almost identical. We restructured the top block of the landing page for the ISA product in the same way as we had for the peer-to-peer lending page. The test results were astonishing. Click-through rate had increased in line with the results of the first test, as expected. We didn’t expect that sign-up rate would double.

Why the difference? Apart from product-specific information, the top blocks of both pages were identical except for one extra bullet on the ISA page that I’d included to explain a flaw in the ISA sign-up flow: “Sign up in 2 minutes, then open your ISA in 1 simple step”.

The ISA landing page and its magic bullet.

There was no other difference between the two pages that could explain the divergence in sign-up rate. But that still didn’t explain why our latest project — the user testing-inspired redesign of the peer-to-peer page — had reduced the sign-up rate. After all, the peer-to-peer page didn’t have the extra bullet when we tested the new top blocks, yet its sign-up rate was flat, not down.

Go back to basics

When you’re stumped, or you have no prior test results to draw on, ask yourself whether changes that align well with user feedback violate any fundamental rules of UX design.

To understand what was wrong with my design, I considered the function of the spectacular, conversion-busting fifth bullet on the ISA page. “Sign up in 2 minutes, then open your ISA in 1 simple step.”

All this bullet does is tell the user what’s coming next. In other words, it sets their expectations, priming them for the journey ahead.

Sign-up rate in the peer-to-peer lending top block test held steady because another feature on the page was serving this priming function. It was less prominent, it was poorly designed, but it told users what would happen next. Have you guessed it?

That ghastly, slow-loading sign-up timeline — that our test participants hated — was propping up our conversion rate. And I’d killed it.

We were no longer setting user expectations for what was coming next. Visitors clicked through from the landing page at a record rate, were caught off-guard by the sign-up form and its intrusive request for information, and abandoned.

The fix

I was adamant that the timeline wasn’t coming back in its current form. The original peer-to-peer page had the timeline and still performed poorly relative to the ISA page and its expectation-setting bullet.

Our magic fix was a copy-paste job. We copied the bullet from the ISA page, reworked it to omit the reference to the ISA, and pasted it into the problem page before running a new A/B test. This time, we pitted the redesigned page as it stood following user testing against the same redesigned page plus the priming bullet.

The results were spectacular. Sign-up rate increased more than 50% vs. control.

The lesson

Prepare your brain.

There’s a cautionary tale here. Our user testing was unequivocal: participants hated the expectation-setting timeline. But we mistook frustration with a poorly executed feature for proof that the feature itself wasn’t useful.

Pointed in the right direction by a pair of contradictory test results, we still had to return to first principles to explain the poor performance of our redesign. We had to learn what essential function was no longer being served by the new page. In this case, we were no longer effectively setting user expectations for what would happen next.

Once we discovered the issue, the solution was a matter of restoring this fundamental piece of the user experience while also addressing the very real user frustrations surrounding the original feature.

It’s only the combination of UX theory with hands-on feedback that yields truly remarkable results.


Angus Morrison

Go expert working on planet-scale finance projects at JP Morgan. Kotlin and TypeScript proficient. Rust-curious. Find me at angus-morrison.com.