User testing on a budget

Carol Liao
Published in Inside Moneyfarm
6 min read · Jun 14, 2018

As we continue to grow our design team at Moneyfarm, we’re also developing our process to ensure our users are at the core of every decision we make. We’re exploring how different approaches to user testing can help us discover where our target market gets blocked. In effect, we’re treating our user tests as experiments in themselves.

Finding the right test participants

First we defined company-wide personas, outlining our target user’s motivations, concerns, and behaviours to build empathy. We also mapped each persona to some key drivers for the company — wealth, technology familiarity, and most importantly, investment experience.

Moneyfarm’s target users are not novice investors looking to play with the latest shiny app. Our target users are financially experienced, but don’t have the time or confidence to invest themselves — some of our customers just have other priorities.

How do you test when your users just don’t have the time?

Sometimes it’s easier to answer a question by looking at what you *don’t* want in your test. To set our parameters, we outlined where we needed to draw the line, and where we had some wiggle-room.

Things we must avoid

  • Internal tests: Don’t test exclusively with colleagues and subject-matter experts, as they’ll undoubtedly have role-specific biases. And crucially, they know what the company does in perfect detail — they don’t use your app like your customers do.
  • Friends and family: Great for support, great for sharing, but not so great for honest results.
  • Users who don’t fit our personas: Avoid clouding the data with opinions from users we aren’t trying to target.
  • Single-country testing: Almost all of our users are based in the UK and Italy, so we must make sure the product works for both markets.

Things we should try to avoid

  • Small sample sizes: Large sample sizes will allow for quantitative studies and give us more confidence in our results.
  • Users that fit our supporting personas but not our main target: Although we have three personas, it’s best to focus on our target and make sure the solutions accommodate the supporting personas, rather than vice versa.
  • Remote testing: Holding tests in person allows us to gather better qualitative data from observation, and is a good opportunity to build a rapport with valuable customers.

Now that we’ve defined our constraints, let’s take a look at our objectives.

We defined three testing methods that we wanted to… well, test, each with different purposes, compromises, and amounts of effort.

Test 1: Online testing tools

Using a combination of online tools allowed us to gather feedback remotely.

Purpose: We generated various concepts for what the first screen in our onboarding flow should be, based on research from established experts and our own data. We narrowed our ideas down to our four strongest approaches and tested them in two visual styles to discern if preferences were based on content or aesthetics.

How we did it: This was the first test we tried in this phase, and it was the definition of quick and dirty. With low-fidelity mockups and placeholder content and visuals, the main goal was to determine which direction was preferred and what it said about the company. We set up a few questions as a preference test on UsabilityHub and linked to it via a Qualaroo popup, so we could target customers logged in to our dashboard, see their data, and determine whether they matched our target persona. We ran the test for four weeks.

Takeaways: The best thing about online testing is that once it’s set up, you can sit back and let it work its magic. You do, however, need quite a bit of time if you want a sufficient number of results. Perhaps next time we’ll try different approaches to the messaging and different types of test to see what elicits the most participation.
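As a rough guide to what “a sufficient number of results” means for a preference test like this, the standard normal-approximation formula for estimating a proportion gives a back-of-the-envelope sample size. This sketch is not part of our actual workflow, just an illustration of why the online test needed four weeks:

```python
import math

def required_sample_size(margin_of_error, z=1.96, p=0.5):
    """Responses needed so an estimated preference share is within
    +/- margin_of_error at ~95% confidence (z=1.96), using the
    normal approximation. p=0.5 is the worst case (maximum variance)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# To know a concept's preference share to within +/-10 points:
print(required_sample_size(0.10))  # 97 responses
# To within +/-5 points:
print(required_sample_size(0.05))  # 385 responses
```

With a popup that only a fraction of logged-in customers click through, collecting even ~100 qualifying responses can easily take weeks — which matches our experience.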

Time and effort cost: £

Test 2: Guerrilla testing

By approaching potential testers on the street, we were able to gather feedback immediately.

Purpose: Like our online test, we wanted to find which approach to onboarding users felt most comfortable with. However, the benefit of in-person testing was that we could better gauge initial reactions to visual styles. We also used this as an opportunity to test which questions people are comfortable answering.

How we did it: Two of us went down to King’s Cross Station and spent an hour approaching waiting travellers for their opinions. We showed the same mockups from the online test on an iPad and observed reactions as participants swiped through. We followed up with a questionnaire asking the same questions, then a demographic survey to get a better idea of who we were talking to.

Takeaways: The results from this test were more ambiguous. We weren’t able to recruit as many participants as we would’ve liked, and sometimes people who did help didn’t feel comfortable answering our demographic questions. This made it difficult to tick off the “Users who match our personas” requirement, so we decided it was not worth pursuing in Italy. However, having immediate results in just one hour is really exciting, especially after our long wait for enough results with the online testing, so we’ll be looking for other excuses for guerrilla testing in the future.

Time and effort cost: £ £

Test 3: Small-batch usability testing

One-on-one usability testing gave us the most detailed and actionable insights.

Purpose: Fast-forward now to our dashboard project. Since this is the core of our product, we were willing to invest more time and effort into testing here, despite its early stages. We had two potential directions with different information architecture and different functionality. We kept the visual style mostly consistent to keep feedback focused. Here the main goal was to test usability: are the important actions discoverable? Is anything missing or confusing?

How we did it: We went with a qualitative, task-based approach and prepared a script with a scenario to walk through. We recruited participants with an email targeting existing customers who matched our target persona and lived within a certain distance of our London office. This limited our numbers significantly, but luckily we had enough volunteers who were willing to help improve our product. We invited them into our office individually for coffee and biscuits, and showed them our basic InVision prototype, switching between the two directions as the primary version, then showing the other at the end to compare. We had two team members present during each test: one to interview, and one to take notes and observe.

Takeaways: The only way we were able to do this within our timeframe and budget constraints was by limiting ourselves to four individual sessions of 30 minutes each. We started with the UK only, but plan to hold the same test in Italy as our next step. This ends up being quite costly, but the learnings from this test significantly outweighed the previous two. Since we’d already established that each participant was willing to help, it was easy for us to adapt the test as we went, sometimes making changes in time to test with the next participant as a third or fourth option to review.

Time and effort cost: £ £ £

What’s next?

It’s no surprise that our final and most costly test was the most beneficial, but we learned a lot about which testing methods work for us at Moneyfarm. What might not work for one project could be ideal for the next. Since our goal for testing our tests was not to determine one golden way of working, we’re not looking to eliminate any of the options we’ve tried. This process allowed us to quickly and cheaply uncover some benefits and weaknesses in each approach, and we’ll be continuing to look for new ways to improve them.

In fact, our next, and biggest test yet has just been confirmed. If you want to help us improve our product, email us at careers@moneyfarm.com.

For more information, please see www.moneyfarm.com. Moneyfarm is authorised and regulated by the Financial Conduct Authority.

©2018 MFM INVESTMENT Ltd
Registered office: 90–92 Pentonville Road, London N1 9HS | Registered in England and Wales Company №9088155 | Telephone number: +44 (0)20 3745 6991 | VAT №193149785
