How a quick usability test helped us validate an idea

Henrique Tramontina · Published in d2design · Jul 11, 2018 · 6 min read
Usability test | Henrique J. Tramontina

Prototype, test, gather real user feedback and iterate. This is the ideal approach for solving problems and validating (or not) an idea before entering the development phase. Sometimes this process can take a lot of time and some of you might be thinking: “But I have a really tight deadline” or “My client needs this implemented by next month!” No worries, there’s still hope! In this article, we cover how we prepared and ran usability tests, obtained results in just a few days and validated that we were going in the right direction.

Here at door2door we design solutions to improve the future of mobility and make cities smarter. We know that this is a hot topic at the moment and that great challenges also come with great responsibility. Because we are a B2B2P company, our product and design teams are constantly called upon to solve problems from both a client and a user perspective. This increases our challenges even further.

Recently, we decided to merge two apps into one, building an app which would include not only a variety of transportation modes, but also our most innovative service: ridesharing. To launch this product, we began by creating user journeys. Then, through sketch sessions and workshops with product managers and developers, we built three prototypes that we could test with real users on the street and inside the office. The tests were a success! The users offered us a wide range of insights, helping us to define the path we wanted to go down.

Screenshots of the 3 prototypes

In usability testing, the people who use our products are the focus of our process. This user-centered approach helps us to:

  • Make informed decisions.
  • Use our resources efficiently.
  • Build better products faster.

After compiling all the results and a lot of hard work, the team reached an important milestone: our first functional MVP was ready to be tested in-house before the public launch.

Internal test | Henrique J. Tramontina

As we’d imagined, there were recommendations for small improvements. But one specific issue caught our attention: users were a bit confused about how to select the number of passengers. At that point, this was only possible after performing a rideshare search. This bugged us because it was something we hadn’t had an issue with when testing the initial prototypes. Suddenly, it had become a big topic. We didn’t have enough time to re-conceptualize the entire UX — so what could we do? We decided to quickly work out another solution and run in-house usability tests to validate the functionality. This time with real users!

Running quick usability tests

As mentioned before, this wasn’t the first time we’d conducted usability tests. These are a fixed component of our design process, allowing us to learn from user feedback, iterate and improve our products. This is how we run a quick usability test to validate a hypothesis:

Getting ready

First, we clearly identified the problem. This was our problem definition from the internal field test:

  • The flow to change the number of rideshare passengers seems confusing to some people: they end up booking for one person because they haven’t seen that they can change the number in the rideshare result card.

Our design team, together with the product manager, set goals and tasks and also wrote the test script for the interview. Once that was done, we were able to move on to the following steps:

  • Creating two simple InVision click-dummy prototypes — one with our current flow and another with a new solution in which users could pick the number of passengers before performing a search. Using InVision allowed us to create a quick mockup without involving developers to build something that was fully functional at this stage.
Craft: InVision’s prototype plugin for Sketch
  • With our marketing team, we sent an email to the users of our previous apps to ask whether they would like to volunteer to come in for testing. Several enthusiastic people contacted us and we invited six of them to the office over two days of testing. All the users were adults and were already familiar with our apps.
  • We repeatedly tested the prototypes internally to check that all the interactions were working and that everything made sense in terms of the script we had written.
  • Other preparations: ensuring the phones were charged, organising the room for observation and interviews, grabbing a few drinks and snacks and also finding some company t-shirts for the users as a way to thank them for helping out.

It’s test time!

  • We planned 30 minutes per user. After a brief introduction and background questionnaire, we asked them to perform two tasks for each prototype. Using questions from the System Usability Scale (SUS), we then asked them to evaluate each prototype. Later, we positioned the results on the SUS curve to see which prototype performed better (a quick sketch of how that scoring works follows this list).
Prototypes in action
  • I was responsible for conducting the test, making sure all the tasks were clearly understood and ensuring each tester’s screen interactions were being recorded.
  • Our product manager was the observer, taking notes on all the answers, comments and feedback.
  • We shared a Google Meet link with our team, so those who were not fully involved at this point, like the developers, could at least listen in and hear the users’ reactions.
  • We always asked for open feedback and took notes on everything. As a side note, this isn’t the time to be offended if something doesn’t work the way you expect or the users don’t like the idea you’re proposing. After all, at the end of the day, they are the ones who will be using your application and your service.
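
For anyone curious about how those SUS answers turn into the 0–100 scores mentioned later, here is a minimal sketch in Python. It is not the tooling we used, the function name sus_score is purely illustrative and the example answers are made up; it simply shows the standard SUS arithmetic, where odd-numbered items contribute (answer - 1), even-numbered items contribute (5 - answer), and the sum is multiplied by 2.5.

```python
def sus_score(responses):
    """Convert one participant's 10 SUS answers (1-5 Likert scale) into a 0-100 score.

    Standard SUS scoring: odd-numbered items contribute (answer - 1),
    even-numbered items contribute (5 - answer); the sum is scaled by 2.5.
    """
    if len(responses) != 10:
        raise ValueError("SUS expects exactly 10 answers")
    total = 0
    for item, answer in enumerate(responses, start=1):
        total += (answer - 1) if item % 2 == 1 else (5 - answer)
    return total * 2.5


# Hypothetical answers from a single tester (not our real data):
print(sus_score([5, 1, 4, 2, 5, 1, 5, 2, 4, 1]))  # -> 90.0
```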

Analysis and results

Normally, this can be really exhausting. Going through all the notes, recorded user interactions and audio files isn’t always fun. But it’s also exciting, because you tend to discover a direction for the problem you’re trying to solve. In our case, seeing the final results was quite interesting.

  • 4 out of 6 users thought the current version was easier to use.
  • The current version received an SUS score of 90.83, while the newly proposed idea scored 85.42.
Our SUS results
  • Most of the users said they liked having everything in one place (number of passengers & fare selector in a single card).
  • The current version was more convenient.
  • One user mentioned it was better to be able to set the number of passengers right on the first screen before searching.
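
To make the comparison above concrete: in the standard SUS approach, the prototype-level score is simply the mean of the per-participant scores. The snippet below is a hypothetical illustration; the individual scores are placeholders chosen so that the means land on the averages reported above, not our actual raw data.

```python
from statistics import mean

# Placeholder per-participant SUS scores for the six testers (illustrative only;
# picked so the means match the averages reported above, not our real results).
scores = {
    "current flow": [92.5, 87.5, 95.0, 90.0, 85.0, 95.0],
    "new proposal": [80.0, 90.0, 82.5, 87.5, 85.0, 87.5],
}

for name, per_user in scores.items():
    print(f"{name}: mean SUS = {mean(per_user):.2f}")
# current flow: mean SUS = 90.83
# new proposal: mean SUS = 85.42
```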

Conclusion

In the example above, it took us less than two weeks from initially framing the problem to the final results.

In our case, it was interesting to see that there wasn’t a huge difference between the proposed solution and the current one. Both solutions received positive feedback, validating that we were already on the right path.

As far as the team goes, this usability test with external users helped us to see that our current app is on track. It reinforced the idea that we need to learn to balance internal and external feedback and always listen to our users.

These results don’t mean we have the perfect solution for our problem and will never revisit the issue. But they gave us the confidence to launch this first iteration of our product with the version we had developed. We will continue to obtain feedback from real users and iterate over and over again.

What about you? Do you take advantage of quick usability tests? Share your ideas with us and check out our work on Dribbble! :)
