How internal usability testing sessions saved our beta release

Bettina D'ávila
Published in NYC Design
4 min read · Aug 16, 2019

A few months ago I posted an article about a 2-day Design Sprint. During that event, our Sprint team successfully established the long- and short-term goals for the project, finishing the last session with a tangible storyboard for our main user journey.

Ongoing Design Sprint session at AB Tasty in Paris, February 2019

After this Sprint session, the prototype was built within a few weeks to be tested and implemented. But instead of rolling out the solution to beta testers, we decided to iterate a little more with internal customer success managers in order to validate our hypotheses. We ran five different sessions, and I will share my learnings and takeaways with you here.

The Usability Testing Sessions

We decided to run usability testing sessions with our own customer success managers, who are not only the final users of the feature but also represent the voice of our customers, in order to detect important usability issues before rolling out the new feature to beta testers. This way, we not only caught major pain points in our hypotheses but also gained time to pitch the new feature to our customers and find out whether they would be willing to become beta testers.

The testers

Following the well-known advice from the Nielsen Norman Group, we gathered five consultants to test our first solution, in order to cover the majority of the pain points we could find in the prototype. According to their research, “elaborate usability tests are a waste of resources. The best results come from testing no more than 5 users and running as many small tests as you can afford”.

“Why You Only Need to Test with 5 Users” by the Nielsen Norman Group

But be aware: this strategy works well for an existing product, where you are digging for usability issues in order to improve the experience. If you are designing a whole new product or service from scratch, consider doing exploratory design research instead.
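For context, the reasoning behind the “5 users” recommendation is Nielsen and Landauer's model, which estimates the share of usability problems found by n testers as 1 − (1 − L)^n, where L is the proportion of problems a single tester uncovers (around 31% in their research). Here is a minimal sketch of that arithmetic; the 31% figure comes from the NN/g article, not from our own sessions:

```python
# Sketch of the Nielsen/Landauer model behind the "5 users" rule.
# L (the share of problems one tester finds) is ~31% in their research;
# it is an assumption borrowed from the NN/g article, not our own data.

def problems_found(n_testers: int, l: float = 0.31) -> float:
    """Estimated proportion of usability problems found by n testers."""
    return 1 - (1 - l) ** n_testers

if __name__ == "__main__":
    for n in range(1, 9):
        print(f"{n} testers -> ~{problems_found(n):.0%} of problems found")
    # With 5 testers the model already predicts roughly 84-85%,
    # which is why several small tests beat one large, elaborate one.
```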

The prototype

We created a click-dummy prototype in InVision to simulate the navigation and on-screen interactions. We went straight to a high-fidelity mockup because the testers were already familiar with the tool and with the functionality of the feature. Moreover, beyond changes to the content and navigation path, we wanted to test the impact of the new UI elements on the overall user experience.

A glimpse of the InVision platform, by tubik

The session

Each session took around 30 minutes and involved three participants: the interviewer, the note-taker and the interviewee (the UX designer and the product manager took the interviewer and note-taker roles, switching between each session).

We created a script to guide the interviewer, with a narrative and a sequence of tasks the interviewee should try to complete during the session. We also applied the “Think Aloud” method, in which the interviewee says out loud everything they are thinking, feeling, struggling with or happy about.

After each session, the interviewer and the note-taker would get together to discuss the notes and their impressions of the session, in order to confirm (or not) the major pain points and to validate (or not) the hypotheses being tested.

José Alejandro Cuffia for Unsplash

The outcome

With these five usability testing sessions we were able to validate crucial points of our solution within just a few days.

We realised that our solution was far more complex than necessary and that users were overwhelmed by the cognitive load of the setup process. We also noticed that important buttons and sections were invisible to the user, or really difficult to find, which greatly compromised the usability of the feature. On the other hand, we got great feedback on the solutions that worked well, reassuring the team that we had made good choices.

The next step is to improve the mockups based on this feedback and re-test the solution with beta testers while we progressively roll out the feature to real customers.

Big take-aways

  • Things can always be simpler. A lot of complexity was removed after receiving feedback. If the problem is complex, that doesn’t mean the solution has to be;
  • Beta versions of your product still affect real customers. Don’t underestimate the value of a beta version, MVP or POC when you want to show the value of your product or service to real customers;
  • Positive feedback is also important for validation. As relevant as identifying usability issues is, we also need to understand which solutions work for the user (not only what is broken).

Thanks for stopping by and I am eager to hear and share more similar experiences from fellow designers. Hit me up if you wanna chat and see you next time! 🤙

Bettina D'ávila

Designer, drummer & beer lover. Senior Product Designer based in Lisbon. Find me at bdavila.me