How we direct our Product Development — Usability Test!

Manabie · Manabie Tech-Product Blog · 9 min read · Feb 27, 2020

Being a student in Vietnam, or in much of Asia, is harsh.

It is full of stress, health issues, and confusion.

I’ve heard many stories from my friends about how stressed out they were because of national exams, and I have some unpleasant memories of my own.

I remember watching the Chinese movie “Tiger Mom” and feeling empathy for the tiger mother. The movie revolves around how parents should raise their kids: strict rules for the children’s bright future, or an enjoyable childhood that leaves them behind their peers.

The question here is: why can’t an enjoyable life and a solid base for the future be fulfilled at the same time? That is why Manabie was founded: to create an effective learning environment that students can actually enjoy.

With that objective, we are building a product, an app whose every detail is designed so that our learners can enjoy useful lessons and find joy in studying again.

However, moving from identifying a problem to creating the right solution is not as easy as we thought, especially when several competitors share the same vision, work on the same kind of product, and aim to solve the same issue.

Starting from our User Story Map (which is another story that we would like to save for our next blog. Stay tuned!), we have spent a lot of time discussing, observing, and researching how to optimize the user experience along that journey. And here comes a powerful tool we use at Manabie that I would like to share today: the Usability Test.

If you are a frequent reader of our blog, you probably know that at Manabie we practice something called Growth Hack development: an ongoing cycle of developing and deploying new features and product improvements based on our user findings. This is why, at Manabie, usability testing is also frequent, ongoing work that runs alongside our weekly product development sprint.

What is a Usability Test?

“Usability testing is a way to see how easy to use something is by testing it with real users.”

(Neil Young, experienceUX, accessed 22 February 2020, <http://www.experienceux.co.uk>)

“Usability testing refers to evaluating a product or service by testing it with representative users.”

(Usability.gov, accessed 22 February 2020, <https://www.usability.gov>)

Many people define the usability test in their own way; however, every definition eventually comes back to the same “why”.

(NN Group, accessed 22 February 2020, <https://www.nngroup.com/>)

At Manabie, our sole purpose in conducting usability tests is to gain insights from our target users, high school students (and sometimes parents): how they find our app, when the AHA moment clicks for them, and whether the core feature of the app is easy for them to reach. From there, we can seek solutions to our product issues.

For many UX designers, the usability test is mostly about testing and improving the UX design. At Manabie, however, we treat usability tests as a particularly good chance to talk to our real users; beyond UX testing, we get to know our testers, craft user stories from the sessions, and look for room for feature improvement or even new feature development.

Steps to conduct a Usability Test

Below is a diagram describing the typical flow of a usability test, with three main parties involved in the session: the tester, the facilitator, and a set of predefined tasks. It also shows how the flow of information should be handled.

(NN Group, accessed 22 February 2020, <https://www.nngroup.com/>)

In brief, the test includes several steps:

Step 1: Planning for the test:

Since we all want to make the most out of our test, many tasks may be involved in a session, so it is best to plan the test well beforehand.

When planning for the first time, we need to carefully define these three things:

  • Who the targeted testers are
  • How many testers will participate (plus a timetable of their participation, since one facilitator can handle only one candidate at a time)
  • What rewards they will receive

The sections below, on the other hand, need planning again every time a new usability test session comes up, not just for the first one:

  • Which specific flows we would like to test:

For this, it is useful to list the problem hypotheses we have for the current user flow; having an imagined user story also helps us notice the difference between what we expected and how users actually behave in that flow.

  • Interview questionnaires:

I always find it useful to prepare some basic questions to ask users, in order to dig into what triggers their behaviour.

However, at Manabie, some of our experienced product designers sometimes join the session without needing these draft questions. They find it more effective to ask follow-up questions based on each tester’s specific behaviour.

This is one of the tips I learnt from them; to be honest, most of the time I find myself asking follow-up questions, and the prepared questions are rarely used.
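The planning checklist above can be captured in a small, reusable structure so that every session starts from the same template. Here is a minimal sketch in Python; all field names and values are illustrative, not Manabie’s actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class FlowUnderTest:
    """One user flow to observe, plus the hypotheses we want to check against it."""
    name: str                # e.g. "complete a lesson"
    hypotheses: list[str]    # what we expect to happen (or go wrong)

@dataclass
class UsabilityTestPlan:
    target_profile: str      # who the targeted testers are
    tester_count: int        # one facilitator handles one tester at a time
    reward: str              # what testers receive for participating
    flows: list[FlowUnderTest] = field(default_factory=list)        # re-planned every session
    follow_up_questions: list[str] = field(default_factory=list)    # optional draft questions

# Example plan for one session (illustrative values):
plan = UsabilityTestPlan(
    target_profile="high-school students, new to the app",
    tester_count=5,
    reward="gift voucher",
    flows=[
        FlowUnderTest(
            name="complete a lesson",
            hypotheses=["users can find the learning content on their own"],
        )
    ],
)
```

The split between the fixed fields (testers, count, reward) and the per-session lists (flows, questions) mirrors the two planning phases described above.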

Step 2: Find the testers

Yes, this is the most basic, prerequisite step. You may wonder why I list such an obvious step here, and you would not be entirely wrong. The reason is that, obvious as it sounds, we find it hard to recruit new testers at such a tight cadence, which is why I think it deserves to be a step of its own.

Step 3: Join and facilitate the session:

For this step, some may suggest dividing the session into two parts: 1/ Observation, in which we watch our testers perform the given tasks, and 2/ Interview, in which we ask them to explain details of what happened during the test.

We ran our usability sessions this way at Manabie for the first several weeks, as I remember.

However, I then realised that users did not respond the way we expected. Most of the time, users forget their behaviour on specific tasks or flows, and it is very hard to get a reliable answer from them, even when we try to help them recall their behaviour and guide them through it.

Gradually, we switched to a new technique: asking users while observing them. Of course, we have to be extra careful not to disrupt their testing flow. This way, we get more insights than before; and when we don’t, it is usually because our users did not notice the feature in question, which is, nonetheless, another finding!

Step 4: Consolidate test results and plan for action:

This is the step in which we draft the outcome of the test. Big corporations with larger, more specialised teams probably apply in-depth quantitative analysis before drawing conclusions and coming up with an action plan.

At Manabie, however, we keep it simpler: we either interpret our qualitative outcomes into quantitative information or validate the test results against our prior data and hypotheses. After that, the action plan is implemented in order of priority.

I think this is cost- and time-effective, and well suited to a product that is still new and in the very first phase of improvement.
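Turning qualitative session notes into quantitative information can be as simple as tallying how many testers hit each issue and ranking issues by frequency. A minimal sketch of that idea in Python; the issue labels are made up for illustration.

```python
from collections import Counter

# One list of observed issues per tester session (illustrative labels).
sessions = [
    ["could not find learning content", "skipped study guide"],
    ["could not find learning content"],
    ["quiz instructions unclear", "could not find learning content"],
]

# Tally: how many times each issue was observed across sessions.
issue_counts = Counter(issue for session in sessions for issue in session)

# Prioritise: the most frequent issues go to the top of the action plan.
action_plan = issue_counts.most_common()
for issue, count in action_plan:
    print(f"{count}/{len(sessions)} testers: {issue}")
```

Even a tally this crude makes the priority order explicit and easy to defend when the team decides what to fix first.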

Difficulties — what could possibly go wrong?

Not enough testers:

As I mentioned above, this is an issue we are encountering now: a lack of sources for tester acquisition. Continuous development requires testers for every new development batch, and we need completely new users who are not yet familiar with the app to try it out. This can be tricky; we may not want to spend money and resources on a tester-recruitment campaign that is “too noisy”, as it may contaminate our pool of real users.

Insights are not “good” enough

This can be subjective, as it depends on how each of us defines a “good” outcome. For us at Manabie, a “good” test outcome is one that is new and can be turned into an action plan.

It is also true that we may need to revisit this definition, or simply set the right expectations for the test outcome. Users are, more often than we think, not conscious of their own behaviour in our app. This is why their literal words alone sometimes do not give us a precise explanation.

This is another story, and we have a long way ahead to go with outcome presentation techniques.

Bias

As discussed before, it is necessary to come up with hypotheses before conducting the test. Nevertheless, we can occasionally be “too” confident in a hypothesis, putting words in users’ mouths or hurriedly inferring their answers without a second thought to validate that hypothesis.

Our team at Manabie therefore avoids this by asking questions that sound obvious, just to validate the user’s expression and avoid assumptions.

Communication:

In the early days of usability testing, we planned the communication script very carefully to walk users through the test flow. That is undoubtedly helpful.

Nonetheless, there is a subtle mistake to make in how tasks are communicated. I always made it clear to users what they had to do, perhaps “too clear”.

For example, one of the tasks we give testers is “complete a lesson in the Manabie app”, which requires them to watch a video, read a study guide, and complete a set of quizzes.

Because our testers followed my task guideline, they all tried to complete the task; it was, after all, their given goal. You can probably imagine how I missed their drop-off behaviour during the learning flow, and the insights into the reasons behind it.

As soon as I realised this, I switched to a much more general way of communicating the tasks, sometimes simply asking testers to do whatever they want with the app while I sit there, observe, and ask questions. The key is to keep the structure of the test and keep the observer focused on each flow so that we stay on track. Beyond that, I just let them use the app freely, like any real user out there!

How to get the most out of a Usability Test?

Always start with your core features:

For us, the first and foremost feature to test is our most important one: the learning flow. This is how users first experience our core value, the animated video lessons.

Not until we started testing did we find out that our users could not even locate our learning content. It was a huge blow when we discovered this issue. We were so familiar with the app structure that we assumed it was presented well to users. It turned out to be the opposite.

This led to one of our biggest changes in the app’s UI structure: making the “course search” function a separate navigation item, in an effort to make it more visible.

Had this not been our first focus, potential users would probably still be skimming through the app, finding no value after several minutes, and eventually dropping off without hesitation.

Handle different user types

Since our target users are students, our testers are clearly all high school students.

Most high-school students nowadays are active, smart, and familiar with technology, yet there are different types of students.

On one hand, most of our testers are not very conscious of their app behaviour; it is therefore up to us to ask the right questions, and sometimes to guide them through the testing flow, to get the insights we expect.

On the other hand, many student testers actively give us their own feedback along the testing flow. In that case, it is easier to communicate and gather their opinions without excessively interrupting them. As facilitators, however, we also need to keep this type of user on track with our testing flow.

Therefore, handling different user cohorts with different techniques would be a good way to get the most out of testers.

Ask users the right questions

This may sound sophisticated, yet most of our questions eventually drill down into why our users behave the way they do. Sometimes it is a simple question to validate their understanding of a specific feature.

The “right” question is not necessarily a complex, abstract, or “expert” one, but often one that balances timing with purpose. Asking it well takes a good understanding of user psychology and some experience.

Conclusion

Those are some of my thoughts on how we have been applying usability tests at Manabie, from the very first days until now. I hope you find them helpful, or can relate to them. Let’s discuss in the comments below if you are interested!

If you would like to be part of Manabie and work together to create a more positive impact in education, visit us at: https://manabie.com/careers
