How to get nice insights from user testing ✨

Pablo Garcia Pedro · Published in Bootcamp · Sep 15, 2022
Pikachu searching experience

In this article, we discuss a case study on improving the experience of searching for a specific brand or model on a website for buying used and nearly new cars.

First of all, we analyze the current search system, which, in both its desktop and mobile versions, consists of a dropdown to select first the brand and then the model of the car. One limitation to take into account is that, until a brand has been selected, the search by model is not active. This happens on every screen size.
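As a rough illustration of this behaviour (a minimal sketch, not the site's actual code; the catalogue data and function names are assumptions), the model dropdown only becomes usable once a brand has been chosen:

```typescript
// Minimal sketch of the current dependent-dropdown behaviour (illustrative only).
type Catalog = Record<string, string[]>; // brand -> models

const catalog: Catalog = {
  Hyundai: ["Hb20", "Tucson", "Creta"],
  Toyota: ["Corolla", "Yaris"],
};

// The model list is empty (and the dropdown disabled) until a brand is selected.
function modelsForBrand(selectedBrand: string | null): string[] {
  if (!selectedBrand) return []; // no brand yet: model search is not active
  return catalog[selectedBrand] ?? [];
}

console.log(modelsForBrand(null));      // [] -> model dropdown stays disabled
console.log(modelsForBrand("Hyundai")); // ["Hb20", "Tucson", "Creta"]
```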

We understand that limiting the user's options has advantages (agility, error prevention) as well as disadvantages (more actions, more time to perform them…).

Current desktop (left) and mobile (right) search method

As an alternative, we propose a search based on an autocomplete function.
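To make the idea concrete, here is a minimal sketch of how such an autocomplete could match a query against brand and model (the catalogue and matching rule are assumptions for illustration, not the prototype we actually tested):

```typescript
// Illustrative autocomplete: match the query against the combined "brand model" label,
// so partial brands, partial models, numbers, or single letters all return suggestions.
interface Car { brand: string; model: string; }

const cars: Car[] = [
  { brand: "Hyundai", model: "Hb20" },
  { brand: "Hyundai", model: "Tucson" },
  { brand: "Toyota", model: "Corolla" },
];

function suggest(query: string, limit = 5): Car[] {
  const q = query.trim().toLowerCase();
  if (!q) return [];
  return cars
    .filter(({ brand, model }) => `${brand} ${model}`.toLowerCase().includes(q))
    .slice(0, limit);
}

console.log(suggest("hb"));      // [{ brand: "Hyundai", model: "Hb20" }]
console.log(suggest("hyundai")); // all Hyundai entries
```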

We therefore decided to conduct a usability test with users and stakeholders to assess the impact of this possible solution on the search experience.

The test, which we run with Maze, consists of two missions, a simple question, and an optional justification of the answer.

To complete the first mission, the user must search for the “Hyundai Hb20” model using an autocomplete system. In this mission there are six different paths to reach the result (model, brand, number, letter…), while the second mission has only one path to complete, since it reproduces the current experience.

Autocomplete search (left) and dropdown list selection (right)

Once the two missions are completed, we ask users which of the two they found simpler to use, and then optionally ask why they gave that answer.

In this way, we obtain two types of answers: quantitative and qualitative.

Data from Maze about the missions and the quantitative question

Users responded by leaving feedback in their answers, adding valuable detail about their experience. Combining this data with the analysis of the Maze metrics, we can draw our conclusions from the test.

Feedback from users

Conclusions

1. Depending on the users’ level of product knowledge, the predisposition towards one type of search varies. Users with less knowledge opt for the “dropdown list” option, while more advanced users prefer autocomplete, as it helps them find the car with fewer actions and in less time.

2. Users can make mistakes. With an “open field” option, the percentage of errors may increase and lead some users to frustration, whereas a search “limited” to selecting from a predefined list of makes and models greatly reduces the possible error rate.

3. Search times are generally low (roughly 2–4 s), so the results themselves, rather than the time spent searching, are the data that carry the most weight in the user’s decision.

4. A “hybrid” search model could improve the experience for both types of users we have identified, helping us diversify user journeys and lead them down different paths according to their level of knowledge and urgency to buy (a rough sketch follows below). In addition, the data we would obtain from the search fields would help us better understand which models users search for the most.
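As a very rough sketch of what such a hybrid could look like (an assumption for illustration, not a specification of the final design), a free-text field drives autocomplete while the brand and model dropdowns remain available as a guided fallback over the same catalogue:

```typescript
// Illustrative hybrid search: free-text autocomplete plus guided dropdowns on one catalogue.
interface Car { brand: string; model: string; }

const cars: Car[] = [
  { brand: "Hyundai", model: "Hb20" },
  { brand: "Toyota", model: "Corolla" },
];

// Path 1: advanced users type freely and get autocomplete suggestions.
function suggest(query: string): Car[] {
  const q = query.trim().toLowerCase();
  return q ? cars.filter(c => `${c.brand} ${c.model}`.toLowerCase().includes(q)) : [];
}

// Path 2: less experienced users browse via brand -> model dropdowns.
function brands(): string[] {
  return [...new Set(cars.map(c => c.brand))];
}
function modelsForBrand(brand: string): string[] {
  return cars.filter(c => c.brand === brand).map(c => c.model);
}

// Every typed query could also be logged to learn which models users look for most.
console.log(suggest("hb20"));                     // typed path
console.log(brands(), modelsForBrand("Hyundai")); // guided path
```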
