Improving Our App During the Pandemic with Heuristic Evaluation
In order to fulfil our customers’ orders, HappyFresh developed several products that can be utilised by our shoppers and riders in store. One of those products is the HappyFresh SND App (Shopper ‘n Driver). The main goal of this product is to help our shoppers and riders do their job of fulfilling our customers’ orders: picking items, finding replacements where needed, contacting customers when necessary, handing the groceries over to the riders, and delivering them to the customers’ addresses.
Having initially put most of our focus on functionality, we found that as the pandemic hit, the need for a better user experience for our shoppers and riders increased significantly, since we expected greater efficiency and effectiveness to serve our customers better.
Where do we start
We knew we were in the midst of a pandemic. With the high number of new cases every day, especially in Jakarta, we had to limit our potential exposure to the virus. On top of that, since our company value is to prioritise the customer, our operations teams, especially our shoppers and riders, were busy fulfilling customer orders that had increased due to the movement restrictions.
Knowing that, we concluded that we were unable to conduct direct user experience research. But the show had to go on, so we brainstormed how we could discover and explore areas of improvement in our HappyFresh SND App. We decided to use usability heuristic evaluation as the main method to evaluate our current state and explore the room for improvement.
Simply put, heuristic evaluation is a process where experts inspect the usability of a product against a set of established principles (you can read more about this in the reference list). We chose this method because, aside from the pandemic constraints, it is easy, quick, and cheap. Using Jakob Nielsen’s 10 general principles for interaction design, our experts evaluated every single step of the end-to-end shopping and delivery journeys.
The end-to-end journey evaluated by our experts was chunked into several parts, based on the screens provided and the actual journey in real conditions. Overall, the shopping journey was chunked into 19 screens and the delivery journey into 14 screens. For example, the picture below shows the evaluation result for the login screen.
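To give a concrete, purely hypothetical picture of how such findings can be captured per screen, here is a minimal sketch. The screen name, finding, and severity values are illustrative (not our actual results), and the 0–4 severity scale follows Nielsen’s common rating convention.

```python
from dataclasses import dataclass

# Hypothetical structure for one heuristic-evaluation finding.
@dataclass
class Finding:
    screen: str          # e.g. "Login", one of the 19 shopping / 14 delivery screens
    heuristic: str       # which of Nielsen's 10 heuristics is violated
    description: str     # what the evaluator observed
    severity: int        # 0 (not a problem) .. 4 (usability catastrophe)

findings = [
    Finding(
        screen="Login",
        heuristic="Visibility of system status",
        description="No feedback shown while credentials are being verified",
        severity=3,
    ),
]

# Group findings per screen so the team can see which screens need the most attention.
per_screen = {}
for f in findings:
    per_screen.setdefault(f.screen, []).append(f)

for screen, items in per_screen.items():
    worst = max(i.severity for i in items)
    print(f"{screen}: {len(items)} finding(s), worst severity {worst}")
```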
What we do next
After we had all the evaluation results from the experts for all journeys, the next step was to choose what to do first and what to do next. This is one of the key activities in product management: prioritisation. One reason we had to prioritise is that our resources were limited, especially time and people, so the most important items should get higher priority than the least important. We did not want our limited resources spent on something, let’s say, less important.
We assessed all the journeys and evaluation results using RICE prioritisation together with the development team (you can also read about it in the reference list). Simply put, we assessed the benefit we could get, in terms of reach, impact, and confidence, and compared it to the development effort we would have to spend. All in all, we came up with a detailed plan and timeline: what to do first, what next, and so on.
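As a rough sketch of how RICE scoring works (following the Intercom article in the reference list, where score = (Reach × Impact × Confidence) / Effort), the snippet below ranks a few candidate improvements. The candidate names and numbers are made up for illustration and are not our actual backlog.

```python
# RICE score = (Reach x Impact x Confidence) / Effort.
# Candidates and numbers are purely illustrative.
candidates = [
    # (name, reach per quarter, impact 0.25-3, confidence 0-1, effort in person-months)
    ("Clearer replacement flow", 4000, 2.0, 0.8, 2.0),
    ("Login screen feedback",    5000, 1.0, 1.0, 0.5),
    ("Revamped handover screen", 1500, 3.0, 0.5, 3.0),
]

def rice_score(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

ranked = sorted(candidates, key=lambda c: rice_score(*c[1:]), reverse=True)
for name, *params in ranked:
    print(f"{name}: RICE = {rice_score(*params):.0f}")
```

Items with the highest RICE score went to the top of our plan, so the limited development time was spent where it mattered most.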
What we get
After some time, we successfully developed and shipped the improvements based on the heuristic evaluation to our shoppers and riders. Rather than just feeling satisfied with that, we also wanted to know how our improvements affected the shopping and delivery journeys, so we came up with an idea to measure them.
We conducted a survey of our shoppers and riders. The main goal of this survey was to capture whether or not our improvements had made things better for them. We used a comparison method: for each question in the survey, we displayed the previous version of the screen (before improvements) alongside the revamped version (after improvements).
For every improvement, we provided a question for shoppers and riders and let them choose or fill in the suitable answer. Besides letting shoppers and riders compare the versions before and after the improvements, we also gathered other important information, such as user demographics and general feedback.
Shoppers and riders had to rate every comparison on a scale from 1 to 5. A score of 1 means they thought the previous version was significantly better than the improved one. A score of 3 means neutral, i.e. there was no difference between before and after the improvements. A score of 5 means they thought the improved version was significantly better than the previous one. If they selected 1 or 2, a follow-up question asked why they chose that answer.
The survey was sent to most of our shoppers and riders in the countries we operate in. Generally, the results were quite good. On average, the comparisons for Indonesia, Malaysia, and Thailand were rated 3.8, 3.7, and 3.6 respectively, meaning our shoppers and riders thought the improvements we made were better than the previous version.
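For completeness, here is a small sketch of how such comparison ratings could be summarised per country and how low scores could be flagged for the follow-up question. The responses below are invented for illustration; the real data came from our survey tool.

```python
from statistics import mean

# (country, compared screen, rating 1-5) -- illustrative sample responses only
responses = [
    ("Indonesia", "Login screen", 4),
    ("Indonesia", "Item picking", 5),
    ("Malaysia",  "Login screen", 3),
    ("Thailand",  "Item picking", 2),   # a 1 or 2 triggers the "why?" follow-up
]

by_country = {}
for country, _screen, rating in responses:
    by_country.setdefault(country, []).append(rating)

for country, ratings in by_country.items():
    print(f"{country}: average {mean(ratings):.1f} over {len(ratings)} answers")

# Anything rated 1 or 2 needs a closer look at the follow-up answers.
follow_ups = [(c, s) for c, s, r in responses if r <= 2]
print("Needs follow-up:", follow_ups)
```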
Key takeaways
- User experience is just as important as functionality. To get the most out of a product’s functionality, you need to pair it with a good user experience.
- When it comes to user experience research, there are many methods to choose from. When constraints are unavoidable, such as during a pandemic, heuristic evaluation can be a good solution.
- Don’t forget to measure. You will never know whether something is better without measuring it.
Reference list
Nielsen, Jakob. (1994). 10 Usability Heuristics for User Interface Design. Nielsen Norman Group. Retrieved December 7, 2020, from https://www.nngroup.com/articles/ten-usability-heuristics/
Nielsen, Jakob. (1994). How to Conduct a Heuristic Evaluation. Nielsen Norman Group. Retrieved December 7, 2020, from https://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/
McBride, Sean. (n.d.). RICE: Simple Prioritization for Product Managers. Inside Intercom. Retrieved December 7, 2020, from https://www.intercom.com/blog/rice-simple-prioritization-for-product-managers/