Users, what do they want?

Tatja Syrjamaki
Published in Team Zooming
May 2, 2020

When the clickable prototype was ready, we conducted an evaluation with six participants who were frequent runners. The evaluation consisted of testing the prototype by completing four missions on the Maze testing platform, followed by a UX questionnaire and a short, semi-structured interview. This blog post discusses the feedback we received from the participants, possible future iterations and an evaluation of our teamwork.

Testing the prototype

First user task results

We used a service called Maze to let the participants test the prototype through four missions. Maze concentrates mostly on the usability of the prototype and how well the participants can navigate through it. It helped us see the prototype’s off-path, bounce and misclick rates. Overall, we received good results on the usability of the prototype, and the participants were mostly able to complete the missions. Some participants didn’t complete the missions in a straightforward manner, but explored other options available in the prototype, resulting in extra clicks. The results showed that some participants tried to move forward with the navigation menu and didn’t realize that the clock icon meant “start training”. This is useful feedback: in future iterations we should make the navigation menu clearer, add headings to accompany the icons and give the user more freedom to navigate.

Overview of our Maze Missions

A significant share (87.5%) of the participants didn’t finish the first mission directly. In this mission, where the participants were asked to find out how to improve their running, some had problems understanding what to click. This shows that we should make the interaction with the Polar Partner even clearer. In the second mission, where the participants were asked to start a training session, some were again confused about what to click. The direct success rate was 16.7%, the indirect success rate was 50%, and 33.3% gave up on the mission. It was unclear to the participants that they could click today’s goal to start a session, and to some the training icon on the navigation bar was unclear. Today’s goal should be made to look more clickable, and users should be instructed to click it when they start using the service.
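For readers unfamiliar with Maze-style metrics, the rates above are simply shares of per-participant mission outcomes. A minimal sketch of how such a breakdown is derived; the outcome labels and helper function are illustrative, not Maze’s actual export format:

```python
from collections import Counter

def success_breakdown(outcomes):
    """Return the share of each outcome ('direct', 'indirect', 'gave_up')
    as a percentage of all participants, rounded to one decimal place."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return {label: round(100 * counts[label] / total, 1)
            for label in ("direct", "indirect", "gave_up")}

# Mission 2 had six testers: 1 direct success, 3 indirect, 2 gave up.
mission_2 = ["direct", "indirect", "indirect", "indirect", "gave_up", "gave_up"]
print(success_breakdown(mission_2))
# -> {'direct': 16.7, 'indirect': 50.0, 'gave_up': 33.3}
```

With only six participants, each person shifts a rate by almost 17 percentage points, which is worth keeping in mind when reading the numbers.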

The third mission, finishing a training session and going to the recovery section, was much clearer to the participants: 66.7% finished the mission successfully and no one gave up. None of the participants got lost in the navigation, but some explored options that weren’t part of the mission, resulting in indirect successes. The last mission, getting more advice, was completed with a 100% success rate. This showed that the participants found the interaction and navigation with the Polar Partner fairly clear and easy, which is also supported by the interview results discussed later. At the end of the test, we asked the participants whether the prototype resembled Polar’s own product range, and 83% agreed with the statement.

The user experience questionnaire

After the testing was done, the participants were directed to a user experience questionnaire in which they evaluated the experience on 21 pairs of opposing adjectives. The results showed that the participants had a relatively positive experience with the prototype: they rated it as quite attractive, clear, dependable and efficient. On the other hand, they experienced the prototype as less novel and stimulating.
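Questionnaires of this kind score each adjective pair on a symmetric scale and then average related pairs into dimensions such as attractiveness or novelty. A minimal sketch, assuming a 7-point scale coded from −3 (negative pole) to +3 (positive pole); the pair names, ratings and dimension grouping below are illustrative, not our actual data:

```python
from statistics import fmean

# Each participant rates a pair of opposites on a -3..+3 scale.
ratings = {
    "annoying - enjoyable":     [2, 3, 1, 2, 2, 3],
    "confusing - clear":        [3, 2, 3, 2, 3, 2],
    "conventional - inventive": [-1, 0, 1, -1, 0, 0],
    "dull - captivating":       [0, 1, -1, 0, 1, 0],
}

# Group pairs into UX dimensions (illustrative grouping).
dimensions = {
    "attractiveness": ["annoying - enjoyable"],
    "perspicuity":    ["confusing - clear"],
    "novelty":        ["conventional - inventive", "dull - captivating"],
}

# Average all ratings belonging to each dimension.
scores = {dim: round(fmean(r for pair in pairs for r in ratings[pair]), 2)
          for dim, pairs in dimensions.items()}
print(scores)
# -> {'attractiveness': 2.17, 'perspicuity': 2.5, 'novelty': 0.0}
```

A dimension score near zero, as with novelty here, mirrors the kind of result we got: clearly positive usability scores alongside lukewarm novelty and stimulation.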

Results of UX questionnaire

The UX questionnaire raised two issues: commonness and a lack of innovation. We should consider these factors in future iterations, because the current feedback shows that the experience doesn’t differ much from existing services. This is also important to acknowledge because the scenario for the service is set ten years in the future, so the solution should be iterated in a way that won’t feel outdated by then while matching user needs even better.

Next we should focus on making Polar Partner more motivating so that it meets user expectations better. Despite these issues, the prototype received relatively positive feedback in the UX questionnaire, where the main findings were that the prototype was attractive, pleasing, efficient and easy to use. According to the results, Polar Partner’s strengths are its usability and comprehensibility. The main learning from the questionnaire was that we should develop the product to be more innovative and add something to surprise and delight the users.

Interview

We then conducted a short interview with each participant following the questionnaire. We analysed the results by discussing emerging themes and issues and highlighting them, then interpreted what the users experienced as positive and negative, leading to a better understanding of what we should work on in the future. Overall, we were happy with the interview results. The interview was a good addition to the previous evaluation methods, because it gave us more in-depth insights and concrete development ideas from the participants. The results showed that the participants valued personal and more advanced running advice, similar to what a real running coach would give. The current advice in the prototype seemed to be aimed at beginners rather than advanced runners, because it didn’t bring them new information. From the feedback we learned that the participants wanted actionable advice and to see the direct reasons and benefits behind Polar Partner’s tips. This is what we aimed at with our concept, but we clearly need to communicate it better. Future iterations would also include understanding the different levels of runners better and gathering more insights from the different runner groups and from coaches.

The most important user feedback we gathered was that Polar Partner doesn’t feel very personal. To some participants, it was unclear that Polar Partner would use their own data to create the goals and sessions; they didn’t realize that the advice was targeted personally at them. Some participants felt that Polar Partner resembled a normal chatbot and wished for more available choices. However, the interaction with the chatbot function was perceived as easy to understand and clear. Because this was only the first prototype, not all the options were available, which may have been unclear to some participants.

“I liked the tone of voice the partner had and the partner was able to give information that I would have not found or looked up by myself”

From the interviews, we can conclude that the experience with the prototype is personal: what works for one person doesn’t necessarily work for another. The feedback we received was valuable and gave us concrete development ideas. We seemed to reach our UX goal of clarity, but we should address the goal of athleticism more by offering advice targeted to different running levels. It should also be made clearer that Polar Partner uses the user’s previous training data, so all the advice is personally tailored. The visuals of the prototype were well received and seemed to fit Polar’s product range. Because this has been a university project, we’re not going to continue iterating the design, but we’ve received valuable and actionable feedback from the evaluation.

Value + Polar Partner: Success?

The next step in developing Polar Partner would be to define different levels and goals for the users. We should do more user research to understand what kind of data and advice runners at different levels need to reach their goals. For runners who already understand their data, it can be harder to add value through the current prototype, so we should do more research on that, too. At the moment the advice is targeted more at beginners, so we should consult running coaches and athletes to better understand what’s valuable to them. The advice could also be more versatile, for example including dietary advice, which could add more value to the service.

It seems that the participants thought Polar Partner provides mostly general information for runners, and we received feedback that similar applications already exist. With the iteration ideas mentioned above, we could add more value for the users and better differentiate the service from other sports tracking applications. Future iterations would also include adding a speech modality to the service: the user could interact with Polar Partner through speech, which is why there was a microphone in the prototype. To some participants, however, it was unclear what the microphone was for.

Where is the Avatar 2.0?

We had previously steered away from the Avatar 2.0 idea, but the interview feedback showed that some participants missed the virtual-partner aspect in the design; to them, the prototype didn’t seem very special or different from other similar applications. We created the current Polar Partner solution based on feedback from the previous evaluation of the Avatar solution, which showed that people had neutral opinions on the Avatar idea. However, we could consider adding a more personal aspect to the current idea to make it feel more special, though that would require more user research and iteration.

Evaluation of teamwork

Our team was very active from the start. We did not face many challenges in the progress of the project, but we had to postpone it for two weeks due to personal reasons. We were still able to finish the project within the new deadline easily and include a remote evaluation at the end. By the end of the project it was great to see how we knew each other’s ways of working and had started to work well as a team. During the project we also learned that the best results come when everyone is willing to compromise.

Due to the outbreak of COVID-19 during the project timeframe, we were forced to do everything remotely. However, we had already held all our team meetings remotely, so this wasn’t an issue for us. Evaluating the prototype remotely also worked without big problems, thanks to the ease of Maze as well as the prep work we did beforehand. In our team, everyone was ready to help when needed, and all issues were discussed together. For every development idea, we were in sync from the beginning on who was doing what. We had considered the idea of product owners at the beginning, but we didn’t end up sticking with it, as the project team was small and needed to be agile. Sometimes we did get stuck on how to proceed. These kinds of decision-making problems could have been considered more at the beginning. If the project were to continue, we would have to agree on a set way of resolving issues, so that we wouldn’t get stuck on small things.

Sometimes we noticed that we were too concerned with little details, which slowed down the process. However, it’s also important to note that discussing the details was useful for us. Although the meetings went well in general, we could have agreed on an agenda before every meeting, so we wouldn’t have had to spend time figuring out what to do once the meeting had started. We also should have agreed on meeting times more strictly and stuck to them, because sometimes our meetings ran very long. With these practices, the meetings would have been more efficient.

As mentioned earlier, we didn’t choose a project owner as we had considered at the beginning of the project, which in hindsight was a good decision: it allowed more equal contributions to the project. All the team members contributed to every part of the project, but we each had different tasks we were responsible for. We divided the work according to strengths, but mainly we moved the work forward together and always discussed decisions together. At first, we were doing almost everything together, but later realized that it’s more effective to let one person take the main responsibility while the others help. Everyone was equally involved in the success of the project.

Whimsical

One part of the learning was getting to know different tools. During the project we used Zoom, Trello, Slack, Whimsical, Figma and Maze. It was very educational to use various human-centered design methods, and our main work platform, Whimsical, was full of them and made the development progress of the Polar Partner visible.

Concept Video

Our main goals were to learn to use human-centered design methods, get something for our portfolios and learn to write a blog. All in all, I can say that these goals were achieved during this project. I think I can also say, on behalf of everyone, that during this course we learned a lot about prototyping an AI chatbot interaction, product design and teamwork.
