Sprint 7: Iterating upon iterations

Anthony Teo
Published in 99P Labs
5 min read · Oct 20, 2023

Written by the 2023 99P Labs x CMU MHCI Capstone Team
Edited by 99P Labs

The 99P Labs x CMU MHCI Capstone Team is part of the Master of Human-Computer Interaction (MHCI) program at Carnegie Mellon University.

Catch up on Sprint 6 here!

After deciding on the framework we would use to continuously iterate during the summer, we started off full steam ahead. Informed by the insights from our spring research, we designed a more generative study for this first iteration.

Starting with a blank canvas, we had an empty box representing the physical space of the mobile mentor. We ran design sessions with 3 Gen Z participants recruited from around the city, spanning a variety of interests and backgrounds.

Iteration 2

Through our first round of testing, we gathered a wealth of suggestions and ideas for how different people would shape their mobile mentor to fit their learning needs. Our main challenge afterward was finding the “why” behind these ideas.

It would be simple for a participant to mention that they want the ability to cover the windows of the vehicle, and for us to proceed to add such a feature. But the main morsel we wanted from these design sessions was the reason they would want to cover those windows. Was it to increase the feeling of privacy? Or was it something else?

Following the GV Sprint format, we put the findings from our sessions to use in a simple synthesis session. Through our synthesis, we found that the activities people wanted to perform in cars fell into a few main categories: preparing yourself for the context of your journey, learning through intellectual means, and finding the best way to decompress while making the most of journey time.

Throughout all of these ideas, people regularly brought up the concept of adjusting some aspect of the mobile mentor to better accommodate senses such as sight, hearing, and touch. This provided an interesting framework for our team to carry forward as we built the next iteration of our mobile mentor for testing.

We came up with a set of 3 scenarios, each aimed at testing a different role that Gen Zers expect from the mobile mentor: preparation for the day, tutoring related to the person’s interests, and playing the role of a therapist in moments of decompression.

To accommodate these scenarios, we needed a space that felt more like a mentor and less like a cardboard box. So we got to work with some foam core, PVC pipes, and a good deal of physical labor.

Iteration 3

As we moved deeper into our iterations, our testing shifted from generative to evaluative. We were looking for reactions to the different things the mobile mentor did, and for how interactions with our potential users played out.

These findings were interesting, as we started to see how the categories we defined in the last iteration evolved. Of the 3 scenarios, the mentorship aspect of the mobile mentor garnered the best responses, with participants engaging it in thoughtful conversation.

The other two scenarios drew milder responses, especially the therapist scenario, where most people’s reactions showed they wouldn’t use it over what they currently do in transit. However, there was still value in the different ways we tried to engage the five senses.

Using our findings from testing, we decided to drill down into the tutor scenario, focusing on what the mobile mentor can offer to provide an engaging learning experience: one that engages multiple senses while taking advantage of the unique moving environment of a vehicle.

We settled on three main flows during a learning experience with the mobile mentor:

  1. Visual and auditory content
  2. Environment-based hooks
  3. Location recommendations

By keeping this less structured, we hoped to see how users would make use of the different patterns. Think of it like a playtest of a video game, where we’re trying to see how to make the gaming experience better.

With our visit to our clients at 99P Labs coming up, we saw a great opportunity to draw on their expertise in vehicle HCI design and conduct our testing sessions with their help. Moving towards a more focused solution, we hoped to get their reactions to the mobile mentor experience.

Luckily, our clients had a vehicle prototype from a previous project that we could use for our testing. So we came in early, modified the prototype, and got started with testing for the day.

Iteration 4

For this iteration, we used a new note-taking method mentioned in the GV Sprint book: all of our clients jotted down things that stood out to them on post-its throughout the session. This led to hundreds of interpretation notes, which we synthesized to identify the main themes and hypotheses to carry forward.

Moving into testing for this iteration, we decided on these hypotheses:

  • The mobile mentor, presented as a realistic human avatar, will enable the user to feel comfortable learning and interacting.
  • Users want their current knowledge to be assessed and understood by the mentor, and visualized for them.
  • People want formal learning content delivered in informal ways.
  • People want non-intrusive physical controls.
  • People want to understand how the mobile mentor is working for them, and why they’re getting the content they’re getting.

As we enter our last few iterations, our mobile mentor vision becomes more concrete. We’re looking forward to what we discover next, and to how we can integrate it to shape the future of learning on the go.

Read the next blog for Sprint 8 here!

Follow 99P Labs here on Medium and on our LinkedIn for more research projects and collaborations!
