Sprint 6: From Research to Reality — Building the Bridge to the Field


Written by the 2024 99P Labs x CMU MHCI Capstone Team
Edited by 99P Labs

The 99P Labs x CMU MHCI Capstone Team is part of the Master of Human-Computer Interaction (MHCI) program at Carnegie Mellon University.

Catch up on Sprint 5 here!

Sprint 6 marks the beginning of the building phase of our project. Before building a prototype of our concept, however, we needed to uncover exactly what to build so that our efforts would be well spent.

We began this sprint by meeting our clients in Columbus for a collaborative session, discussing and ideating the future direction of our project. We broke down the scope of our project — the research process — into three parts: background research, conducting a field study, and sharing the research. The work we had done earlier, in the spring, was heavily focused on the first and third parts, and we identified a need to iron out the second: conducting a field study. This would be the most crucial part of the project, answering the question of how to get researchers into the field. We began tackling this problem by:

  1. Identifying the roadblocks: What are the current barriers preventing HRI researchers from conducting early in-the-field research?
  2. Rapid ideation: We ran a structured, rapid brainstorming session.
  3. Narrowing our focus: We homed in on the most promising ideas and made conceptual models for our favorites.

Spring Presentation Recap

Team at CMU presenting their findings during the Spring Presentations

These past few weeks, our team has been hard at work preparing for our Spring Presentation, which took place in late April. There we presented our research findings from our last sprint and addressed some of the ways we would tackle our pain points. As a recap, these were some of the difficulties we observed in attaining Genchi Genbutsu (going to the place) testing:

Roadmap to conducting Human-AI Teaming tests in real-world environments

Mitigation strategies were derived by reframing this problem space into “How might we…” questions:

  1. Foster closer alignment between research and product development
  2. Nudge researchers to “go to the place”
  3. Connect research and researchers
  4. Facilitate the info searching process

Ways to mitigate current roadblocks

The implications of these “How might we…” questions helped us transition into our summer ideation phase, where we uncovered mitigation strategies to address each problem space.

Jasmine and Erin ideating

For our summer kickoff, we traveled to Columbus, Ohio, to debrief and collaborate with HRI. During this session, we identified the positives of our potential solutions and decided on a focus for the summer. We started with a Rose, Bud, Thorn activity to reflect on our concepts from the presentation. After identifying the buds, we selected 1–2 ideas to explore more deeply, considering how these features would play a role in the product at large. This meeting felt like a light at the end of the tunnel as we converged on the design space of our problem area.

Prioritizing the topics to focus on in Sprint 6

After our collaborative session with 99P Labs, we realized we needed to focus on solving problems in the “conducting” phase. However, there were so many design opportunities that tackling them all simultaneously felt chaotic. To work more efficiently, we followed a method suggested by our faculty advisor, John Beck, to prioritize our focus:

Listing Our Hypotheses

From previous research, we learned that HRI researchers rarely conduct research outside lab settings, and we aim to change this. Our hypotheses are:

We can encourage researchers to go into the field by:

  1. “Destigmatizing” field concept validation
  2. Simplifying the planning process
  3. Making field testing easier

Identifying Assumptions

Impact Matrix Highlighting

We derived several assumptions from these hypotheses and plotted them on a graph to analyze their importance and the evidence supporting them. The goal was to pinpoint highly important assumptions that lacked evidence, helping us prioritize our work. The assumptions identified as both crucial and lacking evidence were:

  • Researchers might need to adjust testing plans in real-time during field tests
  • Data analysis/synthesis of environmental factors plays an important role in early concept validation
  • Researchers need assistance in collecting data on environmental factors
  • It is hard to take notes during field tests
  • Researchers avoid fieldwork because they believe it is time-consuming and requires significant effort

While subject to future testing, these assumptions provide a solid foundation for our ideation process.
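
To make the filtering step concrete, here is a minimal Python sketch of the impact-matrix logic we applied by hand: score each assumption on importance and on the evidence behind it, then surface the high-importance, low-evidence ones first. The numeric scores (and the third example assumption) are hypothetical placeholders, not our actual ratings.

```python
# Illustrative sketch of our impact-matrix prioritization, which we actually did
# on a whiteboard. Scores and the final example assumption are hypothetical.
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    importance: int  # 1 (low) to 5 (high)
    evidence: int    # 1 (little supporting evidence) to 5 (strong evidence)

assumptions = [
    Assumption("Researchers may need to adjust testing plans in real time", 5, 1),
    Assumption("It is hard to take notes during field tests", 4, 2),
    Assumption("Field studies always cost more than lab studies", 2, 4),  # placeholder
]

# Surface assumptions that matter most but have the least evidence behind them.
to_validate = [a for a in assumptions if a.importance >= 4 and a.evidence <= 2]
for a in sorted(to_validate, key=lambda a: (-a.importance, a.evidence)):
    print(f"Validate next: {a.statement}")
```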

Prototyping

With these assumptions in mind, we can start brainstorming solutions and determining the fidelity of the prototypes (visual, interaction, content, breadth, depth) needed for each topic.

Starting Ideation & Peer Feedback

During the early ideation phase, we were torn between using abstract models or grounding our solutions in specific use cases. Fortunately, an ideation session with peers from other teams provided us with fresh perspectives. They suggested combining the abstraction of conceptual models with the tangibility of use cases, allowing them to inform each other. This idea became the foundation for developing our Smart Guide system.

Advice and Approach

We spoke with the amazing Dr. Jessica Hammer again for her insights on the problems we were trying to tackle, as well as her advice on how to begin solution ideation and prototyping.

Crazy 8 Activity Categorizations

Since our problem scope involved multiple variables, Dr. Hammer suggested a variation of Crazy 8’s for initial ideation: make multiple card categories and, for each round, draw one card per category that must be incorporated into the solutions sketched that round. Our team came up with three categories: the level of concept fidelity a researcher is trying to test at, a problem we were trying to address, and a technology device that might be included in our final solution. For example, one round might ask us to brainstorm solutions that involve sensors, support researchers testing very early-stage concepts, and streamline their logistical tasks.
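
For anyone who wants to try this at home, the short sketch below shows roughly how the card-drawing works. Aside from the sensors / early-concept / logistics example above, the card contents are hypothetical stand-ins for our actual cards.

```python
# Rough sketch of the Crazy 8's variation: draw one card per category each
# round, then sketch ideas that combine all three constraints. Card contents
# other than "sensors", "very early concept", and "streamline logistics" are
# hypothetical stand-ins.
import random

categories = {
    "concept fidelity": ["very early concept", "mid-fidelity concept", "near-final concept"],
    "problem to address": ["streamline logistics", "adjust plans in real time", "capture field notes"],
    "technology": ["sensors", "mobile app", "wearable device"],
}

def draw_round(rng: random.Random) -> dict:
    """Draw one card from each category to constrain a sketching round."""
    return {name: rng.choice(cards) for name, cards in categories.items()}

rng = random.Random(42)  # fixed seed so the same prompts can be reproduced
for round_number in range(1, 4):
    print(f"Round {round_number}: sketch ideas combining {draw_round(rng)}")
```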

Early Ideation

One very hot summer day, the AC in our office broke, so our team decided on a change of scenery and held an ideation session at the boba shop next door instead. There, we ran Crazy 8’s using the approach suggested by Dr. Hammer. As a team, we made rough sketches of dozens of ideas.

Crazy 8 Activity

After debriefing, we noticed some key overlaps and themes that helped us make better sense of what flow and features our final solution could incorporate. These themes also informed a subsequent conceptual model of what “going to the place” might look like, and what solutions or features could realize that final goal.

In addition to the activity suggested by Dr. Hammer, we also did our own brainstorming to imagine different solutions that would address issues before, during, and after field concept validation. These ideas also contributed to our conceptual model.

Update of the Conceptual Model

After the ideation session, we identified disconnects between the individual concepts we had generated. To address this, we created a holistic structure documenting the Human-AI Teaming process, which helped us categorize all of our ideas effectively.

Mapping out our current problem space

Drafting on the Board

We began by breaking down the entire HAIT (Human-AI Teaming) research process into three main stages: Background Researching, Conducting, and Publishing. We decided to focus on the Conducting phase, as it is most relevant to field concept validation, and further divided it into three sub-stages: Planning, Testing, and Analyzing.

Breakdown of the Smart Guide at Different Stages

With this clear structure, we organized the ideas from our ideation session under these sub-stages. This step was crucial for visualizing where most feature concepts clustered, indicating where the opportunities lay. From the visual representation, it became apparent that the “before” (Planning) and “during” (Testing) stages held the most potential for development.
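
As a rough illustration of this categorization step, the sketch below tags each idea with the sub-stage it supports and tallies the clusters. The idea names are hypothetical stand-ins for our actual sticky notes, not a record of our concepts.

```python
# Simple sketch of how we clustered ideation output under the Conducting
# sub-stages. Idea names are hypothetical stand-ins for our sticky notes.
from collections import Counter

ideas = [
    ("field-logistics checklist generator", "Planning"),
    ("shared test-plan template", "Planning"),
    ("hands-free note capture", "Testing"),
    ("real-time plan adjustment prompts", "Testing"),
    ("environmental-factor data summary", "Analyzing"),
]

cluster_counts = Counter(stage for _, stage in ideas)
for stage in ("Planning", "Testing", "Analyzing"):
    print(f"{stage}: {cluster_counts[stage]} idea(s)")
```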

Smart Guide Breakdown

Next, we filtered out ideas that did not fit the current process. We then fleshed out a conceptual model by integrating the new ideas as features within each stage, ensuring a cohesive and comprehensive design.

Next Steps

Entering the next stage of our project, we will be creating a low-fidelity interactive prototype to test with Human-Robot Interaction (HRI) researchers. This prototype will serve as a foundational model to help us evaluate the core concept of the Smart Guide. By engaging with HRI experts, we aim to gather valuable insights and feedback on the usability, functionality, and overall effectiveness of our design. Based on that feedback, we will iterate on and refine the design, ensuring it meets the needs and expectations of our target users and stakeholders and moves closer to a polished, effective solution.


Hi there! We’re team Hondasss from Carnegie Mellon's MHCI program on our 8-month journey defining the future of Human-AI Teaming for Honda Research Institute!