Love us some feedback

Katherine Jiang
MHCI Capstone: Team Far Out
3 min read · Jul 8, 2019

Rapid user testing and finalizing our feature set

With just a little over a month of this project (and the MHCI program) left, it’s officially crunch time. Join us as we look back on another great trip to Marshall — this time for on-site user testing of the mid-fi prototypes that we made last sprint!

Hello again, Huntsville!

One of our biggest challenges throughout this project has been getting feedback from our end users. Between NASA engineers’ busy schedules, the relatively small number of people who do this kind of engineering work, and the challenges that come with doing virtual user testing, we realized that a second trip back to Marshall for evaluative research would be invaluable to our project!

And so, armed with mid-fi prototypes and an evaluative test plan, we embarked on the 12-hour journey from the Bay Area to Huntsville, Alabama for some good ol’ on-site user testing! We conducted one-hour test sessions with six participants across two days. These sessions consisted of guided walkthroughs of our flows, comparison testing of different screens, and card sorting.

Card sorting information types by priority level

While our solution is designed for Level 3 discipline engineers, we were able to get perspectives from a variety of roles, including both Levels 2 and 3. Overall, we received overwhelming validation of our problem scope and identified pain points, along with generally positive feedback on the prototype itself.

Between the two days of our research trip, the team also set aside time to regroup and debrief one another on the high-level feedback we received. This allowed us to get the most out of the remaining user tests, since we could refine our questions and activities to target specific feedback.

Back at the office…

After a long, harrowing journey back home (thank you, multiple flight delays!), it was time to sift through the mountains of feedback we had collected.

The team categorized the high-level data collected from each test session as pain points, opportunities, prototype-specific feedback, and concrete examples of data usage. We then affinitized these big takeaways to identify broader trends and patterns to inform design changes.

Synthesizing evaluative research notes

And now we’re looking forward to our next sprint: finalizing our designs and seeing the light at the end of the tunnel! Hope everyone had a great July 4th holiday!

We are 5 MHCI students at Carnegie Mellon University, currently working on our capstone project, where we partner with NASA to help engineers understand what it means to be “done” in building the Space Launch System (SLS). We will be taking turns writing about our research activities and insights, our design decisions, and how we navigate ambiguity in general.

If you like what you’re reading, feel free to share or clap 👏👏👏 so that others can see it too!
