Data Wrangling & Synthesis

Gaby Gayles
MHCI Capstone: Team Far Out
5 min read · Mar 1, 2019

Uncovering insights from our NASA user research data

Team Far Out really dove into analysis these past 2 weeks, synthesizing data from 11 interviews conducted with NASA engineers involved in designing the SLS super heavy-lift rocket. Though at times we felt like we were drowning in data, we made it out alive and with some great insights to show for it.

If you’re wondering who we are — this intro might be helpful.

Ending Phase 1

With the end of Phase 1 (Generative Research & Discovery) fast approaching, it was time to start analyzing our data. As you may remember from our previous post, we recently finished conducting semi-structured interviews and contextual inquiries at Marshall Space Flight Center. And we got some pretty neat data (and T-shirts) out of it!

Our matching NASA T-shirts, courtesy of the Marshall Space Flight Center gift shop

Marshall left us with A LOT of raw information. Now, we needed to figure out how to begin wrangling it into actionable insights.

And so it begins

Our analysis began with interpretation sessions — team-wide meetings to review interviews and extract key statements in “interpretation notes.” This was no small task. Because we conducted 11 interviews, we had tons of data to sort through; we’re talking over 600 interpretation notes!

We decided to use two common HCI research methods to sort and categorize our interpretation notes: sequence models and affinity diagrams.

Affinitizing Our Way to Insights

We chose to construct an affinity diagram — a hierarchical organization of patterns & sentiments — to analyze notes from interviews and contextual inquiries. This method enabled us to capture themes that emerged across all 11 interviews.

It was a LONG process.

The laborious process of constructing an affinity diagram

12 hours later, the diagram was complete!

Finished Affinity Diagram

Sequence Modeling

We also made sequence models from our interviews to better understand complex processes NASA engineers engage in to mark a rocket design as “done.” This not only helped us develop a shared understanding of the verification process, but also enabled us to identify breakdowns and opportunities to improve current workflows.

Sample sequence model

What we found

Through analyzing the sequence models and affinity diagram, we distilled six key insights:

1. The adoption of new processes, systems, or tools at NASA is difficult, complex, and often unsuccessful.

Each NASA team has its own tools and processes that have been in place for a long time. It is hard for them to learn new ways of doing things, and change is often strongly resisted.

2. There is not a well-articulated shared vision of SLS priorities and goals between levels of the organization.

While upper-level engineers (systems engineers who deal with integration of SLS subsystems) want to keep the SLS project moving and integrate subsystems effectively and efficiently, lower levels are more interested in their craft and the refinement of their own subsystem. Both think they should have more authority. This results in cross-level power struggles and trouble completing and integrating tasks.

3. There are no effective, formalized processes for knowledge sharing.

Building the SLS requires extensive communication and collaboration across teams, but siloed institutional knowledge slows people down and makes it hard for them to get the info they need. Even though FIQS tries to remedy this problem, it doesn’t offer the functionality or workflow integration necessary for this to happen.

4. There is a gap between the actual and intended use of FIQS.

Many engineers say they love FIQS but aren’t using it very often or taking advantage of all the features. This may be because most engineers have fixed tasks and the high-level overviews FIQS provides aren’t as useful for them.

(If you don’t recall what FIQS is, it is an internal NASA software tool designed to organize and consolidate rocket requirements data so engineers can better track SLS progress across subsystems. We are trying to figure out ways to improve it so engineers can mark rocket designs as “done” more easily.)

5. FIQS’ functional limitations may hinder its effectiveness.

Although engineers appreciate FIQS, it currently lacks the functionality and workflow integration needed to support cross-team knowledge sharing and verification work, which limits how effective it can be in practice.

6. Interdependent processes run separately, resulting in redundancies.

Duplicate data, process redundancies, process dependencies, and differing ideas about refinement all result in a longer verification process.

Areas for Future Exploration

Our insights highlighted both areas for future exploration and design constraints we need to consider (for example, when designing a tool for NASA engineers, we need to take into account that NASA culture isn't very open to radically new software solutions or process changes). We have a great springboard for initial design ideas and further research!

Moving forward, we plan to continue conducting remote interviews with NASA engineers, choose specific focus areas for deeper research, and begin ideating on design solutions! Keep following along for updates on our project; it’s going to be OUT OF THIS WORLD (lol).

We are 5 MHCI students at Carnegie Mellon University working on our capstone project with NASA, helping engineers understand what it means to be "done" in building the Space Launch System (SLS). We will take turns writing about our research activities and insights, our design decisions, and how we navigate ambiguity in general.

If you like what you’re reading, feel free to share or clap 👏👏👏 so that others can see it too!
