TeamSense

Mission:

Team Dado partnered with Draper Laboratory to enhance situational awareness for first responders in emergencies. Communication is essential to emergency response: responders need to know what every other responder is doing in order to share information, coordinate safely, and minimize duplicated work. However, existing communication structures do not scale, causing breakdowns as scenes become more complex. Limited radio bandwidth forces responders to proactively evaluate the relevance of their information, placing a heavy cognitive load on each individual.

Solution:

TeamSense is a virtual-reality scenario and prototype that supports shared perception of team location and status. The scenario, built in Unity, acts as a platform for rapidly prototyping and testing different heads-up display and 3D audio presentation methods in a realistic context.

We focused on three use cases:

  • Helping responders integrate with ongoing searches through a historical view of where teammates have been.
  • Alerting responders when a teammate is injured and priorities need to shift.
  • Supporting targeted search for a downed responder.

Role:

The entire project was highly collaborative, with each of us contributing heavily in research, design, and prototyping. As project lead, I was responsible for managing our discussions with Draper and making sure that we approached the design process in a strategic manner. I was also heavily involved in coding the prototype, especially in prototyping the audio interactions.

Breakdowns in Communication

We worked with first responders in Pittsburgh, Pennsylvania to understand how they deal with emergency situations. Through a combination of structured interviews, directed storytelling, contextual inquiry, and guerrilla research we got as close to emergency response as possible without actually stepping foot in a burning building. After gathering data, we modeled the information flows throughout the scene and identified two primary problems that we wanted to address with our prototype.

Radio communication forces responders to decide whether their information is worth broadcasting to everyone.

Existing Communication Models Don’t Scale

Currently, first responders communicate primarily through the radio. The radio is great at broadcasting information to a large group of people, but only allows one person to communicate at a time.

As scenes become increasingly complex, responders are forced to restrict their communication to conserve radio bandwidth. Responders have to make assumptions about what others need to know, rather than communicating all information and letting receivers pick out the important information.

If responders aren’t getting the full picture from their teammates, they may act in ways that endanger themselves and others.

Misunderstandings Compound

Emergency responses are collaborative efforts, and the actions that each responder takes depend heavily on a shared understanding of the scene. When responders are unable to communicate their understanding with the rest of the team, collaboration starts to break down.

A lack of communication causes misunderstandings to compound. When one responder doesn’t know what another is doing, they do not know what needs to be communicated or what safety precautions need to be taken. This is especially problematic in large-scale scenes, where scene chaos can quickly overtake the ability to communicate.

Building Shared Perception

Explicit, verbal communication places a heavy cognitive load on responders by requiring them to consciously evaluate the importance of their information. Our goal was to place information out into the world where it can be easily perceived by every responder.

Picking the Right Problem

Before building out the prototype, we wanted to validate which problems were worth solving for first responders. We made 17 storyboards and ran speed-dating sessions with 12 responders. The most popular storyboards all centered on location information:

  • A historical view of where teammates have been.
  • Automatic triggering of mayday alerts (with location included).
  • Ensuring teammates are out of harm’s way before taking action.

We found that location is a key piece of information in a variety of use cases, but it is not easily communicated through the radio. By building a system that allowed responders to see where their teammates are, have been, and might be, we could have an impact throughout an entire emergency response scene.

Rapid Prototyping

One of the main obstacles in designing for first responders is that you cannot actually test within a burning building or in an active shooter situation. To get around this, we started with quick concept validation exercises, such as bodystorming, JavaScript web applications, and using Counter-Strike as a prebuilt environment.

  • We traced participants’ search paths on paper to test the usefulness of trails.
  • We built a JavaScript webapp to see how different audio factors (pitch, reverb, etc.) are intuitively understood.
  • We used Counter-Strike as a prebuilt environment to quickly test haptic concepts.
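The core idea behind the audio webapp was mapping a data value onto a perceivable audio parameter. A minimal sketch of that mapping, in Python rather than the original JavaScript (the function name, distance range, and pitch range here are all illustrative assumptions, not values from our study):

```python
def distance_to_pitch(distance_m: float,
                      max_distance_m: float = 30.0,
                      low_hz: float = 220.0,
                      high_hz: float = 880.0) -> float:
    """Map a teammate's distance onto a tone pitch: closer sounds higher.

    The ranges are placeholders; real values would come from user testing.
    """
    # Clamp distance into the mapped range.
    d = max(0.0, min(distance_m, max_distance_m))
    # Linear interpolation: 0 m -> high_hz, max_distance_m -> low_hz.
    t = d / max_distance_m
    return high_hz + t * (low_hz - high_hz)
```

The same shape of function works for reverb amount, volume, or repetition rate, which is what made it quick to swap parameters in and out during testing.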

Making the Jump to Code

The system we imagined is not the traditional web or mobile experience, and so we hit the limits of existing prototyping tools pretty quickly. We decided to build our own prototyping and testing platform in Unity, giving us the flexibility to build our ideas while simulating the pressures of an actual response.

We designed a range of layouts, from simple three-apartment floors to labyrinthine 20-apartment buildings. Eventually we settled on a four-story apartment building.

TeamSense’s goal is to deliver location information to responders in a way that is immediately understandable and robust in emergency environments. We wanted to explore which modalities work best for which kinds of data and how our system could fail gracefully between sensory channels.

Visual trails displaying where other responders have been.

We ended up building visual and audio representations of historical trails and targeted search. Trails were a big hit, allowing responders to quickly understand which rooms had been searched, find other responders, and find the exit. Targeted search was also much quicker with visual and/or audio cues. Our prototypes required minimal training, allowing location information to be directly perceived and understood by participants.
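The trails themselves reduce to a simple data structure: a list of timestamped positions, with older breadcrumbs fading out so the display stays legible. A hypothetical Python sketch (names, fade duration, and 2D coordinates are assumptions for illustration; the actual prototype was built in Unity):

```python
from dataclasses import dataclass

@dataclass
class TrailPoint:
    x: float
    y: float
    timestamp: float  # seconds since the simulation started

def trail_opacity(point: TrailPoint, now: float, fade_s: float = 120.0) -> float:
    """Return a 0..1 rendering alpha; older breadcrumbs fade toward invisible."""
    age = now - point.timestamp
    return max(0.0, 1.0 - age / fade_s)

# A responder's recorded path, then per-point alphas at t = 120 s:
trail = [TrailPoint(0, 0, 0.0), TrailPoint(1, 0, 60.0), TrailPoint(2, 0, 110.0)]
alphas = [trail_opacity(p, now=120.0) for p in trail]
```

Fading by age lets a viewer distinguish a fresh path from a stale one at a glance, which matters when deciding whether a room has just been searched.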

Testing and Metrics

In testing TeamSense, we looked for both overall task completion and the cognitive effects of our prototypes.

Testing participants’ ability to navigate an unknown building with and without a view of where other responders had been.

For task completion we looked at the time to completion for each of our three stages (collaborative search, targeted search, and exiting the building) and the number of errors made in each stage.

For cognition we wanted to measure how intuitively understandable the feedback was and whether cognitive tunneling was occurring. We measured understandability by periodically pausing the simulation and asking participants to state their interpretation of the scene at as granular a level as possible. To measure cognitive tunneling we used two secondary tasks, requiring participants to call out whenever they heard “Ladder Six” on the radio or when their air dropped to a multiple of ten.
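The secondary-task data reduces to a simple miss rate per participant. A sketch of that computation (the function name and the example counts are illustrative, not our actual results):

```python
def miss_rate(prompts: int, responses: int) -> float:
    """Fraction of secondary-task prompts the participant failed to call out.

    A rising miss rate under a new display condition would suggest the
    display is capturing attention, i.e. cognitive tunneling.
    """
    if prompts == 0:
        return 0.0
    return 1.0 - responses / prompts

# Hypothetical example: "Ladder Six" broadcast 8 times, called out 6 times.
rate = miss_rate(prompts=8, responses=6)
```

Comparing miss rates across display conditions, rather than looking at raw counts, controls for sessions of different lengths.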

Further Work

We laid the groundwork for developing situational awareness prototypes, but delivering location information is only the tip of the iceberg. We built our platform so that data and presentation can be mixed and matched, allowing rapid prototyping of new concepts.
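The mix-and-match idea can be sketched as a pair of interfaces: data sources produce structured values, presentations render them, and any pairing is valid. A hypothetical Python sketch (the Unity platform used its own component system; all names and the sample data here are invented for illustration):

```python
from typing import Callable, Dict

# A data source yields scene data; a presentation renders it for one channel.
DataSource = Callable[[], Dict]
Presentation = Callable[[Dict], str]

def teammate_location() -> Dict:
    """Example source: one teammate's current position."""
    return {"name": "Engine 2", "x": 4.0, "y": 7.5}

def hud_text(data: Dict) -> str:
    """Example presentation: a heads-up display label."""
    return f"{data['name']} @ ({data['x']}, {data['y']})"

def bind(source: DataSource, present: Presentation) -> Callable[[], str]:
    """Pair any data source with any presentation channel."""
    return lambda: present(source())

# Swapping hud_text for an audio or haptic presentation needs no source changes.
render = bind(teammate_location, hud_text)
```

Keeping sources and presentations decoupled is what let us test, say, the same trail data as visuals, audio, or both without rebuilding the scenario.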

Beyond the delivery of new information and the development of intuitive presentation methods, there are two other areas that we think are worth exploring:

Implicit Communication

We focused on the receiving end of communication — how an individual receives information and makes sense of it. The sender’s side, however, also has a lot of potential. Our goal was to make communication passive, creating digital residue that can be observed by other responders; doing this, though, opens up more bandwidth for intentional, explicit communication.

Sensory Augmentation / Customization

The modular architecture of our platform was, for us, a means to quickly test multiple variations of prototypes. However, this could also be delivered as a customizable solution for individuals and teams. Looking at what information should be standard across whole teams versus what information can be varied and split between individuals could optimize information flows and cognitive load during emergency responses.