Loopback

Encouraging training and feedback in the Carnegie Mellon EMS system.


Scope

Loopback is an application designed with two goals in mind:

  1. Support EMT career growth by aggregating and making sense of feedback
  2. Help EMT supervisors provide effective training

[Image: The History, Team, and New Evaluation screens.]

[Image: Flows for the key screens in the application.]

From Research to Design

Loopback fosters feedback within an EMT station, ensuring each member receives the training they need.

Training is critical for EMTs. Proper training prevents errors that could lead to EMT endangerment, patient mistreatment, or improper administrative handling. Loopback promotes training and fosters a positive work environment by providing a central place for evaluations to happen. After each run, EMTs are evaluated by their direct superiors and given comments and suggestions on how to improve. These evaluations are saved and aggregated, allowing each EMT to track their progress and helping team supervisors structure training to meet the needs of their team.
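
To make that flow concrete, here is a minimal sketch in TypeScript of how an evaluation record and its aggregation might look. All names and criteria here are assumptions for illustration, not taken from the actual project.

    // A minimal sketch of the evaluation flow described above.
    // All names and criteria are hypothetical, not from the project.

    interface Evaluation {
      runId: string;                  // the run this evaluation covers
      emtId: string;                  // the EMT being evaluated
      evaluatorId: string;            // their direct superior
      scores: Record<string, number>; // criterion -> rating
      comments: string;               // suggestions for improvement
    }

    // Aggregate saved evaluations into per-criterion averages so an
    // EMT can track progress and a supervisor can spot training needs.
    function averageByCriterion(evals: Evaluation[]): Record<string, number> {
      const totals: Record<string, { sum: number; count: number }> = {};
      for (const ev of evals) {
        for (const [criterion, score] of Object.entries(ev.scores)) {
          const t = (totals[criterion] ??= { sum: 0, count: 0 });
          t.sum += score;
          t.count += 1;
        }
      }
      return Object.fromEntries(
        Object.entries(totals).map(([c, t]) => [c, t.sum / t.count])
      );
    }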

Preliminary Online Research

We began our research online, which provided a wide cross-section of life as an EMT. In addition to basic information about pay, responsibilities, and quality of life, we found more varied and personal details within the EMT subreddit. Talking with EMTs on Reddit showed that there is no singular EMT experience; the job varies widely between stations and geographic regions, but is always both stressful and a source of pride.

Preliminary Interview

Before starting design, we interviewed a supervisor at Carnegie Mellon University EMS, whom I will refer to as Alan. Because there is little standardization across EMS services, Alan was especially helpful in drawing a complete, concrete picture of how runs are handled at CMU and of how training happens continually during the crews’ long stretches of downtime.

Follow-Up Interview

After our preliminary research we identified promising problem areas to work on, eventually settling on feedback, evaluation, and training. With a goal in mind, we sketched out some ideas and met with Alan a second time. Alan validated the overall concept and further elucidated the evaluation process and how it could be improved. A key takeaway was that evaluations need to stay general in order to accommodate unique, edge-case runs.

Ideation

We ideated by brainstorming as many issues and technological tools as we could, then combining them into candidate ideas. We ranked the results by feasibility and impact and clustered related ideas together, ultimately choosing feedback and training because that cluster was both feasible and high-impact.


Mapping and Flows

The structure of our application is straightforward: three top-level categories (Evaluations, History, Team) controlled through a bottom nav bar. Each category contains subpages, such as the form for filling out an evaluation or a drill-down into a single report.
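
As a rough sketch of that hierarchy, here is a hypothetical route map in TypeScript; the route and subpage names are assumptions based on the description above, not the project’s actual code.

    // Hypothetical route map: three top-level tabs on a bottom nav
    // bar, each with its own subpages.
    type Route = { title: string; children?: Route[] };

    const bottomNav: Route[] = [
      {
        title: "Evaluations",
        children: [{ title: "New Evaluation" }],  // the evaluation form
      },
      {
        title: "History",
        children: [
          { title: "Criterion Drill-Down" },      // trend for one criterion
          { title: "Single Report" },             // one run's evaluation
        ],
      },
      {
        title: "Team",
        children: [{ title: "Member Overview" }], // drill into one EMT
      },
    ];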


Design Insights

Personal History Screen

One of the most problematic screens was the personal history screen, where we wanted to show trends at a glance while also providing navigation to both single-criterion drill-downs and individual reports.

This is a problem of information hierarchy: reconciling data visualization with intuitive navigation without overloading the user.

The Personal History screen started out abstract and, as a result, unusable.
In the end, explicit labeling and a clear major axis made the screen usable.

Major and Minor Axes

One of the main problems was that rows and columns initially carried equal visual weight, making it unclear how they differed. To solve this, we made rows our major axis, keeping each run’s data together, and demoted the criteria columns to a secondary role.
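
In code terms, that decision amounts to iterating runs first and criteria second when laying out the grid. A hypothetical sketch, with illustrative criteria names:

    // Rows are the major axis: one row per run, so a single run's
    // scores stay together. Criteria form the secondary column axis.
    type RunScores = { runId: string; scores: Record<string, number> };

    const criteria = ["Assessment", "Communication", "Documentation"]; // illustrative

    function historyRows(runs: RunScores[]): string[] {
      return runs.map((run) =>
        [run.runId, ...criteria.map((c) => String(run.scores[c] ?? "-"))].join(" | ")
      );
    }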

Explicit Labeling

Our initial concept for the history screen was an abstract, beautiful grid with minimal labeling. It turns out this isn’t usable at all. Instead, we ended up focusing on explicit labeling and repeated iconography to minimize abstraction.


Team Overview Screen

Our initial team view was just a color-coded grid without any pre-interpretation.
We redesigned the team view screen to include suggestions and highlighted runs, reducing cognitive load for supervisors.

Don’t just show the data—interpret it.

It’s not uncommon in writing to hear “show, don’t tell.” In design work, that advice gets flipped: it’s not always enough to simply show the data. Instead, make understanding as easy as possible for the user by providing suggestions and interpretation up front. That was the key lesson we learned in designing our team view screen. We weren’t able to properly design it until we grounded ourselves in the team supervisor’s intentions. Our final design provides training suggestions and highlights specific runs, reducing the amount of mental processing required of the supervisor.
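
As an illustration of that interpretation step, here is a hypothetical sketch; the threshold, names, and wording are assumptions, since the actual suggestion logic wasn’t specified in the project.

    // Pre-interpret team data for the supervisor: flag criteria where
    // the team average is low and phrase them as training suggestions.
    type TeamEval = { runId: string; emtId: string; scores: Record<string, number> };

    function trainingSuggestions(evals: TeamEval[], threshold = 3): string[] {
      const byCriterion: Record<string, number[]> = {};
      for (const ev of evals) {
        for (const [criterion, score] of Object.entries(ev.scores)) {
          (byCriterion[criterion] ??= []).push(score);
        }
      }
      const suggestions: string[] = [];
      for (const [criterion, scores] of Object.entries(byCriterion)) {
        const avg = scores.reduce((a, b) => a + b, 0) / scores.length;
        if (avg < threshold) {
          suggestions.push(`Schedule a drill on ${criterion} (team average ${avg.toFixed(1)})`);
        }
      }
      return suggestions;
    }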


With Sharon Lin, Israel Gonzales, and David Ralley for Interaction Design Fundamentals, Fall 2014, at Carnegie Mellon University