Figure 1 — Was this case useful?

What steps could we take to lift the signal from the noise? How could we come to understand what valuable content meant?

Problem

Figure 1, like many other network-driven platforms, uses the Feed as the main medium for delivering content and value to our community. And as with every feed, the key consideration to nail is relevance. In this regard, Figure 1 was just not cutting it. We constantly heard complaints from users that many of the cases they ended up seeing in their Feed were just noise to them. Because the platform was open to all healthcare professionals and healthcare-adjacent professionals, personalization was hard for us to approach, especially since what counts as relevant can vary wildly even within individual physician specialties. We knew that users not getting good, relevant cases in their Feed was one of the biggest (if not the biggest) pain points. So we knew what we had to do: deliver good, relevant cases to our users. But what did “good, relevant cases” even mean? What made a case “good”? What made it “relevant”?

We needed to understand what valuable content was before we could begin delivering it to our users. Thus, what we set out to find was…

How might we understand our content better to deliver valuable cases to our users?

My Role

Our roadmap for the quarter was to tackle content relevance. We wanted to run research and create concepts to test that would get us closer to understanding what a good case was. As part of the Core product team, I worked closely with the product manager, senior designer, and user researcher to conduct the exploratory and evaluative research that helped us understand what kinds of concepts we wanted to create and test.

Process

1. Audit of our cases’ attributes
Our investigation had to start with the cases themselves. We wanted to figure out what kind of value our cases currently provided. Some questions we asked ourselves: What dimensions of our cases can we leverage to create value? Which dimensions are the most valuable to which users? To what degree are the various dimensions valuable? Consulting closely with the in-house medical team, and referring back to past qualitative user interviews, we tracked the attributes we thought made sense and formed hypotheses about how much perceived value each one contributed to relevance.

Part of a huge audit spreadsheet that tracked various attributes of cases
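
To make the audit concrete, here is a minimal sketch of how one row of that spreadsheet might be encoded as data. Every field name and example value below is invented for illustration; it is not Figure 1’s actual schema.

```ts
// Hypothetical shape of one row in the attribute audit.
// Field names and values are illustrative, not Figure 1's real schema.
interface CaseAttributeRow {
  attribute: string;                            // the case dimension being tracked
  capturedInApp: boolean;                       // can we already measure this from app data?
  hypothesizedValue: "low" | "medium" | "high"; // our guess at its contribution to relevance
  mostValuableTo: string[];                     // user groups we guessed would care most
}

// Two invented example rows, showing how hypotheses could be recorded:
const audit: CaseAttributeRow[] = [
  {
    attribute: "image quality",
    capturedInApp: true,
    hypothesizedValue: "medium",
    mostValuableTo: ["all users"],
  },
  {
    attribute: "educational takeaway",
    capturedInApp: false, // a qualitative aspect: nothing in the app captured it yet
    hypothesizedValue: "high",
    mostValuableTo: ["students", "residents"],
  },
];
```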

After forming hypotheses about which attributes mattered most, we realized that most of what we perceived to be high-value case attributes were qualitative aspects of the cases that we had no way of capturing in the app yet. This led to the next line of exploration…

How could we qualitatively assess our cases in a way that accurately represented the wide spectrum of users we had?

How would we be able to do this without having our medical team painstakingly go through thousands of cases and assess them manually? What about asking users up front? This wasn’t anything new; apps have been asking users for input on their content for a while now. So the next step was to look around and see what others were doing.

2. Competitive Analysis

We did a quick analysis of how other apps collect qualitative data on their content, to understand what mechanisms already exist in the landscape. This was fuel for our early brainstorming and mockups.

3. Brainstorming and UI Explorations

There were many ways we could potentially collect feedback from users on our content. I started exploring a bunch of ideas for a feedback component for our cases. Beyond creating the UI component itself, we needed to figure out what made sense to ask users. Mapping out the user journey for consuming cases, we landed on putting the component in the Detail View, right after the case and before the comment thread.

We took a few variations of the component to user testing and got some valuable feedback:

  • Users noticed the component but scrolled past it in order to read the comment thread first
  • Users found it unintuitive to have to scroll back up to the component if they wanted to rate the case after they were “finished” with it
  • Out of all the strings we tested, the terms “helpful” and “useful” seemed to be the most intuitive to users
  • The iconography (smiley/sad face, thumbs up/down) turned users off; it felt too unprofessional compared to the language they were used to
  • Users expected to receive some sort of feed personalization as a result of using the feature

I went back and updated our designs based on the feedback from user testing, and the final designs ended up as follows.

Final Design

The changes made for the final design (a rough interaction sketch follows this list):

  • Moved the component to the bottom of the case view, below the comment thread
  • Landed on the string “Was this case useful?” as a first iteration, and designed the component so strings could be swapped in later to probe whatever case attribute we wanted to learn about
  • Chose “Yes” and “No” as the binary answers instead of using iconography
  • Added a qualitative form for users to tell us why they found the case useful
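
Figure 1 is a native mobile app, so the sketch below is illustrative of the interaction logic only, not production code: a minimal React/TypeScript take on the final component, assuming a hypothetical onSubmit callback and a prompt passed in as a prop so strings can be swapped per experiment.

```tsx
import React, { useState } from "react";

// Illustrative sketch only: models the final component's interaction,
// not Figure 1's actual (native mobile) implementation.
interface CaseFeedbackProps {
  prompt: string; // interchangeable string, e.g. "Was this case useful?"
  onSubmit: (answer: "yes" | "no", reason?: string) => void; // hypothetical callback
}

export function CaseFeedback({ prompt, onSubmit }: CaseFeedbackProps) {
  const [answer, setAnswer] = useState<"yes" | "no" | null>(null);
  const [reason, setReason] = useState("");

  // Plain "Yes"/"No" text buttons instead of icons: user testing found
  // smileys and thumbs read as unprofessional to this audience.
  if (answer === null) {
    return (
      <div>
        <p>{prompt}</p>
        <button onClick={() => setAnswer("yes")}>Yes</button>
        <button onClick={() => setAnswer("no")}>No</button>
      </div>
    );
  }

  // Follow-up qualitative form: capture why the user answered as they did.
  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        onSubmit(answer, reason || undefined);
      }}
    >
      <label>
        Tell us why:
        <textarea value={reason} onChange={(e) => setReason(e.target.value)} />
      </label>
      <button type="submit">Send</button>
    </form>
  );
}
```

Keeping the prompt as a prop mirrors the second bullet above: the string can be changed to probe a different attribute without rebuilding the component.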

Results

When we launched the component, it quickly became the most used action in our app. Even though it was buried all the way at the bottom of the case view, users were clearly willing to give us signal on which cases they found useful and which they didn’t. We were finally able to get a qualitative read on our cases, straight from the users themselves. We’ve since used the information gathered from this component to deliver more relevant content, not only through the Feed but through our marketing and business development campaigns as well.
