How we defined our design opportunity

Three activities that helped us get there.

Duyen (Chanel) Luu Hai
Design Intelligence
6 min read · Jul 19, 2018


Image: Lauren sketching one of many ideas from our internal ideation session — some innovative, others bizarre.

After a few weeks spent researching the experiences of people with vision impairments, we had gathered a number of important insights. These came from interviews with over 40 people who are blind or low vision (BLV) and more than 20 subject matter experts. To turn our research and insights into potential concepts, we organized a few ideation sessions with Moment/Verizon designers. These sessions gave us a fresh perspective, gave our peers an opportunity to learn more about our project, and deepened our understanding of the blind and low vision community.

What follows are three activities that we led to help us better define our design opportunity.

1. Internal ideation sessions

Before generating ideas, we identified the criteria for an ideal concept: our “product” should impact as many people as possible in the BLV community and should be technically feasible in the near future. We used a two-by-two to identify the “sweet spot” we’d aim for.

Because we intended to consider the impact that Verizon’s 5G technology could have on our concept, we focused on ideas that could realistically be executed within the next three to five years, when the technology will likely be widespread.

Graph: The two-by-two graph where the x-axis measures feasibility from now to the future. The y-axis measures impact from no one to everyone.

To serve as prompts for the ideation exercise, we shared some of the insights we had learned during our research:

  • How might we create a solution that helps BLV people better navigate indoor spaces?
  • How might we take the stress out of pre-planning for BLV people?
  • How might we create a technology that uses sensory inputs and tactile feedback to deliver navigation information?
  • How might we better inform sighted passersby of a disability? How might we raise awareness for the BLV community as a whole?

Image: During our internal ideation session, designers sketched for five minutes before discussing their concepts with the group.

We then asked the designers to sketch an idea for each prompt for a few minutes before sharing it with the group. After a few rounds, we had generated an overwhelming number of ideas. We threw out the unrealistic ones and mapped the rest on the two-by-two (see above). By summarizing the concepts that fell within the “sweet spot” of high impact and near-term feasibility, we were able to define our value proposition:

Enable people with blindness and low vision to make decisions on the go and help them to confidently move through their world by way of a real-time, location-driven AI assistant.

2. Storyboarding a few selected settings

Design research is not a linear process. As soon as we narrowed in on a value proposition, we found ourselves diverging again as we considered specific scenarios and settings where our statement could apply. Some of the scenarios we looked into came up in conversations with people who are visually impaired:

  • How can one find the way to the right subway platform at the Times Square subway station? (For those of you non-New Yorkers, this subway station is particularly busy and hard to navigate.)
  • How can one find a specific item on a grocery store shelf?
  • How can one locate a bathroom at the mall?
  • How can one find a seat in a dark theater?
  • How can one navigate in a park or pedestrian plaza?

For each case, we listed the information that BLV people would need to know and what our concept would need in order to respond. For example, if the user needs to navigate to her seat in a dark concert hall, the system would need to know both the static blueprint of the theater and where she is in real time. After this part of the exercise, we were able to prioritize the types of information, as well as the input and output methods, that we’d like our concept to incorporate.
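
To give a feel for how this prioritization worked, here is a minimal sketch in Python. The scenario names and information categories below are invented for illustration; they are not our actual research data:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """One navigation scenario and the information a guide system would need."""
    name: str
    static_info: set = field(default_factory=set)    # known ahead of time
    realtime_info: set = field(default_factory=set)  # sensed on the go

# Hypothetical entries modeled on the scenarios above
scenarios = [
    Scenario("find a seat in a dark theater",
             static_info={"venue blueprint", "seat map"},
             realtime_info={"user position"}),
    Scenario("find an item on a grocery shelf",
             static_info={"store layout", "shelf inventory"},
             realtime_info={"user position"}),
    Scenario("find the right subway platform",
             static_info={"station map", "train schedule"},
             realtime_info={"user position", "crowd density"}),
]

# Information needed by the most scenarios is what the concept
# should support first.
demand = Counter()
for s in scenarios:
    demand.update(s.static_info | s.realtime_info)

for info, count in demand.most_common():
    print(f"{info}: needed in {count} of {len(scenarios)} scenarios")
```

In this toy example, “user position” ranks first, consistent with our conclusion that real-time location underpins every scenario.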

Image: Lauren discusses a journey with the team using the guiding system concept.

We validated our assumptions by storyboarding each scenario before finalizing a concept that could work in almost all of them:

A smart-guide system that uses voice input and provides granular navigation data through a customizable combination of audio and haptic feedback.
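
As a rough sketch of what that concept statement implies in code (the class names, channels, and canned instruction below are all hypothetical, not an actual implementation):

```python
from dataclasses import dataclass

@dataclass
class FeedbackPreferences:
    """User-customizable mix of output channels."""
    audio: bool = True
    haptic: bool = True

class SmartGuide:
    """Hypothetical guide loop: voice query in, audio/haptic guidance out."""

    def __init__(self, prefs: FeedbackPreferences):
        self.prefs = prefs

    def handle_voice_query(self, query: str) -> None:
        # A real system would call speech recognition, indoor
        # positioning, and a routing service here; we fake the result.
        instruction = f"In ten feet, turn left toward: {query}"
        if self.prefs.audio:
            print(f"[audio]  {instruction}")
        if self.prefs.haptic:
            print("[haptic] two short pulses on the left wrist")

# A user who prefers haptic-only feedback in a noisy station:
guide = SmartGuide(FeedbackPreferences(audio=False, haptic=True))
guide.handle_voice_query("PATH trains to New Jersey")
```

The point of the sketch is the customizable combination of channels: the same navigation instruction can be delivered as speech, as vibration patterns, or both, depending on the user’s preference and environment.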

3. On-site visit

As we drew and discussed storyboards for each use case, we realized that large, crowded transportation hubs (in New York, think of the Oculus, Penn Station, Grand Central, etc.) pose navigation problems that also appear in other settings. Because the Oculus serves both as an entry point to the NYC subway and PATH trains and as a bona fide shopping mall, we decided to visit the site to validate our concept.

Image: Interior of the Oculus. Source: Wikipedia

We arrived roughly an hour after rush hour so that we could safely run our experiment with our DIY low-vision glasses. One of my teammates, Darshan, put on the glasses and followed an imaginary narrative to temporarily step into the shoes of someone with a vision impairment:

Darshan was going to visit his friend in Hoboken, New Jersey for the first time. He needed to transfer from the N line to the PATH train at the Oculus. Once he arrived at the Oculus, he found himself wanting to get a cup of coffee before continuing the trip.

Image: Darshan attempting to read the metal sign on the wall that says, “PATH trains to New Jersey.”

Along the way, another team member, Lauren, served as a lo-fi smart guide system (yes, we know this is an oxymoron) to guide Darshan to his destination and answer any questions he might have. We carefully noted the pain points that Darshan experienced:

  • The stairs at the Oculus are wide, and there are not enough handrails.
  • Wayfinding signs use a thin typeface and little color contrast, and they often hide behind the building’s ribs, making them difficult to locate. The low contrast of the PATH signage in particular made it difficult to navigate to the train.
  • The floor layout of the Oculus constantly changes: pop-up shops and temporary seating areas pose new challenges to BLV people each time they visit.

Nevertheless, the “system” we prototyped on the spot was able to guide Darshan: he got his cup of coffee and made it to the PATH train. However, as we noted in an earlier post, as sighted designers we can never fully understand the challenges that BLV people face every day, which is why we continue to validate our concepts with the BLV community.

These three exercises helped us narrow down the problem area and the features we can focus on in our final concept, and we look forward to sharing more soon!

Every summer, interns at Moment (which is now part of Verizon) solve real-world problems through a design-based research project. In the past, interns have worked with concepts like autonomous vehicles, Google Glass, virtual reality in education, and Voice UI.

For the 2018 summer project, the premise is to design a near-future product or service that improves mobility for people with disabilities using granular location data and other contextual information.

Our team has narrowed down the prompt: through secondary research, we decided to focus on the mobility challenges faced by people who are blind or visually impaired when navigating New York City and similar urban environments.

Darshan Alatar Patel, Lauren Fox, Alina Peng and Chanel Luu Hai are interns at Moment/Verizon in New York. Darshan is pursuing an MFA in Interaction Design from Domus Academy in Milan, Lauren is an incoming junior at Washington University in St. Louis pursuing a BFA in Communication Design, Alina is pursuing a BA in Philosophy, Politics and Economics (PPE) with a Design Minor at the University of Pennsylvania, and Chanel is pursuing an MFA in Design & Technology at Parsons School of Design. They’re currently exploring the intersection of mobility challenges and technology in urban environments. You can follow the team’s progress this summer on Momentary Exploration.
