Gaining empathy: Exploring user needs through vision simulations
The first thing we thought was, “this is going to be difficult.”
Do you have 20/20 vision and little to no mobility impairment? If your answer is yes, you’re in luck! You live in a world that is designed for you and others just like you. For those who can’t say the same, the world poses much greater difficulties. This year, our team has decided to focus on designing a product or service to improve mobility for those who are blind or low-vision in New York City or other similar metropolitan areas. With this in mind, we tried to put ourselves in the shoes of the visually impaired population to understand what challenges they face.
To further understand Blind or Low Vision (BLV) user needs—beyond what we have learned through secondary research and interviews with vision-impaired people and accessibility experts—we decided to conduct an empathy experiment.
As fully-sighted designers, it’s impossible for us to truly and fully understand what it’s like to live with a permanent vision impairment. However, through the exercise described below, we were able to temporarily mimic aspects of the BLV experience. Along the way, our own emotional responses to the difficulty we faced and the frustrations we shared helped us to begin to gain empathy for the people we’ll be conducting research with.
In an effort to make our empathy exercise as accurate as possible, we began by using an online Sight Simulator that visualizes the world of someone with vision impairment. The simulator has different modes that show various types of vision impairment, including cataracts, glaucoma, and retinopathy. Using Street View in Google Maps, a user can put in a familiar address and the platform overlays filters in an attempt to show what the environment would look like for someone with vision impairment. The platform was created by SeeNow.org, which is a project of the Fred Hollows Foundation, an organization working towards ending avoidable blindness.
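For readers curious about the mechanics, filters like the simulator's can be approximated with simple per-pixel image transforms. The sketch below is our own dependency-free illustration, not SeeNow's actual code; the blend factors are arbitrary values chosen to mimic the washed-out, hazy look of cataracts:

```python
def cataract_filter(pixel, wash=0.6, haze=0.3):
    """Approximate a cataract view of one RGB pixel.

    Blends each channel toward its gray value (reducing saturation),
    then toward white (adding the hazy glare of a clouded lens).
    The wash/haze factors are illustrative assumptions, not measured values.
    """
    r, g, b = pixel
    gray = (r + g + b) / 3

    def channel(c):
        desat = c + (gray - c) * wash          # pull color toward gray
        return round(desat + (255 - desat) * haze)  # then toward white

    return (channel(r), channel(g), channel(b))


# A saturated red loses most of its color and brightens toward white:
print(cataract_filter((255, 0, 0)))  # -> (184, 112, 112)
```

A real simulator would also blur the image and apply region-specific masks (e.g., a central blind spot for retinopathy), but the same pixel-by-pixel idea underlies those effects too.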
We used the SeeNow Sight Simulator as the basis for constructing our own DIY goggles: one pair for cataracts, the other for retinopathy. All four of us wore the goggles for about 30 minutes each while attempting a few tasks around the office and Soho, our surrounding neighborhood.
The first thing we thought when putting on the goggles was, this is going to be difficult. Colors washed out, our center of vision became blurry, and anything farther than a foot away was hard to make sense of; we relied heavily on audio cues to understand the environment.
Our first task was to navigate to the kitchen in our office and get a drink from the fridge. This short and usually simple task proved quite difficult for us. There is a real sense of fear when you know there are stairs nearby but can't detect them with your sight the way you're used to. We found ourselves relying on the mental maps we had formed during our time in the office and, most importantly, learned how much we depended on light levels, contrast, and color to orient ourselves. Additionally, the cataract-goggles wearer was allowed the use of a white cane. Even without proper experience or training with the cane, it proved immensely helpful in locating walls, floor ridges, and stairs. Regardless, at one point one of us nearly pulled a fire alarm.
Since we know the office space fairly well, we decided to venture to territory that wasn’t so familiar: the surrounding neighborhood.
Upon going outside, we immediately learned that bright daylight washes everything out and reduces contrast, making navigation even more difficult. We walked around a busy block (for the New Yorkers reading this, you know how congested Canal Street can get) to a popular local lunch spot while wearing the retinopathy goggles. This proved especially challenging because of the abundance of pedestrians and obstacles: with reduced peripheral vision, even bright orange construction cones were hard to distinguish.
We should point out here that without training or long-term experience with vision loss, this test more closely simulates the experience of someone with temporary vision impairment.
Through these relatively short empathy exercises, we were able to gain bits of insight into how someone with vision impairment perceives their environment. As helpful as this exercise was in gaining empathy, the short time we wore the goggles can in no way replicate the experience of living with vision impairment day-in and day-out. However, what the exercise did provide us with was an emotional response that we can tap into when designing a product or service to aid those who are visually impaired. Stay tuned to see how we incorporate our findings from this exercise.
Every summer, interns at Moment (which is now part of Verizon) solve real-world problems through a design-based research project. In the past, interns have worked with concepts like autonomous vehicles, Google Glass, virtual reality in education, and Voice UI.
For the 2018 summer project, the prompt is: "Design a near-future product or service that improves mobility for people with disabilities, using granular location data and other contextual information."
Darshan Alatar Patel, Lauren Fox, Alina Peng and Chanel Luu Hai are interns at Moment/Verizon in New York. Darshan is pursuing an MFA in Interaction Design from Domus Academy in Milan, Lauren is an incoming junior at Washington University in St. Louis pursuing a BFA in Communication Design, Alina is pursuing a BA in Philosophy, Politics and Economics (PPE) with a Design Minor at the University of Pennsylvania, and Chanel is pursuing an MFA in Design & Technology at Parsons School of Design. They’re currently exploring the intersection of mobility challenges and technology in urban environments. You can follow the team’s progress this summer on Momentary Exploration.