Merging into the right lane

Aditi Dhabalia
SWPPA x pathVu x MHCI 2021 Team AMATA
6 min read · Jul 20, 2021

We are Team AMATA, a student team from Carnegie Mellon University's MHCI program. We partnered with SWPPA and pathVu to advocate for older adults and people with disabilities, and to create a pedestrian navigation experience that is safe and accessible for ALL.

Figuring out the optimal flow

In our previous sprint we continued down our two main research avenues: data collection and visualization. To figure out the optimal data reporting procedure, we designed several options for user flows and screens to test with participants. In this sprint we tested those prototypes with 12 participants and iteratively synthesized our findings to understand what our users thought about the overall flow and process of reporting data.

Pictures first, info second

Users seemed to orient the reporting process around the picture they took, and found it intuitive to categorize their report into specific obstacles or features once they could see the photo. They were also confused about the right category for some of their reports, and often wanted to select multiple categories when they weren't sure.

I’m still not sure what to report

Despite a brief education section at the start, participants were unsure where the threshold lay between an issue too minor to report and one severe enough to warrant a report.

Why am I reporting positive things?

One of the biggest challenges we encountered was finding a balance between collecting “positive” and “negative” data. Users were confused about why they were “reporting” (a word with a seemingly negative connotation) helpful accessibility features like the availability of a ramp or elevator. We need to find language and a flow that don't bias people toward negative reports or confuse them about why they are collecting both kinds of data.

I want to know how I’m helping

Participants found it really motivating to know that their data was helping improve accessibility, but they wanted more information about who they were helping and how their data would be used in the app. Seeing the impact of their actions in more concrete ways would motivate them to keep engaging with the app and collecting data.

Just enough information displayed

As we continue to iterate on the navigation and data visualization prototype, we are also establishing the information architecture and the design system to ensure a consistent and user-friendly experience.

Objective Measures

In our prior research, many people with mobility impairments shared that places advertised as “accessible” often turned out not to be accessible to them at all. They prefer objective information, such as “1 step at entrance” or a photo of the obstacle, so that they can make their own judgment based on their capabilities. We designed a profile setup so that the app can assess the accessibility of a location or establishment against each user's own accessibility needs, and people can adjust their accessibility requirements as needed.

Objective Information

Progressive Disclosure

How do we deliver “just enough” information? It is always a challenge to display the right amount of information, keeping it informative but not overwhelming. Every participant expressed the same sentiment: they want to know at first glance whether they can go somewhere, and they also want as much information as possible. To address both needs, we designed the flow to progressively disclose more detailed information rather than piling up all the data at once.

On the basic map view, we use both icons and color coding to indicate the accessibility of an establishment. We went through several design iterations to ensure that each icon conveys the intended meaning and has enough color contrast.

Base Map and Key
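To give a concrete sense of what “enough color contrast” means for the icons, here is a minimal sketch of the standard WCAG contrast-ratio check that icon colors can be run through. The example colors are placeholder values, not our actual palette; only the formulas come from the WCAG guidelines.

```typescript
type RGB = [number, number, number];

// Convert an 8-bit sRGB channel to its linearized value (WCAG definition).
function channelToLinear(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of a color, per WCAG.
function relativeLuminance([r, g, b]: RGB): number {
  return 0.2126 * channelToLinear(r) + 0.7152 * channelToLinear(g) + 0.0722 * channelToLinear(b);
}

// Contrast ratio between two colors: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(a: RGB, b: RGB): number {
  const la = relativeLuminance(a);
  const lb = relativeLuminance(b);
  return (Math.max(la, lb) + 0.05) / (Math.min(la, lb) + 0.05);
}

// Example: a dark-green "accessible" icon on a white map background.
// WCAG 2.1 asks for at least 3:1 for non-text UI components like icons.
console.log(contrastRatio([0, 109, 44], [255, 255, 255]).toFixed(2)); // ≈ 6.5
```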

Once a location is selected, a color ring indicates, for example, that “3/4 of your accessibility needs are met,” and any major obstacles or hazards are listed with a warning sign. Tapping the card brings the user to a more detailed list of accessibility features and obstacles. Another new feature we are introducing is the “location map,” which shows a zoomed-in footprint of the building and marks the locations of accessibility features, especially entrances. We believe this can help people locate accessible entrances and plan the last mile of their travel more efficiently.

Accessibility Needs, Location Details
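For readers wondering how that ring could be computed, here is a rough sketch of scoring a location against a user's profile. The data shapes, feature names, and the assessLocation function are hypothetical placeholders, not our final data model.

```typescript
type AccessibilityFeature = "ramp" | "elevator" | "automatic_door" | "step_free_entrance";

interface UserProfile {
  requiredFeatures: AccessibilityFeature[]; // chosen during profile setup
}

interface LocationRecord {
  features: AccessibilityFeature[]; // crowdsourced "positive" reports
  obstacles: string[];              // objective notes such as "1 step at entrance"
}

// Count how many of the user's needs the location meets and surface obstacles.
function assessLocation(profile: UserProfile, location: LocationRecord) {
  const met = profile.requiredFeatures.filter(f => location.features.includes(f));
  return {
    met: met.length,                        // e.g. 3
    total: profile.requiredFeatures.length, // e.g. 4 → "3/4 of your needs are met"
    obstacles: location.obstacles,          // shown with a warning sign
  };
}

// Example: a user who needs all four features, at a cafe missing a step-free entrance.
const profile: UserProfile = {
  requiredFeatures: ["ramp", "elevator", "automatic_door", "step_free_entrance"],
};
const cafe: LocationRecord = {
  features: ["ramp", "elevator", "automatic_door"],
  obstacles: ["1 step at entrance"],
};
console.log(assessLocation(profile, cafe)); // { met: 3, total: 4, obstacles: [...] }
```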

Reporter or Collector?

While we have been pursuing two parallel streams of research, our prototype has also developed two distinct modes: a navigation-focused side and a crowdsourcing side. As a team, we now have to combine them into one functional system that lets users easily complete the tasks they need to do.

To start combining the prototypes, we decided to keep the two flows completely separate and test each one as well as the connection between them. However, that raises the question: are there better ways to combine data collection and consumption?

The team is iterating on this question and exploring different techniques, such as combining elements like search results while keeping the map of your reports separate from the regular navigation map. There is an additional incentive: many of our able-bodied users wanted to see their impact, and some are even interested in using the navigation portion of the app. This gives us an even bigger reason to explore the best way of transitioning between, and separating, our two operation modes while creating a great experience for both.

Putting it all together

Two Different Flows

Once a user chooses a mode, they are taken through the flow for that mode.

Two Modes

After setting up a profile, users who primarily use the navigation mode can input the mobility aids they use and their preferences for what accessibility information they want to know about. Users who primarily use the collection mode can learn about who they will be helping and what obstacles and features to look out for. Both are then taken to a base map, which provides route and destination information for those in “Navigation” mode and report details for those in “Collection” mode.

Two Different Primary Functionalities

Once at the “base map,” the two types of users will have easy access to the specific functionalities related to their primary use case. Those in “Navigation” mode will be able to look for destinations and routes, and those in “Collection” mode will be able to add reports. While any user will be able to both report and navigate, the different modes have slightly different UIs to make it easier to reach the functionality that is more relevant to that user.

Iterate, iterate, iterate…

To create the full-fledged experience that we envision for our users, we are working to build a single app that tackles both the data collection functionality as well as the navigation functionality.

We wish creating this multimodal app were as simple as sewing the two parts together. However, it isn't: integrating two fundamentally different user flows and interfaces into a single app takes finesse and deliberation. After all, we want it to feel well-crafted and unitary, not like a Frankenstein app. Therefore, we must prototype, test, prototype, test, and yep… prototype, test.

Some of our ideas for combining the two flows into one app

More specifically, we envision different types of users relying on the two modes to varying degrees. For instance, an older adult with mobility challenges may find herself using only the navigation part of the app, whereas an accessibility advocate might find himself using only the data collection part. Therefore, a large part of our job is to seamlessly connect these different use cases within the app while still maintaining some separation between the modes so as not to overcomplicate the user experience.
