Summer Sprint 3: Planes, Trains, and Autonomous Vehicles

Eleanor Hofstedt
Published in 99P Labs
Sep 1, 2022 · 8 min read

Written by the MHCI x 99P Labs Capstone Team
Edited by 99P Labs

The MHCI x 99P Labs Capstone Team is part of the Master of Human-Computer Interaction (MHCI) program at Carnegie Mellon University.

Catch up on Summer Sprint 2 here!

The team started this sprint by brainstorming and planning together in Pittsburgh. We then spent the majority of the sprint working remotely from both NYC and San Diego. Team members attended two major UX conferences and learned skills to apply to our project.

Industry Learnings

We attended lectures and workshops at two conferences: Good Research and UXPA International. Before the conferences, we discussed as a team which skills we needed to develop as designers and researchers in order to better address challenges related to our Capstone project.

Raz Schwartz, Senior Product Insight Manager at Spotify, presents at Learners’ Good Research conference in NYC. He shares best practices for anticipating research needs.

Here are two key takeaways from the conferences:

We solidified our understanding of quantitative research methods through talks held at Good Research. Because our project applies to a large base of all transportation users, we aim to design for the average user, which is a broad group. Talks from speakers like Colette Kolenda and Emily Chu helped us understand how to apply quantitative research methods to collect and analyze large amounts of data from varying users.

We better understand how to address points of friction in a user’s journey, such as in-trip bumps in the road, lack of trust in AVs, and disruptions people experience when taking public transportation. Many talks at UXPA, including one specifically about addressing these points of friction, helped us develop these skills further.

Successful Remote Collaboration

The pandemic has made remote work more popular and more widely accepted than anyone could have imagined. Companies and schools all over the world have moved to remote or hybrid work. Knowing that remote work is becoming a norm, our team planned this sprint to meet, prototype, test, and synthesize remotely, and primarily asynchronously.

Our team working remotely while several of us attended a conference in NYC.

We discovered some great benefits:

Ad hoc meetings are more targeted and focused. Longer, routine meetings allowed us to work through an agenda with multiple items to address and resolve, but ad hoc meetings enabled us to focus on the most urgent and important issues in our sprint.

More flexibility in individual work time results in greater productivity. We learned how to use small gaps in our days to meet and get work done.

However, there were also challenges:

One thing that stood out was the communication gap. Since we were not physically working in the lab, we had to rely on email or Slack to communicate with each other. Although we were usually able to get feedback in time, it was not as effective as talking to someone sitting right next to us.

There were also management challenges. The transition from in-person to remote work wasn’t always smooth. All of us needed to take responsibility for keeping ourselves organized and making sure our work quality met expectations.

Brainstorming, Prototyping, and User Testing

During Sprint 2, our team started testing low-fi prototypes of built-in screens in an autonomous people mover, and we created a built environment that simulated riding in a car. We learned how passengers react to unexpected situations and what they need the built-in screen to resolve for them.

Brainstorm

We wanted to build on these learnings and further develop the capabilities of our attendant ecosystem in Sprint 3.

To start, we conducted a brainstorm of all the capabilities we imagine our AV’s ecosystem would have. This brainstorm was based on the research we completed throughout the Spring semester, and highlighted the level of complexity we envision for this system.

Team brainstorm, captured on a Miro board, of the capabilities and use cases we envision for a multi-modal attendant ecosystem in a future autonomous people mover.

Learning Objectives and Hypotheses

We then identified the highest-priority capabilities and built a low-fi mobile app wireframe to test our initial designs. Our primary learning goals for this test, as laid out in our Sprint 3 Research Protocol, were:

  1. Assess if a user can use our system for its intended function, and understand how users expect to use the system
  2. Evaluate if the system allows a user to complete specific tasks that arise in a shared transportation setting
  3. Evaluate user preferences and comprehension of in-trip notifications, with the target being notifications that are both effective and unobtrusive

We had a few hypotheses that informed our design and our research protocol, including:

  1. We should design mobile-first, because many passengers will use their personal devices to access this attendant ecosystem, and we will be able to adapt the mobile design to a built-in screen with more real estate when we expand modalities
  2. Passengers will be familiar enough with rideshare apps to navigate the basic flows of our design

Prototyping

We started by creating a map of all the screens and interactions we would build out in our wireframes.

A map of the capabilities that we envision for our multi-modal system in an autonomous people mover. This map includes all capabilities, and indicates which elements of the system will be available via which modality (built-in screens, personal devices, or audio/haptic interfaces).

A zoomed-in look at the screen map, which shows an example of the screens. Some capabilities of the system include options for passengers to adjust in-trip preferences, hail a ride, and get info before their trip begins.

We then developed a wireflow based on this map, which we prototyped in Figma to test with our participants.

A zoomed-out view of all the mobile wireflows that we tested in usability tests.

A zoomed-in view of four screens that we tested. These screens allow passengers to view trip details and modify their drop-off location.

Another zoomed-in view of four screens that we tested. These screens allow passengers to chat with a virtual agent when they encounter an unexpected vehicle stoppage.

User Testing

Number of Participants: 10

Method: Low-fi prototype usability testing with think-aloud

Our team shared our wireflows via Figma and provided participants with four scenarios and tasks to complete. Two of the scenarios required participants to take proactive action in the system, and the other two solicited reactive responses to notifications and nudges from the system.

You can see our scenarios in our Usability Test Protocol.

Findings

  1. Emergency stop language was misleading — Many participants felt the word ‘emergency’ was too extreme when they simply wanted to make an unplanned stop; it didn’t feel appropriate, but it seemed like the only option given the scenario
  2. Notifications require too many ‘confirmations’ or ‘dismissals’ from users — Participants did not want to have to confirm that they received a notification. They mentioned that they might miss the notification and feared that, if they did not confirm receipt, something undesirable would happen. For example, many assumed that if they did not confirm the notification about disembarking, the vehicle might start moving.
  3. Users could not find ‘Lost and Found’ — In a scenario in which they had to report a lost item, many participants did not think Lost and Found should be under “Report a Problem”. Given how common this problem is, many felt that it should live more prominently.
  4. Users expected the car would immediately drive back with their lost belongings — Once they were able to report their lost item, many users expected an immediate result. They envisioned a system connecting the vehicle and their personal device, such that the vehicle would locate them and return right after receiving the lost item report.
  5. Users have varying needs when it comes to understanding the reason for an unexpected stop — In a scenario in which the vehicle came to a stop, some users wanted to know *why*, while others only cared how long the delay would be. Users who expressed interest in why the vehicle was stopping did so purely out of curiosity, and many noted they could simply look out the window to see the reason. Others didn’t care about the reason but did want an exact time estimate for when the vehicle would start moving again, so they could decide whether or not to continue the ride.

Design Implications

Test multiple content iterations — Our team will A/B test different content for the in-app options to ensure the language aligns with users’ mental models of the tasks they are completing. For example, we will try alternatives to the word ‘emergency’ and consider other ways for passengers to indicate that they want to stop sooner than originally planned.
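As a rough illustration (not our actual test setup), here is a minimal sketch of how copy variants for that ‘stop’ action could be assigned and logged; the variant labels, wording, and hashing scheme are all hypothetical:

```typescript
// A minimal sketch of instrumenting a copy A/B test for the "stop" action.
// The variant labels, wording, and hashing scheme are all hypothetical.

type CopyVariant = { id: string; stopLabel: string };

const variants: CopyVariant[] = [
  { id: "A", stopLabel: "Emergency stop" },
  { id: "B", stopLabel: "Stop early" },
  { id: "C", stopLabel: "Request an unplanned stop" },
];

// Deterministically assign each participant to a variant so they see
// consistent copy across sessions.
function assignVariant(participantId: string): CopyVariant {
  let hash = 0;
  for (const ch of participantId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return variants[hash % variants.length];
}

// Log which label a participant saw so it can be joined with task outcomes.
const variant = assignVariant("p-007");
console.log(`p-007 sees "${variant.stopLabel}" (variant ${variant.id})`);
```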

Consider the level of specificity in explaining unexpected scenarios — We will also continue to explore specificity levels when it comes to giving passengers information. Our goal is to strike a balance between giving enough information to instill trust in the system, while not providing superfluous details that a passenger does not need to know.
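To make that trade-off concrete, here is a minimal sketch of how an unexpected-stop notification might expose two specificity levels; the interface, field names, and copy are illustrative assumptions rather than our final design:

```typescript
// Illustrative sketch: one unexpected-stop event rendered at two specificity
// levels. The interface, field names, and copy are hypothetical.

interface StopNotification {
  durationEstimateMin: number; // what many users cared about most
  reason?: string;             // optional detail for curiosity-driven users
}

function renderNotification(n: StopNotification, detailed: boolean): string {
  const base = `We expect to be moving again in about ${n.durationEstimateMin} min.`;
  // Surface the reason only when the passenger opts into more detail,
  // so the default message stays unobtrusive.
  return detailed && n.reason ? `${base} Reason: ${n.reason}.` : base;
}

const stop: StopNotification = { durationEstimateMin: 3, reason: "pedestrian crossing ahead" };
console.log(renderNotification(stop, false)); // minimal: time estimate only
console.log(renderNotification(stop, true));  // detailed: includes the why
```

Defaulting to the time estimate keeps the message discreet, while the optional reason serves the curiosity-driven users we saw in finding 5.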

Sprint 4 Planning

As we look ahead to the final sprint of our project (!), we plan to:

  1. Pinpoint objectives and goals for our final prototype and summer deliverables
  2. Identify which system capabilities we will build out versus which capabilities we will recommend to our client for further exploration
  3. Continue to build out the design system that we will weave throughout all modalities of our attendant ecosystem
  4. Build and test the integration of the various system modalities, ensuring that they work in a cohesive and comprehensive way

While it’s bittersweet that our journey is almost at its end, we’re excited to bring all of our learnings to life in our final design, which will inform the future of shared AV transportation.

Follow 99P Labs here on Medium and on our LinkedIn for future updates on this project and other student research!
