Week 3: Motion states, user journey, and conversational scripts

This post documents week three of the Interaction Design Studio voice user interface project (Carnegie Mellon University School of Design, Fall 2020). Team: Alice Chen, Hannah Kim, Karen Escarcha, and Catherine Yochum.

Bringing Aero to life through motion

Aero, the Motional voice assistant, will be the predominant mode of interaction for passengers in our autonomous ride-share vehicles. To support the voice user interface, we drafted ten motion graphics to visually accompany Aero as it engages with passengers: Idle, Listening, Speaking, Thinking, Notification, Alert, Result, No Result, Greeting, and Arrived.
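To make this state vocabulary concrete, here is a minimal TypeScript sketch of how the ten states might be modeled. The type and identifier names are our own, purely for illustration; nothing here reflects Motional's actual implementation.

```typescript
// Illustrative sketch only: the ten drafted motion states as a union type.
// Names mirror our motion-graphic labels; this is not Motional's code.
type MotionState =
  | "Idle"
  | "Listening"
  | "Speaking"
  | "Thinking"
  | "Notification"
  | "Alert"
  | "Result"
  | "NoResult"
  | "Greeting"
  | "Arrived";

// The core voice-interaction loop we animate most often:
// Idle -> Listening -> Thinking -> Speaking -> Idle.
const coreLoop: MotionState[] = ["Idle", "Listening", "Thinking", "Speaking", "Idle"];
```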

Feedback from professors

We showed our professor this second iteration of motion designs and received positive feedback: the motions were playful, and there was a clear association between form, motion, and meaning. Based on further critique, our areas for improvement include:

  • Is every motion state necessary, or can some voice interactions occur without a motion representation? For example, if Result and No Result are only triggered by a specific act of system processing, how important is it to show them as visuals? We decided to remove the Result and No Result motion states for this very reason.
  • Is there enough contrast among states? Contrast is important so passengers can quickly recognize the meaning of a form. We feel the Greeting and Arrived states have enough contrast, but plan to further explore how to make our other states unique while still functioning as an ecosystem.
  • Are the potential scenarios behind Alert and Notification too similar? If the Alert state really is a matter of urgency to a passenger, then Aero might move directly to the Speaking state and bypass the need for an Alert visual. Our professor recommended removing the Alert state and expanding the Notification state to include multiple instances that relay different levels of important information to the user (for example, the color of the circle could indicate the level of urgency). We like this idea and plan to remove the Alert state and explore ways to add instances to the Notification state; one way such urgency levels might be encoded is sketched after this list.
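As a rough sketch of the expanded Notification idea, an urgency level could drive the color of the circle. The levels and hex values below are placeholders we invented for exploration, not final design decisions.

```typescript
// Placeholder sketch: a Notification instance carries a color-coded urgency.
// Levels and colors are assumptions for exploration, not final choices.
type Urgency = "info" | "important" | "urgent";

interface AeroNotification {
  message: string;
  urgency: Urgency;
}

// The circle's color signals urgency at a glance.
const urgencyColor: Record<Urgency, string> = {
  info: "#7FB5FF",      // calm blue for low-stakes updates
  important: "#FFC857", // amber for information worth noticing
  urgent: "#FF6B6B",    // red absorbs what the removed Alert state covered
};
```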

Mapping the user journey

We drafted a user journey based on a previous storyboard. Fortunately, this was an instance where we could meet in person and collaborate using an old-fashioned whiteboard with stickies. We eventually digitized the user journey in Figma when it came time to get feedback.

Our main goal in creating a user journey was to imagine how and when passengers might interact with Aero. We broke the user journey down into contexts that aligned with our tourism-themed scenario. We considered the goals of the passengers as well as what they might be doing and thinking. Finally, we examined what mode of conversation (controlling, guiding, delegating, collaborating) might be taking place between the passengers and Aero, and through which modalities (mobile, voice, visual). The modes of conversation were incorporated after a guest lecture on conversation design from CMU professor and cybernetician Paul Pangaro.

Draft User Journey

One noticeable change from earlier iterations of our storyboards is that we decided to change the demographics of our users. Originally we set out to design for empty nesters, but in thinking ahead to what we could realistically film for our concept video given the pandemic, we opted for a more accessible audience of individuals in their late twenties and older. We believe this demographic would be just as likely, if not more likely, to be early adopters of the autonomous ride-share model we propose.

We showcase the journey of Amin and Meredith, partners who enjoy traveling but want to feel safe in unfamiliar surroundings. They have just gotten off a plane for a vacation in Pittsburgh and are taking a Motional car from the airport to the hotel.

Feedback

The main piece of feedback we received on our user journey was that the unexpected-events and itinerary-planning contexts might flow better if switched, since passengers would likely want to know the itinerary before agreeing to anything else. We made this update as shown below.

User Journey

Planning for conversations

We were tasked with imagining the breadth of conversations that could occur within our scenario, as well as writing scripts for specific conversations that take place along our user journey. We split the user journey up by context: pickup & dropoff, settling in, itinerary planning, and unexpected events.

When we came back together to discuss our scripts, we were left with further questions, such as: When does Aero speak first? What preferences does Aero know about passengers?

Feedback

When we discussed the scripts with our professors, we received feedback that we needed to narrow and condense them (thinking ahead to the 3-minute concept video). We decided to shorten the conversations around adjusting vehicle preferences and to remove the scene in which the vehicle stops for an ambulance. To show Aero's uniqueness, we will further expand on the idea that Aero can suggest facts and photo opportunities for the passengers as they drive through the city.

Draft conversational scripts

Wireframing passenger screen displays

Our next step was to wireframe the visual displays the passengers would see. We learned from a contact at Aptiv (Motional's parent company) that the current car model the company is using is the BMW 5-Series, and that the interface dimensions are 1150x430px (driver display screen) and 800x480px (headrest-mounted screens).

We decided to design for the headrest-mounted screens, as we envision the passengers sitting in the back seats, much as they would in an Uber or Lyft.
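For convenience while wireframing, the reported dimensions can be captured as constants. The identifier names below are ours; only the pixel values come from our Aptiv contact.

```typescript
// Screen dimensions reported by our Aptiv contact, in pixels.
// Identifier names are our own, for wireframing convenience.
const DRIVER_DISPLAY = { width: 1150, height: 430 };
const HEADREST_DISPLAY = { width: 800, height: 480 }; // our design target

// The aspect ratio is useful when scaling wireframes in Figma.
const headrestAspect = HEADREST_DISPLAY.width / HEADREST_DISPLAY.height; // ≈ 1.67 (5:3)
```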

In thinking through what goes on the screens, we asked ourselves the following questions:

  • What are the features and capabilities accessible on the screen? What goes on the home screen as it relates to the travel scenario?
  • What can be controlled from the car interior versus on the screen?
  • What can be controlled from the car screen versus the mobile app?
  • Where is Aero positioned when there is another visual display on the screen? How prominent does Aero need to be? Should Aero include closed captions?
  • For a map visualization, is it more navigational or does it try to show what Aero is seeing?

Aero car interface brainstorm

We decided on some system principles to help us better consider what should go on the car interface. For example, when passengers are in the car, we want to limit use of the mobile app and instead increase conversational interaction between Aero and the passengers. Another way we wanted the interface to encourage conversation was to make the menu less prominent: we chose a hamburger menu over an always-visible side menu so that passengers might converse with Aero rather than interact with the display screen.
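One way this conversation-first principle might translate into interface state, as a hedged sketch (the state shape and function names are hypothetical, not part of our actual build):

```typescript
// Hypothetical sketch of the conversation-first principle:
// the menu stays collapsed unless a passenger explicitly opens it.
interface ScreenState {
  menuOpen: boolean;    // hamburger menu, collapsed by default
  aeroVisible: boolean; // Aero's motion graphic is always on screen
}

const initialState: ScreenState = { menuOpen: false, aeroVisible: true };

// Tapping the hamburger icon is the only way to surface the menu,
// nudging passengers toward talking to Aero instead.
function toggleMenu(state: ScreenState): ScreenState {
  return { ...state, menuOpen: !state.menuOpen };
}
```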

When going through these features, we realized that the cost of the trip needed to go somewhere. Specifically, if the current user journey encouraged passengers to add a photo-op stop to their route, wouldn't that affect the price of the trip? We felt this was moving into questionable territory: Aero's suggestions could be used to try to get more money out of passengers. We decided to change the scenario so that Aero suggests an upcoming landmark and encourages passengers to roll down their window and take a photo (eliminating the need to add a stop to the route). We also decided not to show the cost of the trip on the car display and will instead leave that to the mobile app interface.

Wireframe sketches for passenger headrest display
