The Homestretch

Aziz Ghadiali
MHCI Capstone 2021 — Team CarMax
5 min read · Jul 26, 2021

The finish line is in sight for team Hi, Mileage! But that hasn’t slowed our rapid prototyping and testing efforts. Nearing the final stretch of the project, our focus has been on creating more functional tools to train, test, and present our conversational agent at higher levels of fidelity.

The Intelligent WoZ Research Tool

The first of our two major prototypes for this sprint was an interface to allow a Wizard of Oz simulation of a proactive voice assistant. This prototype was derived from some earlier research we had conducted using TTS to simulate a voice assistant. Our goal for this prototype was to create a tool for CarMax researchers to replicate and build on the discovery work we have been doing to design our conversational agent.

Our WoZ research interface

Key features of the system

When designing the interface in ReactJS, we tried to improve on our earlier methods by creating a system that can dynamically pull data for a specific car to inform the messages we pre-loaded from our own studies, automatically record transcripts of conversations between the user and the “agent,” and even capture utterances to provide intelligent recommendations for statements the WoZ researcher could respond with.
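As an illustration of that last feature, here is a minimal sketch of how captured utterances might be matched against pre-loaded responses to surface recommendations for the researcher. The function and type names are our own for this example, not the actual tool's code; it simply ranks canned responses by keyword overlap with the utterance.

```typescript
// Hypothetical sketch: rank pre-loaded responses for a WoZ researcher
// by keyword overlap with the user's captured utterance.

type CannedResponse = { keywords: string[]; text: string };

// Lowercase and split an utterance into word tokens.
function tokenize(s: string): string[] {
  return s.toLowerCase().split(/\W+/).filter(Boolean);
}

// Score each canned response by how many of its keywords appear in the
// utterance, then return the best matches first.
function recommendResponses(
  utterance: string,
  responses: CannedResponse[],
  topN = 3
): CannedResponse[] {
  const tokens = new Set(tokenize(utterance));
  return responses
    .map(r => ({ r, score: r.keywords.filter(k => tokens.has(k)).length }))
    .filter(x => x.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, topN)
    .map(x => x.r);
}
```

A real system would likely use fuzzier matching, but even this level of filtering narrows the researcher's choices in real time.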

The Alexa Skill

In addition to the proactivity simulated with our WoZ research tool, the team had to prototype and test the reactive capability of the conversational agent. To do this, we used Voiceflow to prototype a CUI and deployed it for testing as an Alexa skill.

From notes to intents

During our many previous rounds of testing, our team recorded and organized the countless utterances spoken by our participants. These utterances are the literal phrases and questions that people use to communicate with a voice interface.

These utterances are then organized into user goals, or intents. Intents are not the literal words spoken but rather the intended actions that an utterance is used to invoke. Together, these intents and utterances help our team begin to model the interactions a user may have with our conversational agent.

An interaction model from one round of testing
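The utterance-to-intent mapping described above can be sketched as a simple data structure. The intent names and utterances below are illustrative examples of the kind of entries our models contained, not the actual interaction model.

```typescript
// Illustrative sketch: grouping observed utterances under the intent
// each one invokes. Intent names here are hypothetical examples.

type Intent = { name: string; utterances: string[] };

const intents: Intent[] = [
  {
    name: "GetPrice",
    utterances: ["what is the price of this car", "how much does it cost"],
  },
  {
    name: "GetMileage",
    utterances: ["how many miles does it have", "what's the mileage"],
  },
];

// Resolve a literal utterance to the intent it expresses, if any.
function resolveIntent(utterance: string): string | undefined {
  const normalized = utterance.toLowerCase().replace(/[^\w\s']/g, "").trim();
  return intents.find(i => i.utterances.includes(normalized))?.name;
}
```

Platforms like Voiceflow and Alexa build on exactly this separation: the designer supplies sample utterances per intent, and the platform generalizes the matching.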

Prototyping reactivity

Using our interaction models, members of the team started working in Voiceflow, a voice UI prototyping tool, to create an experience where users could ask questions of our conversational agent without us having to facilitate in real time. After working through the quirks of the tool, we found Voiceflow to be a rather powerful and, most importantly, fast tool for building interactions.

We used our interaction models and research notes to start building intents and exploring how they relate to one another. We observed in our research that many questions do not live in isolation: the responses users received from the conversational agent would lead to one tangential question after another. Thanks to the modular nature of Voiceflow, our team was able to design for these conversational turns throughout our prototype.

Example of the relationship of multiple intents in Voiceflow
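These intent-to-intent relationships can be thought of as a simple adjacency map, similar to how linked blocks connect in a Voiceflow canvas. The sketch below uses hypothetical intent names to show the idea; it is not the project's actual intent graph.

```typescript
// Hedged sketch: follow-up relationships between intents modeled as an
// adjacency map. All intent names are hypothetical examples.

const followUps: Record<string, string[]> = {
  GetPrice: ["GetFinancing", "CompareVehicles"],
  GetMileage: ["GetPrice"],
  GetFinancing: [],
  CompareVehicles: ["GetPrice"],
};

// Which tangential questions a user might reasonably ask next,
// given the intent they just invoked.
function nextIntents(current: string): string[] {
  return followUps[current] ?? [];
}
```

Modeling the turns this way makes it easy to spot dead ends, i.e. intents with no designed follow-ups, before a participant hits them in testing.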

Hey Alexa, I mean ‘Hey CarMax’

In order to test these interactions, we transferred the Voiceflow prototype to an Alexa skill that participants could use to ‘talk’ to our conversational agent. We had potential users walk around our trusty Prius and ask the agent questions about the car. Our goal was twofold:

  1. Rapidly capture utterances to increase accessibility and help build our library of intents
  2. Gather feedback on the interactions we had already created
Testing the reactivity of our conversational agent

As expected, our prototype was unable to answer many of the questions that our participants asked. While identifying where the agent was lacking was a primary goal of the test, it was still difficult for us to hear our prototype say that it was unable to answer a question. Beyond these gaps between the user and the agent, here are some of our key findings from the test:

  • Users want the ability to compare vehicles. If they already have the ability to interact with the data for one car, then they expect to be able to access others as well.
  • Many users found the cadence difficult: “Hey CarMax”… beep… “what is the price of this car?” This interaction was created in response to drivers needing feedback before asking their questions, and it points to an interesting usability difference between users who are driving and those who are not.
  • Most users wanted some indication of what questions they could ask and which subjects the agent could not handle. Continually failing to get a response was tiresome for some users.

With these and many more findings in hand, our team will continue to build out our prototype, and publish the skill soon so you can try it out for yourself!

Presenting to the greater design community at CarMax

Our WoZ method for testing with real customers in real cars

We were also lucky to get the chance to showcase our design work at the bi-weekly CarMax UX Design stand-up, where different design teams present their work and solicit feedback. Gifted the largest presentation slot, we shared the work we did testing with real customers in Virginia Beach and received some great questions and feedback. The excitement and interest expressed by the CarMax design community really validated the value of our work and was an amazing experience for our team!

There’s more to come!

In addition to the two prototypes we built this past sprint, we have been finalizing the design of a future customer journey/service blueprint to envision how our conversational agent should exist throughout the end-to-end car-buying process across contexts and modalities, and starting work on a concept video to showcase our “hands-on” solution! It’s safe to say that with 3 weeks to go we have our work cut out for us, but we are excited for the final leg of this journey and for showcasing our final deliverables!


Using my time to explore innovative ideas with conversational AI. Master of Human-Computer Interaction from Carnegie Mellon.