Illustration by Esther Amaku

Testing an In-Vehicle Digital Experience in a Lab

How frog was able to design and test the experience for an in-vehicle entertainment system with potential users before making it real in the car.

Julia Zhu · Published in frog Voices
Jun 5, 2019 · 7 min read


When starting an engagement on a new project, it’s important to test early and test often. Sometimes this means testing experiences by any means possible to make sure they work, even before we have a product in hand. This is precisely what we did when working with a client on a multi-platform entertainment system. In order to gain valuable real-world user insights, we used a variety of prototyping tools and approaches that let us replicate the experience.

What were we testing?

A design concept for an entertainment system scaled across mobile and in-vehicle platforms. We worked with two proto.io prototypes, one for mobile and one for in-vehicle, to illustrate the complete digital experience.

Why test an automotive experience in a lab, not a car?

While we always want to be as close to the real thing as possible when testing for user insights, that’s not always the best approach. frog has extensive experience running usability tests in vehicles driven by participants; however, the logistics make it difficult to collect the most valuable data. A vehicle has to be adapted to the task, capturing and sharing data can be complicated, and getting legal sign-off on the risks involved is hard.

For this initial stage of the project, we found that it would be easier for the user to focus on the task and the device without having to contend with the real-world vehicle experience. We also wanted the ability to test cross-platform, which would have been more difficult in a car. This is why we created a test process that took care to lift the cognitive load from the participant, allowing us to obtain the most useful feedback during the concept design stage.

The lab environment allows big changes to be considered early on, and is less expensive and faster to replicate. Because the process is highly iterative, it can be used early and often in the design process, and even into early production phases, without conventional testing slowing down overall production.

What does the lab look like?

The user testing sessions were performed in an indoor testing facility, which included designated areas for both testing and observation, separated by a one-way mirror.

Research Facility Floor Map

In the testing area, there were two cameras connected to laptops, a driving simulator to provide context, and a tablet device to run the prototype. In total, eight testing sessions were conducted in this room.

In the observation area, the rest of the team watched the tests, capturing insights and recording observations throughout each session.

Driving Simulator

A driving simulation system was installed for the user to quickly settle into the context of driving without the need for too much pretext.

The City Car Driving simulator ran on a Windows system.

A TV (45" Samsung) was connected to the laptop running the driving simulation program.

The steering wheel (Thrustmaster TMX Racing Wheel for Xbox One) was plugged into the same laptop in order to control the driving direction.

A Tablet (iPad Pro) ran the proto.io prototype that the participants performed specific tasks on.

The Recording System

A USB camera (Logitech C922X Full HD Webcam) was activated to record the participant’s entire interaction during the session.

The Tablet running the prototype was connected to the laptop with OBS (Open Broadcaster Software) to capture live screen recording.

OBS was used to record both video inputs simultaneously.

Another Wide Angle Camera was located on the wall above the one-way mirror to stream live testing events to the observation room.
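OBS handled the compositing of the two feeds, but the underlying picture-in-picture idea is easy to sketch. Below is a minimal illustration in Python with NumPy; it was not part of the actual setup, and the frame sizes and overlay position are arbitrary assumptions for the example.

```python
import numpy as np

def composite_pip(screen: np.ndarray, webcam: np.ndarray,
                  margin: int = 10) -> np.ndarray:
    """Overlay the webcam frame in the bottom-right corner of the
    screen-capture frame, the way OBS stacks two video sources."""
    out = screen.copy()
    h, w = webcam.shape[:2]
    H, W = screen.shape[:2]
    out[H - h - margin:H - margin, W - w - margin:W - margin] = webcam
    return out

# Synthetic frames: a black 720p "screen capture" and a white "webcam" feed
screen = np.zeros((720, 1280, 3), dtype=np.uint8)
webcam = np.full((180, 320, 3), 255, dtype=np.uint8)

frame = composite_pip(screen, webcam)
```

In practice OBS does this per frame for you; the sketch just shows why a single combined recording is convenient for later review, since the participant’s face and the prototype screen stay in sync in one file.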

How was the testing performed?

Participants were asked to complete a variety of tasks with minimal moderator input and to describe their intent and thoughts as they completed each task. Each test session took 45 minutes to complete and included structured tasks along with open-ended questioning to gain insights into how the participants felt at different points and to ascertain if they understood the design and operation as they were using the system.

Persons in the testing room

Having the right number of people helps keep the testing session private and not too intimidating for the participant. Here, numbers were limited to three: one participant, one moderator and one note taker. The moderator usually acts as MC throughout a session but leaves the room at the end, when additional questions are posed to the participant. This prevents the session from feeling like an interrogation.

Testing Session

Once the participant enters the room, they are formally introduced to the driving simulation system. Sitting in the driver’s seat, they have the functional steering wheel in front of them. The participants are given some time to orient themselves with the system before the testing session is formally started.

We then set the scene with a short prompt, such as “Imagine yourself stepping into a friend’s car and they have this new in-vehicle entertainment system…” or “Picture that you have just bought a new car, and this is the first time you have started the in-vehicle console…” in order to set expectations for the participant.

Allow the participants to familiarize themselves with the driving simulation before introducing them to the entertainment system. Next, it can be helpful to encourage them to explore the system freely and describe what they see on the screen. After this step, they are ready to begin the test session.

Participants are actively encouraged and reminded to think out loud and to voice their intentions during the session. These insights are crucial for the success of any validation testing, as they allow us to test intention against actual user action.

When the test has been completed, end with a recap of the session. Ask the participants about their favorite and least favorite parts of the experience, as this helps to get a better sense of how the design came across.

Post Testing Session

After each session, we run a quick synthesis to capture quotations, insights and observations to help form key takeaways for each participant.
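A lightweight way to run that synthesis is to tag each captured note with a theme and tally themes across participants, so recurring feedback surfaces quickly. A hypothetical sketch (the notes, theme names and quotes below are invented for illustration, not data from the study):

```python
from collections import Counter

# Session notes captured after each test, each tagged with a theme.
notes = [
    {"participant": "P1", "theme": "navigation", "quote": "Where do I go back?"},
    {"participant": "P2", "theme": "navigation", "quote": "The back button is hidden."},
    {"participant": "P2", "theme": "profiles",   "quote": "What does this icon mean?"},
    {"participant": "P3", "theme": "navigation", "quote": "I got lost in the menu."},
]

def theme_counts(notes):
    """Tally how many notes fall under each feedback theme."""
    return Counter(n["theme"] for n in notes)

counts = theme_counts(notes)
```

Even a simple tally like this makes it obvious which themes recur session after session, which is what feeds the key takeaways for each participant.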

What we learned from our testing experience

You don’t have to have a vehicle to test an in-vehicle design concept

Design concepts come in many different forms and shapes. It is essential to figure out the right environment for your testing subject. In our case, with the concept in its early to middle stage, testing in a more stable environment like a lab was conducive to having an in-depth conversation and understanding a participant’s real opinions.

Becoming familiar with the driving simulator is just like learning to drive a new car

People are used to how their car responds to their movements, so if the driving simulator steering wheel turns more dramatically than their own vehicle’s, it can create unnecessary anxiety. To overcome this, participants are given time to familiarize themselves with the driving simulator so they are not surprised by its actions and reactions during the test.

Make sure the tasks in the test plan match the cognitive load of the right mode: “driving” or “parked”

People, in general, are very selective of things they do while driving. For instance, it is rare for someone to discover new content while driving. However, in most cases, people are comfortable turning the volume up or down when a car is in motion. If we ask a participant to discover something new while in driving mode, there is an 80 percent chance they would fail the test, which doesn’t necessarily equal the failure of the feature.
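One way to keep this straight is to tag each task in the test plan with the mode it belongs to and check the script before a session. A hypothetical sketch (the task names and load ratings are invented for illustration, not the actual test plan):

```python
# Each task is tagged with the vehicle mode it should be run in.
# "driving" tasks must be low cognitive load; anything heavier
# belongs in "parked" mode.
TASKS = [
    {"name": "adjust volume",        "load": "low",  "mode": "driving"},
    {"name": "skip to next track",   "load": "low",  "mode": "driving"},
    {"name": "discover new content", "load": "high", "mode": "parked"},
    {"name": "set up a profile",     "load": "high", "mode": "parked"},
]

def misassigned(tasks):
    """Return names of tasks whose cognitive load does not fit their mode."""
    return [t["name"] for t in tasks
            if t["mode"] == "driving" and t["load"] != "low"]

issues = misassigned(TASKS)
```

A check like this catches the case described above: a high-load task such as content discovery scheduled for driving mode would be flagged before it skews the results.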

Participants from different age groups have significantly different behavior patterns

One of our 20-year-old participants was so comfortable with the design that they only needed a quick first glance to like it, while a 60-year-old participant struggled to even figure out what the “profile icon” meant. This shows that people with different tolerances for technology tend to respond dramatically differently in how much time they need to adapt to using it. It’s important to consider both patterns and to find the sweet spot.

It is qualitative, not quantitative

Testing is not always about the numbers, which is why it’s important to dig through to the ‘why.’ In the past, we were able to form relatively clear feedback themes from just the first six participants in almost every user testing event. This helped us to discover the key highlights and drawbacks of the design quickly, and to iterate on the design fast enough to see the changes in the final product.

All in all, when testing in-vehicle design concepts, this format provides efficiency and accuracy that are harder to gain through a cumbersome in-vehicle testing event. It can be planned and conducted within 1–2 weeks without adding too much burden to the design process. It encourages us to collect user feedback at an early stage, which is extremely helpful since in-vehicle design norms have not yet been established in the industry. With more and more attention on topics such as autonomous vehicles and the future of in-vehicle design, being able to run quick tests enables us to accelerate a path towards a truly human-centered design process for an in-vehicle product.

Special thanks to the guidance and support from Theo Calvin, and the amazing team behind this: Patrick Marsh, Meredith Nguyen, Andrew van Hyfte, Birdie Chen and Ashley Conway.

Julia Zhu is an interaction designer at frog Design, where she focuses on delivering intuitive and delightful human experiences through digital and physical products.
