Seeing the Light: Our Call for a Standard Self-Driving Car Language to Communicate Intent
By John Shutko, Ford Human Factors Technical Specialist for Self-Driving Vehicles
In the journey to develop and deploy self-driving vehicles, there’s a tendency to focus most on the customers who will be riding in these vehicles. At Ford, we’re working to earn the trust of everyone involved, including all road users and the entire communities where self-driving vehicles will be operating. For this technology to be successful, it’s critical that it be integrated into society in a way that gives everyone confidence in how it serves people and businesses.
The idea that pedestrians, cyclists and scooter users should change their behavior to accommodate self-driving cars couldn’t be further from our vision of how this technology should be integrated. It’s why we’ve been hard at work developing an interface we believe will help self-driving vehicles seamlessly integrate with other road users.
Today, we’re calling on all self-driving vehicle developers, automakers and technology companies who are committed to deploying SAE level-4 vehicles — and believe these vehicles should communicate intent — to join us and share ideas to create an industry standard for communicating driving intent, whether it be driving, yielding or accelerating from a stop. The work we’ve already done is now open to others through a memorandum of understanding that is intended to make it easy for us all to work together.
Why is this the best approach? We want everyone to trust self-driving vehicles — no matter if they are riders in these vehicles themselves or pedestrians, cyclists, scooter users or other drivers sharing the road. Having one, universal communication interface people across geographies and age groups can understand is critical for the successful deployment of self-driving technology.
Testing the self-driving intent interface
Last year, we worked with Virginia Tech Transportation Institute (VTTI) to conduct a real-world study of what we call a self-driving intent interface — a light bar mounted at the top of the windshield of a Ford Transit Connect van. We took this step after initial design and testing in virtual reality scenarios confirmed the learnability of the signal patterns we had developed.
The VTTI team designed a seat suit that concealed an actual human driver, making the van appear to operate on its own, so we could determine whether the signal patterns successfully communicated its intent.
We tested three different lighting scenarios, as well as a baseline condition where the lights were off, to observe how pedestrians and other road users responded to the vehicle signaling its intent:
Yielding: Two white lights moving side to side to indicate vehicle is about to come to a full stop
Active driving mode: Solid white light to indicate vehicle intends to proceed on its current course (although it can still respond appropriately to objects and other road users along the way)
Start-to-go: Rapidly blinking white light to indicate vehicle is beginning to accelerate from a stop
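For developers thinking about how such an intent interface might be wired into a vehicle’s software, the three patterns above can be sketched as a simple mapping from driving intent to light-bar behavior. This is purely an illustrative sketch — the state names, pattern parameters and blink rate below are assumptions for demonstration, not Ford’s actual implementation or part of any proposed standard.

```python
from dataclasses import dataclass
from enum import Enum


class Intent(Enum):
    """The three driving intents communicated by the light bar."""
    YIELDING = "yielding"            # about to come to a full stop
    ACTIVE_DRIVING = "active"        # proceeding on current course
    START_TO_GO = "start_to_go"      # beginning to accelerate from a stop


@dataclass(frozen=True)
class LightPattern:
    description: str   # human-readable description from the study
    animation: str     # how the white light bar behaves
    blink_hz: float    # blink frequency; 0.0 means no blinking


# Illustrative mapping of intent to light-bar pattern, following the
# three signals described above (blink rate is a placeholder value).
SIGNALS = {
    Intent.YIELDING: LightPattern(
        "two white lights moving side to side", "sweep", 0.0),
    Intent.ACTIVE_DRIVING: LightPattern(
        "solid white light", "steady", 0.0),
    Intent.START_TO_GO: LightPattern(
        "rapidly blinking white light", "blink", 4.0),
}


def signal_for(intent: Intent) -> LightPattern:
    """Return the light-bar pattern the vehicle should display."""
    return SIGNALS[intent]
```

A standardized interface like this would let any vehicle controller translate its planned maneuver into one unambiguous signal — for example, `signal_for(Intent.YIELDING)` yields the side-to-side sweep pattern regardless of manufacturer.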
We outfitted the Transit Connect with multiple cameras, allowing us to capture hours of road user responses to the vehicle’s signals over the course of more than 2,000 miles of driving. The VTTI team cataloged all the footage and found that the light signal interface did not encourage any unsafe behavior by other road users. These results give us a baseline to build from in improving acceptance of self-driving vehicles and trust in the technology.
We then conducted another study in the virtual reality space to test the trust and acceptance hypothesis we’d arrived at. In the digital world, we placed study participants on a street corner to observe and gauge reaction to a complex mix of vehicles driving through an intersection, some equipped with the intent interface light signals and some without. With no prior explanation of what the different signals meant, we found it took about two exposures for participants to learn what a single signal meant and between five and 10 exposures to understand the meaning of all three lighting patterns.
What’s most encouraging is that the signals had a positive effect on people’s trust in self-driving vehicles, with participants reporting the light signals increased their understanding of what a self-driving vehicle will do.
Now, we’re ready to take our learnings from the virtual world back into the real world. We’re installing the self-driving intent interface on a small fleet of our Fusion Hybrid self-driving development vehicles to be used by Argo AI in Miami-Dade County. Ongoing testing will continue to expose pedestrians and other road users to the light bar so we can observe their reactions.
We’re also conducting research in Europe to understand how the same signals are received there so we can ensure they are universally understood across regions and cultures.
In addition to the proposal to accelerate the industry coming together to work toward standardization, we continue to work in parallel with the International Organization for Standardization (ISO) and the Society of Automotive Engineers (SAE) to create a unified communication interface for self-driving vehicles. Our goal is to reach an agreement in three core areas: placement of the signals on a vehicle, design of the signals and the color of the light signals themselves. To help anyone interested in collaborating, we’re open to sharing the scenarios developed for our virtual reality study, which we’ve already done with some universities and other companies.
Of course, we recognize some design elements may need to change as we move forward, and we’re open to working together to find the best possible communication interface. It’s critical that the communication method we agree upon is as readily understandable as a brake light or turn signal.
After all, ensuring self-driving vehicles are integrated into society without overwhelming or confusing anyone is what success looks like. So to do that, we’ve just got one simple request: Let’s all work together to make it happen.