The Human Transition To Autonomous Vehicles
A UX roadmap for progressing toward the autonomous cars of the future.
When autonomous vehicles (AVs) finally become ubiquitous, their impact will be profound for business and society. They will rewrite the economics of car ownership — giving rise to new business models and services. They will be transformative for the automotive industry at large — acting as a catalyst for new products and forcing organizations within the current ecosystem to evolve, while the ripple effect of these changes will be felt throughout the economy. AVs will revolutionize safety, enhance human productivity and democratize mobility. The government will need to move quickly to develop new regulations that can accommodate the pace of innovation, and our infrastructure and systems will gradually adapt to this new paradigm, altering the shape and scale of cities, suburbs and rural areas alike.
Depending on your degree of optimism, the estimated arrival of first-generation AVs is 2020, followed by mainstream availability a decade or more later (although the actual tipping point may come as late as 2050). Considering the stakes involved and the potential impact of AVs, it’s not surprising that so much of the discourse has focused on the inevitability and implications of this future state at the expense of grappling with the complexities of the transition. This is particularly true in the case of semiautonomous vehicles (SAVs), which will be characterized by varying degrees of autonomous capability, will retain manual controls and may require an engaged human driver to monitor the system and assume control under conditions when the car can’t drive itself.
The OEMs vs. Google
Automakers like Mercedes, Audi, Volvo and others seem to prefer a stepwise transition to AVs and are introducing autonomous capabilities gradually — the earliest of which are already here. These include features like adaptive cruise control, crash avoidance, lane keeping and self-parking, which will be followed by increasingly sophisticated features over the next few years as vehicles edge ever closer to full autonomy. Original Equipment Manufacturers (OEMs) — facing their own Kodak Moment — would also like to avoid the premature sacrifice of near-term profits and decades of brand value built on the current business model, even as the imperative to innovate is clear and the race is on.
By contrast, Google is advocating a wholesale switch to full AVs that have the capacity to operate independently without human participation (defined as NHTSA Level 4, the highest level of automation). The philosophical difference between this approach and that of the OEMs is stark: Google sees human drivers as the bug in the system, and is purposefully designing us out of the equation for our own good. Today, 93% of automobile accidents are caused by human error — at a human cost of 33,000 lives each year in the US alone. The motivation to solve this problem is well-intentioned but not without self-interest. As a software company, Google has a lot to gain by placing artificial intelligence, machine learning, and connected services at the center of the AV revolution.
Even so, such a dramatic change will be difficult to achieve in the face of significant pragmatic, cultural and regulatory forces — not to mention the fact that building cars is already really hard and building fully autonomous cars that can safely and reliably operate across a wide range of complex conditions is even harder. This will take time, and most likely both scenarios will be realized in some way. OEMs will try to have it both ways — building near-term SAVs while working to bring full AVs to market first, and Google will use its considerable resources to realize its vision — whether as an automaker, as a platform or both.
Thus, the otherwise provocative Mercedes F015 retains a steering wheel — promising a future that theoretically blends the best of old and new, where the driver can drive or ride as she chooses. This may be a compelling proposition for drivers not yet ready to give up the wheel, and preserves the mythology of freedom and identity built into the automotive experience. However, any vehicle with a steering wheel is by definition an SAV, regardless of capability. In the near term, SAVs represent a difficult design problem, which involves creating a nuanced relationship between man and machine, understanding this hybrid mode of driving, and designing sophisticated systems and controls that balance driver agency and autonomous capability.
Learning to drive less, but better
Research has shown that as we replace active engagement in complex tasks like operating machinery or flying an airplane with passive monitoring of intelligent systems, our capacity for sustained attention diminishes rapidly, as does our capacity for performing the original tasks when called upon. In 2013, the FAA released a report on the effects of autopilot systems in commercial aircraft, urging “operators to promote manual flight operations when appropriate” as one of 18 detailed recommendations covering everything from policy to interface. In other words, pilots were told to spend more time flying manually in order to improve and maintain their skills — even as technology is increasingly capable of doing the work.
Likewise, SAVs may require that we become more proficient drivers even as those skills atrophy. This means that drivers must be able to pivot between high-engagement (driving) and low-engagement (riding/monitoring) states, while maintaining a foundational level of contextual and environmental awareness appropriate to the needs of the system. Early examples — like the Tesla videos that appeared days after the cars were granted limited autonomous features via a software update — should be read as a cautionary tale. Partial autonomy can be quite dangerous, particularly when the product and user experience have not been designed in sufficient detail and the capabilities of the system are ambiguous and poorly understood by drivers.
Much like the way early cars had more in common with carriages than with what would become archetypal automobiles, self-driving cars will require an equally significant evolution of the interface. Eventually, “driving” an AV will require little of the passenger in terms of decision-making, focused attention and direct engagement — but we’re not there yet. As these systems mature, the period between today (NHTSA Level 2 / “active monitoring”) and the near future (NHTSA Level 3 / “passive monitoring with adequate transition time”) will be particularly critical for both designers and users — which is perhaps the most important reason OEMs are adopting a conservative approach to the technology. It may be time to redesign the wheel.
A UX Roadmap for AVs
The shift to AVs will be as much a human transition as a technological and social imperative. In our recent work with Hyundai, we explored near-term SAV experiences through a human-centered lens. During that process, we identified three overlapping phases of development that we believe will shape UX requirements going forward.
Phase 1: Establishing Trust
Early AV experiences should be considered formative, trust-building opportunities. Because the stakes are so high (literally life or death), we believe the first step toward widespread acceptance of AVs is building trust in autonomous technologies. Over the next decade — and before most drivers take their hands off the wheel for the first time — autonomous systems will need to establish both rational and emotional trust with users. Drivers will need to understand the capabilities and limitations of the technology as communicated through the interface — which must be backed up by proven performance, direct experience and most of all, a compelling value proposition sufficient to motivate trial in the first place.
Phase 2: Designing the Co-Pilot
As autonomous capabilities increase, so will the potential complexity and ambiguity of the experience. During the transitional period between SAVs and fully autonomous vehicles, designers will need to introduce new physical and digital affordances to manage the new, hybrid mental model of shared control and ensure safe transitions between driver and vehicle. Designing an effective co-pilot — and shaping that relationship to evolve over time — is the challenge with the most unanswered questions. How will drivers navigate this transition? How will it feel? Will user behavior align with our expectations? When the driver isn’t in control, how will the vehicle maintain the right degree of engagement? How will the infotainment system balance readiness and immersion? And critically, how might the physical environment evolve to support both modes of operation?
Phase 3: Embracing the Passenger
Designing for passengers rather than drivers — in the context of fully autonomous vehicles — is arguably the less complex design problem, particularly once the vehicle no longer relies on direct human engagement to do its work. As concerns over safety diminish, and as larger numbers of AVs take to the road, designers will need to consider the vehicle as a platform for an emerging array of connected services. The “living room on wheels” may finally arrive, elevating the importance of infotainment and productivity experiences unburdened by the functional requirements we have come to accept as a given. Without a steering wheel, or even the need to face forward, the car becomes a radically different — and exciting — proposition.
The next 5–10 years are a critical period for design — as we grapple with the many complex challenges posed by AVs, as the technology matures, and as we learn to relinquish more and more control to intelligent machines. To get it right, it will be critical to engage early and approach these issues from a humanist point of view, with a realistic eye to how we will adapt as drivers and ultimately emerge as passengers. The benefit of all this work will be systems that serve our needs, keep us safe, and create boundless new opportunities for products, services, and experiences.
Are we there yet?