Autonomous technology for the people
a human-centred approach to autonomous vehicle legislation
A few weeks ago, the House of Representatives passed the SELF DRIVE Act. Apart from being an acronym that doesn’t quite make sense (Safely Ensuring Lives Future Development and Research in Vehicle Evolution), this unanimously passed act was not just a rare show of bipartisanship; it has been seen as a huge boost for the autonomous vehicle industry in the US.
For once, the tech industry wants legislation: without clear federal guidelines, autonomous vehicle makers and testers in the US have so far had to deal with a mishmash of state-level laws that are not necessarily compatible with one another, meaning a self-driving car legal in one state may not be able to cross the border into a neighbouring one. Just today, the California Department of Motor Vehicles changed its regulations to allow companies to test autonomous vehicles without a driver behind the wheel next year, and even to let the public use them. In addition to including federal protections for cybersecurity, privacy, and consumer education, the SELF DRIVE Act gives the National Highway Traffic Safety Administration (NHTSA) the power to set one body of rules that supersedes state laws with respect to the “design, construction, and performance” of autonomous vehicles, as it already does for human-driven vehicles. The aim is to give the autonomous vehicle industry the flexibility to test vehicles on public roads nationally, collect data, and develop self-driving technology within the safety framework NHTSA has laid out. So what has NHTSA laid out, and what does that mean for the public facing these vehicles?
Not a lot yet. NHTSA has one year to determine the key performance standards required for evaluation (for example, how good vehicle vision needs to be under different conditions) and two years to develop the requirements automakers must meet for safety certification. It has, however, released a couple of ‘guidance policies’ for autonomous vehicles over the past year or so.
The first is the Federal Automated Vehicles Policy, released in September 2016. The document describes a 15-point “Vehicle Performance Guidance for Automated Vehicles” that tech companies and automobile manufacturers must address to determine whether their driverless vehicles are safe for public deployment. One of the areas of safety assessment is the human-machine interface (HMI); the guidance simply states that “considerations should be made for the human driver, operator, occupant(s), and external actors with whom the HAV [highly automated vehicle] may have interactions (other vehicles, pedestrians, etc.)”. The follow-up guidance policy released by NHTSA last month, Automated Driving Systems: A Vision for Safety, adds nothing more specific or concrete about this interaction between vehicles and pedestrians; it again mainly emphasises the interaction between the vehicle and the people inside, and shows little consideration, within its legal framework, for the needs of the people outside.
Cities are for people
Around this time last year, we arrived at a number of insights during our Master’s project at the Royal College of Art and Imperial College London, carried out in collaboration with our friend and coursemate Adam Bernstein. As a group, we were exploring how we could make the urban environment a more pleasant and comfortable place for people to live, while in parallel researching trends and emerging technologies in that environment. Autonomous vehicles came up repeatedly in this research, and we realised that our position, and our values, seemed to be at odds with many of the automotive industry’s concepts of autonomous cars in our future cities; we thought this gave us scope for an interesting human-centred project.
Our viewpoint seems trivial: cities are for people. However, it doesn’t always feel like that. We believe that while infrastructure exists to balance the power between pedestrians and vehicles, much of the current infrastructure was built around the needs of the vehicle. Aside from the potential of autonomous vehicles to cause far fewer accidents and fatalities in the urban environment, their arrival also provides an opportunity to rebalance road power dynamics and give pedestrians an equal weighting in the city.
In thinking about these ideal future cities, we took inspiration from other fields like urban planning and architecture, and examined the notion of generating urban environments from the ‘bottom-up’ instead of the ‘top-down’ approach generally implemented in urban design. In particular we looked at the notion of deterministic chaos, as exemplified by the Kowloon Walled City, and the idea of growing infrastructure, as exhibited at Bankside Urban Forest, as case studies in bottom-up urban design in order to further develop our vision for a human-centred city.
To progress with this vision and its relation to autonomous vehicles, we thought it would be useful to imagine what a future autonomous city would look like, and what part we could play in realising it. After a number of unrestrained sketching sessions in the university café, we collected and grouped our favourite ideas, and built up our vision of what autonomous vehicles could mean in a future urban context.
The ideas illustrated in the above visualisations range from time-specific autonomous food trucks to smart infrastructure that learns from pedestrian behaviour. However, the one theme that emerged across many of our ideas was the notion of trust in these autonomous systems, or the lack thereof. While tech and design publications detail the latest advancements in autonomous technology on a weekly basis, we felt we should go out and talk to people outside this well-defined sphere.
As our interviews across different demographics in London showed, people just don’t trust autonomous vehicles, and probably rightly so. While a lot of attention has gone into the interior features and passenger experience of autonomous vehicle concepts, not enough consideration has gone into how people outside the vehicle feel when interacting with these autonomous vehicles, and how these vehicles can show that they have acknowledged a pedestrian’s presence and can communicate their own intent.
The project produced a number of principles and values that, as a group, we believe are essential to the societal acceptance of, and trust in, autonomous technology:
1. CODE OF TRUST
Pedestrians should feel safe and comfortable at all times when traversing the urban environment. Our interviews led us to understand that there is a wariness of autonomous technology and a concern for personal safety on the road.
By developing a new common language of trust between people and autonomous agents, we can encourage a more empathetic manifestation of self-driving technology to create a safer and more pleasant urban environment.
2. PEDESTRIAN-FOCUSED DESIGN
While infrastructure exists to balance the power between pedestrians and vehicles, much of the current infrastructure was built around the needs of the vehicle.
The arrival of self-driving vehicles provides an opportunity to rebalance the power dynamics and give pedestrians an equal weighting in the conversation between man and machine.
3. DIRECT COMMUNICATION
Vehicles should acknowledge pedestrian presence and communicate their intent directly and clearly.
Acknowledgement through eye-contact has been identified as an important form of assurance between pedestrians and vehicles. Self-driving vehicles must replicate this interaction.
4. ANTICIPATION AND DANGER PREVENTION
Self-driving vehicles differ from conventional vehicles because they are able to predict and communicate their future trajectory.
The language of self-driving vehicles should communicate this trajectory as well as any potential dangers the vehicle can predict to provide a safer road environment.
5. CULTURAL TRAFFIC AWARENESS
The behaviour of pedestrians and their interaction with vehicles differ greatly between localities and cultures.
The language of trust between pedestrians and self-driving vehicles needs to be able to learn and adapt in order to reflect these different urban environments.
The insights and principles that emerged during the academic project last year have since been reinforced by ustwo’s fantastic recent publication, coincidentally also called ‘Humanising Autonomy.’ As people become more aware of the societal and human-centred issues surrounding autonomous vehicles in the urban environment, it is time for some of these principles to be considered and evaluated in more detail by the responsible regulatory bodies.
Future legislation and final thoughts
As NHTSA in the US is realising, legislating for autonomous vehicles is a difficult balance between fostering innovation in the industry and ensuring safety on public roads. The UK Government’s Centre for Connected and Autonomous Vehicles made numerous proposals for autonomous vehicle technologies and legislation last year (with the Government response here), but these were again very much focused on the people inside the vehicle. Germany’s Federal Ministry of Transport and Digital Infrastructure developed, and importantly adopted, the first set of national ethical guidelines for autonomous vehicles in August this year. The guidelines clearly state, for instance, that the protection of human life enjoys top priority, and that all human life is equal (regardless of age, gender, race, disability, and presumably whether inside or outside the vehicle). Once again, however, the ethical framework doesn’t quite address the comfort of pedestrians, cyclists, and other road users in their interactions with autonomous vehicles.
As part of its Horizon 2020 research and innovation programme, the European Commission released an ‘Expert Report’ on Automated and Connected Driving. The report discusses current advancements, as well as future research needs for developing and deploying connected and automated transport technologies. It highlights “social acceptability” as an area with a distinct “research gap and implementation barriers,” and, in the case of autonomous vehicles on the road, specifically points out the gaps in research and in a shared regulatory framework around handling mixed traffic situations, including interactions with pedestrians and cyclists, or ‘vulnerable road users’ as they are labelled.
Technologies that interact with people should adapt to what is natural for people to do, not the other way around, where people are simply forced to deal with useful but ill-designed technology. The chasm between the regulatory frameworks being developed and the feelings of people in urban environments around the world suggests that it is now time to encourage the discussion and evaluation of policy decisions around autonomous vehicles from a human-centred perspective, whether those people are inside or outside the vehicle. A socially empathetic approach to autonomous vehicle legislation would not only foster more rapid innovation in participating countries, through greater social acceptance of vehicle testing on public roads, but ultimately steer the development of these vehicles towards a safer and more pleasant urban environment for all road users.