Are self-driving cars really safer than human drivers?

Just because tech CEOs make the claim doesn’t mean it’s true

Paris Marx
Radical Urbanist
25 min read · Jun 7, 2018


Tesla Model X after a fatal collision in March 2018

For several years, the CEOs of major tech companies have been repeating the line that self-driving cars are safer than vehicles driven by humans, and that if we could just transition our transportation system to one where autonomous vehicles are dominant, we would save millions of lives. Even the heads of major auto companies have started echoing these sentiments. Sounds like the obvious thing to do then, right?

The problem is that tech leaders — not coincidentally running the very companies best positioned to make huge profits if self-driving vehicles take over our roads — expect people to accept these claims without question, even though they don’t have the proper data to back them up. Autonomous vehicles are at too early a stage for us to have the necessary information to make a solid determination on their safety, but there are very clear signs that we should be proceeding with caution.

73 percent of American drivers report being afraid to ride in a self-driving vehicle, and while one might expect that figure to be a result of the fatal collision in which a self-driving Uber test vehicle killed a pedestrian in Tempe, Arizona in March, that only accounts for a small portion of the fear — in late 2017, a strong majority (63 percent) were already afraid. And it’s not just old people: 64 percent of millennial drivers are also scared of autonomous vehicles.

73 percent of American drivers report being afraid to ride in a self-driving vehicle

It would be irresponsible to simply throw self-driving cars into the mix when there’s such general unease about them being on the roads in the first place, especially when the data isn’t available to reassure people of their safety. While we may not be able to say definitively whether autonomous vehicles are safer, recent events and the limited reports we have from companies testing self-driving cars give us some insight into how we should approach their regulation; but before diving into those details, it’s worth reviewing the fundamentals.

A few things to remember

Self-driving vehicles are typically classified along a six-level scale developed by the Society of Automotive Engineers. There is some criticism of this scale, but it’s used by the National Highway Traffic Safety Administration (NHTSA) and much of the industry, so it’s the one we’ll reference.

The scale runs from level 0, meaning a completely conventional car with absolutely no enhanced capabilities, to level 5, the gold standard and ultimate goal of the industry — a vehicle with an autonomous system that can handle any weather, road, or traffic conditions that a human could navigate (and, presumably, do a better job of it). The levels in between cover increasing degrees of automation: driver assistance (level 1), partial automation that still requires constant human supervision (level 2), conditional automation in which the system drives but a human must be ready to take over (level 3), and high automation that needs no human input within a limited set of areas and conditions (level 4).

To detect their surroundings, self-driving vehicles are typically equipped with three types of sensors: cameras, radar, and LiDAR. These feed computers that process all the incoming data and use artificial intelligence to determine the vehicle’s next action, or whether a human should take over. Tesla is the only company not to use LiDAR, but Argo AI CEO Bryan Salesky made a good case for why all three sensors are necessary.

We use LiDAR sensors, which work well in poor lighting conditions, to grab the three-dimensional geometry of the world around the car, but LiDAR doesn’t provide color or texture, so we use cameras for that. Yet cameras are challenged in poor lighting, and tend to struggle to provide enough focus and resolution at all desired ranges of operation. In contrast, radar, while relatively low resolution, is able to directly detect the velocity of road users even at long distances.

Whether major tech and automotive companies will ever reach level 5 remains an open question, though much of their management likely wouldn’t admit it. The reality is that vehicles claiming the “autonomous” label at present range from levels 2 to 4, and where they land makes a huge difference to their capabilities and potentially also to how safe they are.

Source: Navigant Research

Every year, Navigant Research ranks the companies trying to develop self-driving vehicles. As of early 2018, Waymo, a subsidiary of Alphabet, and Cruise, a division of GM, are leading the pack, while Apple, Uber, and Tesla are trailing — and that seems supported by the evidence available to us.

Waymo is in the process of rolling out a level-4 autonomous taxi service in a suburb of Phoenix, Arizona and GM has announced similar plans for San Francisco in 2019. In contrast, Uber’s autonomous-vehicle division is flailing after a fatal collision in Tempe, Arizona and consistent reporting on the poor state of its tech; Tesla’s body count keeps growing and its team seems unable to deliver the additional capabilities promised by CEO Elon Musk; and Apple… well, there’s no point even going there.

It is worth noting, however, that the fabled level-5 autonomous vehicle has been promised and delayed countless times. Musk has a track record of predicting that proper self-driving vehicles are two years away, then moving the goalposts another two years when time runs out. A number of major auto companies recently pushed their timelines into the early 2020s, and, given how common these delays have become, it wouldn’t be surprising to see them slip again.

Making a blanket statement that self-driving vehicles are safer than those driven by humans is not only irresponsible, but ignores the current state of the industry

If it’s not already obvious, making a blanket statement that self-driving vehicles are safer than those driven by humans is not only irresponsible but also ignores the current state of the industry. There is not simply one monolithic self-driving vehicle or autonomous-driving system, but a variety of systems being used on all manner of vehicles, each equipped with different sensor arrays, some of which could be far safer and more advanced than others.

And that’s why the question is so difficult to answer. It’s conceivable, though incredibly unlikely, that all of these systems are already safer than human drivers; perhaps only some of them are. Honestly, the most likely scenario at present is that none of them are safer yet, though some may have the potential to get there in time. But which ones?

A look at recent events

The conversation around self-driving cars is currently focused, without question, on safety, as a result of the recent high-profile collision involving an Uber test vehicle and several collisions involving Tesla vehicles using its Autopilot system. It’s worth remembering that Navigant Research places those two companies at the bottom of its list, and recent reporting on their efforts demonstrates why.

Uber’s fatal collision

Screenshot of ABC15 Arizona broadcast about Uber’s fatal collision in Tempe.

Let’s start with Uber. On March 18, 2018 at 9:58pm, an Uber self-driving test vehicle was heading down a wide stretch of road in Tempe, Arizona when it struck Elaine Herzberg, 49, as she was crossing the street with her bicycle. There was a “safety” driver behind the wheel, but footage of the crash showed she was looking down at a screen, not at the road.

In the immediate aftermath of the incident, some people — including a police chief — tried to place the blame on the pedestrian since she was not crossing at a designated crosswalk — and because, let’s be honest, she was poor and dead. There was also an attempt to place blame on the safety driver because she was not looking at the road. However, as details about Uber’s autonomous-vehicle program in Arizona and the system itself began to be released, it became quite clear that the real issues were with the company’s goals, which prioritized a quick rollout over a safe one, and the vehicle’s terrible autonomous system.

Documents obtained by the New York Times in the aftermath of the incident showed that Uber had recently cut the number of safety drivers per vehicle from two to one, placing the responsibility for monitoring the system’s diagnostics and taking control in case of a malfunction on a single person. There was also pressure on the team handling the project to be ready to launch a driverless taxi service, similar to what Waymo had announced, by the end of 2018 — even though the tech was nowhere near ready.

One of the measures of progress and safety for autonomous systems is the disengagement rate — the average distance the vehicle can go without having a driver take over. While Waymo reported that its vehicles could go an average of 5,600 miles (9,000 km) per disengagement in California at the end of 2017, and GM’s Cruise autonomous-vehicle division reported 1,250 miles (2,000 km), Uber’s test vehicles in Arizona were struggling to go a mere 13 miles (21 km) before a driver had to intervene at the end of March 2018.

Uber’s test vehicles in Arizona were struggling to go a mere 13 miles (21 km) before a driver had to intervene at the end of March 2018
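For anyone unfamiliar with the metric, the arithmetic is simple: divide the total autonomous miles driven by the number of times a human had to take over. Here’s a minimal sketch of that calculation; the mile and intervention counts below are illustrative values chosen to roughly reproduce the rates cited above, not the companies’ actual reported totals.

```python
def miles_per_disengagement(total_miles, disengagements):
    """Average distance driven between human interventions."""
    return total_miles / disengagements

# Illustrative inputs chosen to roughly reproduce the rates cited above;
# the companies' actual reported totals differ.
fleets = {
    "Waymo (California, 2017)": (352_000, 63),
    "Cruise (California, 2017)": (131_000, 105),
    "Uber (Arizona, March 2018)": (20_000, 1_540),
}

for name, (miles, interventions) in fleets.items():
    rate = miles_per_disengagement(miles, interventions)
    print(f"{name}: ~{rate:,.0f} miles per disengagement")
```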

If that wasn’t already damning enough, the National Transportation Safety Board (NTSB) released its preliminary report on the fatal crash in May 2018, in which it described problems at the very core of the vehicle’s autonomous system and how the decisions made by the team compromised its safety.

According to the NTSB, the vehicle was traveling at 43 mph (69 km/h) and detected Herzberg six seconds before it collided with her. However, while it detected her, it didn’t know what she was, classifying her “as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path.” It wasn’t until 1.3 seconds before impact that the system determined it should brake, but it couldn’t; Uber’s team had disabled emergency braking in autonomous mode “to reduce the potential for erratic vehicle behavior,” meaning they didn’t want the car to brake if it misidentified something, opting for a smoother ride instead of a safer one.

After reading that detail, I couldn’t help but recall Aarian Marshall’s WIRED piece from November 2017 where she described her ride in one of GM’s autonomous vehicles as “herky-jerky” because the vehicle was “so careful that it jolted, disconcertingly, to a stop at even the whisper of a collision.” She further wrote what I took to be a compliment for the approach of the traditional auto companies: “[i]f the Silicon Valley motto is ‘move fast and break things’, Detroit’s seems to be ‘move below the speed limit and ensure you don’t kill anyone’.” Safety first, what a crazy idea!

If the Silicon Valley motto is “move fast and break things,” Detroit’s seems to be “move below the speed limit and ensure you don’t kill anyone”

Returning to Uber, its Arizona team made another bad decision. Along with disabling emergency braking in autonomous mode, it also failed to install a light, an alarm, or anything else that could have alerted the safety driver that there was something in the path of the vehicle and she needed to brake. With that said, even if the warning had come 1.3 seconds before impact, it still would not have given her enough time to stop. However, she did look up moments before the collision and applied the brake less than a second before impact, reducing the speed to 39 mph (63 km/h) before hitting Herzberg.

On the topic of the safety driver, there’s another detail that should be noted. In the aftermath of the incident, she was dragged through the press for being on her phone. The NTSB’s preliminary report said she claimed to be looking at the system’s display, performing the function her copilot would have handled had the second safety driver not been cut; but police obtained logs from Hulu which showed she was, in fact, watching The Voice. Prosecutors are now considering criminal charges against her, which could absolve Uber of any fault — a terrible precedent to set, but one that would obviously be attractive to Uber.

In his analysis of the NTSB’s findings, attorney James McPherson (tweeting as Autonomous Law) calculated that the vehicle would have had more than enough time to stop had it braked when Herzberg was first detected. He further argued that the vehicle was impaired due to the limitations on its abilities and the decisions made by the team to disable emergency braking, which could result in a charge of vehicular homicide against someone at Uber.

The vehicle was impaired due to the limitations on its abilities and the decisions made by the team to disable emergency braking, which could result in a charge of vehicular homicide against someone at Uber
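The rough physics behind that conclusion is easy to check. Here’s a back-of-the-envelope sketch; the deceleration figure is an assumption on my part for hard braking on dry pavement, not a number taken from the NTSB report or McPherson’s analysis.

```python
# Back-of-the-envelope check of whether the Uber vehicle could have stopped
# after first detecting Herzberg. The deceleration is an assumed value for
# hard braking on dry pavement; the NTSB report does not specify one.
MPH_TO_MS = 0.44704

speed = 43 * MPH_TO_MS          # ~19.2 m/s at the moment of detection
decel = 7.0                     # m/s^2, assumed emergency-braking rate
lead_time = 6.0                 # seconds between detection and impact (NTSB)

distance_available = speed * lead_time        # ~115 m if speed were held constant
stopping_distance = speed ** 2 / (2 * decel)  # ~26 m needed to brake to a stop
time_to_stop = speed / decel                  # ~2.7 s needed to brake to a stop

print(f"Distance available at detection: ~{distance_available:.0f} m")
print(f"Distance needed to stop:         ~{stopping_distance:.0f} m")
print(f"Time needed to stop:             ~{time_to_stop:.1f} s of the 6 s available")
```

Even allowing a second or two for classification and a human-like reaction delay, the margin is large, which is the crux of McPherson’s argument.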

When Uber CEO Dara Khosrowshahi took over from disgraced cofounder Travis Kalanick in August 2017, he promised to change the culture of the company to rid it of the issues Kalanick had created, but he doesn’t seem to have been successful, at least with the self-driving team.

When Waymo sued Uber over having illegally obtained documents about Waymo’s autonomous-vehicle program from Anthony Levandowski, a former Google employee who led the team developing Uber’s self-driving vehicles, conversations between Kalanick and Levandowski were released in which they agreed the company needed a “strategy to take all the shortcuts we can” and that “cheat codes” were needed to win the self-driving vehicle race. Instead of getting rid of this behavior, Khosrowshahi seems to have doubled down on it, or at least set such unrealistic expectations that the autonomous-vehicle team felt little choice but to cut corners.

Overselling Autopilot’s capabilities

Tesla Model S after colliding with a transport truck in May 2016.

Uber’s failure resulted from flawed software and bad decisions made by its engineering team to try to make a subpar system ready for the limelight. Tesla is also trying to pass off its Autopilot system as something it isn’t because its CEO keeps making big promises that its engineers are unable to deliver on — and some have even left the company over it.

Musk paints an incredible picture of Autopilot’s capabilities, but the reality is that many of the features he promises do not exist, their deadlines passed long ago, and no updated timelines have been provided. Tesla sells Enhanced Autopilot and Full Self-Driving packages for $5,000 and $8,000, respectively, on the strength of the promises Musk has made to customers and shareholders. Because the company has been unable to deliver those features, it recently had to settle a class-action lawsuit for $5 million with customers who alleged they’d paid to “become beta testers of half-baked software that renders Tesla vehicles dangerous if engaged.”

Even though Musk has promoted Autopilot as a self-driving system, the truth is that its current capabilities are closest to the level-2 classification. For all the hype, what it currently offers is increasingly standard on luxury vehicles: lane-change assist, park assist, emergency braking, and enhanced cruise control that can stay within the painted lines and track the vehicle in front of you. If you only listened to Musk, however, you would likely believe it can do far more than that, and that misrepresentation has contributed to collisions.

Even though Musk has promoted Autopilot as a self-driving system, the truth is that its current capabilities are closest to the level-2 classification

In May 2016, a Tesla Model S became the first vehicle using an autonomous system to kill its driver when it collided with a transport truck. In its report on the incident, the NTSB cited the driver’s overreliance on Autopilot as a factor in the accident, which included taking his eyes off the road and his hands off the wheel for extended periods of time, even though a level-2 vehicle should require a driver to keep their hands on the wheel at all times due to the system’s limited capabilities. It also found Autopilot was in use on roads it wasn’t designed to handle — it should only be used on limited-access highways, like interstates — and that the company did not set limitations on where it could be engaged.

A similar situation played out on March 23, 2018 when a Tesla Model X using Autopilot slammed into a concrete barrier and caught fire, killing its driver. According to Tesla, the driver did not have his hands on the wheel and did not respond to warnings to retake control, making it clear the company still has not issued a software update to require drivers to keep their hands on the wheel. A worrying detail emerged, however, when the driver’s family told the media that “he took his Tesla to the dealer, complaining that — on multiple occasions — the Autopilot veered toward that same barrier” which his Model X hit when he died. Another Tesla owner recorded himself driving on the same stretch of highway after the incident and found Autopilot directed him toward the barrier and didn’t give him any warnings.

In early June 2018, the NTSB released its preliminary report on the incident, and while it confirmed that the vehicle sent warnings to the driver, they came more than 15 minutes before the crash — a detail Tesla conveniently failed to include in its initial public statement. The report also said that the driver had his hands on the wheel for 34 seconds in the minute before the accident, though not in the six seconds before impact, showing he clearly wasn’t completely disengaged. In the seconds before impact, the NTSB found that the vehicle began a left steering movement and sped up before slamming into the barrier and killing the driver.

The number of Tesla collisions on Autopilot has been mounting, but the improvements Musk has long promised have yet to emerge. Tesla used to have a partnership with Mobileye to provide its autonomous capabilities, but the relationship was severed in 2016 and Tesla has struggled to recover ever since. In October 2017, Andrew J. Hawkins reported that “Tesla vehicles built since October 2016 have many fewer safety and convenience features enabled than in older models” as a result of the breakup. It’s worth remembering that Tesla is the only company that does not use LiDAR sensors.

A former senior system design and architecture engineer said Autopilot’s development was based on “reckless decision making that has potentially put customer lives at risk”

Musk has already admitted that Tesla vehicles may not have enough processing power for full self-driving capabilities, but claims the lack of LiDAR isn’t an issue. However, Musk reportedly “brushed aside” the concerns of the Autopilot team about the system’s safety and several engineers resigned their positions when Musk publicly announced that Tesla vehicles would be capable of “full autonomy.” A former senior system design and architecture engineer even said Autopilot’s development was based on “reckless decision making that has potentially put customer lives at risk.”

The truth is that Autopilot would be better classed as a “semi-autonomous” system: it cannot safely drive itself outside a few select environments, and those capabilities do not seem to be coming anytime soon. Tesla, under Musk’s influence, will not admit this, and uses dubious statistics to claim Autopilot is safer than human drivers while denying Tesla drivers access to their crash data unless they present a subpoena.

Crashes are not the exclusive domain of Uber and Tesla — all the companies working on autonomous-driving systems have had collisions. It’s the reporting on the decisions behind the development of their systems and the details of the recent collisions that make the safety of Uber and Tesla’s self-driving solutions particularly worrying. But it’s hard to know for sure without looking at the data that’s available, and while that can’t provide a definitive answer, it can give us some hints.

Diving into the (very limited) data

A lot of numbers are thrown around in the debates over the safety of self-driving cars and the danger posed by human drivers, but before reviewing the data we have on autonomous vehicles, it’s worth keeping a few points in mind.

Defenders of self-driving cars often like to remind critics that human drivers kill more people and get in far more accidents than autonomous vehicles, which should be pretty obvious to anyone who isn’t a complete idiot. Humans in the United States drive more than 3 trillion miles (4.8 trillion km) every year, while Waymo’s vehicles drove 5 million miles (8 million km) on public roads between 2009 and early 2018.

Humans in the United States drive more than 3 trillion miles (4.8 trillion km) every year, while Waymo’s vehicles drove 5 million miles (8 million km) on public roads

Since Autopilot is on thousands of vehicles being used by customers, Tesla reports that Autopilot has been engaged for around 300 million miles (482 million km) and used in “shadow mode” for 1.3 billion miles (2.1 billion km). These figures can’t be compared to Waymo, however, because they’re very different systems: Autopilot is a semi-autonomous system that operates a bit like an advanced cruise control, while Waymo uses a level-4 system that is expected to control more aspects of the vehicle in many more scenarios.

Further, to judge the safety of autonomous vehicles against that of vehicles driven by humans, we can’t simply add up the total miles driven by vehicles operated by Waymo, Cruise, Uber, Tesla, and other manufacturers, along with their collision, death, and injury stats; each system is different, and separate statistics will need to be calculated for each one. While the number of miles driven by these systems will grow more quickly as companies add vehicles to their fleets, as many of the major players are planning, it will still take a long time for them to have driven enough miles to yield reliable statistics, and even then we’ll have to ask whether miles driven at an earlier stage of development accurately reflect current capabilities.

Self-driving vehicles may have to drive up to 11 billion miles (17.7 billion km) before we can have reliable statistics on their safety to compare to human drivers

And this is where things get complicated. There’s plenty of data for us to judge the safety (or lack thereof) of vehicles driven by humans, yet there’s very little that exists for autonomous vehicles, and much of what does exist is not available to the public or to regulators. Researchers at the RAND Corporation estimate that self-driving vehicles may have to drive up to 11 billion miles (17.7 billion km) before we can have reliable statistics on their safety to compare to human drivers, which means 11 billion miles for each autonomous-driving system. Not only will that take a long time, but we’ll also have to rely on private companies for the data when they have a financial interest in making sure those statistics portray their systems in a positive light.
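The intuition behind RAND’s figure is statistical: fatal crashes are so rare that an enormous number of miles are needed before an observed fatality rate is precise enough to compare against the human benchmark. Here’s a rough sketch of that logic, assuming fatalities follow a Poisson process and using a human baseline of roughly 1.1 deaths per 100 million miles; the confidence level and precision targets are my assumptions for illustration, not RAND’s exact methodology.

```python
# Rough illustration of why billions of miles are needed before fatality
# statistics become meaningful. Assumes fatal crashes are Poisson-distributed;
# the baseline rate and precision targets are assumptions for this sketch,
# not RAND's exact inputs.
HUMAN_FATALITY_RATE = 1.1e-8   # ~1.1 deaths per 100 million miles
Z = 1.96                       # 95 percent confidence

def miles_needed(relative_precision, rate=HUMAN_FATALITY_RATE):
    """Miles required to pin down a fatality rate to +/- relative_precision.

    For a Poisson process the relative standard error after observing n events
    is roughly 1/sqrt(n), so we need about (Z / precision)^2 events in total.
    """
    events_needed = (Z / relative_precision) ** 2
    return events_needed / rate

for precision in (0.20, 0.10):
    print(f"+/-{precision:.0%} precision: ~{miles_needed(precision) / 1e9:.0f} billion miles")
```

The exact figures depend on the assumptions, but the order of magnitude is the point: at current testing volumes, accumulating that many miles per system would take decades.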

With those details on the table, let’s take a look at the data available on Waymo, Cruise, and Tesla, and what the limited data available to us may suggest about the safety of self-driving vehicles and the future of the technology.

The leaders: Waymo and Cruise

California has the toughest data-reporting regulations in the United States for companies testing self-driving vehicles in the state, and it’s largely thanks to those regulations that the limited data we have on Waymo and Cruise is public at all. The reports released at the end of 2017 shed an important light on the efforts of both companies, but they also suggest caution about predicting the dominance of autonomous vehicles in the near future.

Source: Ars Technica

Based on the data that is publicly available, Waymo’s technology seems to be leading. In 2017, it averaged 5,600 miles (9,000 km) per disengagement in California, which isn’t a significant increase from the 5,100 miles (8,200 km) in 2016, but near the end of the year it was showing progress with an average of 8,000 miles (12,900 km) per disengagement in the last three months of the year and even 30,000 miles (48,200 km) in November 2017 — whether that last statistic will be maintained remains to be seen.

Cruise is behind Waymo, averaging 1,250 miles (2,000 km) per disengagement throughout 2017, but it also saw improvements in the final three months of the year, with that number increasing to 5,200 miles (8,400 km). Cruise hasn’t been testing vehicles as long as Waymo, and its testing has focused on San Francisco — urban areas are particularly difficult for autonomous-driving systems — while Waymo has been operating in a wider variety of areas, meaning its vehicles are also on roads that are easier to navigate.

In an analysis of Waymo’s report, Filip Piekniewski, a researcher focused on computer vision and artificial intelligence, compares the number of disengagements to the number of crashes involving vehicles driven by humans, while acknowledging it’s not a perfect comparison: not all disengagements would cause crashes, and the crash number covers all types of vehicles (including older vehicles with fewer safety features) in all weather and road conditions (not just the usually clear weather in California). Still, over the distance in which an average human driver has a single crash, Waymo’s vehicles would register on the order of 100 disengagements, and while Piekniewski doesn’t think every disengagement would result in a crash, he writes it would be “naive” to think that “none (or only a tiny fraction) of these events would have lead to a crash.” He thinks even assuming only one in ten would have led to a crash is too optimistic.

When the software fails and e.g. the control system of the vehicle hangs, it is more than likely that the end result of such situation would not be good (anyone working with robots knows how rapidly things escalate when something goes wrong — robots don’t have the natural ability to recover from a deteriorating situation). If that happened on a freeway at high speed, it would easily have lead to a serious crash with either another car or a barrier. If it happened in a dense urban area at small speed it could lead to injuring pedestrians. Either way, note that Waymo only reports the events that fulfill the California definition, i.e. these are actual failures or events threatening traffic safety as concluded by their extensive simulations of each event.
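To see roughly where a 100-to-1 ratio like that comes from, compare Waymo’s disengagement interval to how often human drivers have a police-reported crash, which is on the order of once every 500,000 miles. That crash interval is my ballpark figure, not one taken from Piekniewski’s post; here’s a minimal sketch under that assumption.

```python
# Rough comparison of Waymo's 2017 disengagement interval to how often human
# drivers have a police-reported crash. The human crash interval is an assumed
# ballpark figure, not a number from Piekniewski's analysis.
HUMAN_CRASH_INTERVAL = 500_000        # miles per police-reported crash (assumed)
WAYMO_DISENGAGEMENT_INTERVAL = 5_600  # miles per disengagement (California, 2017)

per_crash = HUMAN_CRASH_INTERVAL / WAYMO_DISENGAGEMENT_INTERVAL
print(f"~{per_crash:.0f} disengagements over the distance in which an average "
      "human driver has one reported crash")

# Even if only one in ten disengagements would have become a crash without the
# safety driver, a ratio Piekniewski already considers optimistic, that still
# implies several crashes over the same distance.
print(f"At one in ten: ~{per_crash / 10:.0f} implied crashes over that distance")
```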

Piekniewski isn’t the only one to provide an informative analysis of Waymo’s report. Tasha Keeney of ARK Investment breaks the disengagements into two categories: Expected Failures (EFs), where the system recognizes that it can’t proceed and signals the driver to take over; and Unexpected Failures (UFs), where the system does not recognize that it’s doing something wrong and continues driving without signaling for help. Keeney estimates that as long as EFs happen no more often than vehicle breakdowns — every 50,000 miles (80,400 km) — and UFs no more often than vehicle crashes — every 240,000 miles (386,000 km) — they would be at acceptable levels.
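Those benchmarks make the gap easy to quantify. Here’s a small sketch comparing Waymo’s reported disengagement intervals to Keeney’s breakdown and crash thresholds; treating every disengagement as a single pool is my simplification, since the split between Expected and Unexpected Failures isn’t public.

```python
# How far Waymo's reported disengagement intervals sit from Keeney's parity
# benchmarks. Lumping all disengagements together is a simplification, since
# the Expected/Unexpected Failure split isn't public.
benchmarks = {
    "breakdown parity (EF target)": 50_000,   # miles between vehicle breakdowns
    "crash parity (UF target)": 240_000,      # miles between vehicle crashes
}
waymo_intervals = {
    "2017 average": 5_600,
    "November 2017": 30_000,
}

for period, interval in waymo_intervals.items():
    for name, target in benchmarks.items():
        print(f"{period}: {target / interval:.1f}x improvement needed for {name}")
```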

However, in making her estimates, Keeney relies on the 30,000-mile figure that Waymo reported for November 2017, even though we won’t know until the end of 2018 whether it was able to keep up that rate. Further, she finds that the number of miles per EF increased during 2017 and the rate of UFs dropped, yet her previous estimate that EFs would reach parity with breakdowns during 2017 has now been pushed into 2018, and parity between UFs and crashes is still years away — and that’s if the big November improvement was sustained.

Despite relying on the 30,000-mile figure and her own estimates showing UFs won’t reach parity with collisions until sometime in the next decade, Keeney still ends on a positive assessment of the potential of autonomous vehicles as an investment opportunity based on the assertion that “Waymo probably is trying to maximize its failure rate to identify faults and root them out.” However, there are some issues with this line of thinking.

Given that the California numbers are the only ones made public, wouldn’t Waymo be incentivized to make sure they make the company’s product look promising? The weather and road conditions in California are among the best in the country, and are thus easier for an autonomous vehicle to navigate. Yes, it is testing in San Francisco, which provides more of a challenge due to the crowded nature of urban centers, but it’s also testing in lower-density areas, which are much easier to navigate.

Further, it’s the testing in Washington and Michigan that will really provide the challenging weather and road conditions that could increase the number of disengagements, but that data is not included in Waymo’s report because public disclosure is not required in those states. To get a better idea of how Waymo is doing, we will need to wait until the end of 2018, and even then we’ll have only a limited picture, because the data released covers a single state with generally good weather while testing is currently under way in five others.

Tesla’s dubious claims

If the arguable leader is having trouble improving its system, that doesn’t bode well for a laggard like Tesla, but what’s being judged here is different from the statistics we looked at for Waymo. First, Autopilot is semi-autonomous; it does not control nearly as much of the vehicle as Waymo’s system. Second, even though the data we have on Waymo is limited, we have even less on Tesla.

The first safety statistic can quickly be thrown aside. Tesla claimed the NHTSA found that “crash rates fell by 40 percent after installation of Autopilot’s Autosteer function,” but the NHTSA said it “did not evaluate whether Autosteer was engaged” and that the numbers were taken out of context. So much for that claim, but it’s not the company’s only misleading statistic.

Tesla boasts that Autopilot-equipped vehicles are 3.7 times less likely to be involved in a fatal accident, but there are a lot of questions about those figures

Tesla boasts that Autopilot-equipped vehicles are 3.7 times less likely to be involved in a fatal accident, based on “a fatality rate of 1 death per 86 million miles [138 million km] for conventional vehicles versus 1 death per 320 million miles [515 million km] for Autopilot-equipped vehicles,” but as you might guess, there are a lot of questions about those figures. Let’s first remember the RAND report: Tesla likely doesn’t have enough vehicle miles to get reliable statistics, and even if it did, it hasn’t provided any data to be publicly verified.

Data scientist and former Tesla employee Brinda A. Thomas, Ph.D., analyzed Tesla’s claim and acknowledged that it’s not a fair comparison. The 1 death per 86 million miles statistic includes all vehicles on the road (cars, trucks, SUVs, buses, motorcycles, etc.) and all fatalities (driver, passenger, and pedestrian). Boris Marjanovic expands on this by noting that passenger cars account for only 40 percent of fatal crashes, and even then it’s not a fair comparison because the Autopilot number is skewed by including only “luxury vehicles with advanced safety features” — the average vehicle on the road is 11.2 years old, according to Piekniewski — which tend to be driven by “wealthy middle-aged people,” who have the safest driving records.
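The arithmetic behind Tesla’s headline number is simple, which is exactly why the choice of baseline matters so much. Here’s a quick sketch; the adjusted baseline at the end is a purely hypothetical figure I’m using to show how sensitive the claim is, not a published statistic.

```python
# Tesla's headline ratio and why the baseline matters. The "comparable fleet"
# rate below is a hypothetical figure used to illustrate sensitivity, not a
# published statistic.
ALL_VEHICLES_RATE = 1 / 86e6   # deaths per mile, all vehicle types and ages
AUTOPILOT_RATE = 1 / 320e6     # Tesla's claimed rate for Autopilot-equipped cars

print(f"Headline ratio: {ALL_VEHICLES_RATE / AUTOPILOT_RATE:.1f}x")  # ~3.7x

# If the fair comparison were newer luxury cars with modern safety features,
# driven largely by lower-risk demographics, the baseline would be much lower.
# Suppose, hypothetically, it were 1 death per 200 million miles:
COMPARABLE_FLEET_RATE = 1 / 200e6
print(f"Against a comparable fleet: {COMPARABLE_FLEET_RATE / AUTOPILOT_RATE:.1f}x")  # ~1.6x
```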

Since Autopilot offers little more than the enhanced safety features that have become standard on most luxury vehicles, it wouldn’t be shocking to find it makes vehicles a bit safer than the average vehicle that lacks those features. However, Musk’s rhetoric about full autonomy seems to lead some customers to believe Autopilot has far greater capability than it does at present, and likely will for some time into the future. That’s leading people to use it in unsafe ways, which could cause more collisions and already seems to have been a factor in two or more deaths. Tesla also refuses to release updates that place limitations on how people use Autopilot, to ensure they’re only enabling it in designated areas and keeping their hands on the wheel while it’s engaged.

What’s the verdict?

The available data on the safety of self-driving vehicles provides no definitive answers on whether autonomous-driving systems are currently safer than human drivers. Waymo and Cruise are making progress with their level-4 systems, but the data they release is limited to California. Waymo also went through a long period where there was little improvement in its disengagement rate, and it remains to be seen whether the advancement registered at the end of 2017 has been maintained.

The available data on the safety of self-driving vehicles provides no definitive answers on whether autonomous-driving systems are currently safer than human drivers

The statistics shared by Tesla, on the other hand, are misleading, if not outright falsehoods. Autopilot — meaning enhanced safety features, not a fully autonomous system — may make vehicles a bit safer, but it would be best measured against other luxury vehicles, not the average of every vehicle on the road. It’s troubling that Tesla makes very little information public and won’t even release crash data to customers without a subpoena. There should also be concern about Musk’s constant boasting about features he seems unable to deliver, which leads some customers to believe Autopilot is more capable than it currently is, and about his refusal to constrain Autopilot to ensure it’s used safely.

The recent collisions involving Uber and Tesla vehicles are also troubling, particularly the reporting around the decisions made by Uber’s engineering team and the reactions of some Tesla engineers to Musk’s bold statements about future capabilities. The Uber team’s decisions to turn off emergency braking and reduce the number of safety drivers, along with its failure to install any way for the vehicle to alert the safety driver to brake, make it impossible to say the company is putting safety first.

Anyone who suggests that autonomous vehicles are going to take over our roads in the near future is being disingenuous. We’re still a long way away from knowing how safe they really are, and if the industry leader isn’t seeing consistent improvements, what does that mean for the laggards? Until we can see real improvements and verifiable safety data, regulators need to take a more cautious approach to the technologies that private companies are placing on public roads.

Until we can see real improvements and verifiable safety data, regulators need to take a more cautious approach to the technologies that private companies are placing on public roads.

At the most basic level, California’s reporting requirements should be taken national, if not international, and potentially expanded in scope, if only to ensure that state and federal agencies have access to the data they need to independently determine whether autonomous vehicles are as safe as industry leaders love to claim.

Since the Republican administration in Washington, DC, has taken a hands-off approach to regulating self-driving vehicles, California is taking up that responsibility. The state’s new regulations target the driverless taxi services that companies want to launch, requiring each vehicle to first do 90 days of on-road testing and companies to hand over all necessary data so safety can be accurately judged, including “miles traveled, miles traveled without passengers (aka ‘deadheading’ miles), collision and disengagement reports, and transcriptions of any communications between riders and driverless vehicles’ remote operators within 24 hours.” The regulations would also force companies to initially offer the driverless service without a charge, and would ban them from doing airport runs or taking pooled rides. The industry doesn’t like these regulations, but they’re a move in the right direction.

The lack of federal oversight may also be coming to an end, as two Democratic senators recently requested information from car and technology companies regarding their autonomous-vehicle projects. The NHTSA has the authority to request even more information, but that may not happen until Republicans lose power.

The goal of reducing deaths caused by vehicles, and in our transportation system more generally, is a noble one, and it’s possible that autonomous vehicles may one day be part of the solution. However, we can’t simply rely on the assurances of CEOs to believe they’re safe; until we have verifiable safety data, we need to take a cautious approach to placing technology that is still in the early stages of its development on public roads.

It’s also worth remembering that even if level-5 autonomous vehicles become a reality, that doesn’t mean they’ll be ideally suited for every transportation scenario. There are fears that, by making vehicle travel more convenient, autonomous vehicles could increase travel times and suburban sprawl, which would be an undesirable outcome. That’s why, when we consider the future of transportation, we also need to place an emphasis on public transportation and cycling as necessary alternatives in urban centers. There will likely be a place for autonomous vehicles, but ensuring they’re safe must be the top priority — and, so far, no company has proven that.

UPDATE (26/06/2018): Added information about NTSB’s preliminary report on the Tesla crash on March 23, 2018 and new details about the safety driver in Uber’s fatal accident.
