The Moore’s Law for Self-Driving Vehicles
As the CEO of a self-driving car company, I’m constantly asked how long it will be until robo-taxis can take people pretty much anywhere, pretty much any time. We hear wildly different estimates from marketers (“Company X will solve robo-taxis in 2019!”) and from engineers (“ugh, it’s hard”), so who do we listen to?
For this post, let’s measure the performance of a system in terms of the number of miles per disengagement. A disengagement, roughly speaking, is when the technology fails and a safety driver must take over. A great self-driving vehicle will have a big number — that means that the vehicle can drive a lot of miles and only infrequently fail.
If the title didn’t give away the game, I’m going to draw comparisons to Moore’s law. Moore’s law is the empirical observation that the number of transistors on a chip doubles roughly every 18 months. That’s an exponential rate of growth — Moore’s law is what allows your phone to run circles around your computer from 2000.
Exponential growth is rare. Trees and people, for example, grow linearly, which is much slower. Most things that do grow exponentially cannot sustain it; for example, bacteria reproduce exponentially until they become too crowded. In fact, Moore’s law doesn’t even seem to apply to computers anymore!
That said, it’s not unusual for technology in its early days to grow exponentially fast. It’s an optimistic assumption, but if you want to make big futuristic claims about how rapidly the world will change due to technology, you should assume exponential growth.
So today, we’re going to make big, bold, optimistic claims about the future of self-driving cars. We’re going to assume that the technology improves exponentially fast. In other words, we’re going to compute the Moore’s Law for self-driving cars. But you may not like the answer.
In 2004, the best self-driving vehicle was CMU’s Sandstorm, which “won” the first DARPA Grand Challenge by driving about 7.4 miles of a 150-mile course before getting stuck on an embankment and spewing smoke from futilely spinning tires. (I don’t say this to be cruel; everyone else did worse!) Let’s round that up and call that a failure rate of about 10 miles per failure.
In 2018, Waymo reported 11,017 miles per disengagement. (The term disengagement is defined by California, but roughly means “the technology failed”). That’s about 10⁴ miles per failure.
With those two data points, we can compute the Moore’s Law for Self-Driving cars. Drum roll…
… The number of miles between disengagements will double approximately every 16 months…
In a cosmic coincidence, the Moore’s law for self-driving cars has nearly the same doubling period as the Moore’s law for computers!
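The doubling time falls out of the two data points above with a little arithmetic. A minimal sketch, using the rounded figures from the text:

```python
import math

# Rounded figures from the text
miles_2004 = 10        # DARPA Grand Challenge era: ~7.4 miles, rounded up
miles_2018 = 11_017    # Waymo's reported miles per disengagement
months_elapsed = (2018 - 2004) * 12

doublings = math.log2(miles_2018 / miles_2004)    # ~10.1 doublings in 14 years
months_per_doubling = months_elapsed / doublings
print(f"{months_per_doubling:.1f} months per doubling")  # → 16.6 months per doubling
```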
The critical question is: “how good does the system need to be?” Let’s assume that the goal is to match human performance. Humans are actually tremendously good drivers: only one fatality per 100 million (10⁸) miles! (To put this in context: an average human driver might drive a few hundred thousand miles in their lifetime. The total distance driven by every autonomous car — ever — is likely less than 20 million miles.)
Between human performance (10⁸ miles per fatality) and the best-reported self-driving car performance (10⁴ miles per disengagement) is a gap of 10,000x. Put another way, self-driving cars are 0.01% as good as humans.
Even with performance doubling every 16 months, it will take 16 years to reach human levels of performance — that’s 2035. This makes the claims that AVs are coming in 2019 or 2020 sound pretty dubious. (We will, of course, see high-profile demonstrations from self-driving companies intended to showcase their technology. This doesn’t mean that their system performs as well as a human!)
Many self-driving failures will merely result in injuries, not fatalities. Humans “only” drive about 10⁷ miles between injury-causing accidents, so if we assume that an AV failure never results in a fatality (and only an injury), we shave off four years from our previous prediction. But it’ll still take about 12 years to achieve human levels of performance.
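Both horizons above are instances of the same compounding arithmetic. Here's a minimal sketch; note that the straight computation lands a year or two past the rounded figures quoted in the text, but gives the same order of magnitude:

```python
import math

months_per_doubling = 16
today = 1e4  # miles per disengagement (Waymo, 2018)

def years_to_reach(target_miles, current=today, doubling_months=months_per_doubling):
    """Years of exponential improvement needed to close the gap."""
    return math.log2(target_miles / current) * doubling_months / 12

print(f"to fatality parity (1e8 mi): ~{years_to_reach(1e8):.0f} years")
print(f"to injury parity  (1e7 mi): ~{years_to_reach(1e7):.0f} years")
```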
The path for self-driving car companies
My company, May Mobility, builds self-driving vehicles. But we are on public streets today with a great product. Why are we able to do this faster than 2035? There are two major reasons:
- We’re not playing the same game as everyone else. We’re not trying to match human performance everywhere — we’re focused on a small subset of the overall driving task: low-speed driving (< 25 mph) on well-known routes. Slower, simpler routes mean that the overall complexity is much lower.
- We have a fundamentally different technology that gives our vehicles a unique ability to understand what other cars and pedestrians are doing. We believe this technology will put us on a different (and much steeper) curve. Our technology, Multi-Policy Decision Making, can have a transformational effect on the industry.
There is precedent for such a transformational change. The cost of sequencing a whole human genome in 2001 was about $100 million. (Or, as plotted below, you could sequence about 10⁻⁸ of a genome per dollar.) From 2001 until 2008, the technology was consistently improving at a Moore’s-law-type rate — performance was doubling about every 20 months.
But then in about 2008, a completely new technology was applied to gene sequencing. This changed the Moore’s law “coefficient”; progress suddenly jumped to follow another line. Instead of performance doubling about every 20 months, performance started doubling every 4 months. This new technological approach cut decades off from the goal of a $1000 whole-human genome.
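The “decades off” claim can be sanity-checked with the same doubling arithmetic. A sketch using the approximate figures above (a $100M starting cost, a $1000 goal, and the two doubling rates):

```python
import math

# Approximate figures from the text: cost must fall from $100M to $1000,
# i.e. the cost must halve log2(1e5) ≈ 16.6 times.
doublings_needed = math.log2(100e6 / 1000)

years_old = doublings_needed * 20 / 12   # at 20-month doublings
years_new = doublings_needed * 4 / 12    # at 4-month doublings
print(f"old curve: ~{years_old:.0f} yr, new curve: ~{years_new:.0f} yr")
# → old curve: ~28 yr, new curve: ~6 yr — i.e., decades saved
```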
We believe the same thing can happen in autonomous driving: a new technology could make self-driving cars a reality much faster. In fact, we think that technology is Multi-Policy Decision Making, one of the key technologies developed by May Mobility that allows vehicles to understand what other road users are trying to do. But that’s an article by itself…
If you get nothing else out of this article, remember this:
- Self-driving cars are roughly doubling their performance every 16 months. That’s the Moore’s Law for Self-Driving Cars.
- Because self-driving cars are only about 0.01% as good as human drivers today, robo-taxis are likely to be a fantasy until 2035.
- There are two loopholes to this grim forecast. A new technology could come along that changes the growth curve. Or, companies could decide to go after applications that are less difficult than “drive anywhere, any time”.
This might be bad news for robo-taxi companies, but it’s good news for shuttle companies.
For the detail-minded, here are three additional points to consider:
The “Moore’s law” for self-driving cars clearly depends on the data that we use. If you think self-driving car companies might have overstated their performance, it pushes the arrival of robo-taxis even farther into the future.
On the other hand, if you believe that today’s best self-driving cars drive ten times better than Waymo’s public numbers (i.e., 110,000 miles per disengagement), it would imply a much faster rate of improvement than computed above. But even if today’s systems were driving this well today, it would still take until 2028 to reach human levels of performance.
A key assumption made in the article above is that the technology will improve at an exponential rate. That assumption leads to the idea that the disengagement rate will double every 16 months. This figure is optimistic: we cherry-picked a very low autonomy rate in 2004 and cherry-picked the best reported commercial data in 2018, which will tend to paint a picture of rapid improvement.
We can check how optimistic this assumption is by looking at public filings in California. Are AV companies improving at an exponential rate? And if so, is the rate faster or slower than 16 months?
Waymo’s data is provided only at the yearly level, and I was only able to find four years’ worth of data. Note that this plot is not a log plot like those used above; if the trends were exponential, we would see a sharp curve bending upwards. Waymo’s reported performance in 2018 was twice that of 2017, but 2017 was essentially flat versus 2016 (and 2016 was a good year relative to 2015!). What does this mean? Well, we can fit an exponential to the data, and we get doublings every 16 months. (Note: there are a bunch of ways of doing this fitting; I let 2015 be year zero, then did a least-squares fit of the form A*exp(Bt).) However, the fit’s pretty lousy — see the second appendix if you want to see it. For argument’s sake, a linear fit (which would put the arrival of robo-taxis about 20,000 years from now) looks just as plausible from these four data points. But isn’t it neat that 16 months popped out again?
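For those who want to play with the fit, here is one variant. The miles-per-disengagement figures below are approximations from Waymo’s California filings (treat them as illustrative), and this sketch fits a straight line in log space — not identical to a direct least-squares fit of A*exp(Bt), which weights the later, larger values more heavily, so the doubling time it produces will differ somewhat from the 16-month figure in the text:

```python
import numpy as np

# Approximate miles per disengagement from Waymo's CA filings (illustrative)
years = np.array([0, 1, 2, 3])                 # 2015 = year zero
mpd = np.array([1_244, 5_128, 5_596, 11_017])

# Straight-line fit in log space: log2(mpd) ≈ B*t + log2(A)
B, log2_A = np.polyfit(years, np.log2(mpd), 1)  # B = doublings per year
doubling_months = 12 / B
print(f"doubling every ~{doubling_months:.0f} months")
```

As in the article, the exponential story is only as good as the data: swap in a different set of yearly figures and the fitted doubling time moves around considerably.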
Cruise’s month-by-month data is headed upwards too, but it’s pretty noisy. Using the same fitting strategy as before, we get doublings of performance every 18 months.
Despite the challenges with this real-world data, it’s interesting that we end up with results close to 16 months. It lends credibility to the idea that no one is on a wildly different trajectory, and that, as a consequence, robo-taxis probably won’t arrive until 2035. Well, except for point three.
The definition of disengagement used by California excludes many types of interventions, and so is itself an optimistic measure of the maturity of the technology. In other words, these disengagement figures roughly correspond to failures of the system where the company expected the technology to work, and do not include situations where they did not expect the technology to work.
Well, you say, of course the system would perform badly in situations that it was not expected to work! But if the question is “how close are we to viable robo-taxis that can operate (almost) anywhere?”, then the fact that there’s a whole class of driving scenarios which are not counted at all in these statistics should give you pause.
Waymo’s 2016 disclosure summed it up pretty well:
The DMV rule defines disengagements as deactivations of the autonomous mode in two situations: (1) “when a failure of the autonomous technology is detected”, or (2) “when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.” In adopting this definition, the DMV noted: “This clarification is necessary to ensure that manufacturers are not reporting each common or routine disengagement.”
As part of testing, our cars switch in and out of autonomous mode many times a day. These disengagements number in the many thousands on an annual basis though the vast majority are considered routine and not related to safety. Safety is our highest priority and Waymo test drivers are trained to take manual control in a multitude of situations, not only when safe operation “requires” that they do so. Our drivers err on the side of caution and take manual control if they have any doubt about the safety of continuing in autonomous mode (for example, due to the behavior of the SDC [Self-Driving Car] or any other vehicle, pedestrian, or cyclist nearby), or in situations where other concerns may warrant manual control, such as improving ride comfort or smoothing traffic flow. Similarly, the SDC’s computer hands over control to the driver in many situations that do not involve a “failure of the autonomous technology” and do not require an immediate takeover of control by the driver…
Here are the plots that show the curve fits to the Waymo and Cruise data. Most of these model fits are fairly sketchy, but I think you can credibly look at them and conclude that it’s hard to justify a significantly faster rate of improvement than what was calculated here.