Tesla’s Road Less Travelled Towards Autonomy

Tesloop
May 7, 2019

Two roads diverged in a yellow wood, and Tesla —

Tesla took the one less traveled by.

And that will make all the difference.

By Rahul Sonnad | CEO and Co-Founder of Tesloop & Carmiq


Learn more about the features and services available to Tesla owners through Tesloop’s Carmiq network.

There are currently two roads towards creating the self-driving car of the future. Once brought to the mass market, autonomous technologies that remove the need for a human driver will undoubtedly become the biggest disruption to the automotive industry since the mass production of the Model T over 100 years ago. Vehicle autonomy may even become the biggest and most rapid economic disruption in history, as measured in total dollars, euros, or renminbi. Thus the stakes for picking the shortest path could not be higher.

Tesla’s lengthy Autonomy Investor Day presentation last month, outlining their upcoming Tesla Robotaxi Network, brought the divide between these two paths front and center in the race towards autonomy.

The two approaches differ not only at a technical level; they are also intertwined with radically different business models. And beyond the normal technical and business considerations, they rest on fundamentally different philosophies of morality related to life, death, and technology.

On one side of this divide is a highly pedigreed group at a wide range of companies, including Google’s WAYMO, Uber, Cruise, Aurora, Aptiv (formerly Delphi), May Mobility, nuTonomy, and Voyage. On the other side, Tesla stands alone.

The “humans can’t be trusted” philosophy

Chris Urmson, with a storied career at pre-WAYMO GoogleX and currently CEO at Aurora, outlines this schism in the approach towards autonomous technology and go-to-market strategy. He describes Tesla’s approach as “let’s just keep making incrementally better driver assistance systems [in cars that are selling to consumers], and one day we’ll turn around and we’ll have a self-driving car.” The other approach, which he pushed at GoogleX, was to design a system with no reliance on human supervision. “These are two distinct technologies. One is driver assistance, and the market pressures and capabilities will push in one direction; the other is a fully self-driving vehicle, where we can afford and need to apply different technologies to get to a level of responsibility where you can turn your back and trust it. And this is one of the big open debates in the space.”

To his credit, while he is fundamentally in what he calls the Google camp, he agrees there is a debate.

That’s more open-minded than Emilio Frazzoli, CTO of nuTonomy, who dismisses Tesla’s approach as reckless and infeasible. He does so with a highly coherent thesis supporting the Google approach from both a technological and a business model perspective, based on two critical assumptions.

“The problem [with the ADAS approach] is you have to cross this red band, where you are actually requiring human supervision of your automation system. The other path that people are following [nuTonomy, WAYMO, Uber, etc.]: they are working on cars that will be fully automated from the beginning, and they start with a small geofenced application and then scale that up [to more areas].”

“When people ask me ‘when do you think we will see autonomous vehicles everywhere on the street?’, I ask them ‘what do you mean exactly?’, because if you ask me when you will be able to walk into a car dealership and get out with the keys to a car where you just push a button and it takes you home, that’s not happening for another 20 years at least.” [Ironically, he may be right on the dealership point.]

“On the other hand, if you ask me when you will be able to go to some new city and summon one of these vehicles that picks you up and takes you to your destination: this will be happening within a couple of years. There is a big difference between autonomous vehicles as a consumer product vs. a service you provide to passengers. For example, what is the scope… where must this car be able to drive? If it’s a product and I pay $10k, then I want this thing to work everywhere…. But if I’m a service provider, I can offer a service in a particular location, possibly only in certain weather conditions. Thus the problem becomes much easier.

“If I have to sell you a car, the cost of the autonomy package must be comparable to the cost of the vehicle. You won’t buy a $20k car with a $500k autonomy package. Another back-of-the-envelope calculation: what is the value of the autonomy to the buyer? Let’s say it’s the value of all the time that you would save driving. If you do the math for an average person, the net present value is about $20k. So a rational buyer will not pay more than that for the autonomy package, and now you’re constrained by $20k. On the other hand, if you’re offering a service, you’re comparing this to the cost of providing the same service using a human behind the wheel. For a 24/7 service, this would be three drivers’ worth, or roughly $100k a year. So now the cost of the LiDAR or fancy computer doesn’t matter that much.”
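Frazzoli’s arithmetic is easy to sanity-check. Below is a minimal sketch in Python; the hours saved, hourly value of time, discount rate, and driver salary are illustrative assumptions chosen to land near his round numbers, not figures from his talk.

```python
# Sanity check of Frazzoli's back-of-the-envelope economics.
# All input figures are illustrative assumptions, not numbers from his talk.

def npv(annual_value: float, years: int, rate: float) -> float:
    """Net present value of a constant annual cash flow."""
    return sum(annual_value / (1 + rate) ** t for t in range(1, years + 1))

# Consumer side: value of the driving time the buyer would save.
hours_saved_per_year = 250      # assumed: ~1 hour/day, working days only
value_per_hour = 10.0           # assumed: conservative value of in-car time ($/hr)
ownership_years = 10
discount_rate = 0.05

consumer_value = npv(hours_saved_per_year * value_per_hour,
                     ownership_years, discount_rate)
print(f"NPV of time saved: ~${consumer_value:,.0f}")   # ~$19k, near his $20k cap

# Service side: cost of the human drivers a 24/7 robotaxi replaces.
drivers_per_vehicle = 3            # three shifts cover a 24/7 service
annual_cost_per_driver = 33_000    # assumed fully loaded cost ($/year)
displaced = drivers_per_vehicle * annual_cost_per_driver
print(f"Driver cost displaced: ~${displaced:,.0f}/year")   # ~$100k/year
```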

The Technical and Psychological Assumptions

This all makes sense if you buy into his two key assumptions:

  1. You can’t technically achieve autonomy with just cameras and radar in the next decade.
  2. Letting drivers oversee a partially autonomous system is a recipe for disaster and deaths (due to the psychology of human complacency), at a level that is both morally and financially unacceptable to any reasonable executive. In other words, crossing the “red band” is not a viable route towards autonomy, because you will very likely kill one or more people with your software, and that is not acceptable.

If you break this down, you’ll notice the first assumption is a technological one, based on the rate of software progress. The second assumption’s basis is more complex: it is effectively a human psychology argument about human-machine interaction, for the scenario where control of the vehicle is shared between man and machine.

How lame is LiDAR?

As for assumption one, that you can’t achieve autonomy with just cameras and radar: the argument underlying this is that until things radically advance many years from now, you need LiDAR, which measures distance with lasers. In their Autonomy Day presentation, Tesla comprehensively laid out their conflicting thesis, which relies on the assumption that exponential machine learning progress will radically accelerate functionality. Their technical argument is essentially that everything LiDAR can see, cameras can see too, and in fact at much higher resolution. As proof that this is theoretically possible, they offer human perception, which drives using only two cameras called eyes. Support for their argument can be found in recent research, which determined that depth perception with stereo cameras was nearly as precise as LiDAR.
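The principle behind camera-based depth is simple triangulation: the farther an object, the smaller the disparity between the two camera views. A minimal sketch, with illustrative camera parameters (not from any production vehicle):

```python
# Stereo triangulation: depth from the disparity between two camera views.
# Camera parameters below are illustrative, not from any production vehicle.

FOCAL_LENGTH_PX = 1400.0   # focal length, in pixels
BASELINE_M = 0.3           # separation between the two cameras, in meters

def depth_from_disparity(disparity_px: float) -> float:
    """Distance to a point that appears disparity_px apart in the two images."""
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# A feature shifted 10 pixels between the views is ~42 m away;
# precision degrades as disparity shrinks toward sub-pixel levels at range.
print(f"{depth_from_disparity(10.0):.1f} m")
```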

The Tesla thesis is that to achieve true autonomy, you must understand the world through vision. Without that, differentiating a tire from a bag on the road is not possible, and those items require fundamentally different behavior when encountered.

They acknowledge that understanding the world through vision is no easy task, and that they can’t yet do it well enough to release a self-driving product that works everywhere. However, they note that their cars are collecting an unprecedentedly massive amount of visual data, which is training new neural nets running on very fast chips designed for exactly this task.

As for LiDAR, Tesla’s comprehensive argument against it might be characterized as follows:

  1. It is excessively expensive (thousands of dollars) to get into millions of cars quickly.
  2. Within 3–6 months it will be handily outperformed on all relevant criteria by a set of $40 cameras and some AI software, combined with neural nets and radar, in detecting the distances of objects.
  3. It adds complexity to the system for no value.
  4. If you don’t agree, it’s very likely because you don’t really understand the shape of an exponential curve (or maybe the realities of neural net advancement when fed exponential amounts of data).
  5. If you still don’t agree: “sorry, you’re wrong, because we’re several months ahead of you and are already driving with new hardware and software in our cars. Our neural net will blow your mind. Exponentiality secured. LiDAR lame! (It’s the new hydrogen.)”

While this LiDAR technical divide is the one that gets the most press play, it is not nearly as important in shaping the competitive landscape in the race towards autonomy as the moral divide.

The Morality of Inaction vs. Action

Assuming Tesla clearly demonstrates that you don’t need LiDAR, the transition towards the vision approach will have a clear path. It will require lots of new AI development, but it’s unlikely that a company like WAYMO isn’t up to the task of quickly chasing Tesla’s prowess in creating vision-aware neural nets. However, in order to train these nets you need a massive amount of data. And three different types of data are required to train three different things (a schema sketch follows the list):

  1. Recordings from cameras and radar
  • used to train visual recognition and object behavior prediction

  2. Recordings of how humans control cars when driving normally
  • if you see ‘x’, move in ‘y’ manner

  3. Recordings of how humans override when the car is driving autonomously
  • this is beta testing, with your life on the line, in the “red band”
  • this enables rapid refinement of the autonomous software
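To make that taxonomy concrete, here is a minimal sketch of what one record of each type might look like. The field names and structures are hypothetical illustrations, not Tesla’s actual data formats:

```python
from dataclasses import dataclass

# Hypothetical record types for the three kinds of training data.
# All field names are illustrative; they do not reflect Tesla's actual formats.

@dataclass
class SensorFrame:
    """1. Raw camera/radar recordings: train perception and behavior prediction."""
    timestamp_ms: int
    camera_images: list   # e.g., several synchronized camera frames
    radar_returns: list   # range/velocity measurements

@dataclass
class HumanDrivingSample:
    """2. Normal human driving: 'if you see x, move in y manner'."""
    frame: SensorFrame
    steering_angle: float  # what the human did; used as the training label
    accelerator: float
    brake: float

@dataclass
class OverrideEvent:
    """3. A human override during autonomous driving: the 'red band' signal."""
    frame: SensorFrame
    planned_action: dict   # what the software was about to do
    human_action: dict     # the correction the human applied instead
```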

And it is in this third data collection activity that the fundamental moral argument comes into play, the one that has forced almost everyone into the Google camp. While other companies such as Mercedes and Mobileye headed down the initial ADAS road with Tesla, as the technological complexity of their systems increased and they approached Frazzoli’s “red band”, they all hit the brakes on bringing advances to the mass market. They did not want to be perceived as following an approach that had a high probability of killing one or more people. This is the same decision GoogleX made long ago, and everyone has essentially followed this doctrine.

Everyone except Tesla, who in 2015 crossed into the red band, progressing beyond adaptive cruise control of speed with their new automated steering feature. From there came features like prompted lane changing, exit ramp navigation, automatic highway lane changes with confirmation, and then automatic highway lane changes without confirmation.

This progress contradicted the second core assumption of the Google camp: most humans can’t help but get lulled into complacency when overseeing autonomous driving that works well most of the time, and therefore, when they really need to override, they will not, and will face death and injury. There is no question that some people react this way, as has been demonstrated. More than one person has been killed while using Tesla’s Autopilot feature, and there has also been a high-profile death caused by an Uber test vehicle. So the important questions are: what percentage of people will put life or limb in danger, and to what degree? Can roll-outs of new features be managed in a way that mitigates the risk? And how does that level of risk compare with the additional safety that an autopilot system will begin to provide? These are difficult questions to answer, but one recent MIT study on driver behavior with Tesla Autopilot supports Tesla’s view that the risk can be managed down to a very small number.

When looking at the morality of autonomous cars, it has become painfully cliché for people to call out the moral dilemma of the “trolley problem”, where a car must choose whom to kill or injure (i.e., the driver vs. a pedestrian). However, given the state of the technology, no software engineer has ever actually encountered this dilemma, and the real moral issue facing autonomous technology is almost never mentioned.

The fundamental moral question is whether you are willing to be technologically responsible for some number of deaths and injuries in order to stop people from killing themselves in cars in huge numbers. The thought exercise here would be to imagine a device that prevented all car accidents, but once every 500 million miles killed the driver. Since fatal accidents currently happen about every 90 million miles, this would be a big win in reducing deaths, and if installed in all cars, it would save over 25,000 people a year in the US. But would any car maker install it? Probably not. Yet this is essentially the trade-off that the governments of most countries have made by mandating seat belts: they radically lower mortality, but once in a while they kill someone. The big difference is that seat belts don’t actually cause the accidents in which people die.
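The arithmetic behind that thought exercise is worth making explicit; the quick sketch below assumes roughly 3.2 trillion vehicle miles traveled per year in the US:

```python
# Arithmetic behind the hypothetical accident-preventing device.
US_ANNUAL_VEHICLE_MILES = 3.2e12     # assumed: approximate annual US vehicle miles

MILES_PER_FATALITY_TODAY = 90e6      # roughly one fatal accident per 90M miles
MILES_PER_DEVICE_KILL = 500e6        # the device kills once per 500M miles

deaths_today = US_ANNUAL_VEHICLE_MILES / MILES_PER_FATALITY_TODAY     # ~35,600
deaths_with_device = US_ANNUAL_VEHICLE_MILES / MILES_PER_DEVICE_KILL  # ~6,400

print(f"Lives saved per year: ~{deaths_today - deaths_with_device:,.0f}")  # ~29,000
```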

Fundamentally, in order to cross the “red band”, you need to be willing to create a situation in which some people get injured or die in accidents that happen while the autonomy system is running. Clearly, this can and has happened with old-fashioned cruise control, and there has been some concern over the safety of that feature, which is now almost universally deployed by car makers. But as ADAS gets better, there is an argument that it becomes increasingly difficult for people to avoid complacency, and thus the danger increases. The countervailing argument is that, unlike old-fashioned cruise control, smart ADAS can actually prevent accidents in many situations, and this must also be considered.

Tesla’s stance is that they can manage complacency using two key techniques. The first is rolling out new features in layers: this started with adaptive cruise control, followed by single-lane steering, user-prompted lane changes, user-confirmed lane changes, and then non-user-confirmed lane changes, with other features such as exit ramp navigation and parking added along the way. The second technique is to offer each feature first to a set of “early access” users who agree to actively test the system, acknowledging that they must stay highly alert in overseeing and correcting the vehicle’s behavior, with the goal of making the system better through their interactions with the autonomous software. These interactions then train the networks, feature by feature, for the next wave of Tesla owners adventurous enough to use Autopilot. The training and tuning of the system by this first group of active real-world testers drastically reduces the risk to the general public.

As long as they roll it out in a manner that keeps the accident/injury rate with Autopilot on lower than with it off in their own fleet, by some significant multiple, Tesla considers themselves to be not only on solid moral footing, but on the most morally defensible path. First, because there are fewer injuries overall during the technology maturation phase; this is an important argument, but realistically it affects maybe dozens of people. More meaningfully, you pursue this path because it will eliminate the vast majority of injuries related to vehicles without autonomous safety systems years before any other path will. And the only pragmatic way to accelerate things here is to use real human training in real-world scenarios.

As another thought exercise in corporate morality, you may ask how significant the first-mover advantage is on this path. Today, with regard to injury rates, Tesla’s Autopilot is competing only with human drivers, and as such, the bar for how safe the system needs to be is low: just better than a human driver, which is quite good, but still results in roughly one million global deaths a year. But once one, two, or three companies cross the chasm and get significantly better than human drivers, it may become increasingly difficult, morally, to justify crossing the red band to chase the systems that are already on the other side. You would need to be willing to injure people not in the noble goal of creating safer technology for everyone, but rather for your system’s business advantage. If companies already avoid invoking software in the red band when there is no safer alternative, the idea of doing so when there is an alternative approach may be untenable. The alternative would be to license existing neural nets and their data-driven weights, allowing you to move past the red band from the start. And it’s quite likely that companies like WAYMO would find this scenario highly aligned with their business goals.

The Tesla Autonomy Business Model vs. Others

Assuming that, as they claim, Tesla proves out their model in the next year and surpasses the autonomous capabilities of all others by the year after, this puts them in an unprecedented position from a business model perspective. Interestingly, this is not a position that was orchestrated from the beginning, but rather one that was realized around 2014, and it fortuitously aligned with their core goal and strategies in transitioning the world to electric cars.

Thus Tesla’s business model has now become:

  1. Create compelling electric vehicles that offer the lowest cost per mile of operation, by a multiple
  2. Equip them all with upgradeable autonomous technologies and retain all data ownership
  3. Scale out production as fast as possible
  4. Enable vehicle buyers to effortlessly resell the use of their autonomous vehicles, via a robotaxi model, at 3x–10x the cost they paid
  5. Take a 30% share of robotaxi revenue
  6. Optional: stop selling cars to individuals until production catches up with robotaxi demand

Tesla’s original Model 3 economics were roughly $8k of profit on a $40k vehicle (a 20% margin), along with $8k of profit on the optional autonomous software upgrade (a potential 33% margin).

The new prospective business model adds roughly another $8k per year of profit for 10 years, after owner payouts and the operational costs of insurance, cleaning, and fuel, while also locking in the autonomous software sale and pushing its margin towards 75%.
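Rolled together, the per-car economics look roughly like the sketch below, using only the round numbers above; the discount rate applied to future robotaxi profit is an added assumption:

```python
# Rough per-car profit under the old and prospective Tesla models,
# using only the round numbers quoted above.
vehicle_profit = 8_000             # profit on a $40k Model 3 (20% margin)
software_profit = 8_000            # profit on the autonomy software upgrade
robotaxi_profit_per_year = 8_000   # net of owner payouts, insurance, cleaning, fuel
robotaxi_years = 10
discount_rate = 0.05               # assumed, to discount future robotaxi profit

robotaxi_npv = sum(robotaxi_profit_per_year / (1 + discount_rate) ** t
                   for t in range(1, robotaxi_years + 1))

old_model = vehicle_profit + software_profit
new_model = old_model + robotaxi_npv
print(f"Old model: ~${old_model:,.0f} per car")   # ~$16k
print(f"New model: ~${new_model:,.0f} per car")   # ~$78k
```

Even after discounting, the recurring robotaxi stream dwarfs the one-time sale profit, which is what makes point 6 in the list above plausible.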

Such a business model gives rise to the fundamental question of whether it makes sense for Tesla to sell cars at all, versus providing personal and industrial robotaxis directly, which arguably could push the margins on a car even higher.

However, for the other companies in the automotive and autonomy space, the question is much more complicated. Any company or group of companies that wants to compete favorably against Tesla in the core market of personal and industrial robotaxis will have a huge challenge in front of them: replicating the required elements of Tesla’s technical platform.

  1. electric cars that consumers love, which can be built profitably at scale, with low operational costs per mile and very low maintenance requirements
  2. low-cost battery production at massive scale
  3. a modern, upgradable, flexible, and extensible software architecture that spans the vehicle, the chargers, and the cloud
  4. a security architecture that can outmatch the best hackers
  5. sensors integrated into electric vehicles that can be deployed at scale in the market
  6. specialized ASIC chips comparable to Tesla’s new FSD computer
  7. neural nets and path-planning software that can be trained by car drivers, and that work in a generalized manner across the world
  8. a cohesively managed autonomous charging network

And the key factor here is that unless you have all of this, you are not in the running. Every component of the equation is critical to the network, and you need all of it at some minimum scale of a few hundred thousand units, or it’s not competitive. Right now, you could argue that no other company has a single one of these elements.

But if and when Tesla demonstrates their path towards autonomy and high-quality robotaxis, potentially as soon as next year, it will force one of the biggest industries in the world into a new paradigm for automotive mobility, built on the optimized foundations that Tesla has pioneered.

Two roads diverged in a yellow wood, as predicted by a neural net.

[Table: a holistic comparison of the two leading roads towards autonomy]


Tesloop

Enhancing The Connected Vehicle Experience Through Carmiq. Learn More: http://bit.ly/2THSfiR