Let’s Talk About Self-Driving Cars

Overview of a presentation by Andreessen Horowitz on the future of autonomous vehicles

Artur Kiulian
Jan 10, 2017 · 11 min read

I love everything about self-driving cars, so much so that I even enrolled in the Self-Driving Car Engineer program at Udacity. That’s why this particular video from the a16z Summit really caught my attention. Frank Chen (a16z partner) goes over the most commonly asked questions about autonomous cars, and I’ve decided to dive deeper into each of them here on Medium.

Level by level or straight to level five?

The major assumption is that “Everything that moves will go autonomous.” And we are not only talking about cars: the trucks on our roads, drones in the sky, shopping carts, and even toys will move by themselves, to the point that our involvement becomes redundant, undesired, or even illegal.

So how will we get there?

There is a six-level categorization scheme from the Society of Automotive Engineers (SAE) that describes automated driving in six categories.

Level Zero: Driver Only

This one is simple: you drive entirely by yourself.

Level One: Assisted

Most of the cars we drive today belong here. These are the ones with anti-lock brakes and cruise control, so they can take over some non-vital processes involved in driving.

Level Two: Partial Automation

This level applies when the system can take over control in some specific use cases but the driver still has to monitor it all the time. Think of the car driving itself on the highway while you just sit there and expect it to behave well.

Level Three: Conditional Automation

At this level the driver doesn’t have to monitor the system all the time but has to stay in a position where a human operator can quickly resume control. That means no need to keep your hands on the steering wheel, but you have to jump in the moment the system signals an emergency situation, which it can recognize on its own.

Level Four: High Automation

When your car can drive you to the parking lot on its own, you have reached level four: no human operator is needed for a specific use case or part of the journey.

Level Five: Full Automation

The holy grail lies here: the system copes with every situation automatically during the whole journey, and no driver is required, to the point of the car having no controls at all, which means you have no choice but to ride.
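The six levels above boil down to two questions: must the human monitor the system, and must a human be ready to take over? As a rough sketch (the enum names and helper functions are my own shorthand, not official SAE identifiers):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, 0 (driver only) to 5 (full)."""
    DRIVER_ONLY = 0
    ASSISTED = 1
    PARTIAL = 2
    CONDITIONAL = 3
    HIGH = 4
    FULL = 5

def driver_must_monitor(level: SAELevel) -> bool:
    # Through level 2 the human must supervise at all times;
    # from level 3 up the system monitors the environment itself.
    return level <= SAELevel.PARTIAL

def human_fallback_required(level: SAELevel) -> bool:
    # Through level 3 a human must be ready to take over on request;
    # at levels 4 and 5 the system handles its own fallback.
    return level <= SAELevel.CONDITIONAL
```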

What kind of sensors? LIDAR or not?

The next question is which hardware will allow us to get to level five: will it be entirely new sensors or incremental improvements to the current ones?

Companies like Google are relying on LIDAR technology, which stands for Light Detection and Ranging and is a remote sensing method that uses light in the form of a pulsed laser to measure ranges (variable distances) to the surrounding environment.

That 3D laser map, combined with cameras and smart software, gives a car enough information to drive itself on a road without a human driver.

The current implementation is a spinning bucket on the rooftop that costs $75,000, a major obstacle to applying the technology across the industry at scale. But there are advances toward making it solid state, without moving parts, which would bring the cost down to around $250.
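The ranging part of LIDAR is simple physics: time how long a laser pulse takes to bounce back, then halve the round trip. A minimal sketch of that calculation:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_range(round_trip_seconds: float) -> float:
    """Range to a target from the round-trip time of a laser pulse.

    The pulse travels out and back, so the one-way distance is
    half the total path: d = c * t / 2.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return after roughly 667 nanoseconds puts the obstacle about 100 m away.
```

A real unit fires hundreds of thousands of such pulses per second while rotating, which is what builds the 3D point cloud.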

So what about stereo cameras that allow us to compute 3D space, the same kind of depth perception human eyes give us so we don’t bump into each other? These cameras deliver almost all the benefits of LIDAR and don’t cost nearly as much, but the counterargument is LIDAR’s extra resolution and accuracy.
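Stereo depth rests on one formula: Z = f · B / d, where f is the focal length, B the baseline between the two cameras, and d the disparity (how far the same point shifts between the two images). A small illustrative sketch, with parameter values invented for the example:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair: Z = f * B / d.

    focal_px      focal length expressed in pixels
    baseline_m    distance between the two cameras, in meters
    disparity_px  horizontal pixel shift of the point between images
    """
    if disparity_px <= 0:
        raise ValueError("zero disparity means the point is at infinity")
    return focal_px * baseline_m / disparity_px

# With a 700 px focal length and a 12 cm baseline, a 4 px disparity
# puts the point about 21 m away. Far objects produce tiny disparities,
# so small pixel errors become large depth errors -- which is exactly
# the resolution-and-accuracy argument in LIDAR's favor.
```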

New types of precomputed maps?

Everybody uses Google Maps, Waze, or Apple Maps, and they offer an amazing level of resolution that makes our movement across the city frictionless. But this resolution is still not enough for a car that wants to drive itself.

Then what information is missing? Things like:

  • where the curbs are
  • where the traffic barrels are
  • at what time of day a glare hits the camera and blinds it completely

All these kinds of micro-resolution details are completely missing from the generalized view of a map, because we as humans don’t care about them at all: current map solutions are made for people, not machines.

So do we need separate precomputed maps just for autonomous vehicles? It seems like yes, but who is going to provide them, and how much will that cost? And the most interesting question, which no one is asking: is there an opportunity for a monopoly in this space? Because in the near future, when we get to level five and you are left with no option of operating the vehicle yourself, you are completely dependent on the infrastructure that powers the self-driving car.

Will we see cheaper versions of precomputed maps that allow our cars to operate below a certain speed or with a certain level of safety?

Who will regulate this potentially dangerous gray area?

It also has power implications: you have to rely on a supercomputer in your trunk to work with those complex, multimillion-parameter precomputed HD maps, and boy, will those drain a lot of power.

What blend of software techniques?

Deep Learning is making noise across Silicon Valley, but there are other accomplishments in the fields of robotics and path finding that should not be forgotten. The major difference between these approaches is whether a system learns from previous experience and datasets or makes decisions based solely on hardwired logic and rules.

As a matter of fact, the Boston Dynamics robots that we all admire don’t use any machine learning at all, while still delivering impressive results.

Even though such directly programmed rules can’t beat Go players the way Deep Learning algorithms like AlphaGo can, they can still be combined with recent advances in machine learning to provide better results.
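One common way to blend the two is to let a learned model propose an action and then wrap that proposal in hand-written safety rules. A toy sketch, where the model’s output is just a number we clamp (this is an illustration of the idea, not any real planner):

```python
def safe_speed(model_speed_kmh: float,
               obstacle_distance_m: float,
               speed_limit_kmh: float) -> float:
    """Clamp a learned model's speed proposal with hardwired rules."""
    # Rule 1: never exceed the posted speed limit,
    # no matter what the model suggests.
    speed = min(model_speed_kmh, speed_limit_kmh)
    # Rule 2: full stop when an obstacle is dangerously close,
    # overriding the learned component entirely.
    if obstacle_distance_m < 5.0:
        return 0.0
    return speed
```

The learned part supplies nuance; the rules supply a hard, auditable safety floor.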

How much real world vs virtual world testing?

With machine learning you NEED giant datasets to learn from: datasets of previous experience, examples of correct behavior, and environments.

The question here is how much of the data will come from real cars driving real roads versus simulated environments.

And actually, there are already a couple of attempts to simplify this type of learning through simulated universes, like OpenAI’s Universe project.

Can we make sure the algorithms will converge to the same predictions as in the real world? With simulated worlds you can cover many more situations, but how accurate will they be, considering the resolution and granularity of the precomputed maps we have to take into account?

Will V2X radios play an important role?

V2X (“vehicle to everything”) is a form of technology that allows vehicles to communicate with the moving parts of the traffic system around them. V2V, or vehicle-to-vehicle, allows vehicles to communicate specifically with other vehicles.

The use cases range from your car talking to a traffic light when there are no other cars around and you have been sitting there for five minutes, to more dangerous situations where cars have to urgently communicate a state of emergency, such as an imminent T-bone crash.

This is especially interesting considering Tesla’s recent advances and a viral video of a car spotting a crash well before a human could.

Imagine how many accidents could be avoided if proper communication existed on the road and around it.

The major concern here is protocol compatibility and communication efficiency: the decision has to happen in milliseconds, and there is no time for extra computation caused by differences in the communication “language”.

This is a technology everyone sees as part of a wonderful future, but no one plans for it in the first iteration of fully autonomous vehicles.
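The shared “language” concern is essentially a schema problem: every vehicle must encode and decode the same compact message instantly. A hypothetical sketch of such a message (these fields are illustrative; real standards such as SAE J2735 define far richer formats):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class V2XMessage:
    """A minimal, made-up safety message: a compact, agreed-upon
    schema that every vehicle on the road can parse without
    translation overhead."""
    sender_id: str
    latitude: float
    longitude: float
    speed_kmh: float
    event: str  # e.g. "emergency_brake", "crash_ahead"

    def encode(self) -> bytes:
        """Serialize for broadcast over the radio link."""
        return json.dumps(asdict(self)).encode()

    @staticmethod
    def decode(raw: bytes) -> "V2XMessage":
        """Parse a received broadcast back into a message object."""
        return V2XMessage(**json.loads(raw))
```

Production formats use dense binary encodings rather than JSON precisely because every spare millisecond matters.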

Can we get rid of traffic lights?

Four-way lights suck, and if cars are fully autonomous and can talk to each other, why can’t we get rid of lights? This may sound very chaotic at first, but hey, aren’t internet packets moving the same way?

This would involve smart rearranging algorithms and extreme monitoring at intersections, but it is totally worth it, considering the improvements it would bring to the efficiency of transportation.
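One way researchers model a light-free intersection is as a reservation system: each approaching car requests a crossing slot and a manager grants non-conflicting ones. A toy first-come-first-served sketch (real proposals also account for conflicting paths, speeds, and priorities):

```python
class IntersectionManager:
    """Grants each car an exclusive crossing slot, first come,
    first served -- a toy stand-in for reservation-based
    intersection control."""

    def __init__(self, slot_seconds: float = 2.0):
        self.slot_seconds = slot_seconds   # time one crossing occupies
        self.next_free_time = 0.0          # when the intersection frees up

    def request_slot(self, arrival_time: float) -> float:
        """Return the time at which the requesting car may cross."""
        # Cross immediately if the intersection is free on arrival,
        # otherwise wait for the earliest free slot.
        granted = max(arrival_time, self.next_free_time)
        self.next_free_time = granted + self.slot_seconds
        return granted
```

No car ever stops at a red light; at worst it slows to meet its granted slot, which is where the efficiency gain comes from.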

How will automakers “localize” their cars?

Every city has a different driving culture, so how will autonomous cars handle these unique conditions? What’s safe in Bangalore may cause a traffic collapse in Boston.

Localization is a term that comes from computer science and means that software is somehow prepared for the conditions of the environment where it is executed.

So what form can this take in self-driving cars? Will we have different algorithms dedicated to different cities, a Boston version, a Bangalore version, and so on for each city’s driving conventions? Or will we have a generalized algorithm that can adapt to any environment by spending some time on the road?

This relates to a whole other area of research that involves creating learning rules by observing how the people and objects around us behave. By learning social conventions and typical human behavior, cars should be able to perform better.

Who will win? Silicon Valley vs China vs Incumbents

The major assumption here is that the incumbents have the easiest path to success, since they are already making cars and have all the needed infrastructure.

These companies are establishing offices in Silicon Valley and aggressively hiring people to drive innovation faster, since they understand this is going to happen primarily through software. But there is also an opportunity for a native Silicon Valley car company, like Tesla.

And there are plenty of Chinese manufacturers and software companies like Baidu pushing very aggressively into this space. On a side note, China publishes more papers on deep learning than any other country on the planet, so it is definitely worth keeping an eye on.

Will we buy cars or transportation as a service?

If we as consumers change our behavior from buying cars from manufacturers to buying transportation as a service from companies like Uber and Lyft, the whole load of money will start flowing in another direction.

This will cause the car industry to look more like the airline industry, where you don’t necessarily pick which plane you would like to fly on but give your money to a fleet provider, the airline company.

This will cause car companies to transform into B2B companies rather than B2C, and they will start selling to fleet managers like Uber. As a result, we won’t see Super Bowl ads for cars, and many other things that exist to manipulate us into buying cars will go extinct.

How Will Insurance Change?

And one such change will hit the insurance industry. Right now insurance prices are calculated as a function of you as a driver, your demographics, the cost of the car you own, and where you live.

What ingredients will insurance companies consider in this new self-driving era? Will the efficiency of an algorithm be the core metric? The fleet manager? The car manufacturer? Or still the driver who owns or rents the car?

How will repair costs behave? We will definitely have fewer accidents at scale, but how complicated will the actual repairs be for your laser systems, precomputed-map supercomputers, and all the other expensive hardware?
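To make the shift concrete, here is a purely hypothetical premium formula where the risk attaches to the algorithm’s track record and the sensor repair bill instead of the driver. The weights are invented for illustration, not actuarial data:

```python
def premium_estimate(base_rate: float,
                     disengagements_per_1k_km: float,
                     hardware_repair_cost: float) -> float:
    """A made-up autonomous-era premium: risk shifts from the driver's
    record to the algorithm's safety record (disengagements, i.e. how
    often a human had to take over) and to the cost of repairing the
    car's sensors after a crash. Illustrative weights only."""
    risk_factor = 1.0 + 0.1 * disengagements_per_1k_km
    repair_factor = 1.0 + hardware_repair_cost / 100_000.0
    return base_rate * risk_factor * repair_factor
```

The point of the sketch: a flawless algorithm driving a car full of cheap sensors pays the base rate, while a flaky one hauling $75,000 of LIDAR pays a steep multiple.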

How Will Accident Rates Change?

If all cars become autonomous, accident rates should approach zero, since most causes of accidents (24 of 25, or 96 percent) are related to human error. Speeding, distracted driving, drunk driving, and running red lights are all consequences of us sitting behind the wheel.

But what about the mixed state of things, where some cars are still driven by humans and cause unexpected accidents that we have never experienced before?

When will it become illegal to drive?

If it’s true that algorithms are statistically better than human drivers, then we shouldn’t let people drive. But there are still a lot of people who love to drive. Driving can remain a fun recreational activity; we just don’t need those people on public roads.

How will commutes change?

One argument is that commutes will take longer because we will become indifferent to how long they take. As we approach the state where all cars are fully autonomous, there are no traffic lights, and there are no accidents, we should be able to do anything we want during the commute.

Heck, we could even sleep during the commute, like we do on trains or planes. Commuting will definitely transform from something painful into a much nicer activity.

But this will also free up so much space (parking spots, auto repair shops) that people will live much closer to work and won’t have to commute as much as they do today.

How will cities change?

There are so many second- and third-order effects that will happen and that are impossible to predict right now, just as it was impossible to predict the phenomenon of Walmart.

These things will definitely transform our society and create unseen opportunities to capitalize on.

When will this start, and how quickly will we switch to autonomous cars?

There are a number of predictions by major players stating this will happen somewhere between 2020 and 2040. We will see this beautiful world within our lifetime; the question is whether we are prepared enough.

Disclaimer: most of the content in this article comes from this video; I’m writing it out along with my thoughts and extra information on the topic. Feel free to sign up for my newsletter below to get curated content each week.
