Self-Driving Cars: An Era of Massive Change

Shoieb Yunus
Mar 4, 2018
An autonomous vehicle spotted on Lawrence Expressway, in Sunnyvale, California.

Self-Driving Cars will transport passengers and goods with some level of autonomy by assisting or replacing human control. These vehicles will be equipped with a wide range of sensors, massive compute power, and deep learning algorithms.

On January 24th, 2018, I moderated a panel discussion on “Driving Innovation: Self-Driving and Connected Cars”. OPEN Silicon Valley hosted the event at the Santa Clara Convention Center in Santa Clara, California. The panelists included Mohammad Musa, Founder and CEO, Deepen AI; Elliot Katz, Co-founder, Phantom Auto; Ali Ahmed, Founder and CEO, Robomart; Ben Landen, Director of Business Development, DeepScale; and Faisal Mushtaq, VP Technology, GlobalLogic.

L to R: Mohammad Musa, Ben Landen, Faisal Mushtaq, Elliot Katz and Ali Ahmed

The idea was to have a conversation to understand the impact of Self-Driving Cars and why they matter. According to Frost & Sullivan, there will be a paradigm shift from ‘Best Driving Cars’ to ‘Best Driven Cars’. According to McKinsey & Company, technology-driven trends will revolutionize how industry players respond to changing consumer behavior, develop partnerships, and drive transformational change.

SAE International (formerly the Society of Automotive Engineers) has defined six Levels of Driving Automation (0–5) for vehicles:

- Level 0: No automation.
- Level 1: Automation of one primary function (adaptive cruise control, self-parking, lane assist, or autonomous braking).
- Level 2: Automation of two or more primary functions designed to work in unison to relieve the driver of control.
- Level 3: Limited self-driving.
- Level 4: Full self-driving without human controls within a well-defined operational domain, even if the driver does not respond to a request to intervene.
- Level 5: Full self-driving without human controls in all driving conditions.

Let’s start the conversation.

Shoieb: In an autonomous vehicle, what are the different sensors and what functions do they serve?

Mohammad: Each level of autonomy has certain requirements. For example, for Levels 1 through 3, you need cameras and radar. At Level 4, the vehicle is completely autonomous within its operational domain; the driver is not expected to take over at all. At Level 5, there is no driver intervention and no geo-fencing; the car must drive anywhere, in any condition, at all times. Radar gives you the velocity and direction of an object you anticipate, and it gives you an object list (a car, a metal object, etc.), but it does not detect humans. The camera recognizes humans and other objects, but does not measure depth. LiDAR measures depth, i.e., x, y, z coordinates. Beyond Level 3, the vehicle needs cameras, radar, LiDAR, and other sensing technologies. The Nissan self-driving car has six LiDARs, eight radars, twelve sonars, and eight cameras.
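To make the division of labor concrete, here is a tiny lookup table that restates the sensor roles Mohammad describes. The capability flags simply paraphrase his remarks; this is an illustrative sketch, not a formal specification.

```python
# Sensor roles as described in the panel discussion (illustrative only).
SENSOR_CAPABILITIES = {
    "radar":  {"velocity_and_direction": True,  "recognizes_humans": False, "depth": False},
    "camera": {"velocity_and_direction": False, "recognizes_humans": True,  "depth": False},
    "lidar":  {"velocity_and_direction": False, "recognizes_humans": False, "depth": True},
}

def sensors_with(capability):
    # Return the sensors that provide a given capability, e.g. "depth".
    return [name for name, caps in SENSOR_CAPABILITIES.items() if caps.get(capability, False)]

print(sensors_with("depth"))               # ['lidar']
print(sensors_with("recognizes_humans"))   # ['camera']
```

The complementary gaps in this table are exactly what motivates the next question on sensor fusion.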

Shoieb: What is Sensor Fusion? And what’s the significance?

Ben: Each of the sensor modalities has its strengths and weaknesses. Sensor fusion aims to complement and supplement those strengths and weaknesses. Sensor fusion is a spectrum. Today, in mass production, we have late fusion in vehicles. For example, a radar gets pings and says, ‘I have learned that when I see these kinds of pings, it’s probably a car.’ It gives you an object list saying, ‘I see cars here.’ At the same time, you could have AI (artificial intelligence) running on the camera doing AI-based object detection. The camera tells you, ‘I see cars in these places.’ The late-fusion data model compares each of those and assigns weights to the votes from each of those sensors. Then there’s raw sensor fusion. Raw means you don’t process the data at the node; you channel as much information as possible into the central processor and build a holistic model that brings all of these sensors into a common representation, so you can understand the environment as a whole. The goal is to resolve some of the corner cases without relying on a voting scheme where you assign a hierarchy of sensors based on what you think is good. You might think LiDAR works better at night, but there are corner cases where the camera picks up something that LiDAR does not. So you don’t want a static voting hierarchy. Traditional algorithms and learned algorithms combined provide the best representation of the environment.
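As a rough illustration of the late-fusion idea Ben describes, the sketch below combines a radar object list and camera detections using fixed per-sensor weights and a simple vote. The weights, match radius, and data shapes are assumptions invented for this example; it is not DeepScale's implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "radar" or "camera"
    label: str         # e.g. "car", "pedestrian"
    position: tuple    # (x, y) in metres, vehicle frame
    confidence: float  # 0.0 - 1.0

# Hypothetical static weights -- exactly the kind of fixed hierarchy Ben warns about.
SENSOR_WEIGHTS = {"radar": 0.6, "camera": 0.4}

def fuse(detections, match_radius=2.0):
    """Greedily group nearby detections and take a weighted vote on the label."""
    fused, used = [], set()
    for i, d in enumerate(detections):
        if i in used:
            continue
        group = [d]
        for j, other in enumerate(detections[i + 1:], start=i + 1):
            if j in used:
                continue
            dx = d.position[0] - other.position[0]
            dy = d.position[1] - other.position[1]
            if (dx * dx + dy * dy) ** 0.5 <= match_radius:
                group.append(other)
                used.add(j)
        # Weighted vote: label with the highest summed (weight * confidence) wins.
        scores = {}
        for g in group:
            scores[g.label] = scores.get(g.label, 0.0) + SENSOR_WEIGHTS[g.sensor] * g.confidence
        fused.append((max(scores, key=scores.get), d.position))
    return fused

print(fuse([
    Detection("radar", "car", (10.0, 1.0), 0.9),
    Detection("camera", "car", (10.5, 1.2), 0.8),
    Detection("camera", "pedestrian", (3.0, -2.0), 0.7),
]))
# [('car', (10.0, 1.0)), ('pedestrian', (3.0, -2.0))]
```

Raw fusion, by contrast, would feed the unprocessed radar pings and camera pixels into one central model instead of voting on per-sensor object lists.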

Elliot: As advanced as the sensor suite has become, there are still some limitations, which is why I’m a big believer in Vehicle-to-Vehicle (V2V) communications. V2V allows you to see through objects, whereas a sensor suite is still largely line of sight. For example, you are on the freeway and a car gets into an accident four cars ahead of you. With V2V communications, you will be notified immediately that you need to slam on the brakes. This is critical safety technology. For the last fifty years, we have been working on safety technologies (such as seat belts, air bags, etc.) that allow you to survive a crash. V2V is a technology that would allow us to prevent crashes from happening in the first place.

Faisal: There’s a considerable interest in developing LiDARs, specifically solid state LiDARs. There are three types of LiDARs, namely (1) MEMS-based, (2) OPA (optical phase array), and (3) Flash-based. The idea is to remove the mechanical moving parts from the LiDAR. LiDAR offer a shorter field of view, i.e., 60 degrees. You need six LiDARs to get a full 360 degrees view. The current cost of sensors is very high. A fully-loaded vehicles with various sensors may cost up to $150,000.

L to R: Moderator, Shoieb Yunus and the panelists.

Shoieb: Let’s dig a bit deeper. GPS works only with a clear view of the sky. Cameras need enough light. LiDAR struggles in fog. Radar is not particularly accurate. What makes a Self-Driving Car safe?

Ben: It’s a matter of creating a system that can leverage the strengths and factor out the weaknesses of each of those sensors. We take it for granted that we as humans are very good at making sense of very complex situations. In fact, we have a lot of things working for us at the same time. For instance, it’s raining, it’s dark, you start taking a blind turn, and so you naturally slow down. You specifically look for certain things on the road, like the nearest reflector you can see. That’s a very complex algorithm, from noticing what you should even be looking for all the way to acting on it. The sensors we have built are dumb, mechanical things that are good at very specific tasks. Certain wavelengths do certain things, and certain wavelengths don’t do other things. Figuring out not only what I can extract from each of those sensors, but how I use that information to start making smart, safe decisions about where this vehicle should drive, is a very complex problem. And it is why we are quite a ways from a camera-only system.

Mohammad: Humans have two eyes, so we are able to perceive depth; it’s the same principle as a stereo camera. Some companies are using multiple stereo cameras. To answer what makes a self-driving car safe in all conditions and at all times: I don’t think the technology will ever get there, because it’s not economical. When we talk about safety in autonomous driving, we are talking about cases where it makes logical business sense, whether it’s for delivery or for the mobility of people. There are conditions where the sensors and technology we have will always be limited in some fashion.
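The stereo principle Mohammad mentions can be shown with the standard similar-triangles relation: depth is roughly focal length times baseline divided by disparity. The numbers below are made up purely for illustration.

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Depth (metres) = focal length (pixels) * baseline (metres) / disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive; zero disparity means the point is at infinity.")
    return focal_length_px * baseline_m / disparity_px

# A point seen 40 px apart by two cameras mounted 0.3 m apart, with an 800 px
# focal length, is roughly 6 m away.
print(depth_from_disparity(focal_length_px=800, baseline_m=0.3, disparity_px=40))  # 6.0
```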

Elliot: Our basic philosophy is to have fallback, redundant safety systems. We think that, at this point in time, the optimally safe fallback mechanism is a human. That is not to say that humans are great drivers; the statistics show otherwise. 94% of traffic accident fatalities are caused by human error. Machines are really good at doing things they have repeatedly learned to do, but there are innumerable edge-case scenarios. For example, the famous Google edge case was a woman in a wheelchair chasing a chicken with a cane across the road. That’s not something these autonomous cars experience every day, yet it can happen on any given day. It may not be technologically feasible to expect autonomous vehicles to be prepared to do everything that a human does while operating a vehicle.

Shoieb: What’s the role of deep learning and machine learning in a self-driving car?

Faisal: In self-driving cars, there are four main components: (1) sensors, (2) perception, (3) sensor fusion, and (4) localization and mapping. These elements require machine-learning algorithms, as these are non-deterministic problems. Humans learn, observe, and adapt. The same principles are being applied to machines in the case of autonomous vehicles.
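A skeletal sketch of how those four components might hand data to one another is shown below. The function names and return shapes are assumptions for illustration, not any particular production stack.

```python
def read_sensors():
    # (1) Sensors: raw camera frames, radar returns, LiDAR point clouds.
    return {"camera": [], "radar": [], "lidar": []}

def perceive(raw_data):
    # (2) Perception: detect and classify objects in each modality.
    return {"camera_objects": [], "radar_objects": [], "lidar_objects": []}

def fuse(per_sensor_objects):
    # (3) Sensor fusion: merge per-sensor detections into one world model.
    return {"tracked_objects": []}

def localize_and_map(world_model, prior_map):
    # (4) Localization and mapping: place the vehicle within the map.
    return {"pose": (0.0, 0.0, 0.0), "map": prior_map, "world": world_model}

def perception_stack(prior_map):
    raw = read_sensors()
    objects = perceive(raw)
    world = fuse(objects)
    return localize_and_map(world, prior_map)

print(perception_stack(prior_map={"lanes": []})["pose"])  # (0.0, 0.0, 0.0)
```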

Ali: With a deep-learning approach, you train a robot to understand its environment. The algorithm computes localization, and it prepares its path plan using a reward-and-penalty system. With such an approach, the robot knows when to make a left or right turn, and so on and so forth.
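As a toy illustration of the reward-and-penalty idea, the sketch below runs tabular Q-learning on a one-dimensional road with a goal at one end and an obstacle at the other. Real planners are far more sophisticated; this only shows how rewards and penalties shape a simple movement decision.

```python
import random

STATES = range(5)          # positions along a short road segment
ACTIONS = [-1, +1]         # step left / step right
GOAL, OBSTACLE = 4, 0

def step(state, action):
    nxt = max(0, min(4, state + action))
    if nxt == GOAL:
        return nxt, +10, True    # reward for reaching the goal
    if nxt == OBSTACLE:
        return nxt, -10, True    # penalty for hitting the obstacle
    return nxt, -1, False        # small cost per move encourages short paths

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(500):
    s, done = 2, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# After training, the learned policy steps toward the goal from the middle of the road.
print(max(ACTIONS, key=lambda a: Q[(2, a)]))  # expected: +1
```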

Shoieb: For the foreseeable future, self-driving cars will need to coexist with human drivers. Human drivers make a lot of mistakes. Are the Machine Learning models and training data sets sufficient to be ready for this reality?

Mohammad: In order for a system to learn, you have to teach the system what a human or a car looks like in a multi-sensor format. In a fusion world, you have LiDAR, radar, and the camera with different inputs. There are thousands of people processing images for the key players. These people are drawing polygons around humans and labeling them as humans, and around the cars. You need millions of accurately labeled data samples to train your AI model. It is a huge bottleneck and a costly process, and there is no sharing of labeled data sets amongst the autonomous-vehicle companies. In deep learning, there will always be edge cases.
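For a sense of what that labeling work produces, here is a hypothetical annotation record with human-drawn polygons. The schema is invented for illustration; each AV company uses its own proprietary format.

```python
annotation = {
    "frame_id": "0001",
    "sensor": "front_camera",
    "objects": [
        {
            "label": "pedestrian",
            # Polygon drawn by a human labeler, as (x, y) pixel vertices.
            "polygon": [(412, 310), (436, 310), (438, 395), (410, 396)],
        },
        {
            "label": "car",
            "polygon": [(120, 280), (260, 280), (260, 360), (120, 360)],
        },
    ],
}

# Millions of frames like this, across camera, radar, and LiDAR, are needed
# before a perception model reaches production-grade accuracy.
print(sum(len(frame["objects"]) for frame in [annotation]))  # 2
```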

Shoieb: Transportation is a much larger business than iPhones and Google Ads. Who will own this business: traditional automakers, or the technology companies?

Ali: A few years ago, the automakers weren’t really moving fast and the tech companies were giving them stiff competition. Then, a couple of years ago, General Motors bought a startup called Cruise for $1 billion and Ford invested $2 billion in another startup called Argo AI. Several automakers have been working on their own initiatives. They started realizing that this change is coming a lot sooner than expected. It would be very hard to displace the automakers, as they own the hardware platform, i.e., the actual vehicle.

Ben: I think what’s really critical to realize here is that we are in very bullish macroeconomic conditions right now, and the only companies making money in automotive right now are the traditional OEMs. Big companies tend to overcorrect for changes in macroeconomic conditions. Saying self-driving cars are coming sooner than expected might be true for the first cars out on the road driving themselves, but there’s an enormous chasm between that and commercial viability. I’m talking about a chasm of potentially ten-plus years.

Elliot: Mass-producing these vehicles is extremely difficult. There is no comparison between the barrier to entry for mass-producing a vehicle and the barrier to entry for launching a ride-sharing service. Automakers have been perfecting this for the last one hundred years. Tesla has been able to do something truly unbelievable, granted at a much smaller scale: General Motors can pump out 10 million cars a year, whereas Tesla is having issues getting 500,000 out the door. On the other hand, I think Waymo has the best technology. The analogy I think of for Waymo is Michael Jordan without a basketball: they have the best technology, but if you don’t have the vehicles, it really doesn’t matter. So Waymo needs to figure out how to get their unbelievable technology into vehicles. In conclusion, you will see a lot more partnerships in this space. I don’t think there will be one silver-bullet company that takes everything and does it all themselves.

Shoieb: How do you foresee the rollout of Self-Driving Cars in the next 3, 6, and 10 years?

Mohammad: Within three years, you will go from proof of concept to actual production vehicles with limited use cases.

Ben: What we will see is micro-featurization. With a software-defined vehicle, you might be able to buy a ‘self-valet’ feature, which means I can get out of the car and the car will drive itself within a two-block radius and park itself. If you have enough of these micro features, you can get much of the benefit of autonomous driving without being able to say that your car has no steering wheel. The long-term promise of autonomous driving is that it will increase safety and lower the cost of transportation.

Faisal: Level 3 should be omnipresent. Level 5 will appear in specific use cases, such as geo-fenced environments, transporting elderly people, moving people around a campus on the same predictable path at lower speeds, or on highways. In addition to the technology, liability and insurance issues need to be tackled as well.

Written by Shoieb Yunus

Product Guy: Cloud Computing, Machine Intelligence, Social Video, Touch ID; Self-Driving Car Enthusiast