Autonomous Driving: Programming Common Sense into Machines
Back in March, I sat down with Ben Landen, Head of Business Development at DeepScale.ai, to talk about some of the very interesting challenges yet to be conquered in Autonomous Driving. Earlier this month, DeepScale announced that it closed $15 Million in Series A funding. Let’s get to the conversation.
Shoieb: Tell us about yourself and your company.
Ben: I've been working in the automotive industry ever since I finished my undergraduate degree. I started at Maxim Integrated, a semiconductor company, where I managed the P&L of automotive semiconductors, mostly infotainment and ADAS (advanced driver-assistance systems). Then I got my MBA from UC Berkeley and joined DeepScale at the beginning of 2017.
DeepScale is a software company that provides efficient neural networks, deep-learning AI software, to vehicles to enable more advanced autonomous or automated driving features. That extends all the way down to traditional ADAS, like automatic emergency braking and adaptive cruise control, the features you see today, but improved in accuracy, capability, or cost so they can reach more vehicles. We also work all the way up to Level 4 and Level 5 autonomous systems, which involve fusing the sensors using deep learning.
Shoieb: What was interesting to you about autonomous vehicles? And why have you focused on it?
Ben: When I was choosing what to do next after spending 7 years at my last job, I knew I wanted to stay in automotive because that was the technology place to be. I was mainly in infotainment, which had its heyday about 5–10 years ago when it really took off, and it is still doing great, but to me the next great wave was automated driving. I wanted to work on systems that don't just make the user experience in the vehicle as great as it can possibly be, but can actually have an impact on people's lives. The implications go farther than saving people: since society is structured around transportation, this technology will change the way we live. That was a really strong calling for me.
Shoieb: Teaming up with Visteon, what do you plan to accomplish with that partnership?
Ben: The reason we have publicly stated that we are working with Visteon, and why Visteon was interested in making that statement, is that we see the world of automated driving in a very similar way. We both believe open collaboration is going to be key to covering all of the corner cases out there and to bringing these systems down market to lower-end vehicles, so that automated driving is not a niche for expensive vehicles. It should bring benefits to all of society. Setting up a collaborative platform is going to be the most impactful approach and have the most positive influence on automated driving systems.
Shoieb: I read somewhere that there is a pre-determined set of rules to rapidly and conclusively evaluate and determine responsibility when autonomous vehicles are involved in collisions with human-driven vehicles. In the real world, how would that play out?
Ben: It is important to understand that the inputs and outputs of these systems are critical. In automotive today, when something goes wrong, there is a defined process for troubleshooting that problem. There are fishbone diagrams and a ton of other ways to QA every situation. It is not necessarily different when the systems are making more complicated decisions. There is still going to be a similar type of root-cause analysis. Did the suppliers do everything they could to ensure they understood the probability of this situation occurring? And if they did, what did they do to mitigate it? These nested questions of 'if this happened, what would happen' need to be documented. Automated driving is a new technology, and it's not perfectly defined yet, but it's not totally reinventing the wheel. We are going to combine components that do things humans would not be able to do on their own. They are going to actuate. We will have to extrapolate from that to see, when something goes wrong, how it could have been prevented and why it happened the way it did. And we will continue to make improvements.
Shoieb: Do you think it is possible to spell out clear rules for fault determination in advance, using some sort of mathematical model?
Ben: I have seen some early proposals for rigid rules that say exactly who is at fault in a given situation. I think that trivializes the problem a bit. I don't think it's going to be that straightforward, but with that said, you have to start by at least trying to find answers to the difficult questions, because if we just say, 'well, that's not the perfect answer', then we are not going to get anywhere. I do believe we need to put our best foot forward and say: here's what best practices teach us, here's what the science and technology tell us, and here's what we think we know about how to assign fault in these situations, and then we have to be ready to iterate.
Shoieb: A few questions pop into my head about responsibility and insurance issues. In the event of an accident, if an automated system says it was the human's fault, the human driver may ask the automaker how they have programmed their system; the automated system could be biased against the human driver.
Ben: There are some interesting takeaways from comparing automated systems to what humans do. A known fact about accidents on the road is that 94% of them are caused by human error. We tend to focus on paradigms as we know them and try to apply them to things we don't understand very well, because we assume there's a commonality. We automatically jump to: if these are the accidents caused by humans, how do we judge those same accidents when an automated vehicle causes them? The reality is, if you build a good enough system, the promise is to prevent those 94% of accidents caused by humans. In most cases an accident is not happening because of a limitation in the world of physics and mechanics. It is not that it was physically impossible for me to stop before I hit the car. No, it's because you looked at your phone or you were following too closely to begin with. When those things are programmed into the vehicle, the conversation actually changes quite a bit. Assigning fault is not necessarily as prominent as it is when you have two humans pointing at each other, a he said/she said type of situation. I'm very excited about the promise of eliminating those situations.
Everyone always comes to the trolley problem: how are you going to choose what direction to steer the car to minimize the damage? I don't think there is an answer to that question, because if you ask any number of people what they believe about utilitarianism, whether it's most valuable to save this person, that person, or do nothing, there is no consensus. The promise is that you don't get yourself into that situation to begin with. And in the rare cases that you do, I think that's where we may be ascribing ourselves too much responsibility. It's a bit of a stretch to say that you can calculate with great precision the probability of damage from something that is going to happen in the future. I can tell you we are working on motion anticipation and motion prediction to answer questions about objects of interest on the road, such as: where could they be in half a second or one second, versus where will I be? It's an incredibly difficult problem to solve. I can't even count the number of permutations of ways a situation could play out. It's a big, convoluted problem, and I'm hoping that we can use the technology to mitigate it and make people realize that we are doing the best we can to prevent accidents completely, as opposed to making this a limiting factor in implementations.
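To make the motion-prediction idea concrete, here is a toy sketch of the simplest possible baseline: extrapolating a tracked object's position half a second or one second ahead assuming constant velocity. Real systems like the ones Ben describes use learned models over many more signals; all names and numbers here are hypothetical illustrations, not DeepScale's implementation.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float   # lateral position in meters
    y: float   # longitudinal position in meters
    vx: float  # lateral velocity in m/s
    vy: float  # longitudinal velocity in m/s

def predict_position(obj: TrackedObject, dt: float) -> tuple:
    """Extrapolate position dt seconds ahead assuming constant velocity."""
    return (obj.x + obj.vx * dt, obj.y + obj.vy * dt)

# A pedestrian 15 m ahead, 2 m to the right, walking toward the lane.
pedestrian = TrackedObject(x=2.0, y=15.0, vx=-1.5, vy=0.0)
print(predict_position(pedestrian, 0.5))  # (1.25, 15.0)
print(predict_position(pedestrian, 1.0))  # (0.5, 15.0)
```

The hard part Ben alludes to is exactly what this sketch leaves out: intent, acceleration, and the combinatorial interaction between agents.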
Shoieb: You just talked about motion anticipation and motion prediction. I remember working on motion compensation and motion estimation many years ago. Those were very hard problems to solve back then.
Ben: It is, and that's why we can take cues from how humans drive to make our automated systems safer. For instance, when you talk to tier-one suppliers and OEMs, one of the biggest problems or corner cases people want to solve is what happens if a pedestrian jumps out between two cars on the road and the system does not pick them up in time. The reality is that we can learn a little bit from what humans do. If you try to put the onus completely on the technology, you start asking really difficult corner-case questions such as: how do I see a human before they jump out, or how do I use gesture recognition to tell whether they are moving towards or away from the road? These are really tough problems that we are going to try to solve, but at the end of the day, once I see that a human is there by the road, as a good defensive driver I would slow down and move a little bit out of the center of the lane, away from the person, so that if they make a bad decision I can still react. There is no reason we can't leverage the data from the sensors around the car to inform the path-planning system to make these kinds of small corrections and provide safe driving.
Shoieb: Human driving requires tons of common sense. Can common sense be programmed into the machines?
Ben: When you look at the specific problems that arise while driving, we handle them intuitively as humans because we learned how to drive. If you have a good driving instructor, you learn how to act in certain situations. The reality is that most situations are relatively narrow, such as: you have a car on the left, or you don't have a car on the left. It's literally codifying the situation. Since these are machines, we can and should emulate common sense. You can closely emulate it by combining a learned system with a rule-based system. It's common sense to slow down near a school zone. Oftentimes, people don't slow down. With automated systems, the vehicles will slow down near a school zone.
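The combination of a learned system and a rule-based system could be sketched roughly like this: a learned policy proposes a target speed, and an explicit rule overrides it near a school zone. Every name and number here is a hypothetical stand-in for illustration, not DeepScale's design.

```python
SCHOOL_ZONE_LIMIT_MPS = 11.0  # about 25 mph; hypothetical hard limit

def learned_target_speed(scene: dict) -> float:
    # Stand-in for a learned policy's speed proposal.
    return scene.get("proposed_speed", 13.0)

def apply_common_sense_rules(scene: dict, speed_mps: float) -> float:
    """Explicit rules cap the learned proposal when they apply."""
    if scene.get("in_school_zone"):
        speed_mps = min(speed_mps, SCHOOL_ZONE_LIMIT_MPS)
    return speed_mps

scene = {"proposed_speed": 15.0, "in_school_zone": True}
final_speed = apply_common_sense_rules(scene, learned_target_speed(scene))
print(final_speed)  # 11.0
```

The design choice mirrors the quote: the learned component handles the open-ended perception and planning, while the rule layer encodes the non-negotiable "common sense" that humans often ignore.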
Shoieb: What’s the long-term vision of DeepScale.ai?
Ben: We really buy into the school of thought of open collaboration among lots of experts in automated systems. These systems are so complex that we think we can serve a very large market. There is not going to be a 'one size fits all' solution, because every region and every type of vehicle will have different driving requirements. And we are going to see all sorts of new types of vehicles enabled by the fact that the driver and passengers no longer have to face forward. We see ourselves being a supplier in the automotive industry, licensing software to the world, and getting that software into as many systems and vehicles as possible, because the whole aim is to show that it is a lot safer than what we have been doing up until now.