Self-Driving Cars are Hurtling Towards an AI Brick Wall
Quest for Self-Driving in Space Miniseries, Part 2
I. Humans Have Reached the Limits of Moore’s Law
Chip manufacturers have stopped developing their industry-wide roadmap because, by 2020, it will be too expensive to build semiconductors smaller than 10 nanometers. Shrinking further would cost more, generate ever more heat, and, by some projections, require more electricity than the world currently produces.
Our use of computing power is not shrinking. In fact, it's growing faster than the human population. As more objects become smart and connected to the cloud, and as we run more machine learning on the data we collect, that exponential growth will itself begin to grow exponentially.
I know that’s a bit of a brain sidewinder. Exponential growth, exponentially? Just replace it with “a helluva lot”.
As you can see, we’re reaching fundamental limits of human manufacturing. Of physical things. We’re getting down to atomic levels that have very real stop signs based on the known laws of physics.
If you’ve followed tech for some time you immediately recognize this as an intersection of forces that makes it ripe for opportunity. A leap of innovation.
We either put our heads together to rethink the way we do things, on a truly fundamental basis, or we stagnate as a species. Do you really believe that all we as a species can achieve is Google search and Facebook apps running on an iPhone?
Of course not.
So let’s talk about the ways we may be able to fundamentally disrupt the infrastructure we’ve spent the last 100 years building.
The first is energy. Without some type of power, we can't do anything else. No light. No movement. No computation. So we need an abundant power source. More than the amount of fossil fuels on earth. More than the electricity we can generate from water or wind. We need that giant fusion reactor in the sky (Dyson sphere, anyone?). And we need to convert far more of its output than the roughly 25% that today's best solar panels capture.
There are no power plugs in space.
But that’s a side technology compared to what’s needed for exploring and transporting things from one place to another. That’s why this post is about autonomous navigation.
II. Differentiating Self-Driving Cars
Everyone is chasing the same white whale: truly human-less navigation from one point to another. Level 2 autonomy, which is basically what Tesla's Autopilot does today, is about as far as we've come for something running in production. Normal people can buy Level 2 autonomous vehicles today.
Autopilot tracks lane markings on highways, which helps you while commuting in a traffic jam. Basically, lane keeping. But Tesla is still about two steps away from true autonomy with its Summon feature (i.e., push a button and your car drives across the country to the exact place you're standing).
That’s a helluva differentiating feature that everyone is chasing but nobody yet knows how to do.
Aside from that, the only other compelling feature I can come up with for electric vehicles is an interchangeable parts system for interior design purposes.
III. Extending to Any Autonomous Vehicle
What happens when you put a car in an area that Google or Apple hasn’t tracked with their map database? What happens when you’re driving between LA and Vegas and your car’s cell signal drops out? Does the car just stop?
Taken further, how will flying drones avoid new construction or other drones that can't be mapped by satellites in real time? What happens with rovers on the surface of Mars, or with mining operations on asteroids, when there's no internet, communication loops take nearly an hour, and there are no maps to speak of?
How does a machine know where to navigate when nobody has ever navigated there before? And how does it do this in real time, where you can't spend days or weeks training a model to recognize a specific rock formation or skyline so your billion-dollar space rover doesn't drive right off a cliff?
Have a look at the machine intelligence 2.0 chart below showing the various startups in the industry. In the red upper-right corner, you'll see the autonomous systems. Something stuck out to me: where is space? The chart only covers air, ground, sea, and industrial. I think I know why: self-driving is exponentially harder in space.
It's also because the answer won't come from Deep Learning. But before we get into why, it's important we define the different levels of autonomy. The public may think they're all one and the same, but the capabilities are staggeringly different.
There have been a few things published about this, but I don't think any of them are very good. Below is my first stab at it, based on NHTSA's descriptions: where each level is best applied to products, and what work is done by software versus by humans.
Tesla’s current autopilot system in production is Level 2. GoogleX’s car in R&D is Level 3. Some other R&D labs are potentially in Level 4, including SpaceX. But everyone, regardless of industry, is pushing hard towards Level 5.
It’s what I call Wayfinding AI. It gets you from point A to point B without any human intervention. That’s the type of self-driving we need on Earth and on Mars.
Meanwhile, the car market here on Earth is about $1.5 trillion. That's a lot of money at stake, and one reason why GM bought Cruise for a reported $1 billion (later reports suggest it was less), and Uber bought OTTO after five months for $680 million. The market is red hot for a new approach that goes beyond the limits of deep learning.
Why? Because if the training data has never contained a Black Swan event, how can the model decide how to handle one?
IV. Deep Learning’s Brick Wall
The answer to Level 5 autonomy (i.e., Wayfinding AI) does not lie in the same approach as what’s required to reach Level 3 or 4. Rather, we need to make another quantum leap. We need some other type of innovation to jump over that brick wall standing in our way. Much like the answer to traveling isn’t hotels but AirBnB, and for transportation isn’t your own car but an Uber, the answer to Level 5 autonomous driving will not be mathematical, but biological.
There are only about 3 people in the world who get this. That's because it requires a math degree, a computer science degree, and a biology degree: 3 incredibly different and difficult skills to develop expertise in. As one of my friends who has some of these unicorn skills recently said while working on this:
The computer scientist in me is often in conflict with the biologist in me.
That right there is what we're talking about. If you don't have the right experience, everything looks like a mathematical nail for your deep-learning-trained hammer. I used to be one of them. Heck, I have a degree in theoretical mathematics. So I get it. I was in the same camp until the day I had the epiphany below.
I think of the expertise needed much like a tightrope walker. Lean too far to one side and you fall. You need to balance both in order to make it to the other side.
Deep Learning is, at heart, a "trendline" approach, and some of the underlying concepts are even taught in high school (mostly it's linear algebra and regression from college). But for autonomous driving, it requires a massive amount of data, large-scale parallel processing on GPUs, lots of electricity, and days to train a model.
That’s not how we learn or navigate the world. We touch a hot stove once, it hurts, we don’t do that again. We don’t have to burn our hands 1 million times before we learn not to touch the stove. Once is painful enough (secret algorithm alert!). In fact, our biology is so good that we don’t even have to burn ourselves to learn. We can watch someone else burn their hand on a stove and realize we shouldn’t do that either. In short, we can learn from the experience of others.
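The contrast above can be sketched as a toy experiment. This is an illustration of the argument, not anyone's actual algorithm: a one-shot learner that marks a stimulus as dangerous after a single bad experience, next to a gradient-style learner that nudges a weight a little per example and needs many repetitions of the same lesson.

```python
class OneShotAvoider:
    """Marks a stimulus as dangerous after a single bad experience."""
    def __init__(self):
        self.dangerous = set()

    def experience(self, stimulus, pain):
        if pain:
            self.dangerous.add(stimulus)  # learned in one trial

    def avoids(self, stimulus):
        return stimulus in self.dangerous


class GradientLearner:
    """Nudges a single weight toward 'dangerous' a little per example."""
    def __init__(self, lr=0.01):
        self.weight = 0.0
        self.lr = lr

    def experience(self, stimulus, pain):
        target = 1.0 if pain else 0.0
        self.weight += self.lr * (target - self.weight)

    def avoids(self, stimulus):
        return self.weight > 0.5


one_shot = OneShotAvoider()
one_shot.experience("hot stove", pain=True)
print(one_shot.avoids("hot stove"))  # True after one burn

gradient = GradientLearner()
trials = 0
while not gradient.avoids("hot stove"):
    gradient.experience("hot stove", pain=True)
    trials += 1
print(trials)  # dozens of repetitions of the exact same lesson
```

The numbers are arbitrary, but the shape of the difference is the point: one mechanism generalizes from a single painful sample, the other converges slowly toward it.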
I’d like to give some more detail, so what follows is a comment I added to The Information’s article on Tesla losing its mapping leader.
It seems we're starting to see a trend in the self-driving arena where leaders on both the business side and the technical side are starting to jump ship. Not just at Tesla, but also at Google. I think I know why this is.
Deep learning is going to hit a brick wall.
The AI techniques the Valley is employing are nothing more than a regression line. Or, the trend line you can right-click and add to any Excel scatter plot. They're focused on optimizing a single solution set. Much like a home's square footage predicts its price, certain curves in handwriting predict specific letters, and moves in Go predict better moves in Go, this is all single-use-case stuff.
Said differently, it’s “narrow” AI. Single approaches for optimizing single answers to single questions. General intelligence, on the other hand, is what happens when you have a lot of these singular things working in harmony. But much like you couldn’t make a machine “see” a cat using standard software algorithms, self-driving cars won’t be able to pass Level 4 autonomy using standard machine learning algorithms.
Level 5 autonomous driving means no human intervention is required other than turning the machine on and setting a destination.
Thus, we need a wholesale different approach to reaching this Level 5 milestone. Why?
Because there are no power plugs in space. There are no maps. There's no data. There's no connection to the internet or the cloud. There's nothing but a small machine with some wheels, all by itself, on a foreign planetoid. On Mars, where the round trip for any communication or data transfer can be a hefty 40 minutes, that only compounds the problem we're trying to solve for city streets.
Single AI using deep learning just ain’t gonna cut it, folks.
Especially when you need to not just know what's in front of you (a cliff with a 1,000-meter drop-off), but that you shouldn't drive off it like Thelma & Louise. Then, after the thinking and understanding has been done, the machine needs to control its "hands" and "legs". Sure, you still need some deep learning for detection, but what about the rest of that equation? It sure seems like deep isn't quite deep enough.
Again, deep learning will hit an autonomous brick wall.
We're seeing people crash their heads against this wall at GoogleX and now at Tesla. They're leaving to start their own things, or to start up new ones like Udacity (founded by a former GoogleX leader).
So what's the answer to this new approach? Well, that's the $1 trillion quest(ion), isn't it? And the people who solve it might have something patentable and incredibly valuable on their hands. Not just in a monetary sense, but in a saving-the-species sense.
V. The Connectome As A New Approach
The best way to get a robot to move about in the world isn’t to try to recreate a robot from scratch, but rather to steal from nature. Why try to recreate evolution in a deep learning simulation when we can already use it as a head start?
We have just begun mapping the human connectome (brain + nervous system), but we are incredibly far away from completing it. Note that the nervous system is the part that usually gets left out when describing intelligence. We need it in order to move. We need muscles. That's why robotics hasn't gotten to the dream of science fiction: we're all focused on the brain, but forget that neurons include nervous system connections as well as brain connections. There are 100 billion neural connections in the human body that need to be traced, and we are 0% complete with that. But we do have a few other animals fully or partially mapped.
In terms of learnings, there are three major things we’ve begun to understand about this new approach:
- The connectome as the core of an autonomous navigation system works better than deep learning by itself. It handles the unexpectedness problem everyone is struggling with. “Is that a puddle, hole, oil slick, or a shadow?”
- Tuning the nervous system is a sensitive procedure: too little and it doesn’t start up, too much and it overloads.
- You need a “Behavior Engine” to back out some of the animalistic tendencies that develop. This is not a trivial matter, and akin to programming in reverse. You remove certain behaviors inherent in the connectome rather than programming for every nuance and scenario with deep learning.
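The core idea behind the first learning above can be sketched in a few lines. To be clear, this is a hypothetical toy, not Prome's actual system: a three-neuron "connectome" (invented for illustration; real connectomes have thousands of edges) where behavior emerges from propagating activation through fixed wiring rather than from a trained model.

```python
# Hypothetical toy connectome: (presynaptic, postsynaptic, weight).
# A touch sensor excites an interneuron, which drives reverse motion
# and inhibits forward motion. All names and weights are invented.
CONNECTOME = [
    ("nose_touch", "interneuron", 1.0),
    ("interneuron", "reverse_motor", 0.8),
    ("interneuron", "forward_motor", -0.8),  # inhibitory edge
]

def step(activations):
    """One propagation step: each neuron sums its weighted inputs."""
    nxt = {}
    for pre, post, w in CONNECTOME:
        nxt[post] = nxt.get(post, 0.0) + w * activations.get(pre, 0.0)
    return nxt

# A touch stimulus arrives at the sensory neuron...
state = {"nose_touch": 1.0}
state = step(state)  # ...excites the interneuron...
state = step(state)  # ...which drives reverse and suppresses forward.
print(state["reverse_motor"] > 0.0)  # True: back away from the obstacle
print(state["forward_motor"] < 0.0)  # True: forward drive inhibited
```

Nothing here was trained on examples of obstacles; the avoidance reflex is baked into the wiring diagram. That is the sense in which the connectome handles unexpected inputs that a trained trendline has never seen.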
The connectome is a better approach, but that doesn’t mean it solves every problem. As you level up into more generalized autonomous mobility, there are new and different problems to solve. You also need the ability to see and hear, which likely will require deep learning to make sense of the sensory input.
In short, you need the biological approach at the core for human decision-making, and the mathematical approach at the edges to give it superhuman senses.
Below is the most recent paper on the Human Connectome, which gives you some sense of the complexity we're dealing with. As you read through it, I think you might begin to believe what I now believe: that deep learning is only one tool on the path to AGI, and not the best hammer for the self-driving nail.
As a neurobiologist already working on this connectomic self-driving solution, Timothy Busbice, said to me recently:
Sensors are sensors, like we need our eyes to drive; we can’t drive blindfolded. They aren’t mapping the streets as they travel down them, just looking for obstacles to avoid and rely on maps, in memory, to tell them where to go. This is very rote “intelligence”. What we are trying to do is attach those eyes to an intelligence that can react to the environment and travel plans like an animal (human). If the car comes across construction, no big deal, just like you or I, it can decide on a course of action around the situation and get to the destination. I can’t see where weather will be an issue any more than it is for humans.
Read More of the Self-Driving Mini-Series
Note: I am an investor and insider in the biological self-driving AI product being discussed above. Learn more at http://prome.ai
- Part 1: Humanity’s Extinction Event Is Coming
- Part 2: Self-Driving Cars Are Hurtling Towards an AI Brick Wall
- Part 3: Coming Soon