Innovation Disparity: The Self-Driving Software & Hardware Struggle

Source: Electronic Design

Self-driving cars are only possible through the marriage of two main technology phenomena: software and hardware. The former consists of the commands and data that let the computer identify what’s in front of the car and decide how to react in specific situations. The latter includes the tangible elements, electronic and mechanical, that enable the car to ‘see’ and physically stop if needed. In an autonomous vehicle, the software could not survive without the hardware, and vice versa.

Therefore, you could argue that innovation in the two needs to advance on par. In an ideal world, data scientists and robotics engineers would reach breakthroughs at the same rhythm; they’d progress hand in hand. However, in a nascent automotive ecosystem where devices are ‘borrowed’ from peripheral industries, such is not the case.

Radar technology has not changed much throughout the years.

Let’s start with the hardware. In my book, I interviewed radar systems engineer Anirudh Nimbagal, who works on current self-driving radar technology for the Tier-1 auto supplier Aptiv. According to him, contrary to what most people may think, the radars fitted all around today’s AV test vehicles are the same components that Navy submarines have used for decades. Radar is a sensing technology invented about a hundred years ago, initially to help military transports see in poor weather. And although the sensors’ design has not changed much, what has changed is the way they are assembled today.

This is just one example. Lidar, for instance, was used at the turn of the century for archaeological work in the Amazon tropics, mapping ancient temples hidden under the jungle canopy. Now the same technology is used for on-the-road object identification in a self-driving car (as Austin Russell, CEO of Luminar, has demonstrated).

It’s almost as if all the components were already out there in the world, serving their purposes across industry verticals, waiting for highly intelligent individuals with a big enough need and incentive to figure out a way to bring them together into one self-driving effort. We often call these ex-DARPA engineers and scientists (who would end up at the Google X project) the technology disruptors; I’d rather say they are the integrators.

Barry Einsig, Global Automotive and Transportation Executive at Cisco Systems, confirmed my theory:

Most hardware components of self-driving cars today have not changed much from what they looked like twenty years ago. Functionality has gotten better, but the technology remains the same. The revolution lies in the way these components are pieced together, which has led to the real breakthroughs.

I have to say, my driverless utopia was getting a bitter taste of reality. Integrating radars and cameras that have not changed in the last two decades didn’t exactly match my definition of a sophisticated, next-generation self-driving era. Besides, hardware components suffer from heat generation, size and weight constraints, as well as high manufacturing costs. The brick-and-mortar, real-life world was an unforgiving one.

Luckily, at the time of our conversation, Barry pulled me out of my woe and helped me understand that there are other forces at play. Take, for instance, nanotechnology, which is being used to develop computer chips that will make AV sensors higher-performing, smaller, and less costly. Prototypes of new lidar sensors are being engineered and tested with new materials and fabrication techniques to solve the physical issues faced today.

Similarly, connectivity will allow the remote streaming of data about what the car sees, does, and gets wrong. This bidirectional information flow can make remotely driven operation feasible. An ‘operator’ can sit miles away from the vehicle, seeing what the car sees, hearing what the car hears, and monitoring that the car reacts accordingly. This could mean that the hardware components don’t have to be the sole eyes, ears, and brains of the vehicle. I was relieved. There was hope for our step-wise technology innovation, even if it wasn’t a quantum leap.
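That bidirectional loop — sensor data streaming up, an operator command coming back down — can be sketched in a few lines of Python. This is a minimal, purely illustrative sketch; all class and function names here are my own invention, not any company’s actual teleoperation API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the bidirectional flow described above: the
# vehicle streams what it perceives, and a remote operator monitors the
# stream and only intervenes when the car's own plan looks unsafe.

@dataclass
class SensorFrame:
    timestamp: float
    obstacle_detected: bool
    planned_action: str  # what the car intends to do on its own

@dataclass
class OperatorCommand:
    override: bool
    action: str

def remote_operator(frame: SensorFrame) -> OperatorCommand:
    """A human miles away sees what the car sees and overrides the
    car's plan only when it appears unsafe."""
    if frame.obstacle_detected and frame.planned_action != "brake":
        return OperatorCommand(override=True, action="brake")
    return OperatorCommand(override=False, action=frame.planned_action)

def drive_step(frame: SensorFrame) -> str:
    """One cycle of the loop: stream up, command down, act."""
    command = remote_operator(frame)
    return command.action

# The car fails to plan a stop; the remote operator catches it.
print(drive_step(SensorFrame(0.0, True, "continue")))   # brake
# The car behaves correctly; the operator stays hands-off.
print(drive_step(SensorFrame(1.0, False, "continue")))  # continue
```

In a real deployment the frame would travel over a cellular or V2X link with latency and dropout handling, but the shape of the loop — perceive, stream, monitor, command — is the point.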

Moving on to the software. Looking at the AV technology innovation lifecycle, it’s evident that while the hardware is dictated by developments of the last century, the software has made huge strides, redefining deep neural networks and machine learning. To be fair, having smart algorithms replace the human driver does require an accelerated, Darwinian evolution in computer science. This is the core principle of a car that’s able to drive itself.

Fortunately, we live in times where the media constantly shows us how, every day, we get a step closer to better ways of training our algorithms and better overall self-driving performance.

Adrien Treuille, former VP of Simulation at Zoox, professor of robotics at Carnegie Mellon, and ex-Googler (X division), summarized it better than I could: one dog year equals seven human years; one AV year equals ten 20th-century automaker years.

Adrien has lived the rise of automated-car technology in the flesh, from the time when no one believed cars could think for themselves to today’s proliferation of AV prototypes in places like Arizona, Texas, and California. It was highly insightful for my book to hear him speak about how the software of AVs has changed rapidly over time.

During his tenure at Carnegie Mellon, AI (artificial intelligence) was at a nascent stage and deep learning didn’t officially exist. We are talking about the core principle of a driverless vehicle not being around even in the early 2000s. Even iPods are older than these smart algorithms! It’s no surprise, then, that every aspect of the AV technology stack has come a long way from what it was only three years ago.

Similarly, the approach we take today to ML (machine learning) barely resembles what it was in the beginning. As a team leader on Google X projects, Adrien recalls how the software side (the thinking machine of an AV) of Waymo’s self-driving car looks very different from what it was just a few years ago.

The same applies to the cars at Zoox. The billion-dollar startup, founded in 2014, promises to deliver the first robo-taxi to Silicon Valley by 2020. For this, it needs self-driving computer technology that’s ready to tackle the highly congested San Francisco city roads: an SAE (Society of Automotive Engineers) Level 4 self-driving offering.

But going back to Adrien’s first statement: the automotive industry has taken a conservative approach to the integration of software in its vehicles. We did see an upward slope in innovation as car dashboards went from analog to digital. Then ADAS (Advanced Driver Assistance Systems) broke through with Tesla’s Autopilot, establishing a new benchmark in automotive software. It took a while, but it appears that startups and carmakers are finally committed to a digital [software] path. And self-driving cars are the culmination of that commitment.

Technology innovation is not about industry experts putting their faith in the rapid sophistication of car subcomponents. Rather, it is about getting the integration of current subcomponents right: the building blocks of new technology and how they are assembled together. The smart algorithms that make up the AV brain, meanwhile, have seen a far more accelerated innovation cycle. Systems that weren’t even imaginable before the year 2000 have advanced at an exponential rate.

They may not be perfect yet, but they’ve set a precedent of rapid development and seemingly endless capabilities. This is good news for self-driving cars.

I hope you enjoyed this post related to my book, Autonomousity: Autonomous Vehicles & Emerging Business Models. You can order a copy via this link: https://www.amazon.com/dp/B07QDM7HTX

I’d love to continue the conversation about self-driving cars! You can either leave a comment in my Medium page or connect with me via email at BejaranoAPaula@gmail.com or LinkedIn.
