Prime Movers Lab Webinar Series: Autonomous Vehicles
A conversation with experts and entrepreneurs on the latest trends, technology, and safety of autonomous vehicles
In last week’s episode of the Prime Movers Lab webinar series, we spoke with self-driving technology expert Kevin Peterson, safety manager Antonio Priore, and radar developer and entrepreneur Nathan Mintz to understand the current state of self-driving technology and what’s coming in the future.
Nathan Mintz is the Co-founder and CEO of Spartan, an automotive radar technology startup. A serial entrepreneur, he was previously the founding CEO of Epirus, a defense startup, and has helped incubate, lead, and advise multiple startups in the defense, autonomy, SaaS, and cybersecurity domains. Nathan spent the first 14 years of his career in a mixture of business development, systems engineering, and leadership roles at Boeing and Raytheon. He holds six U.S. patents and received his Bachelor’s and Master’s degrees in Materials Science and Engineering from Stanford University.
Kevin Peterson is Head of Perception for Trucking at Waymo. Before that, he was Autonomy Architect at Caterpillar after the acquisition of delivery robot maker Marble, the company he co-founded. Before Marble, Kevin was the CTO at Astrobotic Technology where he led the engineering team and was responsible for the design of the Griffin and Peregrine landers. Kevin holds a Ph.D. from Carnegie Mellon University (CMU) and was part of CMU’s DARPA Urban Challenge autonomous driving team.
Antonio Priore works for Arm as Director of Functional Safety (FuSa) and Cybersecurity. He’s the acting Arm Global Functional Safety manager and leads the FuSa Centre of Excellence team, responsible for the definition and continuous improvement of the functional safety processes at Arm. The team is responsible for demonstrating compliance with multiple safety standards (e.g. ISO 26262, IEC 61508, DO-254, DO-178C) across different market segments. Antonio is a UK Chartered Engineer and member of the IET, with a career spanning more than a decade in functional safety engineering across domains including automotive, aerospace, industrial, and railway. He has authored multiple papers and is a member of the British Standards Institution. Antonio holds a Master’s Degree in Electronic Engineering from the University of Pisa, Italy.
Highlights from panel discussion
- The DARPA Grand Challenge — a prize competition for American autonomous vehicles, funded by the Defense Advanced Research Projects Agency (DARPA) to promote the development of the first fully autonomous ground vehicles — inspired and pushed several universities to work on fully autonomous technologies. The early pioneers were students attracted by the audacity of the project, who wanted to do something beyond academics and were extremely passionate about software that moved “things”.
- Some of the expertise and experience from the aerospace industry can be leveraged to develop autonomous driving technologies, keeping in mind the different challenges, such as limited power, limited space, and limited computational power.
- One of the lessons learned from aerospace is that more isn’t necessarily better: more sensors mean more conflicts about what’s actually there and what isn’t. Sensors come with false alarms, and different types of sensors produce different types of false alarms. The average autonomous vehicle is going to generate petabytes of data per day to be downloaded, and when you scale that to a fleet, the number is daunting. At hundreds of millions of vehicles, there isn’t enough storage in any cloud anywhere to hold it all. At some point, it will be necessary to make the sensor pipe thinner. This will also improve the reaction time of autonomous vehicles to obstacles, because the perception layer will have to process less data, reducing processing time. There will be a tradeoff between system complexity and system reliability.
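Taking the panel’s order-of-magnitude figures at face value (the numbers below are illustrative, not measurements), a quick back-of-envelope calculation shows why fleet-scale storage is daunting:

```python
# Back-of-envelope check of the panel's scaling claim.
# Figures are illustrative assumptions, not measured data.
PB = 10**15                       # bytes in a petabyte
per_vehicle_per_day = 1 * PB      # panel's figure: ~petabytes per vehicle per day
fleet_size = 100_000_000          # "hundreds of millions of vehicles"

total_bytes_per_day = per_vehicle_per_day * fleet_size
zettabytes_per_day = total_bytes_per_day / 10**21
print(f"{zettabytes_per_day:.0f} ZB per day")
```

At roughly 100 zettabytes per day, the fleet would outstrip global cloud capacity almost immediately, which is exactly why the panel expects the sensor pipe to be thinned at the edge rather than stored centrally.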
- In the past 5 years, the industry moved from a domain that was predominantly academic (where the focus was on how to do machine learning, how software can understand what the world looks like) towards one that is more product-oriented. Today, there is a convergence between technology and business disciplines to develop a new product that can revolutionize urban mobility and impact towns and cities.
- There are two schools of thought when it comes to fully autonomous vehicles: traditional OEMs take a gradual approach, adding more sensors for additional features, such as Advanced Driver Assistance Systems (ADAS), that can for example assist with braking or provide acoustic warnings while parking. Fully autonomous developers or pioneers take a top-down approach, designing and equipping the vehicle with all sensors from day one to reach full autonomous driving. These two approaches are reflected in different levels of hardware architecture complexity and in the amount of work needed to prove the sensing architecture: if there is a driver behind the wheel, a certain level of failure is permitted, while if there is no driver, the requirements are more stringent.
- The different levels of the software stack are:
1. Raw data coming from different sets of sensors generates a stream that must be analyzed. Early processing can be done, and data from cameras, radars, and other sensors must then be fused together.
2. Perception refers to processing information from sensors into a concise model of the environment.
3. Localization is locating the vehicle relative to the lane, road, and the world, represented by the maps.
4. AI/ML scene understanding arrives at a semantic understanding of the perceived world: for example, the software must understand whether the car is in a construction zone, whether it is near a pedestrian, and where the road is.
5. At this level, the software must “think” about what is happening next. This step is called prediction. For example, when the car is driving near a cyclist, it needs to anticipate the cyclist’s likely moves.
6. The planner is the next phase, where the software decides which trajectory to take, given how the world looks. Driving decisions are made based on a set of goals and constraints from the environment, and the vehicle’s motion is planned.
7. The last level is where the vehicle’s desired motion is sent to the actuators via the controller. At the end of this stack, critical functions like fallback and safety must happen in real time. For example, the vehicle must know when a sensor fails, or manage the situation when a tire is damaged.
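As an illustration only, the stack above can be sketched as a staged pipeline. Every class and function name here is hypothetical — this is a toy sketch of the data flow the panel described, not any company’s actual architecture:

```python
from dataclasses import dataclass, field

# Hypothetical data types for the stages of the stack described above.
@dataclass
class SensorFrame:
    """Stage 1: raw streams from cameras, radars, etc."""
    camera: list
    radar: list

@dataclass
class WorldModel:
    """Stages 2-4: fused, localized, semantically labeled environment."""
    obstacles: list
    pose: tuple
    scene_labels: list = field(default_factory=list)

def perceive(frame: SensorFrame) -> WorldModel:
    """Fuse raw sensor streams into a concise model of the environment."""
    obstacles = sorted(set(frame.camera) | set(frame.radar))  # naive fusion: union of detections
    return WorldModel(obstacles=obstacles, pose=(0.0, 0.0))

def predict(world: WorldModel) -> list:
    """Stage 5: guess what each detected agent does next (trivially here)."""
    return [(obs, "continues-straight") for obs in world.obstacles]

def plan(world: WorldModel, predictions: list) -> str:
    """Stage 6: choose an action given goals and environmental constraints."""
    return "brake" if predictions else "cruise"

def control(action: str) -> dict:
    """Stage 7: translate the planned motion into actuator commands."""
    if action == "brake":
        return {"throttle": 0.0, "brake": 1.0}
    return {"throttle": 0.3, "brake": 0.0}

# One tick of the pipeline: a pedestrian seen by both camera and radar leads to braking.
frame = SensorFrame(camera=["pedestrian"], radar=["pedestrian"])
world = perceive(frame)
commands = control(plan(world, predict(world)))
print(commands)  # {'throttle': 0.0, 'brake': 1.0}
```

Real stacks run these stages concurrently at high rates with strict real-time and fallback guarantees; the sketch only shows how each level’s output becomes the next level’s input.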
- Companies like Arm are encouraging the ecosystem to standardize the non-differentiated pieces of the stack (i.e. the hardware and low-level software), where the differentiating value is not going to be. Standardizing these pieces saves costs, and each autonomy developer or OEM can then focus on the differentiating parts of the stack.
- At the moment an all-camera approach doesn’t guarantee enough safety to allow a driver to leave the driver’s seat. The issue with cameras is poor depth resolution. But things are developing rapidly and the ability to learn what the world looks like to actually produce a depth image from a few cameras has advanced dramatically in the past few years, and we’re going to see that shift over time.
- Adding other sensors on top of cameras, such as lidars or radars, helps in several ways: (1) it adds depth perception, (2) it provides redundancy of information, and (3) it works in all weather conditions (snow, fog, rain). Relying on multiple sensors allows for better safety and also widens the Operational Design Domain (ODD), which is the scope in which autonomy is deployed. It also allows for fewer black swans — events that weren’t predicted and that autonomous systems were not tested against.
- The two design approaches (with and without lidars) come down to cost and to the type of application: in the case of trucking or taxi applications you can afford to spend more on the hardware, while for private cars the cost of lidars could be prohibitive.
- When will deployment at scale happen? From a hardware perspective, CPUs that will be able to manage the workload are in design right now and will be ready in the next few years. Then it will be a matter of regulation and legislation. The answer also comes in degrees, because autonomy is already here: Waymo, for example, has deployed its fleet in Phoenix. The crucial questions are how we manage the complexity, how we grow over time, and how we make a business out of autonomous vehicles.
- Commercialization of fully autonomous vehicles is going to happen first in trucking, mining, construction, and logistics, and we’re already seeing it. Caterpillar has already logged 40 million miles of fully autonomous truck driving; in Canada there is Pronto.AI; Aurora and Waymo have their own trucking fleets, with rollout timelines of 2024–2025.
- Supply chain: Companies like Waymo are vertically integrated because it allows them to control the performance, safety, and quality of the full system. This is particularly true at this stage, because it’s not yet clear what sensor performance needs to be. In the future, this approach could change.
- In 10–20 years, there could be an app ecosystem, where a piece of hardware and software gives some information about the world, representing fundamental building blocks (the main reasoning functions of the scene around the vehicle). OEMs will develop the safety system that underlies how the brakes work, how the steering wheels work for example. In the future there could be a dichotomy like Apple and Microsoft for PC: some players will prefer to vertically integrate in order to control the performance, while other players will focus mainly on the software.
- What should an investor look for in a team, technology, and company in the transportation space? Interesting investment opportunities lie on the sensor side, because sensors are a key ingredient of full autonomy. There will also be space for both hardware and software models. The fundamental aspects to look for in an investment are:
1. Replicability of the operation, so you can take the software that you wrote once and do it over and over again
2. The operations cost outside of the hardware. For example, food delivery would be hard because it requires a huge amount of deliveries to generate enough profits. Other applications such as mining are more lucrative because autonomous trucks can add productivity for mining companies and in some cases replace the drivers, who cost hundreds of thousands of dollars per year. In this case, safety and comfort become fundamental.
3. How technically challenging the problem is to solve.
4. Identify a simpler operational domain where autonomous driving is deployed to have a clear and faster path to commercialization. This is true for mining, agriculture robots, and other sectors.
- What we are learning from this huge human pursuit of autonomy has impacts on other sectors: better radars, computers that are super reliable, and reinforcement learning that can run in simulation and then be transferred to the real world, with a direct impact on manipulation tasks. The biggest contribution is the amplification of the human ability to do work. Just as computers and the internet organized information and spawned a proliferation of applications, amplifying the human ability to do work will be transformative. What happens next is the biggest question.
Article Note: I’ve done my best to paraphrase the panelists’ conversation while maintaining a high level of accuracy. Any errors in summarizing are mine. Watch the full webinar to enjoy the experience and gather the best knowledge directly from the experts.
Prime Movers Lab invests in breakthrough scientific startups founded by Prime Movers, the inventors who transform billions of lives. We invest in companies reinventing energy, transportation, infrastructure, manufacturing, human augmentation, and agriculture.
Sign up here to subscribe to our blog