Prime Movers Lab Webinar Series: Autonomous Vehicles

A conversation with experts and entrepreneurs on the latest trends, technology, and safety of autonomous vehicles

Expert Panelists

Highlights from the panel discussion

  • The DARPA Grand Challenge inspired and pushed several universities to work on fully autonomous technologies. The early pioneers were students drawn to a crazy project, who wanted to do something beyond academics and were extremely passionate about software that moved “things”. The DARPA Grand Challenge is a prize competition for American autonomous vehicles, funded by the Defense Advanced Research Projects Agency (DARPA), with the objective of promoting the development of the first fully autonomous ground vehicles.
  • Some of the expertise and experience from the aerospace industry can be leveraged to develop autonomous driving technologies, keeping in mind different constraints such as limited power, limited space, and limited computational resources.
  • One of the lessons learned from aerospace is that more isn’t necessarily better, because more sensors mean more conflicts about what is actually there and what isn’t. Sensors come with false alarms, and different types of sensors produce different types of false alarms. The average autonomous vehicle is going to generate petabytes of data per day that would have to be downloaded, and when you scale that to a fleet the number is daunting; at hundreds of millions of vehicles, there is not enough storage in any cloud anywhere to hold it (see the back-of-envelope storage sketch after this list). At some point, it will be necessary to make the sensor pipe thinner. This will also improve the reaction time of autonomous vehicles to obstacles, because the perception layer will have to process less data, reducing processing time. There will be a tradeoff between system complexity and system reliability.
  • In the past 5 years, the industry moved from a domain that was predominantly academic (where the focus was on how to do machine learning and how software can understand what the world looks like) towards one that is more product-oriented. Today, technology and business disciplines are converging to develop a new product that can revolutionize urban mobility and impact towns and cities.
  • There are two schools of thought on fully autonomous vehicles. Traditional OEMs take a gradual approach, adding more sensors for additional features such as Advanced Driver Assistance Systems (ADAS) that can, for example, assist with braking or provide acoustic warnings during parking. Fully autonomous developers, or pioneers, take a top-down approach: they design and equip the vehicle with all sensors from day one to reach fully autonomous driving. The two approaches are reflected in different levels of hardware architecture complexity, and also in the amount of work needed to prove out the sensing architecture: if there is a driver behind the wheel, a certain level of failure is permitted, while if there is no driver, the requirements are more stringent.
  • The different levels of the software stack are:
  • Companies like Arm are encouraging the ecosystem to standardize the non-differentiated pieces of the stack (i.e. the hardware and the low-level software), where little of the value will reside. Standardizing these layers saves cost, and each autonomy developer or OEM can then focus on the differentiating parts of the stack.
  • At the moment, an all-camera approach doesn’t guarantee enough safety to allow a driver to leave the driver’s seat. The issue with cameras is poor depth resolution. But things are developing rapidly: the ability to learn what the world looks like and produce a depth image from a few cameras has advanced dramatically in the past few years, and we’re going to see that shift over time (a minimal stereo-depth sketch follows this list).
  • Adding other sensors on top of cameras, such as lidar or radar, helps in several ways: (1) it adds depth perception, (2) it provides redundancy of information, and (3) it works in all weather conditions (snow, fog, rain). Relying on multiple sensors allows for better safety and widens the Operational Design Domain (ODD), the scope in which autonomy is deployed. It also allows for fewer Black Swans: events that weren’t predicted and that autonomous systems were not tested against (see the toy fusion sketch after this list).
  • The two design approaches (with and without lidars) come down to cost and to the type of application: in the case of trucking or taxi applications you can afford to spend more on the hardware, while for private cars the cost of lidars could be prohibitive.
  • When will deployment at scale happen? From a hardware perspective, CPUs that can manage the workload are in design right now and will be ready in the next few years. Then it will be a matter of regulation and legislation. There are also various degrees of answer, because autonomy is already here: Waymo, for example, has deployed its fleet in Phoenix. The crucial questions are how we manage the complexity, how we grow over time, and how we make a business out of autonomous vehicles.
  • Commercialization of fully autonomous vehicles is going to happen first in trucking, mining, construction, and logistics, and we’re already seeing it. Caterpillar already has 40 million miles of fully autonomous truck driving; in Canada there is Pronto.AI; Aurora and Waymo have their own trucking fleets, with rollout timelines of 2024–2025.
  • Supply chain: Companies like Waymo are vertically integrated because it allows them to control the performance, safety, and quality of the full system. This is particularly true at this stage, because it’s not yet clear what sensor performance needs to be. In the future, this approach could change.
  • In 10–20 years, there could be an app ecosystem in which a piece of hardware and software supplies information about the world, representing fundamental building blocks (the main reasoning functions about the scene around the vehicle). OEMs will develop the underlying safety systems, for example how the brakes and the steering work. The result could be a dichotomy like Apple versus Microsoft in the PC market: some players will vertically integrate in order to control performance, while others will focus mainly on the software.
  • What should an investor look for in a team, technology, and company in the transportation space? Interesting investment opportunities lie on the sensor side, because sensors are a key ingredient of full autonomy. There will also be room for both hardware and software business models. The three fundamental aspects to look for in an investment are:
  • What we are learning from this huge human pursuit of autonomy has impacts on other sectors: better radars, super-reliable computers, and reinforcement learning that can run in simulation and then transfer to the real world, with a direct impact on manipulation tasks. The biggest contribution is the amplification of the human ability to do work. Just as computers and the internet organized information and unleashed a proliferation of applications, amplifying the human ability to do work will be transformative. What happens next is the biggest question.
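
As a back-of-envelope illustration of the storage problem raised in the sensor-data highlight above, the Python sketch below multiplies an assumed per-vehicle data rate by fleet size. All numbers are illustrative assumptions, not figures from the panel.

    # Back-of-envelope fleet data volume. All numbers are
    # illustrative assumptions, not figures from the panel.
    per_vehicle_pb_per_day = 1.0  # assumed raw sensor output per vehicle

    for fleet_size in (1_000, 100_000, 100_000_000):
        daily_pb = per_vehicle_pb_per_day * fleet_size
        yearly_eb = daily_pb * 365 / 1_000  # 1 EB = 1,000 PB
        print(f"{fleet_size:>11,} vehicles: {daily_pb:>13,.0f} PB/day, "
              f"{yearly_eb:>11,.0f} EB/year")

Even a thousandfold on-vehicle reduction leaves fleet-scale volumes that dwarf today’s data centers, which is the motivation for thinning the sensor pipe before data ever leaves the vehicle.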
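The panel’s point about camera depth rests on classical stereo geometry: depth is recovered from the disparity between two camera views as Z = f·B/d, so a fixed pixel-matching error produces a depth error that grows roughly with the square of the distance. A minimal sketch, assuming made-up values for the focal length and camera baseline:

    # Depth from stereo disparity: Z = f * B / d.
    # f (focal length, px) and B (baseline, m) are assumed values.
    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        """Return depth in meters for a given pixel disparity."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px

    F_PX, B_M = 1000.0, 0.30  # assumed camera parameters
    for d in (100.0, 10.0, 1.0):
        z = depth_from_disparity(F_PX, B_M, d)
        # Effect of a fixed 0.25 px matching error on the depth estimate:
        err = depth_from_disparity(F_PX, B_M, d - 0.25) - z
        print(f"disparity {d:6.2f} px -> depth {z:6.1f} m "
              f"(+{err:.2f} m from a 0.25 px error)")

At 3 m the error is centimeters; at 300 m it is on the order of 100 m, which is why learned depth and additional range sensors matter at distance.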
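One simple way to see why sensor redundancy improves safety margins: independent range measurements fused with inverse-variance weights yield an estimate with lower variance than either sensor alone, and if one modality degrades (fog, snow), the other still bounds the error. A toy fusion sketch; the noise figures are assumptions, not real sensor specs:

    # Toy inverse-variance fusion of two independent range estimates.
    # Sensor noise figures are illustrative assumptions.
    def fuse(z1, var1, z2, var2):
        """Fuse two independent estimates; return (mean, variance)."""
        w1, w2 = 1.0 / var1, 1.0 / var2
        fused_var = 1.0 / (w1 + w2)
        return fused_var * (w1 * z1 + w2 * z2), fused_var

    cam_range, cam_var = 52.0, 25.0      # camera depth: sigma ~ 5 m
    lidar_range, lidar_var = 50.1, 0.04  # lidar ranging: sigma ~ 0.2 m

    r, v = fuse(cam_range, cam_var, lidar_range, lidar_var)
    print(f"fused range: {r:.2f} m, sigma: {v ** 0.5:.2f} m")
    # The fused variance is always below the smaller input variance.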

Backing breakthrough scientific inventions transforming billions of lives.
