Self-driving Cars vs Humans — 100x Power Efficiency Gap
The development of self-driving cars is moving rapidly, with Waymo launching its self-driving car service on public roads without a front-seat safety driver earlier this month. Kudos to the team! It’s an awesome outcome of the research spurred by the DARPA Urban Challenge almost a decade ago.
To be clear, getting the technology out at scale, in all environments, and at the right economics is still a ways out. One of the challenges to overcome in that journey is power consumption. According to a recent report, BorgWarner estimates that today’s self-driving cars consume 2–4 kilowatts of electricity — the equivalent of running 50–100 laptops in the trunk of a car. This impacts, among other things, the choice of drivetrain, fuel efficiency, and compute architecture. Gill Pratt, CEO of Toyota Research Institute (TRI), has noted that when humans drive, our brains use about 30 watts of power. We are talking about an efficiency gap of roughly 100x, and an opportunity to bridge it. To accomplish this, future solutions will in all likelihood require task-specific compute rather than the general-purpose architectures of today. Below are three recent examples.
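As a quick sanity check on the "100x" figure, here is the back-of-the-envelope arithmetic using only the numbers cited above (the 2–4 kW BorgWarner estimate and Gill Pratt’s ~30 W figure for the human brain):

```python
# Figures from the article: 2-4 kW for today's autonomy compute stack
# (BorgWarner estimate) vs ~30 W for the human brain while driving (Gill Pratt).
car_compute_watts = (2000, 4000)  # 2-4 kW expressed in watts
brain_watts = 30                  # human brain power budget

low, high = (w / brain_watts for w in car_compute_watts)
print(f"Efficiency gap: {low:.0f}x to {high:.0f}x")  # prints "Efficiency gap: 67x to 133x"
```

So the gap is ~67x at the low end and ~133x at the high end, which is where the rough "100x" headline number comes from.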
Realtime Robotics, founded out of research at Duke University, focuses on architecture for motion planning and collision avoidance — essentially, deciding how robots move through space, really fast and at a very low power budget. Compared to current processors, Realtime Robotics’ technology can accomplish the same task at least 100x faster while consuming ~30x less power. TRI’s sister company, Toyota AI Ventures, recently invested in Realtime.
Nvidia’s Graphics Processing Unit (GPU)-based Drive PX platform has become the default workhorse for the majority of self-driving car platforms. While CPUs and GPUs are good at deterministic computing, Graphcore’s Intelligence Processing Unit (IPU) is designed for processing graphs and focuses on optimizing machine learning. With the IPU and its Poplar graph-programming framework, a recent report from Graphcore shows IPUs to be 10–100x faster than GPUs. Graphcore also recently raised funding from Sequoia.
The third trend in this thread is the compute architecture of cloud infrastructure. An enormous amount of data — 7 TB per hour for every self-driven car — is generated at the edge (the car). Centralized cloud infrastructure built on general-purpose microprocessors will likely not cut it due to latency, economics, and distance. Companies like Fungible, with its Data Processing Unit (DPU), are focusing on optimized solutions for a world of distributed data centers. Founded by seasoned industry veterans as well, Fungible also recently raised a Series A to deliver on this vision.
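To see why centralized cloud processing struggles at this scale, a minimal sketch of the bandwidth implied by the article’s 7 TB/hour figure (the per-car rate is the only number taken from the source; the unit conversions are standard):

```python
# Sustained uplink a single car would need to stream its raw sensor
# data to a central cloud, given the article's 7 TB/hour figure.
tb_per_hour = 7
bytes_per_hour = tb_per_hour * 1e12   # 1 TB = 10^12 bytes
seconds_per_hour = 3600

bits_per_second = bytes_per_hour * 8 / seconds_per_hour
print(f"Sustained uplink per car: {bits_per_second / 1e9:.1f} Gbit/s")
# prints "Sustained uplink per car: 15.6 Gbit/s"
```

A sustained ~15 Gbit/s uplink per vehicle is far beyond what cellular networks offer, which is why processing close to where the data is generated becomes attractive.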
As noted earlier, these technologies will not be developed in silos, and they will have far-reaching impact on multiple industries, companies, and business models. This is what innovation is all about — we still have a ways to go, but in the end we will have better solutions for customers and society.