Autonomous Cars’ Big Problem: The energy consumption of edge processing reduces a car’s mileage by up to 30%.

Teraki
May 15, 2019


Electric car manufacturers promote fully autonomous driving (AD) at levels 4 and 5 (L4/L5) as a way to improve the car’s mileage reach for the same charge of its electric battery. But is that really the case? At Teraki we have studied publicly available data [1] on several electric car brands across three driving scenarios: on a highway, in the city, and a mix of the two. We have estimated the electric power consumed by the edge-data processing required to enable autonomous driving. We observe roughly the same pattern across all car brands: a 10% reduction in mileage reach on a highway and a 30% reduction in the city.

Turning on full AD at L4/L5 on a present-day electric car would reduce mileage reach by about 10% on a highway, 30% in a city, and 20% for a mixture of the two.

This blog post presents a high-level calculation of the impact of the electricity consumed by AD’s edge processing on a car’s mileage reach. At this stage, it does not pretend to be scientifically accurate or very elaborate. Its main aim is to signal the current computational challenges of AD and to show that, ceteris paribus, this will have a significant impact on mileage.

It’s useful to start by plotting the electric battery capacity in today’s cars, from the public data [1]. The clear leaders are Tesla with 60–70 kilowatt-hours (kWh), followed closely by Chevrolet with 65 kWh. There is a large gap, followed by the rest of the brands with 20–35 kWh: Volkswagen, Hyundai, Ford, BMW, Kia, Nissan.

The energy capacity in the electric car batteries for major car manufacturers. Tesla and Chevrolet lead clearly, followed by the rest from a distance.

The next question is what mileage reach these batteries allow in the city, on the highway, and in a mix of the two. This information also comes directly from the public data [1].

Driving range on a full battery charge in the city, highway and combined.

We can draw two observations. First, at first order the mileage reach is proportional to the battery capacity, with Tesla and Chevrolet allowing drives of 220–250 miles, and the rest of the brands following at a distance with 60–140 miles. The probable reason is that the former two target the US market, with a driving style focused on long daily highway commutes, while the latter target consumers who drive mostly in busy cities, either in the US or in the rest of the world, mostly in Europe and Asia.

We can further take the two plots and divide them to obtain the electrical energy consumption per 100 miles. This plot confirms the qualitative observation from the first plot, but also gives a quantitative measure: for most brands there is a 20–30% increase in energy consumption when driving on a highway.

The electrical energy consumption per 100 miles.
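The arithmetic behind this plot is a simple division. Below is a minimal sketch, assuming an illustrative mid-range car with a 32 kWh pack and 130/110/120-mile city/highway/combined ranges; these values are assumptions for illustration, not taken directly from [1].

```python
# Illustrative mid-range EV (assumed values): consumption = capacity / range * 100.
battery_kwh = 32.0
range_mi = {"city": 130.0, "highway": 110.0, "combined": 120.0}

for scenario, miles in range_mi.items():
    kwh_per_100mi = battery_kwh / miles * 100
    print(f"{scenario:8s}: {kwh_per_100mi:.1f} kWh per 100 mi")
# city ~24.6, highway ~29.1 kWh per 100 mi: consumption is higher on the highway
```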

From here on we expand the study by introducing certain assumptions on top of the original data, namely the average speed. We choose 31 miles per hour (mph) for driving in a city, 56 mph for driving on a highway, and 43 mph as an average speed for a mix of city and highway. Combining the mileage reach and the speed, one can deduce the time a car is able to run on a single charge.

Duration of driving on a full charge at typical speeds for city, highway and combined.
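The same illustrative car gives a feel for these durations; the speeds are the assumptions stated above, the ranges are again assumed rather than taken from [1].

```python
# Driving duration on a single charge = range / average speed (speeds from the text).
speed_mph = {"city": 31, "highway": 56, "combined": 43}
range_mi = {"city": 130.0, "highway": 110.0, "combined": 120.0}  # assumed mid-range EV

for scenario, mph in speed_mph.items():
    hours = range_mi[scenario] / mph
    print(f"{scenario:8s}: {hours:.1f} h of driving on a full charge")
# roughly 4.2 h in the city, 2.0 h on the highway, 2.8 h combined
```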

These plots reinforce the idea that most brands can only make a one-way trip to a nearby city about 100 miles away, although driving to work and back and charging the battery overnight works fine.

We are now ready to expand our model to include full autonomous driving (AD) at L4/L5. There are three types of sensors typically used for AD: telematics (1D), video (2D) and point clouds (3D). For 3D, Tesla uses radars, while all the others use LiDARs. To adopt AD, one needs to consider the one-off cost of first equipping the car with the sensors and the edge computing resources [2] that they require (see our prior blog post on edge computing). Then there are the operational costs of the power consumed when AD is turned on. We have built a rough model, with 1D consuming a negligible amount of power, 2D needing 1 GPU at 750 Watts (W), and 3D needing 1 CPU at 500 W. One needs at least double of each for redundancy and safety, leading to a minimum of 2,500 W. We assume these values for driving in the city. On the highway, the situation is less complex, leading to fewer computations. We assume only 1,500 W are needed on the highway, and an average of 2,000 W for a combined city and highway driving style.
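The power model fits in a few lines. The highway and combined figures are the assumptions stated above, not measurements.

```python
# Rough AD power model from the text: 1D is negligible, 2D needs a GPU (~750 W),
# 3D needs a CPU (~500 W); everything is doubled for redundancy and safety.
GPU_W, CPU_W = 750, 500
ad_power_w = {
    "city": 2 * (GPU_W + CPU_W),  # 2,500 W: full processing of complex urban scenes
    "highway": 1500,              # assumed: simpler scenes, fewer computations
    "combined": 2000,             # assumed average of city and highway
}
```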

By multiplying this power consumption by the duration of the ride from the previous plot, one obtains a rough estimate of the additional energy drawn from the battery just to run the AD hardware and software stack.

Energy needed by AD hardware and software stack.
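For the illustrative mid-range car used in the earlier sketches (assumed values, not from [1]), the calculation looks like this:

```python
# Extra energy drawn from the battery by the AD stack: energy = power * duration.
ad_power_w = {"city": 2500, "highway": 1500, "combined": 2000}
drive_hours = {"city": 4.2, "highway": 2.0, "combined": 2.8}  # range / speed, assumed car

ad_energy_kwh = {k: round(ad_power_w[k] * drive_hours[k] / 1000, 1) for k in ad_power_w}
print(ad_energy_kwh)  # roughly {'city': 10.5, 'highway': 3.0, 'combined': 5.6} kWh
```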

We observe that the energy consumption is much larger in the city than on the highway. There are two reasons for this. First, in the city one drives at lower speeds, so a full charge lasts longer, and at constant power consumption a longer duration means more energy. Second, more calculations are needed in the city, due to more complex situations with more cars, pedestrians, bikes and unusual events.

We can now divide the energy needed by the AD stack by the energy stored in a full battery. The ratio is roughly 10% for the highway, 20% for a combined driving style, and 30% for the city.

Ratio of energy required by the AD hardware and software stack, relative to the energy in a full battery.
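Continuing the illustrative example (assumed 32 kWh pack and the AD energies from the sketch above):

```python
# Share of a full charge consumed by the AD stack: ratio = AD energy / battery capacity.
battery_kwh = 32.0                                               # assumed mid-range pack
ad_energy_kwh = {"city": 10.5, "highway": 3.0, "combined": 5.6}  # from the sketch above

for scenario, energy in ad_energy_kwh.items():
    print(f"{scenario:8s}: {energy / battery_kwh:.0%} of the battery")
# close to the 30% / 10% / 20% pattern quoted above
```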

This is roughly by how much the energy actually available for driving shrinks when AD is turned on. We expect the mileage reach and driving duration to be reduced to a corresponding fraction of the nominal values (given by the range plot earlier in this post), as shown in the two plots below. Using AD in a city can reduce a car’s reach by roughly up to 50 miles, a significant 30%.

Mileage reach on a full charge, with AD on, on the highway.
Mileage reach on a full charge, with AD on, in the city.
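The arithmetic behind these plots is a simple scaling. The sketch below again uses the illustrative mid-range car rather than data from [1]:

```python
# Mileage reach with AD on: nominal range scaled by the energy left for driving.
nominal_range_mi = {"city": 130.0, "highway": 110.0}  # assumed mid-range EV
ad_fraction = {"city": 0.30, "highway": 0.10}         # share of the battery taken by AD

for scenario, miles in nominal_range_mi.items():
    with_ad = miles * (1 - ad_fraction[scenario])
    print(f"{scenario:8s}: {with_ad:.0f} mi with AD on ({miles - with_ad:.0f} mi lost)")
# city: ~91 mi with AD on, i.e. roughly 40 mi of reach lost for this illustrative car
```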

To conclude, it is not realistic to run full autonomous driving with edge computing on general-purpose CPUs and GPUs; it requires custom optimization through tailored chipsets, trained with data that satisfies the safety standards of the AD industry. This aligns with recent studies describing the overall mileage required to demonstrate specific safety standards [2]. The reason is not only the high cost of CPUs and GPUs, but especially the energy required when processing AD sensor data. The solution to this problem lies in efficient algorithms that run in the car and compute on efficiently pre-processed edge data.

Teraki’s embedded software delivers such data by efficiently pre-processing and compressing telematics (1D), video (2D) and point cloud (3D) data. Why edge computing is important was presented in an earlier post: blog#1. How Teraki achieves efficient region-of-interest based compression in 2D was presented in the previous post: blog#2. How Teraki achieves effective 3D point cloud analysis will be the topic of an upcoming blog post.

Daniel Richart

Would you like to be informed on the state of the art in edge processing and automotive applications? Click here to sign up for Teraki’s fortnightly blog.

https://www.teraki.com/autonomous-cars-big-problem/

Bibliography:

[1] https://pushevs.com/electric-car-range-efficiency-epa/

[2] https://www.rand.org/content/dam/rand/pubs/research_reports/RR1400/RR1478/RAND_RR1478.pdf
