Self-driving cars — should we buckle up because of hackers?
Securing the CAN bus and dodging sensor spoofing are part of the ride to level 5 autonomy.
Since the very first webcam, we’ve seen plenty of stories of connected devices going awry. The explosion of consumer and industrial IoT has multiplied the number of devices that can be maliciously targeted. Hackers have taken control of devices, spied on people, disrupted businesses and governments, and swarmed thousands of devices into botnets. And the list goes on.
Every new technology drives bad people to spend time and energy to find ways to take advantage of others. IoT tech is a big target: smart assistants, connected homes, healthcare devices, traffic systems, manufacturing sensors, etc.
IoT security is a hot topic, and the big players know it. Recently the chipmaker ARM announced the Platform Security Architecture (PSA), a proposal for an industry-wide standard aimed at developers, hardware and chip providers. Standard, schmandard, but Microsoft, Google, Cisco, Sprint, and others are endorsing it, so who knows?
Car hacking and self-driving tech
The general concern in IoT security has driven tech and mainstream media to generate clickbaity and sensationalist news. Not surprisingly, the same is now being seen toward self-driving technology.
CAN bus and car hacking
The Controller Area Network (CAN) bus is a standard developed by Bosch starting in 1983 (with Intel producing the first controller chips), and its current version was released in the 1990s. CAN is a serial communications protocol that allows distributed real-time communication and control between vehicle components: brakes 👀, power steering 🙀, windows, A/C, airbags, cruise control, infotainment systems, doors, battery and recharging systems for electric cars, etc.
By reverse engineering the CAN bus, we can issue commands to a vehicle via software, so taking control of a car is a matter of getting access to the bus. Because the protocol’s main focus is safety and reliability, it never offered any way to enforce security, such as authentication or encryption.
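To make the "issue commands via software" idea concrete, here is a minimal sketch of how a classic CAN frame is laid out on Linux's SocketCAN interface (4-byte arbitration ID, 1-byte data length, 3 bytes of padding, 8 data bytes). The arbitration ID and payload below are made up for illustration; real IDs are exactly what reverse engineers recover from a specific vehicle.

```python
import struct

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame in SocketCAN's 16-byte wire layout:
    4-byte arbitration ID, 1-byte data length code, 3 padding bytes,
    then up to 8 data bytes (null-padded)."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack("<IB3x8s", can_id, len(data), data)

# Hypothetical frame: arbitration ID 0x244 carrying a 3-byte payload.
frame = pack_can_frame(0x244, bytes([0x01, 0x10, 0xFF]))
```

In practice you would hand a frame like this to a library such as python-can rather than packing bytes by hand, but the layout shows why the bus is so exposed: any node that can write these 16 bytes is trusted.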
The Car Hacker’s Handbook is the “bible” that introduces anyone to these components. The book examines vulnerabilities and provides detailed explanations of communications over the CAN bus and between devices and systems.
Self-driving vehicles will bring substantial transformation to our lives. Advancements in artificial intelligence, computer vision and sensor technologies are driving us toward the level-5 scenario (the state where a car’s steering wheel is optional and no human intervention is ever required).
All made possible by Moore’s Law and billions in capital.
And so it happens that our old friend, the CAN bus, has suddenly become popular. The reason is that most self-driving car companies aren’t designing or building their own vehicles from scratch. They are creating software to control the car, more specifically to control steering, acceleration and braking. Check out how the engineers from Voyage (a self-driving taxi service) used the CAN bus of a Ford Fiesta to control its temperature.
Commands to brake, accelerate or change the steering angle are sent to the CAN bus based on the firehose of data captured by the self-driving tech’s sensors. Mounted cameras, radars and LIDARs are the eyes and ears of the car. Roughly a dozen sensors blend their data (a process known as sensor fusion) to track the vehicle’s environment, and the onboard software makes all decisions in real time.
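The fuse-then-decide loop can be sketched in a few lines. This is a deliberately toy model, not how any real autonomy stack works: the sensor names, weights and the 1.5-second reaction margin are all assumptions for illustration.

```python
def fuse_distance(readings: dict, weights: dict) -> float:
    """Weighted average of per-sensor distance estimates (metres)."""
    total_w = sum(weights[s] for s in readings)
    return sum(readings[s] * weights[s] for s in readings) / total_w

def brake_command(distance_m: float, speed_mps: float,
                  reaction_margin_s: float = 1.5) -> bool:
    """Brake if the obstacle is closer than the distance the car
    covers during the reaction margin."""
    return distance_m < speed_mps * reaction_margin_s

# Hypothetical readings: each sensor estimates range to the same obstacle.
readings = {"camera": 21.0, "radar": 20.0, "lidar": 20.5}
weights = {"camera": 0.2, "radar": 0.4, "lidar": 0.4}

d = fuse_distance(readings, weights)        # fused estimate, about 20.4 m
should_brake = brake_command(d, speed_mps=15.0)  # 15 m/s * 1.5 s = 22.5 m
```

Fusing several sensors is also what makes spoofing harder: an attacker who fools only one modality shifts the fused estimate, but not as far as a single-sensor system would move.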
But are cars part of the Internet of Things?
The good news for self-driving tech is that each car acts much like a closed system. The car does not need a continuous cloud connection, though it will eventually connect to the external world to send and receive information, such as traffic reports. There’s still the risk that another system unrelated to the self-driving features (like Wi-Fi) could provide an entry point for hackers. That type of hack prompted Chrysler to recall 1.4 million cars in 2015.
External factors can influence the behavior of the car sensors or trick its AI to “think” it’s “seeing” different things.
Many papers on adversarial perturbation have shown that deep neural networks can be fooled into misclassifying objects. For instance, by adding stickers camouflaged as graffiti to a traffic sign, attackers can make the software interpret the sign as something entirely different.
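Many of these attacks build on gradient-based methods such as the fast gradient sign method (FGSM), which nudges every input feature a small step in the direction that raises the score of the wrong class. The sketch below is a toy on a linear scorer, where the gradient of the score with respect to the input is simply the weight vector; the numbers are invented and this is not the setup used in any specific paper.

```python
def sign(x: float) -> int:
    return (x > 0) - (x < 0)

def score(x, w):
    """Linear score for the (wrong) target class: s = w . x"""
    return sum(xi * wi for xi, wi in zip(x, w))

def fgsm_perturb(x, w, eps: float):
    """FGSM step: for a linear score the gradient w.r.t. each input is
    the matching weight, so stepping eps in the sign of each weight
    is guaranteed to raise the target-class score."""
    return [xi + eps * sign(wi) for xi, wi in zip(x, w)]

# Toy input features and target-class weights (all hypothetical).
x = [0.2, -0.5, 0.7]
w = [1.0, -2.0, 0.5]
x_adv = fgsm_perturb(x, w, eps=0.1)
```

The perturbation per feature is tiny (here 0.1), yet the score moves by eps times the L1 norm of the gradient, which is why barely visible stickers can flip a classifier’s decision.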
Besides the camera there are other known types of sensor spoofing. A Chinese group of researchers managed to launch attacks using off-the-shelf radio, sound and light tools to spoof Tesla’s ultrasonic sensors and millimeter wave radars, affecting its autopilot system.
Another group in South Korea figured out a way (PDF) to create fake objects on the road that are detected by the LIDAR, causing the car to lock its brakes to avoid a crash.
The road ahead
Since we’re dealing with the lives of passengers and pedestrians, security must always be a top concern. Weaponization of vehicles is unfortunately already a reality even without sophisticated technology. Most of these hacks are the equivalent of ad-hoc attacks, like someone throwing a heavy object from an overpass onto a car, or aiming a laser pointer into a driver’s eyes.
When was the last time you took an elevator from a top floor in a high-rise concerned that a hacker would make it plunge 30+ floors? Well, some people argue that self-driving cars are like elevators.
The change is coming; it’s now a matter of when, not if. So let’s buckle up for the ride!