Smart Cars Need Smart Infrastructure

On the need for machine-readable infrastructure.

JC
Argumenta
3 min read · Mar 21, 2017


Image Source: Grendelkhan

Self-driving cars have been an object of interest for a few years now. The current streak of interest in the subject began with Stanley, Stanford University’s winner of the 2005 DARPA Grand Challenge. With its descendant, the Google self-driving car, the technology has improved considerably, but it has yet to go mainstream. Various companies are now working on automating some or all of the tasks involved in driving, including Silicon Valley tech companies like Waymo and Uber, automotive technology companies like Delphi, and traditional automobile manufacturers such as GM and Volvo. Tesla, more a Silicon Valley tech company than a traditional automaker, is also a major player in this market, and has already commercialized more automation than its competitors.

Autonomous vehicles, or ‘smart cars’ henceforth, rely on various sensors whose data is analyzed by a computer that then makes driving decisions. Different companies use different kinds of sensors. For example, Waymo and Uber, in addition to using cameras to detect lane markings, road signs, signals, and so on, use Light Detection and Ranging (LIDAR) sensors that illuminate the vehicle’s surroundings with infrared light and process the reflections to build a map of the world around the vehicle, including obstacles. Tesla, on the other hand, uses cameras and radar to achieve the same goals. In other words, these machines attempt to understand a passive, human-readable world, using passive (cameras) and active (LIDAR, radar) sensors.
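To make the ranging idea concrete, here is a minimal sketch of how a sensor like LIDAR turns reflected pulses into a map of its surroundings: each return gives a distance along a known beam direction, which converts directly to a point in space. The function name and the 2D, single-sweep simplification are my own illustration, not any vendor’s actual pipeline.

```python
import math

def lidar_returns_to_points(ranges_m, start_angle_deg=0.0, step_deg=1.0):
    """Convert a simplified 2D LIDAR sweep (one range per beam angle)
    into (x, y) points around the sensor.

    Each reflected pulse yields a distance along a known beam direction,
    so the obstacle's position is just a polar-to-Cartesian conversion.
    """
    points = []
    for i, range_m in enumerate(ranges_m):
        theta = math.radians(start_angle_deg + i * step_deg)
        points.append((range_m * math.cos(theta), range_m * math.sin(theta)))
    return points

# An obstacle 5 m straight ahead (0 degrees) and one 3 m to the side (90 degrees):
points = lidar_returns_to_points([5.0, 3.0], start_angle_deg=0.0, step_deg=90.0)
print(points)
```

A real sensor sweeps thousands of beams in three dimensions and fuses many sweeps over time, but the core geometry is the same.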

This is to be contrasted with, say, an airplane that can use the Instrument Landing System (ILS) for landing. As a specific example, ILS provides radio navigation aids at airports that identify the airport and the runway. In other words, the infrastructure (i.e., the airport) is active, or machine-readable.

Machine-readable infrastructure simplifies smart car design considerably. For instance, lane markers, street signs, and signals can be equipped with short- or medium-range radio beacons that antennas on the car pick up and process, without the need for sophisticated processing of camera images. This obviates the need for confusion-resolution mechanisms, which are direly needed when a machine tries to interpret a human-readable world. For example, at some intersections, especially ones where streets join at an acute angle, it can be difficult for image-processing systems to determine which direction a signal controls. There can be issues with signals or road signs being backlit by the setting sun or another bright light source, or with glare in the camera’s optics. A radio beacon that transmits information about, say, the street and direction a signal applies to, the state of the signal (green/orange/red), and perhaps the time left in the current state, could drastically improve the performance and safety of a smart car.
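A sketch of what a car might do with such a beacon message follows. The payload fields mirror the ones listed above (street, direction, state, time remaining), but the message format, names, and the ±45° direction-matching rule are entirely hypothetical, chosen only to show how unambiguous the decision becomes once the signal announces itself.

```python
from dataclasses import dataclass
from enum import Enum

class SignalState(Enum):
    GREEN = "green"
    ORANGE = "orange"
    RED = "red"

@dataclass
class SignalBeacon:
    """Hypothetical payload a traffic-signal beacon might broadcast."""
    street: str               # street the signal controls
    heading_deg: int          # direction of travel the signal applies to (0-359)
    state: SignalState        # current state of the signal
    seconds_remaining: float  # time left in the current state

def should_stop(beacon: SignalBeacon, vehicle_heading_deg: int,
                seconds_to_intersection: float) -> bool:
    """Decide whether to stop, given an unambiguous beacon message."""
    # Ignore beacons meant for other directions of travel (within +/- 45 degrees).
    diff = abs((beacon.heading_deg - vehicle_heading_deg + 180) % 360 - 180)
    if diff > 45:
        return False
    if beacon.state is SignalState.RED:
        return True
    # Stop if the current green/orange phase ends before we arrive.
    return beacon.seconds_remaining < seconds_to_intersection

beacon = SignalBeacon("Main St", 90, SignalState.GREEN, 3.0)
print(should_stop(beacon, 92, 5.0))  # green now, but the phase ends before arrival
```

Note that no image processing is involved: the acute-angle and backlighting ambiguities above simply never arise, because the direction the signal applies to is part of the message.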

Eventually, we should consider removing pedestrians (who are not machine-readable) from the world of smart cars, perhaps by using crosswalks elevated above intersections, and barriers along sidewalks. This would not only improve the safety of pedestrians but also simplify smart cars.

The United States Department of Transportation recently proposed a rule mandating vehicle-to-vehicle (V2V) communication for crash avoidance in all cars, smart or otherwise. It is a timely regulation, but it is unclear how much it would add to crash avoidance in smart cars beyond what onboard active sensors already provide. What governments should do is take steps to make the infrastructure itself smarter. That would dramatically improve the ability of smart cars to navigate the world, and help make fully autonomous driving a reality.
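The crash-avoidance idea behind V2V can be sketched in a few lines. This is not the actual message format the DOT rule envisions; it is a toy stand-in carrying only position and velocity in a shared local frame, with made-up names, used to show how two such broadcasts are enough to predict a close approach under a constant-velocity assumption.

```python
import math
from dataclasses import dataclass

@dataclass
class V2VMessage:
    """Toy stand-in for the position/motion data a V2V broadcast carries."""
    x_m: float      # position east of a shared local origin (meters)
    y_m: float      # position north (meters)
    vx_mps: float   # velocity east (m/s)
    vy_mps: float   # velocity north (m/s)

def time_to_closest_approach(a: V2VMessage, b: V2VMessage) -> float:
    """Seconds until the two vehicles are nearest, assuming constant velocity."""
    dx, dy = b.x_m - a.x_m, b.y_m - a.y_m
    dvx, dvy = b.vx_mps - a.vx_mps, b.vy_mps - a.vy_mps
    rel_speed_sq = dvx**2 + dvy**2
    if rel_speed_sq == 0:
        return 0.0  # same velocity: the gap never changes
    return max(0.0, -(dx * dvx + dy * dvy) / rel_speed_sq)

def crash_warning(a: V2VMessage, b: V2VMessage, threshold_m: float = 2.0) -> bool:
    """Warn if the predicted closest approach is within the threshold."""
    t = time_to_closest_approach(a, b)
    ax, ay = a.x_m + a.vx_mps * t, a.y_m + a.vy_mps * t
    bx, by = b.x_m + b.vx_mps * t, b.y_m + b.vy_mps * t
    return math.hypot(bx - ax, by - ay) < threshold_m

# Two cars 100 m apart, driving straight at each other at 10 m/s each:
print(crash_warning(V2VMessage(0, 0, 10, 0), V2VMessage(100, 0, -10, 0)))  # True
```

The point of the article’s comparison stands either way: an onboard LIDAR or radar can detect the same oncoming car directly, which is why V2V’s marginal benefit for already-sensor-laden smart cars is debatable.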
