Three Approaches to Solving the Autonomous Vehicle Orientation Problem

What different driverless car technologies reveal about their overall strategies

Thoughtworks Canada
Connected
May 14, 2019


Ever since the emergence of companies like Tesla and Waymo, major manufacturers have been rapidly expanding their focus on autonomous vehicle development. These efforts accelerated in 2018, raising hopes that autonomous passenger vehicles could become a reality in the near future. GM, for example, already has vehicles without a steering wheel or pedals for employee use in San Francisco, and offers self-driving rideshares through its investment in Lyft. Ford, VW, Toyota, and Mercedes-Benz are all throwing their hats into the ring. Tesla has also announced that it will have fully self-driving cars by 2020.

Although the future of cars chauffeuring humans looks bright, autonomous driving remains a complex task, one requiring the continuous analysis of a rapid stream of data points, not to mention a plethora of decisions that are trivial for humans. Object detection, distance, speed, orientation, and local traffic laws are all factors that help us make decisions when controlling a vehicle. To reach level 5 autonomy as defined by the Society of Automotive Engineers (although Ben Evans explained it best), computer systems need to be able to perform all of these essential tasks. And for each task, a different technological solution must be found. Current technologies that help with each task are:

  • Distance: Lidar (Light Detection and Ranging)
  • Speed: Radar
  • Object Detection: Cameras, image processing, and machine learning
  • Traffic Laws: object detection (for traffic lights and signs), image processing (for lane detection), and access to local data
  • Path Planning: Software algorithms such as the A* search algorithm (see the sketch after this list)
  • Orientation: Simultaneous Localization and Mapping (SLAM) tools or pre-recorded three-dimensional maps
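
To make the path-planning item above concrete, here is a minimal, illustrative A* search over a toy occupancy grid. The grid representation, unit step cost, and Manhattan-distance heuristic are assumptions made for the sake of the example; a production planner would work on lane graphs and continuous trajectories rather than a grid.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* search on a 2D occupancy grid.

    grid: list of lists, where 0 = free cell and 1 = obstacle.
    start, goal: (row, col) tuples.
    Returns a list of cells from start to goal, or None if no path exists.
    """
    def heuristic(cell):
        # Manhattan distance: admissible on a 4-connected grid with unit step cost
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(heuristic(start), 0, start, [start])]  # (f = g + h, g, cell, path)
    visited = set()

    while open_set:
        _, g, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in visited):
                new_g = g + 1
                heapq.heappush(open_set,
                               (new_g + heuristic((nr, nc)), new_g, (nr, nc), path + [(nr, nc)]))
    return None  # goal is unreachable

# Example: route around a single obstacle in a 3x3 grid
print(a_star([[0, 1, 0], [0, 1, 0], [0, 0, 0]], (0, 0), (0, 2)))
```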

Out of all the elements above, orientation (or knowledge of one’s position relative to the surroundings) seems to be the most difficult for driverless cars to master, and that has to do with the dynamic nature of cities. Construction sites, road closures, new signs, and missing road markings are just a few examples of the kind of uncertainty that can change urban surroundings to the point of confusing even humans, let alone software. Naturally, there is no single method of dealing with the problem. Recent advances, however, reveal three noteworthy strategies.

1. The Tesla Approach

Tesla’s approach to orientation has been to pack as much intelligence into each individual car as possible. Rather than rely on pre-recorded maps, Elon Musk wants to combine image processing and machine learning to give each Tesla vehicle real-time knowledge of its surroundings. Tesla vehicles learn as they go and share their knowledge with other cars. Because they rely on the world around them rather than on historical data such as maps, they don’t run the risk of relying on an outdated map. However, this kind of real-time processing adds a lot of complexity to the car.

This approach carries a number of benefits. Because real-time navigation cannot rely solely on historical data but requires up-to-date knowledge of the environment, Tesla vehicles can adapt to changing conditions around them. This points to methods similar to SLAM, and we can infer that Tesla is moving toward building vehicles that can drive in any condition, independent of pre-recorded knowledge of their surroundings. Elon Musk has said publicly that Tesla does not plan to include Lidar in its vehicles. This approach, as Musk put it, gets rid of the “ugly, expensive and unnecessary” mapping equipment, at the cost of a deeper dependency on cameras and software in dealing with uncertainties.
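
As a rough illustration of what “learning as they go” might look like in code, here is a toy sketch of a vehicle maintaining its own picture of nearby landmarks from per-frame camera detections. Everything here (the detection format, the blending weight, the class itself) is a hypothetical simplification, not a description of Tesla’s actual pipeline.

```python
class LocalLandmarkMap:
    """Toy model of a vehicle building a map of its surroundings on the fly.

    Each camera frame yields detections of the form (landmark_id, x, y) in a
    shared coordinate frame. Repeated observations are blended with an
    exponential moving average, so the map reflects what the car currently
    sees rather than a pre-recorded snapshot.
    """

    def __init__(self, blend=0.3):
        self.blend = blend      # weight given to the newest observation
        self.landmarks = {}     # landmark_id -> (x, y) position estimate

    def update(self, detections):
        for landmark_id, x, y in detections:
            if landmark_id in self.landmarks:
                old_x, old_y = self.landmarks[landmark_id]
                x = (1 - self.blend) * old_x + self.blend * x
                y = (1 - self.blend) * old_y + self.blend * y
            self.landmarks[landmark_id] = (x, y)

    def snapshot(self):
        """Return the car's current belief about nearby landmarks."""
        return dict(self.landmarks)
```

A real SLAM system would also estimate the vehicle’s own pose and model sensor noise explicitly; the point here is simply that the map is built from live observations rather than loaded from disk.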

2. The General Motors/Mercedes Benz/Ford Approach

GM and Mercedes-Benz have both been investing heavily in a passive, pre-mapped approach rather than active analysis of the surroundings. GM acquired its own Lidar supplier in 2017. Ford, in partnership with Baidu, invested $150M in Velodyne (a Lidar supplier), and Mercedes-Benz has also awarded Velodyne a contract to supply Lidar.

These vehicles depend on a pre-recorded, high-resolution 3D map of their surroundings, captured beforehand by vehicles equipped with Lidar. The vehicle can then use the map, determine whether the environment has changed using its own Lidar equipment, and take control while cruising in the mapped area. This implies a broader autonomous driving strategy: to keep the maps updated and the vehicles usable, wider collaboration between municipalities and auto manufacturers is needed to create and maintain up-to-date street maps for vehicles to use. Currently, as we can see in systems such as Cadillac’s Super Cruise, the vehicle can only navigate on pre-mapped freeways, and only as long as the consistency and safety checks pass. This method offers a high degree of reliability and predictability, but it comes at a higher cost due to the effort needed to record the maps and to manufacture vehicles with Lidar equipment.
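
The “consistency and safety checks” mentioned above can be pictured as comparing the live Lidar scan against the pre-recorded map and disengaging when they diverge. The sketch below is a deliberately naive version of that idea; the point-matching tolerance and match ratio are invented thresholds, and production systems use far more sophisticated registration and validation.

```python
import math

def map_matches_live_scan(recorded_points, live_points,
                          tolerance=0.5, min_match_ratio=0.9):
    """Naive consistency check between a pre-recorded 3D map and a live Lidar scan.

    recorded_points, live_points: lists of (x, y, z) tuples in a common frame.
    A live point "matches" if some recorded point lies within `tolerance` metres.
    If too few live points match, the environment has probably changed and the
    system should hand control back to the driver.
    """
    if not live_points:
        return False

    def has_nearby_recorded_point(p):
        return any(math.dist(p, q) <= tolerance for q in recorded_points)

    matched = sum(1 for p in live_points if has_nearby_recorded_point(p))
    return matched / len(live_points) >= min_match_ratio
```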

3. The Volkswagen Approach

Another approach to autonomy in driving is (counter-intuitively) not to focus so much on making cars smart enough to adapt to their environment, but rather to create smarter environments. This eases the burden on vehicles to figure out all the uncertain elements around them. In this scenario, the environment alerts the vehicle to changing surroundings and tells it, with higher precision, what the current conditions are. Think construction cones that can tell an approaching car exactly where the construction area is and where the temporary lanes are.

Volkswagen (VW) has been hard at work to establish itself as a pioneer in Vehicle-to-Everything (V2X) communication. VW announced that all of its 2019 vehicles will ship with a full suite of V2X capabilities. In Vehicle-to-Infrastructure (V2I), VW has partnered with Siemens to test smart traffic lights in real-world settings in its hometown of Wolfsburg, Germany.

The approach is not limited to communication with infrastructure. Cars can also talk to each other and inform one another of important events such as accidents and objects on the road. This technology, dubbed Vehicle-to-Vehicle (V2V) communication, is already available in production on the Cadillac CTS. This is a much broader ecosystem than the previous two strategies: the complexity of vehicle autonomy and the degree of uncertainty are reduced by investing in smarter roads. It requires auto manufacturers, V2X suppliers, and municipalities to collaborate on the infrastructure and standards that let vehicles navigate smoothly and with a lower threshold for error.
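
To give a feel for what such an exchange might involve, here is a hypothetical V2V/V2I event message and the minimal encode/receive logic around it. The field names, JSON encoding, and staleness cut-off are all assumptions for illustration; real deployments use standardized message sets and dedicated radio links rather than this ad-hoc format.

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class RoadEvent:
    """Hypothetical V2V/V2I event report (illustrative, not a real standard)."""
    event_type: str      # e.g. "accident", "debris", "lane_closed"
    latitude: float
    longitude: float
    timestamp: float     # seconds since the epoch, when the event was observed
    source_id: str       # anonymised ID of the reporting vehicle or roadside unit

def encode_event(event: RoadEvent) -> bytes:
    """Serialise an event for broadcast over whatever radio link is in use."""
    return json.dumps(asdict(event)).encode("utf-8")

def handle_incoming(payload: bytes, max_age_seconds: float = 300.0):
    """Decode a received event; drop it if the report is too old to act on."""
    event = RoadEvent(**json.loads(payload.decode("utf-8")))
    if time.time() - event.timestamp > max_age_seconds:
        return None  # stale report, ignore
    return event

# Example: one car reports debris on the road, another decides whether to react
payload = encode_event(RoadEvent("debris", 43.6532, -79.3832, time.time(), "veh-042"))
print(handle_incoming(payload))
```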

Where do we go from here?

Interaction between humans, machines, and their surroundings is not a trivial problem to solve. It is still too early to tell which of these strategies will play a more dominant role in the future of driving; more likely, an amalgam of all the tools, processes, and technologies we have today will shape the roads of tomorrow. The landscape of the automotive industry is changing quickly, with initiatives such as micro-mobility gaining traction (GM and Ford are two examples) and technology companies such as Google’s Waymo starting to manufacture cars. Technology, however, is only one aspect of an autonomous future. Regulations and safety rules also need to adapt to the changes in driving. This year Ford, GM, and Toyota formed a partnership to champion safety regulations pertaining to autonomous vehicles. The human experience is another crucial factor and food for thought. Only time will tell how these problems will be solved, but one thing is clear now: we live in one of the most exciting times for mobility product innovation.

Stay tuned for more mobility blogs. In the meantime, read our latest blog on How Netflix Built an Innovative Culture.

Originally published at https://www.connected.io.
