Why a self-driving car doesn’t use Google Maps

Machines need maps of their own

Neehar Garg
mapper.ai
6 min read · Aug 2, 2018


At mapper.ai, we are mapping the world for machine use-cases. One of the questions we get asked a lot is how what we do differs from Google Maps, Apple Maps, Open Street Maps, etc. The truth is the maps we build are categorically different — the Googles and Apples of the world build human maps, whereas we at mapper.ai build machine maps. While both kinds of maps are tools used to make location-based decisions, the end users are distinct and as a result the types and quantities of information each contains are remarkably different. Here I’m going to dig into the substantial differences between these two types of maps.

So what is a human map?

(Think Google Maps, HERE, Apple Maps, among others)

When you think of maps, this is probably what you think of. It’s the map you pop open on your smartphone, the blue dot that tells you where you are, and the routing directions you layer on top of it.

A simple human map in San Francisco

These maps are extraordinarily useful. They enable countless apps that rely on location (Google Maps of course, but also Snapchat, Uber, Foursquare, and thousands of others). Using a simple phone GPS, these maps can tell you where you are within a few meters — they can track traffic patterns, recommend routes, provide local discounts, and a million other useful things.

The kinds of useful semantic data on these maps include places of interest (POIs), as well as street names and addresses. In a car, you can layer routing instructions on top of these maps, things like “turn right on Main St.” A great map like Google’s will even include granular information like the number of lanes in a road, which results in improved driving directions like “stay in the right two lanes to continue on I-90.”

Human maps enable decision-making for a driver who also has common sense

Human maps are a great resource, and they provide information to feed the most powerful pattern-matching computer in the world today: the human brain, equipped with common sense.

Common sense is often equated with the simple or the obvious, but it is incredibly difficult to replicate in code. It comes from thousands of years of evolution and countless small cues from your surroundings, and it relies on a brain that is far better at pattern matching than anything fashioned from silicon (so far).

Because common sense is so powerful, human maps can afford to be less precise. They rely on simple GPS receivers (like the one in an iPhone), which give you that little blue dot, accurate to within a few meters in most places in the world. This is powerful technology, and more than enough when paired with common sense. The blue dot tells you you’re on Main St, then relies on your common sense not to drive on the sidewalk, even though the GPS doesn’t know the difference. The blue dot may disappear in tunnels and around high-rises; the map relies on your common sense to recognize that you have neither ceased existing nor stopped moving, and you continue until the blue dot reappears. The blue dot tells you that you’re getting close to an exit that leaves from the right-most lane of a five-lane freeway, but it doesn’t know what lane you’re in; it relies on your common sense to notice that you’re in the left-most lane and need to begin moving right. And if the blue dot suddenly jumps over the ocean while you’re standing outside your house, common sense keeps you from panicking and trying to swim.

Machine maps enable decision-making for a powerful computer that lacks common sense

Machine maps supplement the computer in an autonomous car just as human maps supplement the human brain. Like a human, this powerful computer makes driving decisions in real time, but it lacks that essential common sense to interpret the context around it. As a result, the map must be incredibly precise: the car must know where it is within 10 cm! It must be able to convey information a human understands intuitively, as well as more granular instructions, such as the following (a small sketch of what this implies appears after the list):

  • Exactly which lane the car is in, as well as any traffic signals and signs that apply to that lane at that time
  • The elevation of the car and the road: the map must convey where the car is even when one road passes beneath another (something GPS alone cannot resolve). On a highway overpass, the car must know it is on the highway, not on the surface road below it
  • The map can’t turn off: it must continue to provide that granular localization even when GPS signals are weak or missing
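To make those requirements a bit more concrete, here is a minimal sketch (in Python) of the kind of map-relative pose a machine map has to support. Every name here is invented for illustration; this is not mapper.ai’s format or any particular vehicle stack, just a rough picture of why a few meters of GPS accuracy isn’t enough.

```python
from dataclasses import dataclass


@dataclass
class MapRelativePose:
    """Hypothetical map-relative localization result for an autonomous car."""
    x_m: float          # position east of a local map origin, meters
    y_m: float          # position north of a local map origin, meters
    z_m: float          # elevation, so stacked roads can be told apart
    heading_rad: float  # direction of travel
    lane_id: str        # which mapped lane the car believes it occupies
    horizontal_error_m: float  # estimated position uncertainty, meters

    def is_precise_enough(self, budget_m: float = 0.10) -> bool:
        # The article's "within 10 cm" requirement, expressed as a check
        # the planning stack could apply before trusting this estimate.
        return self.horizontal_error_m <= budget_m


pose = MapRelativePose(x_m=12.41, y_m=-3.08, z_m=9.6,
                       heading_rad=1.57, lane_id="main_st_nb_lane_2",
                       horizontal_error_m=0.04)
assert pose.is_precise_enough()  # 4 cm of uncertainty fits the 10 cm budget
```

The point of the elevation field and the lane ID is exactly the overpass problem above: two roads can share the same latitude and longitude, and only a precise, three-dimensional, lane-aware estimate can tell them apart.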

So what is a machine map then?

(Think mapper.ai, of course)

A machine doesn’t care about a pretty interface, a night mode, or carefully chosen fonts. A machine map is simply the unvarnished aggregation of information that is essential to a machine’s navigational purposes. It includes an up-to-date 3D model of the world around the car, plus layers of semantic information on top of that 3D data.

Some semantic information that could be layered onto a machine map

3D point cloud

A 3D point cloud of the Mission neighborhood of San Francisco

Videos like these are probably familiar to anyone who follows autonomous vehicles. A point cloud is spatially organized data, essentially a large collection of points in 3D space. A precise, up-to-date 3D point cloud map enables a few essential tasks for an autonomous car:

  1. It allows the car to localize. Any autonomous car using LiDAR generates a point cloud in real time. By aligning that live point cloud with a pre-existing map, the car can figure out exactly where it is to within a few centimeters, even when GPS is unreliable (a minimal sketch of this alignment step follows the list).
  2. The real-time LiDAR is constantly working on object detection and avoidance, accounting for the moving objects around the car. An accurate point cloud map serves as a baseline view of the environment: the more that has already been mapped out, the easier it is for the on-board computer to focus its limited computational energy on the (quite literally) moving parts.
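Item 1 above, aligning a live scan to a pre-built map, is classically done with variants of the Iterative Closest Point (ICP) algorithm. The sketch below is a bare-bones, unoptimized version using NumPy and SciPy, assuming you already have the live scan and the map as arrays of 3D points; production systems use far more robust registration, so treat this purely as an illustration of the idea.

```python
import numpy as np
from scipy.spatial import cKDTree


def icp_align(live_scan, map_cloud, iterations=20):
    """Bare-bones point-to-point ICP: estimate the rigid transform (R, t)
    that best aligns a live LiDAR scan to a pre-built point cloud map."""
    src = np.asarray(live_scan, dtype=float)   # N x 3 points from the car's sensor
    tgt = np.asarray(map_cloud, dtype=float)   # M x 3 points from the machine map
    tree = cKDTree(tgt)                        # fast nearest-neighbor lookups in the map
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        moved = src @ R.T + t                  # apply the current pose guess
        _, idx = tree.query(moved)             # closest map point for each scan point
        matched = tgt[idx]
        # Kabsch step: best rigid motion between the matched point sets.
        src_c, tgt_c = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - src_c).T @ (matched - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t_step = tgt_c - R_step @ src_c
        R, t = R_step @ R, R_step @ t + t_step # compose with the running estimate
    return R, t                                # car pose expressed in the map frame
```

The returned rotation and translation place the car’s sensor frame inside the map frame, which is exactly the centimeter-level localization described above, and it works whether or not GPS is available.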

Semantic information

Simple semantic information: each lane’s direction of travel

Layered on top of this point cloud is the semantic information that tells an autonomous car the rules that should govern its behavior at any given time. This semantic information provides context to the car’s AI and serves as a blueprint of the world around the car. At its most basic level, that includes road boundaries (i.e., drive between these lines, please). The more layers of information that are readily available on top of this, the easier a time the on-board computer will have navigating, since more and more of the environment is sketched out for it ahead of time. This can include things like the following (a rough sketch of how such layers might be represented follows the list):

  1. Lane markings
  2. Direction of travel
  3. Traffic signals and signs, and the lanes with which each is associated
  4. Pathways through road intersections
  5. Curb markings and parking rules
  6. “High-confidence routes” (pathways that are easiest for an autonomous car to navigate)
  7. Construction zones and any temporary rules that accompany them
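To show how such layers might hang together, here is a small, hypothetical schema in Python. The class and field names are invented for illustration and do not reflect mapper.ai’s actual data format; the key idea is simply that every annotation is anchored in the same coordinate frame as the point cloud and can be cross-referenced (for example, a signal knows which lanes it governs).

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point3D = Tuple[float, float, float]  # x, y, z in the map frame, meters


@dataclass
class Lane:
    lane_id: str
    left_boundary: List[Point3D]      # polyline of the left lane marking
    right_boundary: List[Point3D]     # polyline of the right lane marking
    direction_of_travel: str          # e.g. "northbound"
    speed_limit_mps: float


@dataclass
class TrafficSignal:
    signal_id: str
    position: Point3D
    controls_lanes: List[str]         # lane_ids this signal applies to


@dataclass
class ConstructionZone:
    zone_id: str
    outline: List[Point3D]            # polygon around the affected area
    temporary_speed_limit_mps: float


@dataclass
class SemanticLayer:
    """Semantic annotations anchored to the same frame as the point cloud."""
    lanes: List[Lane] = field(default_factory=list)
    signals: List[TrafficSignal] = field(default_factory=list)
    construction: List[ConstructionZone] = field(default_factory=list)

    def signals_for_lane(self, lane_id: str) -> List[TrafficSignal]:
        # Which signals govern the lane the car believes it is in right now?
        return [s for s in self.signals if lane_id in s.controls_lanes]
```

A query like signals_for_lane lets the on-board computer ask, at any moment, which rules apply to the lane it thinks it occupies, which is exactly the kind of question the list above is meant to answer ahead of time.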

When done right, the combined product is pretty cool: a 3D point cloud at city-wide scale, with clear and accurate annotations layered on top of it.

A 3D point cloud with semantic information layered on top of it

The semantic information most useful to the autonomous car industry is still in flux, and will continue to come into focus over the coming years. In the meantime, it is crucial that map providers be able to add and customize such layers quickly and accurately.

I hope this was a helpful primer on human versus machine maps. In future articles, I will dig deeper into what makes for a good machine map, and the best ways to create those maps in the right places and at the right scale to accelerate the development of self-driving cars.

Mapper.ai is building the world’s largest repository of machine maps. If you’re interested in trying us out, reach out at info@mapper.ai
