Cognition for Autonomous Cars Using 6D Localization
By Anuj Gupta, Product Mgt. — Sensor Fusion and Alexa Lee, Marketing
To drive autonomously, vehicles need software that emulates the routines of natural human cognition: the processes used to judge, plan, acquire knowledge, or otherwise “think.” Autonomous vehicles must be able to understand the world around them, and this environmental context can be provided in the form of a machine-readable, high-definition “semantic map.” Detailed 3D semantic maps, also commonly known as “HD maps,” have become the industry standard for enabling higher cognition in self-driving cars, and Civil Maps is trailblazing at the frontier of this emerging market.
Even so, HD semantic maps are of little use to a vehicle without precise localization: the ability of an autonomous vehicle to accurately position itself within the reference map. Like a human driver, an intelligent vehicle needs to know where it currently is before it can plan a route and then follow its desired path. Moreover, while the new generation of highly detailed 3D maps is far more comprehensive than traditional 2D map projections, it is not sufficient on its own to achieve SAE Level 4 autonomous driving, in which the human driver bears no responsibility for vehicle control or route planning. Truly “self-driving” cars need much more, in the form of “cognitive tools” that aid environmental awareness and decision-making.
Civil Maps has addressed this gap by developing techniques for localizing a vehicle in six degrees of freedom: the translational axes (x, y, z) and the rotational axes (roll, pitch, yaw) that are more familiar to pilots than to automotive enthusiasts. The concept video above shows the result of combining our highly detailed 3D semantic map with localization in six degrees of freedom (“6DoF,” also referred to as “6D”). Localization in 6DoF allows the 3D semantic map to be projected into the field of view of perception sensors such as LiDAR, cameras, and radar. Used in this way, Civil Maps’ localization routines give the car an additional layer of assistive map information, enabling smarter decisions and safer driving. With both location and orientation in 6DoF, the vehicle can focus (foveate) its sensors on a particular region of space where a need-to-know event is occurring in the car’s local frame of reference and environment.
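To make the idea concrete, here is a minimal sketch (not Civil Maps’ actual code) of how a 6DoF pose, a position plus roll/pitch/yaw, lets a vehicle pull a map landmark into its own frame of reference. The function names and the Z-Y-X rotation convention are illustrative assumptions:

```python
import numpy as np

def rotation_from_rpy(roll, pitch, yaw):
    """Rotation matrix from roll, pitch, yaw (radians), Z-Y-X convention."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def map_point_to_vehicle_frame(point_map, position, rpy):
    """Express a map-frame landmark in the vehicle frame, given the
    vehicle's 6DoF pose (position + roll/pitch/yaw) in the map frame."""
    R = rotation_from_rpy(*rpy)
    # Subtract the vehicle's position, then undo its orientation.
    return R.T @ (np.asarray(point_map, dtype=float) - np.asarray(position, dtype=float))
```

Once a landmark is expressed in the vehicle frame, the car knows exactly which direction to foveate its sensors toward; the map projection lines up with what the cameras and LiDAR actually see only when all six pose components are correct.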
A natural question to ask: What happens when the car is not localized in 6DoF?
Without 6DoF localization, the semantic map projections will not be accurately aligned with the physical objects that the car’s sensors are recording. A vehicle without 6DoF localization cannot accurately track its position, and will misjudge the precise locations of expected signs, signals, and other roadway infrastructure. It would also get only a rough idea of its surroundings from the 3D map, so it would need to re-derive a semantic understanding of its environment and match that against the map for validation. This makes the ride less comfortable and shrinks the safety envelope of the autonomous system.
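A quick back-of-the-envelope calculation (ours, for illustration) shows why even a small orientation error matters: a heading error rotates every projected landmark, and the resulting misalignment grows with distance.

```python
import math

def lateral_misalignment(distance_m, heading_error_deg):
    """Lateral offset (meters) between a projected map landmark and its
    true position, caused by a heading (yaw) estimation error."""
    return distance_m * math.sin(math.radians(heading_error_deg))

# A mere 1-degree heading error shifts a landmark 50 m ahead
# by roughly 0.87 m -- enough to misplace a lane boundary.
```

Position errors add on top of this, which is why all six degrees of freedom, not just latitude and longitude, must be estimated precisely.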
The consequences of faulty localization range from computational inefficiency, where the vehicle must do far more processing than necessary to establish its bearings and situational awareness, to disastrous failures that may endanger human life.
You may be wondering how well this all works in the real world at high speeds. In the video above, the Civil Maps team shares footage of a recent localization and map-usage demo shot in Plymouth, Michigan. Using sensor fusion and the Atlas DevKit, our demo vehicle localizes itself while driving at speeds approaching 70 mph on a major highway. The future is here, and we all need to move fast to keep up!
Follow our blog this summer as we release more about localization in 6DoF.
P.S. We’re hiring! Don’t see your role listed yet? Email: firstname.lastname@example.org.