An integrated approach to helping vehicles see

How we solve the most pressing issue in autonomous driving with disruptive technology and an integrated business model

Christoph Bonik
Artisense
Jan 6, 2019 · 3 min read


Vertically integrated perception stack for autonomous vehicles and mobile robotics

Autonomous vehicles (AVs) require highly accurate perception and localization capabilities, augmented by machine-readable, navigation-grade, real-time 3D maps.

Artisense’s proprietary software, ArtiSLAM, is setting the new benchmark in camera-based Simultaneous Localization and Mapping (SLAM). Our computer vision and machine learning R&D teams develop a scalable approach to perception and 3D mapping at navigation-grade quality — rather than betting on expensive sensor systems or working in 2D.


MONARX Mobile Mapping System

The MONARX Mobile Mapping System (MMS), together with tightly coupled sensor-fusion algorithms and ArtiSLAM, produces selective 3D point clouds (3D reconstructions) and highly accurate trajectories (vehicle location over time) independent of GPS. Machine learning algorithms add semantic information to the point cloud, for example objects and road boundaries, to form HD maps for AVs. Artificial intelligence (AI) also enhances SLAM performance under poor lighting conditions or with further reduced sensor setups. Given MONARX's small size, its low hardware and processing requirements, and the lightweight nature of our selective point clouds, any vehicle can easily be retrofitted with an Artisense MMS to collect data and optionally update dynamic 3D maps.
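To make that data flow more concrete, here is a minimal Python sketch of the pipeline from SLAM output to a labeled HD map. Every type and function name below (Pose, MapPoint, HDMap, run_slam, label_points, build_hd_map) is a hypothetical placeholder for illustration and does not reflect the actual ArtiSLAM or MONARX interfaces.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# All classes and functions below are hypothetical placeholders for illustration;
# they do not represent the real ArtiSLAM or MONARX APIs.

@dataclass
class Pose:
    """Vehicle pose at one timestamp (position in meters, heading in radians)."""
    timestamp: float
    x: float
    y: float
    z: float
    yaw: float

@dataclass
class MapPoint:
    """One point of the selective 3D point cloud, optionally carrying a semantic label."""
    x: float
    y: float
    z: float
    label: str = "unknown"  # e.g. "road_boundary", "lane_marking", "traffic_sign"

@dataclass
class HDMap:
    """Lightweight HD map: a sparse, labeled point cloud plus the recorded trajectory."""
    points: List[MapPoint] = field(default_factory=list)
    trajectory: List[Pose] = field(default_factory=list)

def run_slam(camera_frames, imu_samples) -> Tuple[List[Pose], List[MapPoint]]:
    """Stand-in for the visual-inertial SLAM stage: fuses camera and IMU data into a
    GPS-independent trajectory and a selective point cloud (not implemented here)."""
    return [], []

def label_points(points: List[MapPoint]) -> List[MapPoint]:
    """Stand-in for the machine-learning stage that attaches semantic labels, such as
    objects and road boundaries, to the reconstructed points (not implemented here)."""
    return points

def build_hd_map(camera_frames, imu_samples) -> HDMap:
    """Chain the stages: SLAM, then semantic labeling, then HD-map assembly."""
    trajectory, cloud = run_slam(camera_frames, imu_samples)
    return HDMap(points=label_points(cloud), trajectory=trajectory)
```

The sketch only shows the ordering of the stages; the SLAM and labeling steps are left as stubs because their implementations are proprietary.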

Point cloud from Odaiba in Tokyo, Japan
Point cloud and trajectory from Frankfurt, Germany

Such maps serve AVs as a reference for navigation, so they must be centimeter-accurate and always up to date. City planners may use the map's spatial and temporal information for analytics in a smart-city context, while insurers use it to better assess risk. OEMs, automotive suppliers, and mobility companies use the data for AI training and simulation, exploiting the inherent trajectory information, which effectively captures human driving behavior in the observed environment. Fleet managers achieve better localization of vehicles and robots in GPS-denied environments and can optimize fleet behavior through analysis of trajectories.


As a fully vertically integrated solution for the perception stack, the product can be used as a map as well as directly in AV operation and ADAS (Advanced Driver-Assistance Systems). Our algorithms run both on the edge in the vehicle and in the cloud to create the environmental understanding that feeds into the planning and, eventually, the control software.
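As a rough illustration of that split, the sketch below shows how a per-frame localization result could feed a planner and controller on the vehicle. The objects and methods used here (localizer.track, hd_map.query, planner.plan, controller.execute) are assumed placeholders, not Artisense APIs.

```python
# A minimal, hypothetical sketch of the on-vehicle loop: perception produces a pose and
# a local map excerpt, which planning and control consume. Interfaces are assumptions.

def perception_step(frame, imu, localizer, hd_map):
    """Edge side: localize the vehicle against the HD map and return its pose
    plus the relevant map section around it."""
    pose = localizer.track(frame, imu)              # camera/IMU-based pose, GPS-independent
    local_map = hd_map.query(pose, radius_m=100.0)  # nearby lanes, objects, boundaries
    return pose, local_map

def drive_loop(sensors, localizer, hd_map, planner, controller):
    """Per-frame cycle: perceive, plan, act."""
    for frame, imu in sensors:
        pose, local_map = perception_step(frame, imu, localizer, hd_map)
        trajectory = planner.plan(pose, local_map)  # planning consumes the environment model
        controller.execute(trajectory)              # control turns the plan into actuation
```

In this view the HD map is not only an offline product but also the shared environment model that the edge and cloud components keep in sync.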

We are excited to play our part in bringing about full autonomy for the benefit of everyone. I am always eager to exchange ideas and discuss, so please reach out on LinkedIn or via ces@artisense.ai to meet us at CES in Las Vegas. If you are passionate about autonomy, artificial intelligence, and mapping, please see our careers page for current openings and stay tuned for more updates here.

Follow us on Twitter, LinkedIn and subscribe to our publications on Medium.
