So how it works is that when you launch an ARCore/ARKit app, the tracker checks to see if there is a map pre-loaded (or downloaded) & ready to go (there never is in v1.0 of ARCore & ARKit), so the tracker initializes a new map by doing a stereo calculation as I described in my last post. This means we now have a nice little 3D map of just what is in the camera's field of view. As you start moving around, and new parts of the background scene move into the field of view, more 3D points are added to the map and it gets bigger. And bigger. And bigger. This never used to be a problem because trackers were so bad that they'd drift away unusably before the map got too big to manage. That isn't the case anymore, and managing the map is where much of the interesting work in SLAM is going on (along with deep learning & CNNs). ARKit uses a "sliding window" for its map, which just means it only stores a limited amount of the recent past (in both time and distance travelled) and throws away anything older. The assumption is that you're never going to need to relocalize against a part of the scene from a while ago. ARCore manages a larger, persistent map, which means the system should be more reliable at relocalizing.
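To make the sliding-window idea concrete, here is a minimal sketch (not ARKit's actual implementation, which is not public) of a map that keeps only recent keyframes, pruning by both age and distance travelled. The class and parameter names are my own invention for illustration:

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Keyframe:
    timestamp: float          # seconds since tracking started
    position: tuple           # camera translation (x, y, z) in metres
    points: list              # 3D map points triangulated from this frame


class SlidingWindowMap:
    """Toy sliding-window map: retains only keyframes from the recent
    past, bounded by both elapsed time and distance from the newest pose.
    Everything older / farther is thrown away, as described above."""

    def __init__(self, max_age_s=10.0, max_dist_m=5.0):
        self.max_age_s = max_age_s
        self.max_dist_m = max_dist_m
        self.keyframes = deque()  # ordered oldest -> newest

    def add(self, kf: Keyframe):
        self.keyframes.append(kf)
        self._prune(kf)

    def _prune(self, latest: Keyframe):
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

        # Drop keyframes that are too old OR too far from the current pose.
        while self.keyframes and (
            latest.timestamp - self.keyframes[0].timestamp > self.max_age_s
            or dist(latest.position, self.keyframes[0].position) > self.max_dist_m
        ):
            self.keyframes.popleft()
```

A persistent map, ARCore-style, would instead keep (or serialize) those pruned keyframes so the tracker can relocalize against them later.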
AR researchers and industry insiders have long envisioned that at some point in the future, the realtime 3D (or spatial) map of the world, the AR Cloud, will be the single most important software infrastructure in computing, far more valuable than Facebook's social graph or Google's PageRank index.