Hitting it where it matters
ARCore is Google’s augmented reality (AR) development platform that lets developers build their own AR apps. It is designed to work on a wide range of Android devices running Android 7.0 (Nougat) and later; Google maintains a complete list of supported devices in its ARCore documentation.
ARCore exposes various application programming interfaces (APIs) that let the phone sense its environment, understand its surroundings, and interact with the world using the information it gathers. To enable shared AR experiences, some of these APIs are available on both Android and iOS.
ARCore builds on the ideas behind Tango, Google’s first experiment in the field of augmented reality. Unlike Tango, however, ARCore does not require specialized hardware to function.
ARCore works with three key technologies to merge the virtual content with the real world as we see through the phone’s camera:
- Motion tracking — enables the phone to understand and track its position relative to the real world.
- Environmental understanding — enables the phone to detect the size and location of different types of surfaces: horizontal, vertical, and angled.
- Light estimation — enables the phone to evaluate the lighting conditions of the current environment.
How does ARCore work?
ARCore has two core functions: tracking the phone’s position as it moves and building its own understanding of the real world. It uses the phone’s camera to identify visually distinct feature points and tracks how those points move over time. By combining the movement of those feature points with readings from the phone’s inertial sensors, ARCore determines both the position and orientation of the phone as it moves through space.
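The idea of combining a fast-but-drifting inertial estimate with a slower, drift-free visual estimate can be illustrated with a toy one-dimensional complementary filter. This is a deliberately simplified stand-in for the visual-inertial odometry ARCore actually performs; the weights and numbers below are invented for illustration:

```python
# Toy 1-D sensor fusion: blend a position estimate integrated from the
# inertial sensors (responsive but accumulates drift) with one derived
# from visual feature-point motion (slower but anchored to the world).
# All values here are made up for illustration.

def fuse(imu_pos: float, visual_pos: float, alpha: float = 0.9) -> float:
    """Complementary filter: trust the IMU short-term, vision long-term."""
    return alpha * imu_pos + (1 - alpha) * visual_pos

true_pos = 1.0
estimate = 1.5  # IMU-only estimate carrying 0.5 m of accumulated drift

for _ in range(20):
    # Each frame the visual tracker re-observes roughly the true position,
    # and fusion gradually bleeds the inertial drift away.
    estimate = fuse(estimate, true_pos + 0.01)

# After a few frames the fused estimate sits near the visual one.
```

The low weight on the visual term keeps the estimate smooth frame-to-frame while still correcting long-term drift, which is the essential trade-off any visual-inertial system makes.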
Besides identifying feature points, ARCore can detect flat surfaces such as a floor, a table, or a book, and it can also estimate the lighting conditions of the area around it. Together, these abilities let ARCore build its own perception of the world around it.
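Both ideas can be sketched in a drastically simplified form. The snippet below assumes 3-D feature points and camera pixel intensities are already available, and the point data is invented; ARCore’s real plane fitting and light estimation are far more sophisticated:

```python
from statistics import mean, pstdev

def find_horizontal_plane(points, tol=0.02):
    """Treat a cluster of 3-D feature points (x, y, z) as one horizontal
    plane if their heights (y) agree to within `tol` metres; return the
    plane height, or None if the points are not coplanar enough."""
    ys = [y for (_, y, _) in points]
    if pstdev(ys) < tol:
        return mean(ys)
    return None

def estimate_light(pixels):
    """Crude light estimate: average brightness of camera pixels (0-255)."""
    return sum(pixels) / len(pixels)

# Invented feature points all lying near table height (~0.75 m):
table_points = [(0.1, 0.74, 0.3), (0.4, 0.75, 0.2), (0.2, 0.75, 0.5)]
plane_y = find_horizontal_plane(table_points)

brightness = estimate_light([100, 120, 140])
```

A real system clusters points, fits planes in arbitrary orientations, and grows them over time as more of the surface comes into view; the threshold test above only conveys the basic intuition.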
ARCore’s understanding of the real world lets the user place virtual annotations or objects in a way that blends seamlessly with reality: a virtual painting on a wall, a coffee cup on a table, and so on. The user can move around and view the object from different angles, and can even leave the room and, on returning, find the object exactly where they placed it.
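The “object stays put” behaviour comes from storing the object’s pose in a fixed world coordinate frame rather than relative to the camera. A minimal translation-only sketch (real AR anchors use full 6-DoF poses with rotation, which this illustration ignores):

```python
# An anchor stores a position in the fixed world frame. To render it each
# frame, convert into the moving camera's frame: camera-space position is
# the anchor's world position minus the camera's current world position.
# (Translation-only for illustration; real systems also apply rotation.)

def world_to_camera(anchor_world, camera_world):
    return tuple(a - c for a, c in zip(anchor_world, camera_world))

cup_anchor = (1.0, 0.75, -2.0)   # a virtual cup placed on a table, world frame

# As the user walks around, the camera pose changes but the anchor does not:
view_from_door = world_to_camera(cup_anchor, (0.0, 1.5, 0.0))
view_from_side = world_to_camera(cup_anchor, (2.0, 1.5, -2.0))
# The cup's world position never changed; only its on-screen position did.
```

Because only the camera term changes between frames, the object appears fixed in the room no matter where the viewer moves, including after leaving and re-entering.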