Using Computer Vision to Identify Positioning for Navigation With ARKit

I’m building a product called WaypointAR, a navigation platform that uses augmented reality (AR) to make indoor navigation more intuitive. One of the major problems I’ve encountered is accurately determining the user’s current location. GPS and Wi-Fi can only do so much indoors and don’t provide the accuracy this kind of navigation requires. I’ve explored many options for mitigating this issue so that waypoints can be placed accurately relative to the user’s position. After experimenting with platforms like Google Floor Plans, Mapbox, Wi-Fi triangulation, and beacons, I’ve been building a solution based on computer vision and image recognition.

Image Detection: Placing ARKit planes on images

I’m working on an image-recognition-based system for WaypointAR to deliver accurate location data to the user’s phone. The application identifies custom WaypointAR codes and performs an action or returns a value when one is recognized. This demo detects the code and anchors an ARKit plane to the image, which is a simple way of visualizing the application’s ability to detect the image from multiple angles and perform an action (placing the plane) in response. Within the AR application, each image will return precise coordinates, which will be used to determine the positions of the AR waypoints.
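The demo above can be sketched with ARKit’s built-in image detection. This is a minimal sketch, not WaypointAR’s actual implementation: the resource group name "WaypointAR Codes" is an assumption standing in for whatever asset-catalog group holds the code images.

```swift
import ARKit
import SceneKit

class CodeScannerViewController: UIViewController, ARSCNViewDelegate {
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        // "WaypointAR Codes" is an assumed AR Resource Group in the asset catalog
        if let codes = ARReferenceImage.referenceImages(
            inGroupNamed: "WaypointAR Codes", bundle: .main) {
            configuration.detectionImages = codes
        }
        sceneView.session.run(configuration)
    }

    // ARKit calls this when it anchors a detected image in the scene
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        // Size the plane to the code's real-world dimensions
        let size = imageAnchor.referenceImage.physicalSize
        let plane = SCNPlane(width: size.width, height: size.height)
        plane.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.6)
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2  // lay the plane flat over the image
        node.addChildNode(planeNode)
    }
}
```

Because the anchor tracks the physical image, the plane stays pinned to the code as the user moves around it, which is what makes the multi-angle detection visible.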

Multiple Image Detection: Differentiation of codes

The next step toward making WaypointAR work throughout an indoor location is enabling it to differentiate between various WaypointAR codes. This demo differentiates between two WaypointAR codes, visualized by the color of the ARKit plane. The AR application will be able to scan a code anywhere in the indoor location, receive data, and display waypoints that guide the user to their destination.
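One way to sketch the differentiation step: each reference image carries the name it was given in the asset catalog, so the delegate can branch on that name. The code names and colors below are illustrative assumptions, not the actual WaypointAR codes.

```swift
import ARKit
import SceneKit

// Assumed mapping from a code's asset-catalog name to its plane color
let codeColors: [String: UIColor] = [
    "waypointCodeA": .systemBlue,
    "waypointCodeB": .systemOrange
]

extension CodeScannerViewController {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor,
              let name = imageAnchor.referenceImage.name else { return }
        let size = imageAnchor.referenceImage.physicalSize
        let plane = SCNPlane(width: size.width, height: size.height)
        // Color identifies which code was recognized; gray for unregistered codes
        plane.firstMaterial?.diffuse.contents =
            (codeColors[name] ?? .gray).withAlphaComponent(0.6)
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2
        node.addChildNode(planeNode)
    }
}
```

The same name lookup that picks a color here could instead return any payload tied to that code, such as its surveyed position in the building.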

Computer Vision for data transfer

The end goal is to create a network of WaypointAR codes within indoor locations to enable navigation throughout them. These codes will serve as starting points and trigger the necessary data (precise location data), which will then be used to run calculations within the 3D environment and place waypoints.
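The “calculations within the 3D environment” could look something like the sketch below: each code’s surveyed floor-plan coordinate is known ahead of time, so once a code is scanned, a destination’s position in ARKit world space is the anchor’s position plus the floor-plan offset. The code names and coordinates are made-up assumptions, and for simplicity this ignores any rotation between floor-plan axes and ARKit’s world axes (i.e., it assumes the codes are mounted in a consistent orientation).

```swift
import ARKit
import simd

// Assumed: surveyed floor-plan coordinates (in metres) for each code
let codeLocations: [String: SIMD3<Float>] = [
    "waypointCodeA": SIMD3<Float>(0.0, 0.0, 0.0),
    "waypointCodeB": SIMD3<Float>(12.5, 0.0, -4.0)
]

// Convert a destination's floor-plan coordinate into ARKit world space,
// using the most recently scanned code as the reference point.
func worldPosition(of destination: SIMD3<Float>,
                   scannedCode name: String,
                   anchor: ARImageAnchor) -> SIMD3<Float>? {
    guard let origin = codeLocations[name] else { return nil }
    // Offset from the scanned code to the destination, in floor-plan space
    let offset = destination - origin
    // The anchor's translation column gives the code's position in world space
    let anchorPosition = SIMD3<Float>(anchor.transform.columns.3.x,
                                      anchor.transform.columns.3.y,
                                      anchor.transform.columns.3.z)
    return anchorPosition + offset
}
```

A waypoint node placed at the returned position would then appear at the destination’s real-world location, with intermediate waypoints computed the same way.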

A little about me…I’m a 14-year-old Virtual Reality (VR) and Augmented Reality (AR) developer. I’m super passionate about creating new realities. I’m the youngest AR developer in the world to be sponsored by Microsoft to develop for the Microsoft HoloLens, and I’ll be keynoting at this year’s Augmented World Expo and C2 Montreal. In the future, I want to play a major part in shaping the XR industry and changing the world. Feel free to connect with me on LinkedIn!