Augmented reality is the technology that expands our natural world with digital information to enhance our senses. What we get is a view of the physical real-world environment with superimposed computer-generated images. The term was coined in 1990, yet it was only with the rise of the Internet and smartphones that augmented reality started having a real impact on our habits, social life, and the entertainment industry.
Augmentation happens in real time and within the context of the environment, which means that users remain aware of being in the real world, now enhanced by computer vision. AR can draw on a range of data (images, animations, videos, 3D models), and people see the result in both natural and synthetic light.
To better understand what augmented reality is, let’s go inside its technology to see how it works.
How AR works
In most augmented reality applications the user sees both worlds, i.e. synthetic and natural light, simultaneously. To put it simply, this is achieved by overlaying projected images on a pair of see-through goggles or glasses, which allows images and interactive virtual objects to be layered on top of the user's view of the real world. AR can be displayed on various devices, including screens, glasses, handheld devices, mobile phones, and head-mounted displays. In the future, as AR advances, devices will require less hardware and will start being applied to things like contact lenses and virtual retinal displays.
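At its simplest, combining synthetic and natural light amounts to blending computer-generated pixels over the camera's view of the scene. A minimal sketch of that idea, using hypothetical single-pixel RGB values rather than a real image pipeline:

```python
# Minimal sketch of compositing a virtual layer over a camera frame.
# The "images" here are single hypothetical RGB pixels; a real AR
# pipeline performs this blend per pixel on the GPU.

def composite(real_px, virtual_px, alpha):
    """Blend a virtual pixel over a real-world pixel.

    alpha = 1.0 shows only the virtual object; 0.0 only the real world.
    """
    return tuple(
        round(alpha * v + (1 - alpha) * r)
        for r, v in zip(real_px, virtual_px)
    )

camera_pixel = (200, 180, 160)   # light from the real scene
hologram_pixel = (0, 120, 255)   # computer-generated content
blended = composite(camera_pixel, hologram_pixel, alpha=0.5)
print(blended)  # (100, 150, 208)
```

Optical see-through headsets achieve the same effect physically, since the real scene's light passes through the glasses and the display adds the virtual layer on top.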
Key Approaches to Augmented Reality Technologies
Depending on the technologies involved, such as SLAM (simultaneous localization and mapping), depth tracking (sensor data calculating the distance to objects), or NFT (natural feature tracking), to mention a few, we can speak of three main approaches to augmented reality:
1. SLAM based:
SLAM (Simultaneous Localization and Mapping) is believed to be the most effective way to render virtual images over real-world objects. It simultaneously localizes the sensors with respect to their surroundings and maps the structure of the environment. SLAM is, in fact, not a specific algorithm or piece of software, but rather a set of algorithms aimed at solving the simultaneous localization and mapping problem.
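The "simultaneous" part can be illustrated with a deliberately tiny one-dimensional toy: the pose estimate is predicted from motion, the map (here a single landmark position) is refined from an observation, and the refined map in turn corrects the pose. All numbers and the blending gain are made up; real SLAM systems use probabilistic filters or graph optimization over many landmarks.

```python
# Toy 1-D illustration of the SLAM loop: predict the sensor's position
# from motion (localization), then refine the landmark estimate from an
# observation (mapping), then let the refined map correct the pose.
# This is only the skeleton of the idea, not a usable SLAM system.

def slam_step(pose, landmark, odometry, measured_range, gain=0.5):
    # Prediction: dead-reckon the new pose from the motion reading.
    pose = pose + odometry
    # Mapping: the observation says the landmark sits at pose + range;
    # blend that evidence into the current map estimate.
    observed_landmark = pose + measured_range
    landmark = landmark + gain * (observed_landmark - landmark)
    # Localization: the refined landmark in turn corrects the pose.
    pose = pose + gain * (landmark - measured_range - pose)
    return pose, landmark

pose, landmark = 0.0, 10.0   # initial guesses (arbitrary units)
pose, landmark = slam_step(pose, landmark, odometry=1.0, measured_range=9.2)
```

Repeating this step as the sensor moves is what lets an AR device both know where it is and keep a consistent model of the room around it.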
2. Recognition based:
The recognition (or marker) based approach uses a camera to identify visual markers or objects, such as QR/2D codes or natural feature tracking (NFT) markers, and shows an overlay only when the marker is sensed by the device. Once recognized, the marker is replaced on screen with a virtual 3D version of the corresponding object. The device calculates not only the marker image but also its position and orientation, so rotating the marker rotates the virtual replica as well. In this way the user can observe the object in more detail and from various angles.
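The "rotating the marker rotates the virtual replica" behaviour boils down to applying the marker's detected orientation to the model's points. A 2-D sketch (real trackers estimate a full 6-degree-of-freedom pose, not just one angle):

```python
import math

# Sketch of the marker-based idea: once the marker's orientation has
# been recovered, the same rotation is applied to the virtual model so
# it turns with the marker. 2-D rotation only, for illustration.

def rotate(point, angle_deg):
    """Rotate a 2-D model point by the marker's detected angle."""
    a = math.radians(angle_deg)
    x, y = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

model = [(1, 0), (0, 1)]       # two corners of a virtual object
marker_angle = 90              # orientation read off the marker
rotated = [rotate(p, marker_angle) for p in model]
```

Because the same transform is recomputed every frame, the virtual object stays locked to the marker as the user turns it.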
3. Location Based:
The location-based approach relies on GPS, a digital compass, velocity meters, or accelerometers to provide data about the user's location, and the augmented reality visualizations are activated based on these inputs. With smartphones fitted with location detection features, this type of augmented reality technology has become quite popular for everyday use. Common uses of location-based AR include mapping directions, finding nearby services, and similar mobile apps.
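A typical location-based trigger is "show the overlay when the user is within some radius of a point of interest", which reduces to a great-circle distance check on GPS coordinates. A sketch using the haversine formula (the coordinates and 50 m radius below are made-up example values):

```python
import math

# Location-based AR activates content near a point of interest (POI).
# Haversine great-circle distance between two GPS fixes; the POI
# coordinates and activation radius are hypothetical examples.

def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def should_show_overlay(user, poi, radius_m=50.0):
    """Activate the AR visualization when the user is inside the radius."""
    return haversine_m(*user, *poi) <= radius_m

poi = (48.8584, 2.2945)   # hypothetical point of interest
print(should_show_overlay((48.8580, 2.2947), poi))  # within ~50 m -> True
```

In a real app the device's compass heading would additionally decide where on screen the overlay is drawn.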
Key components of Augmented Reality devices
To work effectively, Augmented Reality devices should have the following components:
Cameras and sensors
Both cameras and sensors are usually on the outside of the augmented reality device. Sensors gather the user's real-world interactions for further processing and interpretation, while cameras visually scan the surroundings so the device can locate physical objects and generate 3D models. These may be special-duty cameras, like the depth-sensing cameras in Microsoft HoloLens, which work in tandem with two "environment understanding cameras" on each side of the device. Still, many AR devices use common smartphone cameras to take pictures, videos, and sometimes more specific information to assist with augmentation.
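A depth-sensing camera reports, for each pixel, the distance to the scene. Under the standard pinhole camera model, each such pixel can be back-projected into a 3-D point, which is how a device turns depth frames into a local 3-D model of the room. A sketch with made-up intrinsic parameters (fx, fy, cx, cy):

```python
# Back-project a depth pixel into camera-space 3-D coordinates using
# the pinhole camera model. The intrinsics (focal lengths fx, fy and
# principal point cx, cy) are made-up example values.

def backproject(u, v, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Convert pixel (u, v) with measured depth (metres) to camera-space XYZ."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

point = backproject(420, 240, depth=2.0)
print(point)  # (0.4, 0.0, 2.0)
```

Running this over every pixel of a depth frame yields the point cloud from which surfaces and 3D models are reconstructed.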
Processing
AR devices act basically like mini-supercomputers packed into tiny wearables, much like modern smartphones. Similarly, they require a CPU, a GPU, flash memory, RAM, a Bluetooth/WiFi microchip, a GPS, etc. to measure speed, angle, direction, orientation in space, and so on. Some advanced AR devices additionally use an accelerometer to measure the speed of the user's head movements, a gyroscope to measure head tilt and orientation, and a magnetometer to detect which direction the head is pointing, for an extra-immersive experience.
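The gyroscope's contribution can be sketched very simply: it reports angular velocity, and integrating those readings over time gives a running estimate of head orientation. A single-axis (yaw) example with made-up sample data; real devices fuse gyroscope, accelerometer, and magnetometer readings to cancel the drift that pure integration accumulates.

```python
# Integrate gyroscope angular-velocity samples to track head yaw.
# Single axis only, with hypothetical readings; production sensor
# fusion combines several sensors to correct for gyroscope drift.

def integrate_yaw(samples, dt):
    """samples: angular velocities in deg/s, one reading every dt seconds."""
    yaw = 0.0
    for omega in samples:
        yaw += omega * dt
    return yaw

gyro_readings = [10.0, 10.0, 5.0, 0.0]       # deg/s over four 0.1 s intervals
print(integrate_yaw(gyro_readings, dt=0.1))  # 2.5 degrees of head turn
```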
Projection
You could easily write a book on projection in augmented reality, but in a short post let's stick to only one aspect, or rather one unit: the miniature projector often found in a forward- and outward-facing position on wearable AR headsets. It takes data from sensors and projects the resulting digital content onto a surface to view. The projector can turn any surface into an interactive environment, be it a wrist, a wall, or another person. Projectors may eventually replace screens: in the future you won't need a tablet to play chess online; you'll have a virtual tabletop in front of you. However, projection in AR is not yet mature enough for commercial products or services, so this future is still to come.
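Whatever the display path, getting 3-D content onto a 2-D surface ultimately relies on perspective projection: points farther from the viewer land closer to the centre of the image. A minimal sketch (the focal length f is an arbitrary example value):

```python
# Minimal perspective projection of a 3-D point onto a 2-D image plane.
# The focal length f is an arbitrary example value.

def project(point3d, f=1.0):
    """Project a camera-space point (x, y, z) onto the z = f image plane."""
    x, y, z = point3d
    if z <= 0:
        raise ValueError("point is behind the viewer")
    return (f * x / z, f * y / z)

print(project((2.0, 1.0, 4.0)))  # (0.5, 0.25)
```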
Reflection
Some AR devices have mirrors to help human eyes view virtual images. Some devices have an "array of small curved mirrors", while others are equipped with a double-sided mirror, with one surface reflecting incoming light to a side-mounted camera and the other reflecting light from a side-mounted display to the user's eye. The goal of all such reflection paths is to align the image properly with the user's eye.
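The geometric core of every such mirror path is reflecting a light ray about the mirror's surface normal, given by the standard formula r = d − 2(d · n)n:

```python
# Reflect a light ray's direction d about a mirror's unit normal n,
# using the standard reflection formula r = d - 2(d . n)n.

def reflect(d, n):
    """Reflect direction vector d about unit normal n (both 3-D tuples)."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

# A ray travelling straight down hits a horizontal mirror and bounces up.
print(reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # (0.0, 1.0, 0.0)
```

Designing the mirror arrangement so that these reflected rays converge correctly on the eye is what produces a properly aligned virtual image.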
Key Software Development Kits
The popularity of AR applications has given rise to the creation of tools for developers. At present, developers planning to get into AR have a large variety of software development kits (SDKs) to choose from. Some are device-specific, others are targeted at specific applications, and some are completely open source. This topic is so huge and exciting that we'll cover AR application development in a separate post, but for now let's have a glimpse of some of the best SDKs for augmented reality:
Wikitude
Supported platforms: Android, iOS, Windows for tablets, smart glasses (Epson Moverio, Vuzix M100, ODG R-7).
Wikitude offers image recognition, object recognition, 3D markerless tracking, ARKit and ARCore support, scene recognition, save-and-share instant augmentations, and Unity live preview.
Vuforia
Supported platforms: Android, iOS, UWP and Unity Editor.
Vuforia is one of the most popular platforms, providing APIs for C++, Java, Objective-C++, and .NET. It supports iOS, Android, and Tango devices as well as Unity-based AR apps. Its key functionalities include recognition of different types of visual objects, text and environment recognition, VuMark (a combination of picture and QR code), and the Vuforia Object Scanner, which allows scanning and creating object targets.
ARKit
Supported platforms: iOS 11/12.
ARKit supports 2D image detection and tracking, meaning the ability to embed virtual objects into AR experiences. It allows developing apps that recognize spaces and 3D objects, as well as place virtual objects on surfaces. ARKit works with third-party graphics engines, including the popular Unity and Unreal Engine.
ARCore
Supported platforms: Android 7.0 and higher, iOS 11 or higher.
Google’s response to ARKit, ARCore comes with three major capabilities for merging virtual and real worlds: motion tracking; environmental understanding, i.e. detection of the size and location of horizontal, vertical, and angled surfaces; and light estimation.
ARToolKit
Supported platforms: Android, iOS, Linux, Windows, macOS and Smart Glasses.
ARToolKit is an open-source tracking library for augmented reality. It implements the following functionalities: single-camera or stereo-camera position/orientation tracking; tracking of simple black squares; tracking of planar images; camera calibration; and optical stereo calibration. What is important about ARToolKit is that it is completely free and open source, and at the same time fast enough for real-time AR applications.
EasyAR
Supported platforms: Android, iOS, UWP, Windows, Mac and Unity Editor.
Another free library, EasyAR, is an easy-to-use alternative to Vuforia that supports 3D object recognition, environment perception, and cloud recognition. Registering for the service grants immediate access to a library of free AR tools, including QR code scanning, video playback, Unity support, and planar image tracking, which makes it a valuable option for developers who want to try their hand at AR applications.
We hope this overview gives you an idea of the central concepts of augmented reality and perhaps encourages you to try developing an AR application yourself. Where AR may be applied is a question we'll discuss in the following post.