Top 5 learnings for visualizing data in augmented reality

The experience is great, but AR still doesn’t have a killer dataviz.

From Apple to Google to Facebook to startups like Magic Leap, there are more and more ideas about how augmented reality (AR) could replace not only the smartphone but every other screen we use.

Following this trend, could AR soon lead to a widespread adoption of 3D visualizations? Could they even “become mainstream with Augmented Reality”?
To explore this new field, I’ve developed three 3D data visualizations for AR.

3D charts are mostly unreadable

So far, 3D charts are anything but mainstream and are frowned upon in the datavis scene. There is a cliché that 3D visualizations are good for eye-candy “infoporn” or for dressing up complexity, mainly to gain more press coverage. They don’t really provide more insight or a better visual representation of complex data than 2D does.
Nevertheless, there are a few good 3D charts that also work quite well in 2D, such as the New York Times’ 3D yield curve by Gregor Aisch.

Yield curve by Gregor Aisch and Amanda Cox for the New York Times

Humans are fully trained to live in a 3D world

Combining AR with data visualization or data analytics isn’t a new thing: startups such as Virtualitics or 3Data are already working in this field, focusing mainly on analyzing data and on presenting and discussing insights in AR/VR.
A team at IBM is working on analyzing data in AR, aiming to “develop an improved intuition on the relationships between entities in high dimensional datasets.”

Furthermore, there is a lot of ongoing research on 3D data visualizations in AR, especially in the field of Immersive Analytics.

A research group at UFRGS tested the perception of desktop-based 2D, desktop-based 3D and VR scatterplots and confirmed their hypothesis that the immersive technique surpassed desktop-based 3D in every respect: better comprehension of distances and outliers, more natural interaction, and improved engagement and accuracy. Experiments conducted at Caltech and JPL came to similar results. Nevertheless, the subjective results didn’t match: users still perceived the 2D technique to be more intuitive and faster.

Does that mean there is great potential for data visualization and analytics in AR/VR, but we are just not ready yet?

To find some answers to these questions, I developed three experimental 3D visualizations in AR for the iPhone X and summarized the results in five learnings.

Data visualized on maps in AR, read more about the projects here.
  • Terrain-based map with GPS locations of the Tour du Mont Blanc and various markers for the mountain huts along the route.
  • All buildings and the three most common street trees (~24k) in Manhattan. The maps were created with Mapbox, the data visualized in Unity, and the result converted to a native iOS app running ARKit.
  • 3D scatterplot visualizing ~10k soccer players in an AR web experience (WebAR) with the help of an experimental browser by Google (a minimal sketch of such a point cloud follows below).

Read more about the side projects here.
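
As an illustration of how such a scatterplot could be fed into a WebAR scene, here is a minimal three.js sketch that packs ~10k data points into a single point cloud. The record shape, field names, and scaling are hypothetical placeholders, not the code used in the actual project.

```typescript
import * as THREE from "three";

// Hypothetical record shape: three numeric attributes per soccer player.
interface PlayerStats {
  pace: number;
  shooting: number;
  passing: number;
}

// Build one THREE.Points object so ~10k markers cost a single draw call.
function buildScatterplot(players: PlayerStats[]): THREE.Points {
  const positions = new Float32Array(players.length * 3);

  players.forEach((p, i) => {
    // Map each attribute (assumed 0-100) into roughly a 1 m cube in AR space.
    positions[i * 3] = p.pace / 100;
    positions[i * 3 + 1] = p.shooting / 100;
    positions[i * 3 + 2] = p.passing / 100;
  });

  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute("position", new THREE.BufferAttribute(positions, 3));

  const material = new THREE.PointsMaterial({ color: 0x2194ce, size: 0.01 });
  return new THREE.Points(geometry, material);
}

// Usage: add the point cloud to whatever scene the AR-capable browser renders.
// const scene = new THREE.Scene();
// scene.add(buildScatterplot(loadedPlayers));
```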


Top 5 learnings for data visualizations in AR

  1. Finding meaningful use cases is difficult.
    It’s hard to come up with great new use cases that build on the biggest advantage of AR: the physical space around you. There are good examples of using GPS in gaming apps like Pokémon GO or of using the physical space in retail apps such as IKEA Place, but meaningful use cases for data visualizations in the physical space are still missing.
    I could imagine a lot of potential in data journalism, e.g. accessing data by looking at 2D images. It’s good to see that the NYTimes is already experimenting in this field.
    To speed up the process, I chose simple projects which don’t use AR to its full potential and could also have been visualized in VR. Still, they helped me arrive at the next learning.
  2. Interactive experiences in AR are fun.
    Interacting with content in AR is more fun than expected and opens up new possibilities with touch, speech, proxemics, gestures, gaze, or wearables, e.g. accessing more and more detailed content by moving closer to an object (similar to zooming in); see the sketch after this list.
    Immersing yourself in the data in a familiar environment and sharing and discussing the data and visualization with others in the same room was not only a great experience but also increased the motivation to talk about the data together. Virtualitics and 3Data have recognized this potential as well: collaboration.
  3. The environment matters.
    Object detection only works in bright rooms or in daylight; it won’t work in dark rooms or at night. Keeping a virtual object anchored in place is sometimes tricky: Apple’s ARKit uses the gyroscope and accelerometers to determine how you have moved in relation to that virtual object. If you move a lot, or even worse if you are on a metro, the virtual objects sometimes literally just fly away.
  4. Performance is awesome.
    More complex data can be visualized in AR than expected. The experiments showed that walking around a landscape of more than 24k data points between 3D buildings (built with Unity) or through 10k data points in a 3D scatterplot (developed in WebAR on top of ARKit) works well, which is quite promising.
  5. WebAR isn’t really usable, yet.
    WebAR is making progress but is not ready for mass adoption yet. You still have to download an experimental web browser by Google, called WebARonARCore or WebARonARKit, or Mozilla’s recently released WebXR Viewer. There is also AR.js, marker-based AR for the web by Jerome Etienne that doesn’t need any plugins and runs right in your browser. So let’s get started!
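
The proxemic “zoom” mentioned in learning 2 can be sketched in a few lines: on every frame, measure the distance between the AR camera and the visualization’s anchor and toggle a detail layer accordingly. This is an illustrative three.js sketch with made-up thresholds and a hypothetical VizNode structure, not code from the projects above.

```typescript
import * as THREE from "three";

// Hypothetical handle to a visualization placed in the AR scene.
interface VizNode {
  object: THREE.Object3D; // the rendered chart
  labels: THREE.Object3D; // detail layer, e.g. text sprites
}

// Toggle the detail layer based on how close the viewer has moved.
function updateDetail(camera: THREE.Camera, viz: VizNode): void {
  const anchor = new THREE.Vector3();
  viz.object.getWorldPosition(anchor);

  const distance = camera.position.distanceTo(anchor);

  // Made-up threshold: show labels within 0.75 m, hide them beyond that.
  viz.labels.visible = distance < 0.75;
}

// Call once per rendered frame, e.g. inside the render loop:
// renderer.setAnimationLoop(() => {
//   updateDetail(camera, viz);
//   renderer.render(scene, camera);
// });
```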

Let me know what you think about this topic and get in touch on Twitter.
If you are interested in this field, make sure to read the post Can Augmented Reality solve Mobile Visualization? by Dominikus Baur.