AUGMENTED INFOGRAPHICS

Creating playful and meaningful content experiences with AR

sven ehmann
Infographics Next

--

When a new technology enters the mainstream, we tech-savvy humans get excited, curious, and hungry, as if no new technology had ever existed before. As a result, we tend to chronically over-explore it, searching for its potential as well as its limits, and hopefully ending up at a point where things make sense again and the technology makes a relevant contribution.

One of the strongest images of a potential augmented future reality might be Keiichi Matsuda’s 2013/2014 speculative video vision “Hyper-Reality”. The 6:15-minute concept film was a blast, full of colorful, intense, and innovative ideas, layers, images, interfaces, stories, and decoration. Some of it picked up where the original Blade Runner dystopia left off; other elements went far beyond. It was inspirational. And scary. That it inspired both reactions was probably intentional.

Our reaction was: No one needs more input or more information. But better information? Yes, please!

So what makes information better? It should be approachable, comprehensible, and logical, so that we can understand it. It should be attractive and entertaining, so that we want to understand and recall it. And it should be there when we need it, where we need it. That last point in particular is a key promise of augmented reality (AR).

Augmented reality arrived as a visionary technology when it first appeared on the lower left of Gartner’s hype cycle, then steadily made its way up and down the curve before finally moving up again. It has since landed as an integral part of recent OS generations on practically every smartphone and now allows us to see additional digital information built to exist in the real world, just as Matsuda envisioned it.

While a large part of the tech and performance side of AR seems to be solved, for now at least, the content and UX/UI side has yet to be fully developed. When would we appreciate an AR offering? How would we approach it, use it, read it, or explore it? What types of content or storylines make sense? How much is too much, and how little is too little? And, coming from our own background: how could infographics or data visualization play a useful role in AR, or the other way around? In a recent Infographics Next project, we looked at three questions and user scenarios to contribute to this ongoing conversation:

  1. What is an interesting AR-based offer that allows museum visitors to explore and learn about an exhibition on a deeper level?
  2. How can we allow kids to explore and learn storylines that are different from those that engage their parents while visiting a museum exhibition?
  3. How can we allow kids to explore and learn more about their existing toy collection?

SKETCHES

Here is what we found as we explored three different ways to engage users with Augmented Infographics:

CRANACH

For the first scenario, we augmented the 1529 painting “Law and Gospel” by Lucas Cranach the Elder. The core goal was to establish and explore two storylines: one about the image itself (its story, history, setting, metaphors, and the relationships inside the composition), and a second one about its context (within the exhibition, the work of Cranach, art history, or general history).

Adding information inside and around an image.

While exploring the inner story of the picture, users could focus on key elements, some of which would respond visually, e.g. through small animations or lines drawn to show relationships. Others were enhanced by zooms or even 3D models of objects, with deeper information (text, maps, etc.) provided on an additional content layer.

By virtually slicing the image into layers, we created a stronger sense of depth (a parallax effect), a slight three-dimensional impression that gave users an incentive to move around and a moment of surprise when they did. The same is true for some funny animations that might be a provocation for the art historian, but a nice easter egg for the visitor and a motivation to explore further details in the painting.
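To give a rough idea of how such a layered parallax can be driven, here is a minimal Unity C# sketch (Unity being the engine we built on, see the tech section below). It assumes the sliced layers are separate transforms under the tracked painting; all names and values are hypothetical, not the code of the actual prototype.

    using UnityEngine;

    // Hypothetical sketch: exaggerates depth by shifting sliced painting
    // layers against the AR camera's movement (parallax). Attach to the
    // tracked painting; assign the layer transforms from back to front.
    public class ParallaxLayers : MonoBehaviour
    {
        public Transform[] layers;      // ordered background to foreground
        public float strength = 0.02f;  // parallax strength, assumed value

        private Vector3 startCameraPosition;
        private Vector3[] startLayerPositions;

        void Start()
        {
            startCameraPosition = Camera.main.transform.position;
            startLayerPositions = new Vector3[layers.Length];
            for (int i = 0; i < layers.Length; i++)
                startLayerPositions[i] = layers[i].localPosition;
        }

        void Update()
        {
            // Camera offset since tracking started, in the painting's frame.
            Vector3 offset = transform.InverseTransformVector(
                Camera.main.transform.position - startCameraPosition);

            for (int i = 0; i < layers.Length; i++)
            {
                // Foreground layers shift more strongly against the camera,
                // as nearby objects would, creating the depth impression.
                float depth = (float)i / Mathf.Max(1, layers.Length - 1);
                layers[i].localPosition = startLayerPositions[i]
                    - new Vector3(offset.x, offset.y, 0f) * depth * strength;
            }
        }
    }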

When zooming out, we offered a wider selection of related images and a rough timeline: other images in the exhibition or in the collection (even when hidden in the archive), other images by the same artist or by contemporaries, and other images addressing similar subjects. The information added on the context level could be infinite. For our prototype, we kept it brief, since our main aim was to explore if and how moving into and out of the image works from a UX/UI and storytelling point of view.

HOKUSAI

In the second iteration of the project, we developed an additional content layer on top of Katsushika Hokusai’s woodblock print “Fine Wind, Clear Morning” (1830–32) to engage kids in the artwork. Our initial assumption was that parents have a hard time attracting kids to art exhibitions and keeping their interest up over a longer period of time. We wanted to provide an experience that kids enjoy as much as their parents, one that allows them to explore their own story as well as a joint experience with their parents. In other words, while parents would enjoy the woodblock print, kids would learn why mountains sometimes explode.

Adding an additional content layer.

The AR app would provide a short intro to the artwork and artist, but would then address what type of volcano we are looking at and what causes a volcano to erupt. Picking up on the idea that “walking is the new scrolling”, coined by Graham Roberts, director of immersive at The New York Times, the story would guide users to move the smartphone downwards (into the ground) to explore the inner structure of the earth. There they would get an explanation of layers, magma, and tectonics. A gamification element would invite the kids to swipe and start the tectonic motion that makes the magma rise. When moving the phone upwards again (following the magma), more information would be provided along the way, before the volcano erupts and the phone vibrates.
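As an illustration, here is a minimal Unity C# sketch of how such pitch-driven navigation might be wired up: the camera’s pitch selects a story layer, and the phone vibrates once the eruption layer is reached. The layer setup and angle thresholds are assumptions made for this sketch, and the swipe-triggered tectonics step is omitted for brevity.

    using UnityEngine;

    // Hypothetical sketch of "walking is the new scrolling": the device
    // camera's pitch drives progress through the volcano story layers.
    public class VolcanoStoryController : MonoBehaviour
    {
        // Story layers ordered from deep underground to eruption (assumed setup).
        public GameObject[] storyLayers;

        private int activeLayer = -1;
        private bool erupted = false;

        void Update()
        {
            // Camera pitch: looking down is positive, looking up negative.
            float pitch = Camera.main.transform.eulerAngles.x;
            if (pitch > 180f) pitch -= 360f;

            // Map looking down (60 degrees) to looking up (-30 degrees) onto
            // the layers; the angle range is hand-tuned and purely illustrative.
            float t = Mathf.InverseLerp(60f, -30f, pitch);
            int layer = Mathf.Clamp(
                Mathf.FloorToInt(t * storyLayers.Length), 0, storyLayers.Length - 1);

            if (layer == activeLayer) return;
            activeLayer = layer;

            for (int i = 0; i < storyLayers.Length; i++)
                storyLayers[i].SetActive(i == layer);

            // Final layer: the volcano erupts and the phone vibrates once.
            if (layer == storyLayers.Length - 1 && !erupted)
            {
                erupted = true;
                Handheld.Vibrate();
            }
        }
    }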

As in the Cranach prototype before, we added small animations (clouds, birds, planes) to attract and surprise users and to keep them curious. The visual style was directly inspired by Hokusai’s strong aesthetic to ensure an immersive and seamless experience.

CROCODILE

In the final iteration, we explored augmenting three-dimensional objects. While this could certainly work for historical museum exhibits, as well as for machines in a factory or products in a retail environment, we focused on toys. A child’s toy collection is an expression and a manifestation of interest and passion. If your child is into horses, superheroes, or dinosaurs, it is likely the most curious and soon the most knowledgeable member of the family on that subject. An interest that might have started with a toy has the potential to grow: other toys and books are gathered, films watched, games played, t-shirts worn… But wouldn’t it be magical if the toy animal that kicked off the interest could itself tell your child (and you) more of the story?

Augmenting objects.

Our crocodile “ARchi” now triggers bite-sized bits of knowledge about crocodiles. You can look at it from above and learn about its natural habitat, locations, nutrition, friends, and foes. Or you can point your phone at the belly to see the skeleton and how the unique crocodile heart works. Or you can place a life-size virtual crocodile in your living room to see its impressive real-life dimensions.
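A hedged Unity C# sketch of how such view-dependent content can be toggled: the camera’s position relative to the tracked toy decides which overlay is shown. The overlay objects and thresholds are hypothetical, not the actual prototype setup.

    using UnityEngine;

    // Hypothetical sketch: shows different content layers on a tracked toy
    // depending on the direction the camera looks from.
    public class ViewDependentContent : MonoBehaviour
    {
        public GameObject habitatOverlay;  // shown when viewed from above (assumed)
        public GameObject skeletonOverlay; // shown when viewed from below (assumed)

        void Update()
        {
            // Direction from the toy to the camera, in the toy's local frame.
            Vector3 toCamera = transform.InverseTransformPoint(
                Camera.main.transform.position).normalized;

            // Positive local y means the camera is above the toy.
            habitatOverlay.SetActive(toCamera.y > 0.5f);
            skeletonOverlay.SetActive(toCamera.y < -0.2f);
        }
    }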

This story could easily go on and on, e.g. with other augmented toy animals entering the picture (think food chain), but we kept it brief to explore the first set of challenges — for now.

VISUAL AND INTERFACE DESIGN

Across the different prototypes, we tested a variety of design approaches: from neutral and realistic to abstract and playful; from diagrams to 3D models; from maps to timelines; from longer to shorter text elements; from reduced augmented content to more complex, even interactive and gamified, elements. In the end, we focused on optimizing the user experience and readability. But we also realized that increasing the visual contrast between reality and augmented reality clearly helps users navigate and understand the interface and content.

SOUND

All three prototypes used sound in different ways, from atmospheric sounds (around the crocodile habitat) to a narrator’s voice. We learned that sound can play a major role in supporting the experience (on headphones, of course), up to the point where it might even replace most of the on-screen text.

We even did a first test on navigating the content by voice command using IBM Watson. This points towards future scenarios that include smarter, more conversational interfaces. For now, we simply ran into performance issues.

TECH CHALLENGES

There certainly are a couple of powerful AR tools on the market. We looked at ARKit, ARCore, Vuforia, Wikitude, VisionLib, and EasyAR. After some research and testing, we focused on options for the powerful game engine Unity and needed a tool that would integrate quickly and easily while also being accessible to non-developers. Unity and Vuforia ended up being the best choice for us, since Vuforia was already integrated, easy to set up, and extendable. It was not the perfect solution, but it was clearly our best option when we started in winter 2017/2018.
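For reference, this is roughly what wiring content to a Vuforia image target looked like in Unity with the Vuforia releases of that era (the ITrackableEventHandler API, which newer versions have since replaced). The content object is a placeholder; this is a simplified sketch, not our production code.

    using UnityEngine;
    using Vuforia;

    // Simplified sketch: shows the augmented infographic layer only while
    // the image target (e.g. the painting) is actually being tracked.
    public class PaintingTargetHandler : MonoBehaviour, ITrackableEventHandler
    {
        public GameObject augmentedContent; // placeholder for the content layer

        private TrackableBehaviour trackable;

        void Start()
        {
            trackable = GetComponent<TrackableBehaviour>();
            if (trackable != null)
                trackable.RegisterTrackableEventHandler(this);
        }

        public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                            TrackableBehaviour.Status newStatus)
        {
            bool tracked = newStatus == TrackableBehaviour.Status.TRACKED ||
                           newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;
            augmentedContent.SetActive(tracked);
        }
    }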

Unity and Unreal offer a great basis for building mobile 3D applications, even when you are not building a game. Using one project across devices and operating systems is a key feature of these frameworks. We were focusing on iOS and Android, on smartphones and tablets. Given the bandwidth of devices and software versions, support and adaptation remain an issue, but the game engines have recognized the potential and provide support and reference projects that can be used to explore, edit, and even test particular setups. Integration with the wide range of available third-party AR plugins appeared to be easy. Usually, you pay for better maintenance, support, integration, and performance, but it should be mentioned that Apple and Google push updates to their free tools on a regular basis, at a level where they can clearly compete with the pricier offers.

A challenge to keep in mind is the release via the App Store or Google Play, which requires validation and a release decision that can take a while (potentially weeks). This needs to be scheduled in your project plan and leaves no room for spontaneous last-minute releases. Bear in mind that the app might not get approval even if everything works fine, for instance when a plugin has not yet received its own necessary approval.

We also discussed whether an app was the best final outcome, since some users and clients prefer web-based offerings, which is partly possible using libraries such as AR.js, A-Frame AR, or A-Frame XR as a basis. But be aware that the necessary streaming of data requires a stable internet connection, which turned out to be an issue in some museums, e.g. in older buildings with thick walls.

The core challenge in the Crocodile prototype was the 360-degree tracking of a 3D object. We tested different approaches, created a model based on photogrammetry, and built a point cloud of tracker points in a softbox, before learning about Wikitude’s impressive object tracking at the AWE conference in Munich.

As part of the project, we also developed a rough AR CMS that allows us to quickly adapt, extend, and update content in the scenarios mentioned above.
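To illustrate the idea, here is a minimal sketch of the kind of content structure such a CMS could deliver to the app: JSON entries parsed with Unity’s built-in JsonUtility, so texts can be updated without rebuilding the app. The schema is hypothetical, not our actual CMS format.

    using UnityEngine;

    // Hypothetical content schema for a lightweight AR CMS.
    [System.Serializable]
    public class ArContentItem
    {
        public string id;       // which hotspot or overlay this entry belongs to
        public string title;
        public string bodyText;
        public string audioUrl; // optional narration clip
    }

    [System.Serializable]
    public class ArContentSet
    {
        public ArContentItem[] items;
    }

    public static class ArContentLoader
    {
        // Parse a JSON payload fetched from the CMS (download step omitted).
        // Usage: ArContentSet set = ArContentLoader.Parse(jsonString);
        public static ArContentSet Parse(string json)
        {
            return JsonUtility.FromJson<ArContentSet>(json);
        }
    }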

ADDITIONAL LEARNINGS

We presented our prototypes in various contexts and tested them with approximately 50 users of different backgrounds. It was stunning to see how easily kids as well as elderly people picked up on the ideas and how much they enjoyed exploring the world around the objects.

We also tested augmenting a large-scale wallpaper image from the Hubble Space Telescope in our office. It even convinced a rather skeptical professional that there is potential in exploring this technology for educational purposes.

The key learning, and the challenge for the next stage of the project, remains that we need a stronger integration of UX/UI, storytelling, and design. We are currently looking at extending our skill set in this direction. This will also include more detailed user testing (e.g. regarding narration, story structure, depth of information, and interaction, but also basics such as type, size, and colors). But we have already learned that a set of shorter AR experiences works better than a single longer one, not least because the exploration itself should be fun, and holding a device in one position can be tiring.

Aspects we are now exploring in more detail include the scale of, and distance from, the triggering images and objects; the use of AR content with other people around (e.g. people crossing the path between artwork and device); and the pros and cons of using a live view versus using the real image only as a trigger and replacing it with a locally stored version. Informal conversations with experienced museum experts have already revealed a couple of relevant practical issues that need to be considered.

WHAT IS NEXT?

Based on the existing prototypes, we established a shortlist of new challenges to explore in future iterations. We are also currently talking to a few museums, other cultural and educational institutions, and some commercial clients to do user testing and explore the different approaches further.

If you’re also interested in collaborating with us, get in touch.

The Infographics Next Team (Nicolas Bourquin, Philipp Hafellner and myself) would like to thank Jakub Chrobok, Wibke Günther, Barbara Mayer, Daniela Scharffenberg, Taisia Tikhnovetskaya and Friedericke von Polenz, who were involved in the project from the Infographics Group side.

--

sven ehmann
Infographics Next

data-driven experience, creative direction / head of design strategy at polypoly (formerly: head of think tank at Infographics Group / CD at gestalten)