Visualizing Data Beyond Flatland

Exploring how data graphics should adapt to augmented reality

Peter Andringa
May 13, 2020
Screenshots from the AR data visualizations in this research study.

At the Reese Innovation Lab, we’ve been working on multiple storytelling projects using device-based augmented reality over the past few years. Along the way, we’ve taken note of AR designs that feel good, and ones that don’t — but we haven’t established a strong set of guidelines for how to design for AR.

I took on these questions in my Senior Honors Thesis in the Hussman School of Journalism, conducting a usability testing study to see how real users interact with augmented reality. Seeking feedback directly from users would allow the Lab to produce better design guidelines for our AR projects and to identify ways to make sometimes-clunky AR experiences better.

I focused specifically on my own professional background in news data visualizations, since newsrooms are among the earliest producers of widespread AR content. While The New York Times, The Washington Post, Quartz, Time, and many others have experimented with AR, most newsrooms have focused on photogrammetry (a process of constructing a real-life 3D model from photos) rather than abstract data visualizations and graphics. Since abstract 2D graphics are already common elements of these newsrooms’ print and online coverage, testing them in an AR context can help discover possible opportunities in a new format.

While there is well-established literature in 2D data visualization from academics and influential thinkers like Edward Tufte or Jacques Bertin, there is relatively little research into 3D data visualizations in AR. Existing studies focus on relatively narrow, task-driven AR designs (like data dashboards) as opposed to the style of visualizations most newsrooms use in storytelling.

To get a better understanding of user behavior, I designed a series of AR data visualizations about campaign spending during the 2020 primary elections. These visualizations, embedded inside a traditional reported article, were then loaded on iPhones and iPads and given to 23 participants in a user-testing study. By observing their behavior and interviewing users about the experience, I have identified five guidelines for the Reese Innovation Lab and other AR designers to consider in future projects.

Below you’ll find a summary of my research — but if you want to see all 71 pages of details, you can download the whole thesis as a PDF.

Initial Design Process

Using a database of campaign spending on the 2020 primary that I had already collected for previous stories in The Guardian, I designed three data visualizations to go inside a story summarizing fundraising from the cycle, which was well underway when the user tests were conducted in February.

The first graphic was a simple bar chart comparing the total fundraising of each Democratic primary candidate since the start of 2019. A second set of bars showed each campaign’s RealClearPolitics poll averages, which took advantage of the extra dimension to enable comparisons between campaigns’ finances and popular support. The second set of bars was positioned behind the first, at an angle visible when looking from slightly above.

A screenshot of the bar chart graphic during the user testing study. (Note that 2D screenshots don’t fully capture the 3D experience: this image is affected by perspective distortion that doesn’t exist in AR.)

The second graphic was a map of fundraising by county, showing the top candidate and the total amount of fundraising from residents of the county since the beginning of 2019. This map took advantage of full 3D space to show a large amount of information, since it could layer a full dimension of data on top of a 2D map of the country. This made it feel explorable and allowed for the reader to look both at small details (like their hometown) and big trends. Some of those trends were also highlighted using short paragraphs of text overlaid on the bottom of the screen.

A screenshot of the map graphic during the user-testing study.

The third graphic was much more abstract, using spheres to show the relative size of different types of donations to a given campaign. While this graphic made it difficult to precisely read the relative size of each sphere, it served to illustrate a broader point, showing the massive disparity between Mike Bloomberg’s self-funded war chest and the relatively smaller campaigns that relied on either small-dollar or large-dollar donors. To give an approximate measure of size, two grey spheres sat just to the left of the graphic with labels displaying their volume.

A screenshot of the spheres graphic during the user-testing study. (The colored paper on the table was placed there to aid camera tracking, and was not part of the experience itself.)

All three of these graphics were placed in a traditional text-based news story, each one alongside a written section giving more context to the visualization. When a participant opened the article, the sections were shuffled into random order to prevent familiarity or short-term recall from affecting the results.
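Shuffling the article sections so each participant sees them in a random order is a standard counterbalancing step, and a Fisher–Yates shuffle guarantees every ordering is equally likely. Here is a sketch of the idea, not the study’s actual code; the section names are placeholders.

```javascript
// Fisher–Yates shuffle: returns a new array with every ordering equally likely.
function shuffle(items) {
  const a = items.slice(); // copy so the original order is preserved
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1)); // pick from the unshuffled prefix
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

// Hypothetical section identifiers for the three graphics
const sections = ["bar-chart", "county-map", "donation-spheres"];
const order = shuffle(sections);
```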

Once participants had read the whole article and viewed all three visualizations, they took a brief written survey quizzing them on comprehension of the three graphics and participated in a brief interview about their reactions to the experience.

Some technical details

The Lab’s previous work in augmented reality has highlighted how difficult the tools and technology for developing AR content still are: we’ve frequently had to roll our own tools or hack together a workflow across applications and filetypes. This project was no different, and required its own workflow and some custom tools to design and deploy the AR graphics.

  • To design the graphics in a programmatic, repeatable way, I built a Three.js- and D3-based environment for designing and exporting 3D models into Apple’s preferred USDZ format.
  • Apple’s new Reality Composer application was useful for fine-tuning the overall experience by combining USDZ files and adding effects like labels that rotated to face the user, exporting all of them into Apple’s .reality files that can be displayed in iOS.
  • In previous work for the Lab, I’ve built an internal AR viewer app which can render .usdz and .reality files with WebKit-enabled interactions, similar to how the New York Times renders AR in their apps. I deployed the prototype for this research to the same app, which allowed users to tap through text sections and also view the AR components in between them.
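To make the idea of a programmatic, repeatable design pipeline concrete, here is a minimal sketch of its first step: scaling data values into 3D bar dimensions before the geometry is handed to Three.js for export. This is illustrative only, not the thesis code; the candidates’ totals, the scale range, and the bar sizes are all hypothetical.

```javascript
// Illustrative sketch (not the thesis code): turn fundraising totals into
// 3D bar dimensions, ready to become Three.js box geometries.

// Minimal linear scale, in the spirit of d3.scaleLinear()
function linearScale([d0, d1], [r0, r1]) {
  return (v) => r0 + ((v - d0) / (d1 - d0)) * (r1 - r0);
}

// Hypothetical fundraising totals, in millions of dollars
const candidates = [
  { name: "Sanders", raisedM: 96 },
  { name: "Warren", raisedM: 71 },
  { name: "Buttigieg", raisedM: 76 },
];

// Map dollars to bar height in meters, keeping the tallest bar table-sized
const maxRaised = Math.max(...candidates.map((c) => c.raisedM));
const barHeight = linearScale([0, maxRaised], [0, 0.3]);

// Dimensions and position for each bar
const bars = candidates.map((c, i) => ({
  name: c.name,
  width: 0.04,
  depth: 0.04,
  height: barHeight(c.raisedM),
  x: i * 0.06, // space the bars along one axis
}));
```

In the real pipeline, each entry would become a box geometry plus a label, and the assembled scene would then be exported to USDZ.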

User Testing Highlights

Users overwhelmingly liked the augmented reality experience, describing it as “more engaging” and “really cool” compared to traditional 2D graphics they would otherwise read in a newspaper or textbook. A number of students noted that they felt more engaged and better able to remember the point of these visualizations, and 83% said they wished AR content was available in more places.

However, the users also diverged from expected behavior, highlighting pain-points with AR interfaces that designers should be aware of.

A screen recording of one participant placing an AR graphic on the table.

One of these early complications was the object placement process, which seemed to initially perplex many of the users during the testing. Eight of the 23 described it as “confusing” during interviews after the experience, saying they didn’t understand the shadow, or placed the visualization too close or too far away.

Placement time is one way of measuring user confidence with the process, since it reflects the time delay before users get to the actual content of a visualization. Users took an average of 26 seconds to place their first graphic during the study, but those with previous AR experience completed the task in just 20 seconds on average. This suggests that there’s a slight learning curve for placing objects in AR — and that first-time users might especially need a helping hand.

Once users actually placed the visualizations on the table in front of them, there were other unexpected behaviors affecting their experience.

I had designed the visualizations with spatial interaction in mind, thinking that users might stand up or walk around to see different angles of the data. This turned out not to be the case, since 44% of users chose to stay seated and never moved around. Even among users who did look around, many noted they would not ordinarily be inclined to do so. “A lazy person wouldn’t want to get up and see it,” one told me. “I would rather move on instead of stopping to do AR in certain situations.”

The users’ lack of movement impacted their ability to explore and understand some of the graphics, since some objects occluded one another from the seated angle. Many participants complained that the yellow second set of bars in the bar chart graphic was hard to see behind the blue one, or that they couldn’t read the values on the more distant bars. Some also complained that the map felt overwhelming because it was so large, and they didn’t realize they could step back or get closer to focus on one section.

Interestingly, participants who had previous experience with AR moved around slightly more frequently than others, suggesting that first-time users might need to be prompted to move around the AR — otherwise they might not even realize that spatial interaction is possible.

Overall, users rated the map graphic the clearest and most interesting, mentioning that it felt simultaneously familiar and explorable. “It’s a little personal,” one explained. “You’re from a certain state, and you want to see how much your state gave, or the surrounding states.”

When asked what made the map effective, a number of users mentioned its complexity. Interestingly, the users weren’t scared by complicated graphics: many said they appreciated the chance to explore. Afterwards, 65% of participants were able to name a specific fact they learned from the map, showing that many actually engaged and remembered the details they noticed.

Users tended to rate the bar charts less highly, noting that they felt simplistic or would have been just as effective in 2D. The spheres graphic was polarizing: some loved the simplicity of one main point, while others claimed it was too abstract. Still, 70% of users correctly answered a multiple-choice question about the main takeaway of that graphic (that Mike Bloomberg self-funded his massive campaign), suggesting the abstract format was still an effective form of communication.

Design Recommendations

From the user-testing results, I distilled five best practices for AR data visualization designers to consider in their own work, hopefully helping future projects avoid some early design pitfalls.

1. Try to keep all items in view at the start

Users frequently expressed confusion around where they were able to look, not realizing that the graphics stretched beyond the bounds of the device screen. If all the items are in view from the beginning, it makes it easier for users to get their bearings and realize they can zoom in for a closer look at specific details.

This also applies to occluded objects, like the horizontal poll numbers in the bar chart graphic in this study. If objects must be placed behind one another, try to ensure that the viewer’s initial angle lets them see at least a piece of what they are missing. Or, in the worst case, use a system of highlights or pointer arrows to help users learn which way to direct their attention.

Another possible approach is that of Apple’s QuickLook AR software: when an object is placed into AR, it “grows” into place using a scaling-up animation, so that the viewer gets a glimpse of the full scope of a graphic that normally extends beyond the device. This allows users to build a mental map of the space, even if they can’t see it all at once.
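A “grow into place” effect can be approximated with a simple eased scale curve applied in the render loop. This is a sketch of the technique, not Apple’s implementation; the duration and easing choice are hypothetical.

```javascript
// Scale factor for a "grow into place" animation: 0 at the moment of
// placement, 1 once `duration` seconds have elapsed, with a cubic
// ease-out so the growth starts fast and settles gently.
function growScale(elapsedSeconds, duration = 0.6) {
  const t = Math.min(Math.max(elapsedSeconds / duration, 0), 1); // clamp to [0, 1]
  return 1 - Math.pow(1 - t, 3);
}

// In a Three.js-style render loop (illustrative):
// object.scale.setScalar(growScale(clock.getElapsedTime()));
```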

2. Optimize for “micro/macro” compositions

In Edward Tufte’s books about 2D graphics, he identifies a useful “micro/macro” style of graphic that allows for both a close and a distant reading. This holds true in AR: the best graphics offer a big-picture takeaway, but also provide explorable details so users can dive in deeper on their own. The map in this study did this well, but the bar chart and the spheres were less successful in this regard.

Because the simpler graphics didn’t offer a deeper level to explore, participants often said they felt unsatisfied with the result. Since AR takes so much time and effort to set up, you don’t want a user to feel like they’ve wasted their time on something they could’ve learned through a much quicker 2D chart. AR gives viewers a sense of agency to be able to explore a visualization themselves—so good designs should offer up those details and invite the user in.

3. Assume users won’t move around

Many participants in this study explained that even if they stood up and walked around in the lab setting, they were unlikely to do so on their own. Maybe the most passionate viewers will really engage with an AR scene, but most will stay seated where they are and merely shift their phone camera to look in different directions. As a result, AR experiences should be designed so that all the information is sized and positioned to be viewed from a seated starting point, without assuming that users will spend time exploring every corner.

Sometimes, it might be useful to offer the users choices of graphic sizes, or the ability to scale to fit their context. Graphics often require different sizes to feel natural on a table instead of the floor, or an indoor space might require a different scale than an outdoor one. As AR technology improves, cameras might soon be able to automatically evaluate the room and serve the best size and shape for a graphic — sort of like responsive design for augmented reality.
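That kind of responsive behavior can be sketched as a simple fit calculation: shrink the graphic’s footprint to the detected surface, but never enlarge it past its designed size. In practice the surface dimensions would come from the platform’s plane detection (such as ARKit plane anchors); the sizes below are hypothetical.

```javascript
// Uniform scale factor that fits a graphic's footprint onto a detected
// surface, capped at 1 so the graphic is never enlarged past its design
// size. All dimensions are in meters.
function fitScale(graphicSize, surfaceSize) {
  return Math.min(
    surfaceSize.width / graphicSize.width,
    surfaceSize.depth / graphicSize.depth,
    1
  );
}

// A hypothetical 1m-wide map placed on a 0.6m-wide table gets scaled to 60%.
const scale = fitScale({ width: 1, depth: 1 }, { width: 0.6, depth: 2 });
```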

4. Use labels sparingly, and be sure they always face the user

Labels are an essential part of most abstract graphics, helping tell the user exactly what they’re looking at. This becomes significantly more difficult in AR, where you can’t make assumptions about what is in or out of the user’s view. Furthermore, occlusion from other objects in the scene and the ability to view from multiple angles mean that text is sometimes illegible, even if the label itself is visible. Therefore, labels are best used only when necessary, and should be positioned so they remain constantly visible (like above the entire scene) rather than in the middle of everything.
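Keeping a label turned toward the viewer is usually done by “billboarding”: each frame, rotating the label about the vertical axis to face the camera. Here is a minimal sketch of the yaw calculation using plain {x, y, z} positions; in Three.js, the same effect can be achieved with label.lookAt(camera.position).

```javascript
// Yaw (rotation about the vertical y-axis, in radians) that turns a
// label at labelPos to face a camera at cameraPos. Yaw is 0 when the
// camera sits directly along +z from the label.
function billboardYaw(labelPos, cameraPos) {
  const dx = cameraPos.x - labelPos.x;
  const dz = cameraPos.z - labelPos.z;
  return Math.atan2(dx, dz);
}
```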

On-screen labeling is another technique that can solve the occlusion and scale challenges, although designers must be sure that those labels don’t obscure too much of the viewing area on the device screen. This might require interaction patterns that show and hide on-screen labels depending on the viewer’s position, although such spatial interactions sometimes come with their own discoverability and design challenges.

5. Allow opportunities for users to learn and fail

One recurring theme in my research was that users who had previously used AR were able to load graphics faster, move around more, and retain more of the story after they finished. AR designers shouldn’t assume everyone is an expert; they should provide tools for users to get situated and comfortable in simple AR scenes before dropping them into a more complex experience.

Clear and frequent textual instructions are important for first-time users, as are tools that help with placement. A number of users in this study were delighted to find a “reset” button that let them re-place graphics in a different location, saying it gave them a sense of control and helped them correct early mistakes before they understood how AR worked. This improved the user experience, helping them learn faster and better enjoy the visualizations.

These guidelines are just the beginning: as AR applications become more and more widespread, designers will need to expand and adapt these guidelines to work on new AR technologies. In a fast-moving field like this, almost no rules are hard-and-fast — the best experiences are developed through repeated iterations and user feedback.

As augmented reality devices get better and better, it’s only a matter of time until a much larger audience wants to consume news content and data visualizations in AR. By studying best practices and improving designs today, we can improve the user experience of this new technology tomorrow.

Thanks to my advisor Steven King and reader Laura Ruel for their advice and support throughout this research, which would not have been possible without their invaluable input.

Reese Innovate

Finding new ways to engage audiences with emerging technologies and strong storytelling.

Peter Andringa

Written by

Student journalist from Washington DC, currently living in Chapel Hill + Durham

Reese Innovate

The UNC Reese Innovation Lab uses emerging technologies like VR, AR and AI to engage audiences in new ways.
