Multidimensional Data Display and Manipulation in a Spatial Interface

Holographic Interfaces
Jul 19, 2017 · 10 min read


TL;DR
VR and AR struggle to help us view data because, for the most part, our viewing/focal plane is still 2D, just as it is in the real world. Manipulating that plane via natural spatial user interfaces gives us access to extra dimensions of data.

Intro

Most VR and AR discussions center on simple user behaviors like shopping, socializing, and watching movies. What about the harder question of spreadsheets and data? There are some interesting data visualizations out there, but they target specific types of data rather than arbitrary, generic data. Perhaps in time an artificial intelligence will be able to select the ideal way to visualize a given data set, but that is a long way from being useful.

To benefit from VR and AR technologies in any area, we need to understand their potential for two ends: making digital content more organically accessible to us (conforming to how we think and act, rather than vice versa), and letting us manipulate larger and more complex data sets more easily. There are many demos of 3D graphs and visualizations out there, but most of them provide worse experiences than their 2D counterparts. What follows are some thoughts on nascent possibilities for user interaction models for larger data sets.

The goal: to get people thinking of new ways to manipulate what are already good presentations of data.

Although we currently manipulate numerous dimensions of data, we are usually forced to do so through two-axis charts. This is largely because we are constrained to three spatial dimensions; we cannot see a third axis of numeric data cells concurrently with the other two. Our focal plane doesn't allow for it. For the most part this works well. We use various techniques to divide our data into combinations of dimensions, whether via pages of charts (Excel), database tables keyed together (MySQL), some combination of chart and graph (Numbers), or any number of more advanced (and consequently more abstracted) visualizations. Now that we have usable consumer-level spatial interface hardware, it behooves us to reexamine the possibilities of viewing and manipulating higher-dimensional data sets with these devices.

Take a look at the following image, which shows a currently empty data set representing sales numbers for various locations, years, sales clerks, and items.

This layout presents our problem pretty clearly. We can compare Sales in: Location vs Year, Location vs Clerk, Item vs Year, and Item vs Clerk. What we can't do in this presentation is compare Year vs Clerk or Item vs Location, despite having that data in a pure math sense. Also, we'd have to "lock" three of the variables to see the list of results in the fourth. Now, if this were four-dimensional geometry, every point on, say, the Year spectrum would attach to every point on the Clerk spectrum, allowing for visualization of this more complex set. Although we can't really visualize this in a useful way for data, we can let those connections be displayed in sequence based on various criteria. This concept is easier to see in a Hopf fibration, like this one from http://nilesjohnson.net/hopf.html, or as a suspension, as explained by Clayton Shonkwiler here: https://youtu.be/krmV1hDybuU?t=23m4s.
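To make the "lock three, list the fourth" point concrete, here is a minimal sketch in Python; the array shape, category names, and values are illustrative assumptions, not part of the example above:

```python
import numpy as np

# Hypothetical 4-D sales array: location x year x clerk x item.
locations = ["North", "South"]
years = [2015, 2016, 2017]
clerks = ["Avery", "Blake"]
items = ["Widget", "Gadget"]

sales = np.zeros((len(locations), len(years), len(clerks), len(items)))

# To list sales along a single axis (say, by year), the other three
# dimensions must be "locked" to specific values first.
loc, clerk, item = 0, 1, 0             # lock location, clerk, and item
by_year = sales[loc, :, clerk, item]   # one number per year

# Any flat, two-axis chart is a slice with exactly two free dimensions:
year_vs_clerk = sales[loc, :, :, item]  # shape (len(years), len(clerks))
```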

Why is This a Problem?

There is a serious question about the validity of using VR and AR for non-specialized data visualization. Many say it won't offer any advantage because of the viewing-plane issues, which is probably fair for many types of data. Personally I tend to believe there are still visualization models out there waiting to be invented or discovered, but there's currently no getting around the fundamentals of how light works. I would argue, however, that it's not entirely the visualization part of the puzzle that benefits from these new technologies, but rather the behavioral interaction modalities. Allow me to explain.

If you're using a reference book, it doesn't help visually that the book is a 3D stack of pages. You're only viewing one page at a time.

However, using your fingers you can flip quickly between pages to compare information found in different sections of the book. You can insert tabs to help. You smell and feel the book; more of your senses are involved. The information can be brought to you, rather than having to be viewed through a little glowing window. It's this ability to manipulate the physical book and pages that makes it a better experience than using a PDF on a screen. It's the interaction, not the view, that matters: the ability to control the viewing plane and that plane's content. Let's take this concept and try to apply it to multidimensional data.

Simple Multidimensional Data Arrays

The simplest multidimensional data array can be thought of as a cube composed of cubes. Each axis is a value category, so you can compare, say, sales of different items at different store locations in different months. It's fairly obvious that, due to the line-of-sight problem, we can't view all of the data at one time. What we can do, however, is rotate dimensions and fold dimensions flat. We already do this with folder navigation on computers. A folder is just a set of data with various associations that we display as "folders." This file is in this folder, which is in THIS folder. This data item is from this week, which is in this month. We can also think of these associations as dimensions of data.
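As a rough sketch of that analogy (the path and field names are made up for illustration), the same record can be addressed as a nested folder path or as a tuple of dimension values, and drilling into a folder is the same operation as locking one dimension:

```python
# Hypothetical record addressed two ways.
path = "sales/2017/July/north_store/receipt_0042.csv"    # nested folders

record = {"year": 2017, "month": "July",                  # the same "address"
          "location": "north_store", "receipt": 42}       # as dimension values

# Opening the "2017" folder is equivalent to locking the year dimension:
def lock(records, **fixed):
    """Keep only records whose dimensions match the locked values."""
    return [r for r in records if all(r[k] == v for k, v in fixed.items())]

records_2017 = lock([record], year=2017)
```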

Let's take our example data cube above. Initially it shows us the sales numbers for different items at different store locations in only one month. But what if we fold, or lock, either the store location or the item? We can then view all the months for that item at the various locations, or all the items at one location. The z-axis can now be yet another dimension of data, say a category for time of day, purchase size, or demographic. By nesting the data dimensions in this way we can drill down further and further in a completely organic manner. We can then move the original "axis" of data to display the changes in the subset, much like how in 4D geometry simulations you can use a slider to move through the three-dimensional slices of a four-dimensional shape (as, for example, in 4D Toys, from the developer of Miegakure: https://www.youtube.com/watch?v=0t4aKJuKP0Q).
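A minimal sketch of what folding and scrubbing might look like on the data side, assuming a plain 3D array (item × location × month) as the backing store; the sizes and axis order are placeholders:

```python
import numpy as np

# Hypothetical 3-D data cube: item x location x month.
items, locations, months = 4, 3, 12
cube = np.random.rand(items, locations, months)

# "Lock" the month: one 2-D face (item x location) remains visible.
month = 6
face = cube[:, :, month]

# Scrubbing the locked axis is just stepping the index, like a slider
# moving through 3-D slices of a 4-D shape.
def scrub(cube, axis, index):
    """Return the 2-D face at `index` along the locked `axis`."""
    return np.take(cube, index, axis=axis)

for m in range(months):
    face = scrub(cube, axis=2, index=m)   # month-by-month animation frame
```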

How can this work in a spatial interface? It can work a bit like Amazon's shopping filters mixed with a Rubik's Cube: we can rotate the entire data set, pull an entire row or column out, turn our wrist to change axes, and physically position the data however we like.
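Underneath, the wrist turn that changes which category faces the user is just a transposition of axes, and pulling a row out is a slice. A sketch under those assumptions:

```python
import numpy as np

cube = np.random.rand(4, 3, 12)           # item x location x month

# "Turn the wrist": month now faces the user instead of location.
rotated = np.transpose(cube, (0, 2, 1))   # item x month x location

# "Pull a row out" of the front face to inspect it on its own.
row = rotated[1, :, 0]                    # one item, all months, one location
```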

The virtual cubes can adapt to what we’re doing to dynamically fold the data categories flat, essentially moving down through folders of data.

An "address filter bar" at the top can show which dimensions have been folded. One could even offer a drop-down at any of the levels to change the locked selection at that level, or pull one of them out into a timeline of sorts to scrub through, as described above. We can offer up other potential data categories to add to the mix, like shopping filters, shortcutting to more focused data output in the active area. This physical relationship between the data creates a sense of place that is easier for your brain to process and recall, since it recruits spatial memory alongside the data itself. It also lets you move unseen dimensions of data that affect the displayed data.
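One possible shape for the state behind such an address filter bar; this is a hypothetical sketch, and the dimension names and method names are invented for illustration:

```python
# A hypothetical sketch of the state behind the "address filter bar":
# which dimensions are folded (locked), and to which value.
from dataclasses import dataclass, field

@dataclass
class FilterBar:
    locked: dict = field(default_factory=dict)   # dimension -> locked value
    free: list = field(default_factory=list)     # dimensions still on display

    def fold(self, dimension, value):
        """Lock a dimension and push it up into the address bar."""
        self.locked[dimension] = value
        if dimension in self.free:
            self.free.remove(dimension)

    def unfold(self, dimension):
        """Pull a dimension back out of the bar, e.g. to scrub through it."""
        self.locked.pop(dimension, None)
        self.free.append(dimension)

bar = FilterBar(free=["item", "location", "month", "clerk"])
bar.fold("month", "July")      # address bar now reads: month = July
bar.unfold("month")            # drop it back out as a scrubbable timeline
```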

The important thing is that the data shown adapts to the user’s manipulation in a predictable and efficient manner, while maintaining relative visual simplicity. Processing large data sets is an endeavor that has, by its nature, a high cognitive load, so anything we can do to reduce this is a must. Showing more than one face of these “cubes” at a time would be a mistake.

Piercing the Z-Axis

Yes, yes, you say, that's all fine, but we could really do this via a simple web interface in D3 as well. Going back to our book example, perhaps there is an easier way to view the third axis of data: eye tracking. Using the convergence point of your eyes, we can calculate the depth at which you are directing your gaze, assuming proper calibration and precision on the hardware/software side. Naturally we'd have to account for pupil-size changes over time so as not to throw off our IPD measurement.
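A back-of-the-envelope version of that depth calculation, assuming the tracker reports a horizontal gaze angle per eye; the IPD and angle values below are placeholders:

```python
import math

def vergence_depth(ipd_m, left_yaw_rad, right_yaw_rad):
    """Estimate gaze depth from the inward rotation of each eye.

    ipd_m: interpupillary distance in meters.
    left_yaw_rad / right_yaw_rad: horizontal gaze angles, measured from
    straight ahead, positive when the eye rotates inward (toward the nose).
    """
    vergence = left_yaw_rad + right_yaw_rad      # total convergence angle
    if vergence <= 0:
        return float("inf")                      # parallel gaze: "at infinity"
    return (ipd_m / 2.0) / math.tan(vergence / 2.0)

# Example: 63 mm IPD, each eye rotated 1.8 degrees inward -> roughly 1 m away.
depth = vergence_depth(0.063, math.radians(1.8), math.radians(1.8))
```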

By using the three-axis focal location to control the visibility of data cubes, we can let the user look through the z-axis levels very quickly and intuitively.

Want to compare month-to-month for a couple of store locations and a single item? Just look at those cubes, then look past them to see previous months. To view the data differently, we need only rotate the cube and look where we want. Imagine reading a book where, instead of turning the pages, you just look through them to the next page; how quickly one could find one's place, or that one memorable bit of information!
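One way the estimated gaze depth could drive the display, assuming evenly spaced z-layers; the spacing, layer count, and fade values are arbitrary choices for illustration:

```python
def focused_layer(gaze_depth_m, first_layer_m=0.8, spacing_m=0.15, n_layers=12):
    """Pick the z-layer (e.g. the month) nearest the user's gaze depth."""
    index = round((gaze_depth_m - first_layer_m) / spacing_m)
    return max(0, min(n_layers - 1, index))

def layer_opacity(layer, focused):
    """The focused layer is solid; everything else fades toward transparency."""
    return 1.0 if layer == focused else 0.15

focus = focused_layer(gaze_depth_m=1.1)          # e.g. the third month back
alphas = [layer_opacity(z, focus) for z in range(12)]
```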

In the above image we have the z-axis set as the month. However, each of a cube's six faces could be a different data view, so the z-axis viewed from the opposite side of the data array could carry a different value, like sales clerk. Since everything is contextual to the user's perspective, and since we're actually dealing with the faces of a cube rather than a simple 2D slice, it makes sense that the two directions would differ.

It is not a coincidence that the graphic at the beginning of this article looks a bit like the printout of a paper cube. One could also place the user inside the cube, with the z-axes stretching out in each direction.

In our example every side shows sales, but one is location vs clerk, one is month vs item, and so on. Let the user select filters, or "lock" certain data categories and send them to the "address filter bar" at the top, then use the filters to add another data category to replace the one that was locked (effectively removing a face of the cube and replacing it).
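A sketch of that face-to-view mapping; the particular pairings and the replacement category are assumptions, but they show how locking one category swaps it out of every face that used it:

```python
# Hypothetical mapping from cube faces to the dimension pair they display.
faces = {
    "front":  ("location", "clerk"),
    "back":   ("month", "item"),
    "top":    ("location", "month"),
    "bottom": ("clerk", "item"),
    "left":   ("location", "item"),
    "right":  ("clerk", "month"),
}

def lock_category(faces, locked, replacement):
    """Send one category to the filter bar and put another in its place."""
    return {face: tuple(replacement if d == locked else d for d in dims)
            for face, dims in faces.items()}

faces = lock_category(faces, locked="clerk", replacement="demographic")
```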

With certain types of data we could even use other signifiers, like color, size, or shape, to deliver more information in a single data cube. Interestingly, this functionality could not be achieved with a normal computer monitor plus eye tracking, since when you looked "past" the front data you'd be looking past the monitor itself.

An unwrapped cube and hypercube

Conclusion

In this article I believe we have identified a critical advantage of stereoscopic displays that I have not seen expressed elsewhere. The criticism leveled at the fancy, complicated-looking interfaces of film might seem to apply here as well, from an outside perspective, but in reality the user only sees the data he is focused on, with everything else either fading away, falling out of focus, or sitting in his periphery.

I won't pretend this is the eureka moment for presenting and manipulating data of greater complexity, but I would argue it demonstrates that there is real potential value in spatial interfaces and equipment in this field, and that for many types of users it could be a more efficient and revealing modality. Certainly enough to warrant investment in further research and development.

In the end, I believe much experimentation remains before we can say one way or the other whether this is a useful thing or a wasted endeavor. Let us know your thoughts, and reach out if you or someone you know is interested in collaborating in this space.

If you are a hardware OEM and would like to work with us on exploring some of these ideas, drop us a line and we'll see if there's an opportunity to do some good work together. We've been testing our concepts on a hardware platform we cobbled together (an HTC Vive with Leap Motion and 7Invensun's aGlass eye tracker). It works pretty well already, but we don't yet have a Meta 2, HoloLens, Fove, or Magic Leap headset, or the Tobii or Pupil Labs eye trackers (anyone at those companies listening? Hook us up! We'll make you look good). Unfortunately, the aGlass only tracks one eye, which limits depth tracking, so for the time being we are emulating depth tracking with manual controls.

We hope you've enjoyed the read and have a thousand ideas exploding in your mind. We also hope that hardware folks realize the power of eye tracking and incorporate it into the next generation of devices. Please contact us at holographicinterfaces@gmail.com.

--

Holographic Interfaces

Holographic Interfaces is a boutique research, design & development shop specializing in Augmented and Mixed Reality experiences.