Designing for VR — spatial properties
Here at Masters of Pie we put our VR/AR software through many phases of testing with our customers. We are constantly asking: “Why VR?”, “Does this add value to a current process?” and “Does this provide access to a new, valuable process?”. Virtual reality has a real job of proving itself if it is to become a legitimate platform for next-generation software, because we have to give users a real reason to want to strap an expensive piece of cutting-edge equipment onto their faces!
So, just as a blacksmith beats a piece of metal to test its quality and durability, we must apply the same rigor to our own designs. Through this rigorous process, and from working across multiple industries, we at Masters of Pie have slowly started to uncover what we think are some of the properties that make VR unique. It is still very early days for this industry; it is very much like wiping sand away bit by bit until you find something interesting and potentially valuable.
In what will be a series of blog posts, here are some properties of VR we feel have the potential to provide new opportunities for developers and users.
Distance
This is not something you tend to think about in a game or interactive experience (unless, of course, you need to throw something at a target), but distance in VR operates much closer to what we are used to in reality. Traditionally, in 3D software such as CAD or 3D modelling packages, you would need to place two points between objects to measure the distance between them.
Why? Because the information is abstracted onto a flat screen. Having used 3D software for years, I personally need tools like snapping to a grid to stop stray vertices (points in 3D space) from being placed miles away from my current data set. Because you are looking at the data from one specific viewpoint or perspective, even seasoned designers, when rotating the viewpoint, will accidentally modify the 3D data along completely the wrong axis without realising it, requiring annoying rework.
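The grid-snapping safeguard mentioned above can be sketched in a few lines: each coordinate of a vertex is rounded to the nearest multiple of the grid spacing, so an imprecise mouse drag cannot leave a point stranded between grid lines. This is a minimal illustration; the function name and spacing value are our own assumptions, not taken from any particular CAD package.

```python
def snap_to_grid(vertex, spacing=0.5):
    """Round each coordinate of an (x, y, z) vertex to the nearest grid line."""
    return tuple(round(c / spacing) * spacing for c in vertex)

# A vertex nudged slightly off the grid by an imprecise drag...
stray = (1.27, -0.02, 3.49)
print(snap_to_grid(stray))  # -> (1.5, 0.0, 3.5)
```

Real CAD packages layer many more constraints on top (snapping to edges, faces and existing vertices), but the principle is the same: quantise free-floating input onto known reference geometry.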
With more vertices comes more complexity in understanding the 3D surface.
To combat this, we constantly need to tumble the CAD data in an effort to see it from all angles. It’s the equivalent of trying to screw a bolt onto an engine whilst holding one hand over your eye and attaching your wrench to a stick. This is a handicap that we have overcome by mastering the tools of mouse and keyboard, and therefore consider normal, but why should this be the accepted norm?
During development of some of our own CAD tools at Masters of Pie, we have started to explore how we can strip away this control abstraction and allow users to review simulated data effectively in a collaborative environment simply by walking around the data set. The next challenge is building comfortable editing tools that push CAD design into new territory.
Scale
This is an obvious property, and one very much involved in the illusion of presence in VR. What distinguishes VR (and to some extent AR) from any other platform is our appreciation of scale. Totally impossible on a flat screen, placing 3D CAD data into a virtual environment, for example, allows us to walk around the data as it was meant to be seen and understood. The classic response we hear from CAD designers (when they use our prototypes at MoP) is that they can finally see their designs at the correct scale. Some CAD designs are worked on for a year or longer, with 3D printing and other tools used to understand the physicality of the data, even though it is often not to scale. Now we have the ability to jump into VR and really understand the scale of our data, and whatever implications this brings to the design.
Points of reference are required to understand the scale of unfamiliar objects.
The benefits are obvious but not without their challenges. We have found during our development that scale can sometimes be misleading in VR due to the warping effect of the lenses and the image-correction algorithms that bend the output and create the illusion of depth and persistence (here is a great article on scale in VR). Therefore, to maintain the all-important precision we demand from CAD packages, in our VR software we recommend always being aware of how factors like IPD and virtual IPD can manipulate the image output, using points of reference wherever possible and, as a backup, providing measuring tools to give users peace of mind.
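The IPD effect can be given a rough first-order intuition: widening the virtual camera separation beyond the user's real IPD makes the world appear proportionally smaller (the familiar "giant mode" trick), and narrowing it makes the world appear larger. The linear model below is a deliberate simplification for building intuition, not a rendering-accurate formula, and the function name and values are our own illustrative assumptions.

```python
def perceived_scale(real_ipd_mm, virtual_ipd_mm):
    """Approximate factor by which the virtual world appears scaled,
    under a simple linear model: scale ~ real IPD / virtual IPD."""
    return real_ipd_mm / virtual_ipd_mm

# Doubling the virtual eye separation halves the apparent world size:
print(perceived_scale(64.0, 128.0))  # -> 0.5
# Matching the virtual IPD to the real IPD keeps 1:1 scale:
print(perceived_scale(64.0, 64.0))   # -> 1.0
```

This is exactly why precision-critical CAD review needs the reference points and measuring tools mentioned above: a mismatched virtual IPD silently rescales everything the user sees.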
Mapping data spatially
Data visualisation is a hot subject for VR, as many see the potential to push this field into new territory, adding value to current processes or even inventing new tools and processes entirely. However, our work with both CAD and abstract data sets has shown that it is not enough to simply take current visualisation methods, present them in a VR environment and call it a day. In order to generate value from the platform, we have to see whether we can translate 2D visualisation into a spatial environment. This translation, or mapping, does not work for all data types. This is why CAD data is an obvious choice for the technology: the data is intrinsically spatial in nature, as are other examples like geographical and architectural data. These will always work well in a VR environment, and this will be reflected in VR/AR usage growth in those industries.
The HoloLens was used by NASA to better understand the “spatial” Mars surface data.
Ideally, when displaying data we want to utilise the spatial property of VR and not flatten everything as we do on screen-based platforms, but if we only have “flat” data there are options. We have developed two approaches (so far) that utilise the benefits of the spatial VR environment with different data sets. If your data set does indeed contain flat images, graphs, walls of text and so on, you can utilise the VR space as an organisational tool, much like assembling a working area in your office.
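One common way to use VR space as an organisational tool is to place flat panels (images, graphs, documents) on an arc around the user, so each stays readable and none overlap. The sketch below computes such a layout; all names, the radius, the arc width and the panel height are illustrative assumptions, not a description of our actual implementation.

```python
import math

def arc_layout(n_panels, radius=2.0, arc_degrees=120.0, height=1.6):
    """Return (x, y, z, yaw_degrees) for each flat panel, spread across an
    arc around a user standing at the origin, each panel facing inward."""
    if n_panels == 1:
        angles = [0.0]
    else:
        step = arc_degrees / (n_panels - 1)
        angles = [-arc_degrees / 2 + i * step for i in range(n_panels)]
    positions = []
    for a in angles:
        rad = math.radians(a)
        x = radius * math.sin(rad)
        z = radius * math.cos(rad)
        # Yaw each panel back toward the origin so it faces the user.
        positions.append((x, height, z, -a))
    return positions

for panel in arc_layout(3):
    print(tuple(round(v, 2) for v in panel))
```

With three panels the middle one sits straight ahead at the full radius, and the outer two sit symmetrically to the left and right, turned to face the user.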
Looking beyond our small glowing screen. Some concept artwork.
Just like our office or studio space, a VR environment can become very cluttered and confusing. If you combine this with high-dimensional data sets, it can be hard to keep track of your journey through the data. We feel VR/AR can be used to capture our thinking as we work and present it effectively to third parties, even if the data itself is still flat, like graphs for example.
Moving forwards, we believe new opportunities for visualising data will emerge as we transition from flat-screen to spatial computing over the next decade. There is a big opportunity for new visualisations that represent data in spatial ways. But why? Data visualisation normally has two goals (and there are arguments on both sides as to which is more important). The first is to compress the information, especially if the data is highly dimensional (house prices, number of years, number of rooms and so on), and represent it as effectively as possible, so a user can look at the visualisation and be better informed instead of poring over Excel spreadsheets. This leads us to the other side of visualisation, which is engagement and telling the story of the data. There is commonly a disconnect between the different users who need to collaborate on a data set: one is often the expert and the other the stakeholder (the one with powers of sign-off). In order for the expert to speak a common language with the stakeholder and impart the value of the data set, she needs tools to visualise it. If this data can be presented more effectively in a spatial manner, then we can potentially increase comprehension and engagement.
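The house-price example above can be made concrete with a toy sketch: columns of a flat table are normalised to [0, 1] and assigned to spatial axes, so each row becomes a point the user can walk around. The column choices, names and sample data here are all illustrative assumptions, not a real data set or our production mapping.

```python
def normalise(values):
    """Rescale a column of numbers to the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.5 for v in values]

def rows_to_points(rows, x_col, y_col, z_col):
    """Map three columns of a list-of-dicts table to normalised (x, y, z) points."""
    axes = [normalise([row[col] for row in rows]) for col in (x_col, y_col, z_col)]
    return list(zip(*axes))

houses = [
    {"price": 250_000, "years": 10, "rooms": 3},
    {"price": 400_000, "years": 2, "rooms": 5},
    {"price": 325_000, "years": 6, "rooms": 4},
]
points = rows_to_points(houses, "price", "years", "rooms")
print(points)  # one normalised 3D point per house
```

Further dimensions can then be mapped to properties other than position, such as colour or size, which is where a spatial environment starts to earn its keep over a 2D chart.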
We at MoP have begun exploring ways to translate what was originally tabular or “flat” data into a spatial representation, so as to effectively utilise the VR platform, allow users to better understand the data and hopefully engage them for longer periods of time. We will go into these in more detail in a later blog post.
Originally published at MASTERS OF PIE.