A few years ago, when the New Media Innovation & Entrepreneurship Lab at the Walter Cronkite School of Journalism at Arizona State University first started down the VR path, we were intrigued by the idea of presenting data in VR. An advisor who helped us get up to speed painted a picture of the user being able to walk through data while immersed in VR.
A few months later, my students created an award-winning VR project on immigration that had several bells and whistles, including a field of crosses meant to symbolize the hundreds of migrants who died trying to cross the Arizona desert to come into the United States. The result, though, was underwhelming. The crosses were animated assets that sort of sat there on the cracked terrain of the Sonoran Desert. When you hovered over a cross, you got a bit of information about the deceased individual. It was not the immersive vision I'd had in mind.
I knew we could do better.
Since then, there have been several improvements, but mostly for higher-end VR systems such as Oculus and the HTC Vive. You can walk through cells in the human body or zoom through neighborhoods via Google Earth. The Microsoft HoloLens allows skilled developers to create beautiful data visualizations superimposed on the real world, as this one from developer Michael Peters shows.
But what about us pixel-stained mortals who work for average newsrooms, I wondered. Is there a simple way of creating data visualizations, especially of local data — easily the bread and butter of most newsrooms — using the Unity engine so that a neighborhood does not come out looking like the map in Pokémon Go? Could a multimedia producer of average skill, with a little familiarity with Unity, clean a data set (say, of local school stats or houses for sale), upload it as an asset, and create a visualization that could be seen in a Google Cardboard headset or a Gear or Daydream system?
We've been working on this problem for a few weeks now and have the first iteration completed. In a handful of steps, a producer will be able to load content into our Unity tool and create a VR scene with local data. A gaze or gesture brings up the data, and the user can teleport from street to street.
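To give a sense of the data-cleaning step in that workflow, here is a minimal Python sketch. The column names, values, and output format are purely hypothetical stand-ins for a real local data set, not our tool's actual schema; the idea is simply to drop incomplete rows, coerce fields to numbers, and emit JSON that a Unity script could parse at runtime:

```python
import csv
import io
import json

# Hypothetical raw listing data; real data would come from a CSV file.
raw_csv = """address,price,lat,lon
101 E Main St,250000,33.4484,-112.0740
202 W Oak Ave,,33.4500,-112.0800
303 N Elm Rd,310000,33.4520,-112.0765
"""

def clean_rows(text):
    """Drop rows with missing prices and convert fields to numeric types."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        if not row["price"]:
            continue  # skip incomplete listings
        rows.append({
            "address": row["address"],
            "price": int(row["price"]),
            "lat": float(row["lat"]),
            "lon": float(row["lon"]),
        })
    return rows

cleaned = clean_rows(raw_csv)
# JSON is a convenient interchange format: Unity can deserialize it
# (for example with JsonUtility) and spawn one marker per record.
print(json.dumps(cleaned, indent=2))
```

Each cleaned record carries coordinates a Unity scene could map to world positions, so a producer only has to worry about the spreadsheet, not the engine.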
We still have more work to do to make the Unity tool as simple as possible and to develop the information display, but we hope to get there by the end of the year.