The Vizor 360 editor UI

Prototyping a 360 Museum Tour with React VR

Matt Innes
Published in Updates from Vizor
8 min read · Oct 9, 2017


Here at Vizor we’ve just released a new WebVR app, Vizor 360, specifically for working with 360 based media. We’ve built it on React VR, which will give us the ability to publish to Facebook and opens up a whole new audience to immersive web experiences.

But our usual tools (Sketch, Atomic.io, Framer and our own visual programming tool Patches) didn’t quite give us all we needed to prototype features for the app, so in collaboration with Jouko from the dev team, we decided to design and build a series of prototypes directly in React VR. We wanted to discover how the various elements from a typical 360 project—text, 3d objects, animation and interaction—would work across a range of viewports and devices, from 2D desktop web to Facebook Newsfeed, mobile Newsfeed, and WebVR devices like Google’s Daydream.

One of the issues of concern was React VR's ability to deliver good typography on desktop and 2D mobile views, as it was going to be using SDF text for both its VR and desktop views. If you're not familiar with SDF (signed distance field) text, it is a way of keeping type sharp when it is drawn in 3D spaces, devised by Valve in 2007 for use in games. It is a much better way of handling type in 3D space than simply rendering it to pixels on a plane.
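As a rough illustration of why SDF text stays sharp (this is our own sketch, not React VR's actual shader), the core of the technique is a per-pixel smoothstep over a sampled distance value:

```javascript
// Illustrative sketch of the heart of SDF text rendering. A glyph atlas
// stores, per texel, the distance to the nearest glyph edge, normalized so
// 0.5 lies exactly on the edge. At draw time the shader turns that distance
// into coverage with a smoothstep, which stays crisp at any scale because
// the edge is re-thresholded per screen pixel rather than baked into texels.
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// `distance` is the sampled SDF value in [0, 1]; `smoothing` widens the
// antialiased band (in a real shader it is derived from screen-space
// derivatives, so the edge is always about one pixel wide).
function sdfAlpha(distance, smoothing = 0.05) {
  return smoothstep(0.5 - smoothing, 0.5 + smoothing, distance);
}
```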

Web fonts on a 2D overlay (L) vs SDF text on an angled plane (R)

For type in VR spaces, SDF text is currently the best way to go, at least until we have web fonts rendered natively in 3D space. But for desktop or mobile 2D views, it is quite inferior to sharp, selectable, searchable, undistorted web fonts set on a flat plane parallel to the viewer. So we wanted to see if a hybrid approach could work well: web fonts for 2D views on mobile, desktop and the Facebook Newsfeed, but WebGL text for VR views.
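The decision itself is simple; a minimal sketch of the hybrid strategy might look like this (the function name and flag are ours, not Vizor's API):

```javascript
// Hypothetical sketch of the hybrid text strategy: flat 2D views get a DOM
// overlay with real web fonts, while a presenting VR display falls back to
// SDF text rendered in WebGL, since DOM content cannot be composited into
// the headset view.
function pickTextRenderer({ isPresentingVR }) {
  return isPresentingVR ? 'sdf-webgl' : 'dom-webfont';
}

// In a browser, the flag would come from the WebVR API, roughly:
//   navigator.getVRDisplays().then(ds => ds.some(d => d.isPresenting))
```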

One reason to do this is that our user stats for Patches showed us that around 90% of views for any given project were typically on desktop or mobile, while only around 10% were actually on VR devices. Of course we are looking to the future, when this balance will inevitably begin to shift, but we still have to provide a great experience for as many of our users as possible right now.

Prototype One: Type & 3D Models

With this prototype, we were trying to solve two main problems: how to make text, images and 3D models look great in VR, mobile and desktop, and how to let users view 3D objects easily on mobile and desktop.

We had previously made various experiments in trying to integrate 360 media with 3D objects (such as this one), and in the course of designing and building Patches we’d built and viewed many 360 projects. One of our conclusions was that, until we could use machine learning to analyse the 360 scenes in the way ARKit does, we’d be better off keeping 3D objects isolated from the 360 media in visually separated spaces, of which the simplest is a dark overlay.

First pass lightbox design with 3D model & navigation. Miserably unreadable on mobile!

Discoveries

Rendering content on billboard-like panels in 3D space works well in VR, but on mobile and desktop the angled, distorted panels are awkward to read. This method also means text is not responsive, so it ends up too small on a mobile screen. Rendering content in a static 2D layer on top of the 3D world works much better on mobile and desktop: as it's just regular web technology, we can make the layer's layout responsive, so it's usable at different screen sizes. Another discovery was that React VR's handling of line height is still a bit buggy, and it does not support letterspacing.

(L) React VR lighting model vs. (R) baked light

Also, 3D models don't look great when rendered using React VR's current lighting model, so for now the best results come from baking the light into the model's texture and rendering the model with that. Restricting 3D models to turntable-style single-axis rotation also makes handling them a lot easier: it does reduce what the user can see, but makes for super simple handling with swipes.
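The turntable interaction boils down to mapping horizontal swipe distance to rotation about a single axis. A minimal sketch, with our own names and sensitivity constant rather than Vizor's implementation:

```javascript
// Turntable-style model handling: swipes rotate the model about the Y axis
// only, so it can never be flipped into an awkward orientation.
const DEGREES_PER_PIXEL = 0.4; // sensitivity, tuned to taste

function turntableRotation(currentYawDeg, swipeDeltaXPx) {
  const yaw = currentYawDeg + swipeDeltaXPx * DEGREES_PER_PIXEL;
  // Wrap into [0, 360) so the angle never grows without bound.
  return ((yaw % 360) + 360) % 360;
}
```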

View prototype one

Note: these prototypes are best viewed in Chrome

Prototype Two: The Newsfeed

Our next goal was to prototype how to make a 360 tour usable when embedded in a Facebook newsfeed without forcing users to go full screen. Obviously projects would be much more immersive viewed fullscreen, but as many users will not bother with that, the experience still needs to be as good as possible.

Thinking about how to squeeze content and UI into a Facebook newsfeed

Discoveries

In a Facebook newsfeed, you face fierce competition for eyeballs, so unless the initial view invites, teases and has a compelling call to action, you've already lost the viewer. We also can't have scrollable lightboxes, because vertical scrolling is reserved for browsing the feed, so we need to ensure text content is kept short.

Magic window-style mobile viewing does not work in a feed. When browsing a feed, the phone is usually held at an angle, causing the floor to be shown inside the magic window. The user also has to turn to awkward positions to see the areas of interest (not socially acceptable on subways etc.), so it is better to handle it the way Facebook does.

Floors are boring to look at in 360, and space is very tight in the Newsfeed for text.

When the tour loads, point the camera at the area of interest in the 360 photo. From there, let the user turn the camera freely and pan up and down by tilting the phone, but ignore any rolling motion around the z-axis. This way the user can view 360 content normally on a phone, whether sitting on a subway or lying down in bed.
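These camera rules can be sketched as a small pure function (names and the pitch clamp are our own illustration, not Vizor's code):

```javascript
// Feed-friendly camera: yaw comes from panning, pitch from tilting the
// phone, and any roll around the z-axis is simply discarded so the horizon
// always stays level, whatever angle the phone is held at.
function feedCamera({ initialYawDeg, panDeg, tiltDeg /* roll is ignored */ }) {
  // Start aimed at the scene's area of interest, then apply user input.
  const yaw = initialYawDeg + panDeg;
  // Clamp pitch so the user cannot end up staring at the boring floor.
  const pitch = Math.min(60, Math.max(-60, tiltDeg));
  return { yaw, pitch, roll: 0 };
}
```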

View prototype two

Prototype Three: Improving Click Through

We’d had some discussions with the Oculus WebVR team, who had liked our initial two prototypes, and their suggestion to focus on increasing title screen click-through led to this prototype. We also wanted to look at how to make those screens work well across devices and Facebook. Finally we wanted a quick startup time by only initially loading assets used for the title screen.

Button to start (L) vs hover state to peek inside (R)

Discoveries

The image above compares our initial button-based start screen with our final hover-state one. The hover state's direct link into the project seemed to work better than the Get Started button in the previous prototype, although we only did casual user testing to confirm this. On mobile we couldn't have a hover state, of course, so we substituted a gentle reveal animation.

Peek inside hover state

Laying out content on top of a pannable 360 image, though, is quite difficult, especially on mobile, where users can freely pan the background by rotating the phone. It turns out that a 2D background with a faux-360 parallax effect actually works better for title screens, though it doesn't resolve the problem for the VR views.

The title screen layout is awkward when using 360 images for backgrounds
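The faux-360 parallax mentioned above can be sketched in a few lines: a flat background slightly larger than the viewport is offset against the pointer (or device tilt) position to suggest depth. Constants and names here are illustrative, not Vizor's implementation:

```javascript
// Faux-360 parallax for a title screen: move an oversized 2D background
// opposite to the pointer position as a depth cue, instead of letting the
// user freely pan a real 360 photo behind the layout.
function parallaxOffset(pointerX, pointerY, viewportW, viewportH, maxShiftPx = 20) {
  // Normalize the pointer position to [-1, 1] around the viewport centre.
  const nx = (pointerX / viewportW) * 2 - 1;
  const ny = (pointerY / viewportH) * 2 - 1;
  // Shift the background the opposite way, up to maxShiftPx.
  return { x: -nx * maxShiftPx, y: -ny * maxShiftPx };
}
```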

We found that with deferred loading, the browser freezes a few times after clicking the link pin, as it loads the rest of the code and assets. This is hard to fix with a constantly moving background, which is another reason for ditching the pannable 360 photo in the title screen in favour of a faux parallax effect.
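One way to soften such freezes (a sketch of a general technique, not what Vizor shipped) is to queue the deferred assets and kick off only a small batch per frame, so the main thread gets breathing room between requests:

```javascript
// Split a list of deferred asset URLs into small batches so they can be
// fetched a few at a time rather than all at once after the click.
function makeBatches(assetUrls, batchSize) {
  const batches = [];
  for (let i = 0; i < assetUrls.length; i += batchSize) {
    batches.push(assetUrls.slice(i, i + batchSize));
  }
  return batches;
}

// In the browser, each batch would then be started from its own
// requestAnimationFrame (or requestIdleCallback) callback, e.g.:
//   batches.forEach((batch, i) =>
//     setTimeout(() => batch.forEach(url => fetch(url)), i * 16));
```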

View prototype three

Final Prototype: Real Content

With this last one, we wanted to pull everything we'd learned together into an actual project, with real assets. As you can see from the previous prototypes, we'd had the Helsinki Ateneum art museum in mind, and we got permission to come and shoot a small tour of the museum with high-quality, DSLR-based 360s. We would then convert these to cubemaps to avoid the pinching issue common to equirectangular images, which is especially noticeable when shooting architecture indoors.
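The core of an equirectangular-to-cubemap conversion is mapping each cube-face texel's 3D view direction to a longitude/latitude lookup in the source image. A hedged sketch of just that step (axis conventions are our own assumption; a real converter loops over all six faces):

```javascript
// Map a 3D view direction to (u, v) texture coordinates in an
// equirectangular image, where u spans longitude across the image width
// and v spans latitude from top (v = 0) to bottom (v = 1).
function dirToEquirectUV(x, y, z) {
  const lon = Math.atan2(x, -z);                   // longitude in (-PI, PI], -z = forward
  const lat = Math.asin(y / Math.hypot(x, y, z));  // latitude in [-PI/2, PI/2]
  return {
    u: lon / (2 * Math.PI) + 0.5,
    v: 0.5 - lat / Math.PI,
  };
}
```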

(L) A 2D image worked better for our title (R) 360 DSLR shooting

We brought in Helsinki-based 360 photographer Pentti Sairanen to shoot the interiors, as we'd admired his great images and strong 360 technical skills. We also asked creative agency Fake to help with photogrammetry scanning of one of the statues, in order to get a high-quality 3D model to include in the project.

Lightbox with photogrammetry scanned statue 3D model from the Ateneum

Discoveries

One of the main issues we had with the final prototype was syncing the frame rate of the DOM elements with the WebGL elements on iOS. This led to the link pins wobbling around the scene somewhat. We haven’t managed to solve that one to our satisfaction yet.

Another issue was getting satisfying transitions between scenes with link and info pins, as the pins appear to jump between scenes. We didn't manage to solve this one either; perhaps subtle fade or movement transitions on the icons are the solution. We might try this in our next prototype.

Floor and info pins jump annoyingly on scene transitions

Because moving between 360 scenes can sometimes be laggy, scene-link interactions seem to need a bit more feedback than would normally be required on the 2D web, to ensure that users know they have definitely clicked or hovered. In this prototype our hover states changed the rotation speed and opacity of the floor links, and our clicked states play a slight audible click.

Go to prototype

Overall we learned a lot from designing and building these prototypes. Our main discovery has been that our app needs to work with a DOM overlay for all 2D web content, and then be able to auto-generate similar results in WebGL for all the VR views. This has been a fairly significant discovery, and has made all the prototyping worthwhile, as it will all improve our app.

The Oculus WebVR team has incorporated this approach into their workflow now too, as you can see from their demo tour of the British Museum. And if you’d like to use Vizor 360, you can do that today right here.
