Night at the Museum: a research project on Google's VR-related technologies

As part of my Udacity Virtual Reality Nanodegree, I created a Cardboard / Gear VR application that lets the user read and watch news by teleporting between different 'information booths', each containing virtual displays for reading and screens for watching videos.

You can watch gameplay footage with sound here: NightAtTheMuseum

I used a Samsung Gear VR, Unity (5.6.1f1), the Google VR SDK for Unity (v1.60, June 2017) including the GVRVideoPlayer Unity package, and a 3D model of an apartment provided as part of the Udacity VR Developer Nanodegree.

The game was developed in around 24 hours.

The results

The booths showcase five topics covering Google's latest VR/AR-related technologies: Seurat, Tango, haptic feedback, WorldSense, and a summary of the Google I/O 2017 conference.

Users can travel back and forth between booths, approach the display areas inside them, and control the movie screen located in each one.

On each screen the user can power on the video (which loads it), play or pause it (via a toggle control), jump to any point in the video (using a slider), and control the volume (using another slider).

These are the highlights I focused on to make this prototype stand out:

  1. The booths were built entirely from Unity's native cubes, with illumination parameters tuned to create an optimized environment for VR;
  2. An editor script, accessible from a context menu, that automates the following tasks:
  • Assigning the textures to the displays;
  • Resizing and positioning the displays to preserve the textures' original aspect ratio and to center them at the user's eye height;
  • Generating the required materials;
  • Marking the related prefabs as dirty, so that the parameters set by the script are saved without breaking the prefabs' functionality; and
  • Assigning the movie URLs to their respective GvrVideoPlayerTexture components.
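The editor script itself is not included in this post, but a minimal Unity C# sketch of the automation described above might look like the following. All asset paths, class and menu names, and the eye-height value are assumptions for illustration, not the actual project code:

```csharp
using UnityEngine;
using UnityEditor;

// Hypothetical editor script: sets up booth displays from a context menu.
public static class BoothDisplaySetup
{
    [MenuItem("Assets/Booths/Set Up Displays")] // assumed menu path
    private static void SetUpDisplays()
    {
        foreach (GameObject booth in Selection.gameObjects)
        {
            foreach (Renderer display in booth.GetComponentsInChildren<Renderer>())
            {
                // Load the texture named after the display (assumed convention).
                Texture2D tex = AssetDatabase.LoadAssetAtPath<Texture2D>(
                    "Assets/Textures/" + display.name + ".png");
                if (tex == null) continue;

                // Generate a material for the display.
                Material mat = new Material(Shader.Find("Standard")) { mainTexture = tex };
                display.sharedMaterial = mat;

                // Resize to preserve the texture's aspect ratio,
                // and center the display at an assumed eye height of 1.6 m.
                float aspect = (float)tex.width / tex.height;
                Vector3 s = display.transform.localScale;
                display.transform.localScale = new Vector3(s.y * aspect, s.y, s.z);
                Vector3 p = display.transform.position;
                display.transform.position = new Vector3(p.x, 1.6f, p.z);

                // Mark the object dirty so the changes persist in the prefab.
                EditorUtility.SetDirty(display);
            }
        }
    }
}
```

Because this runs inside the Unity editor rather than at play time, the changes are baked into the scene and prefabs once, which keeps the runtime free of setup work.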

Story of the process

The starting point was reading the requirements for the project: research a VR company, technology, or an industry that could be impacted by VR, and create a mobile virtual reality experience with booths that give users both visual and audio feedback.

I researched the latest technologies being developed by Google, mostly because Google:

  • Is developing and researching across a very wide range of related fields;
  • Is heavily invested in deep learning and machine learning.

In my case, I'm currently learning to apply deep learning and machine learning to VR games and applications.

On the other hand, I wanted to create something visually appealing with a smooth user experience.

With this in mind, I created an environment with five main teleportation points, one for each booth, plus additional teleportation points inside each booth that place the user in the best possible position to view all the information inside.

User testing and iteration

After finishing and testing the first version, I realized the following modifications were needed to improve the readability of the displays:

  • Adjusting the position of the displays;
  • Adding more waypoints near the larger displays.

Conclusion

I enjoyed the process of creating this small prototype. It let me research the latest VR technologies more deliberately, learn about playing back movies on textures on an Android device, and explore how to render dynamic text affected by the lights of the environment.
