Unity Vision Summit 2017: Thoughts and Impressions on the future of VR/AR/MR/XR

Multiplayer Racquetball in VR, via Israel-based One Hamsa Studios

Earlier in the week I had the good fortune of attending the 2017 Unity Vision Summit, an annual get-together of Unity users and creators in Los Angeles, on behalf of my employer, the Florida Interactive Entertainment Academy. The purpose of my trip was to document the various goings-on in the budding field of virtual and augmented reality (including mixed and extended reality), and how these new technologies are being applied not only in the arts and entertainment fields, but also across diverse segments of industry, including space exploration, toy manufacturing, medical simulation, architecture + interior visualization, professional sports and automotive manufacturing. The conference was a veritable who’s who of immersive entertainment, with top companies like Google and Facebook sharing the stage with indie/experimental startups clamoring for a slice of the projected $100 billion in VR and AR investment. With only two days for the entire show, it was often a mad scramble at the last second to decide which sessions to attend and which to let slide (there were three breakout rooms with multiple sessions happening simultaneously, so it was impossible to see/hear everything). The following is a digest of what I was able to take in over the 48-hour brain dump.

Keynote

The opening keynote featured a dazzling lineup of speakers. The key word here is “diversity,” and Unity really nailed it. Sure, the focus of the conference was on Unity, but most of the presenters put together interesting enough material to keep the sales-y stuff in check. The lead-off speaker, evolutionary biologist Richard Dawkins, was great. The Selfish Gene author gave a pretty heady presentation on how virtual reality relates to the human brain, and how our brains and eyes have actually been pre-wired with the “software” to “see” virtual reality pictures for literally thousands of years. Included in his talk was a reference to the star-nosed mole, and also a 360 turntable of a mask of Einstein’s face (see below).

Richard, John and Albert

Next up was Microsoft, who showed off some cool Unity-based HoloLens products, as well as an awesome-looking Mixed Reality headset coming out in June, for which everyone in the audience will be receiving a dev kit (!). Google showed off their new Daydream devices and software, which segued into an awesome demo of Tango-enabled augmented reality devices using the new smart terrain features to map floors, tables, and walls. Vuforia took the stage to showcase updates to their popular augmented reality platform, which was cool, but what followed really blew my mind. To showcase what’s coming in Unity 5.6, the Unity dev team brought out a realtime VR demo of an RTS game called “Mighty Kingdom,” which showed off some of the new Timeline features for crafting realtime cinematics in a live gameplay scenario. This was awesome stuff, and I can see lots of potential for gameplay and cinematic prototyping.

Smart Terrain in Vuforia
Timeline editor in Unity
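To give a rough sense of how those Timeline features could slot into live gameplay, here is a minimal Unity C# sketch (not from the on-stage demo). It fires a Timeline cutscene when the player walks into a trigger volume; the CutsceneTrigger class name, the "Player" tag, and the serialized director reference are all hypothetical, while PlayableDirector and its Play() call are part of Unity's Playables API.

```csharp
using UnityEngine;
using UnityEngine.Playables;

// Minimal sketch: start an authored Timeline cinematic during live gameplay.
// Attach to a GameObject with a trigger collider and assign a PlayableDirector
// (which references a Timeline asset set up in the editor) in the Inspector.
public class CutsceneTrigger : MonoBehaviour
{
    [SerializeField] private PlayableDirector director; // holds the Timeline asset to play

    private void OnTriggerEnter(Collider other)
    {
        // "Player" tag is a placeholder for however your project identifies the player.
        if (other.CompareTag("Player") && director != null)
        {
            director.Play(); // start playing the assigned Timeline from the current time
        }
    }
}
```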

Comic books, Animation, Sports, Mars

Within and Baobab Studios were on hand to show how their latest animation projects are using Unity to create cinematic experiences in VR. Both companies are rooted in music video, film, and animation production, and view VR primarily as a storytelling medium, so it was interesting to hear about their challenges and pitfalls after a year and a half with the technology. The automotive and pro sports arenas also clearly see the potential for AR/XR to become a huge deal for their marketing arms; both Audi and the NFL are heavily investing in the tech. A company called Rewind showed off a cool HoloLens project they worked on for Red Bull Air, which brought a mini-airshow experience into people’s living rooms. However, one of the coolest (and most practical) parts of the keynote came from JPL. Three engineers working on the Mars Rover 2.0 project took the stage with HMDs and showed how they’re using XR (via a custom HoloLens rig) to help prototype and problem-solve against a full-scale, scenario-ready, virtual rover model. How cool is that?

JPL kids doing their thang

Oculus

A big part of the keynote centered on the changes going on at Facebook’s Oculus. Brendan Iribe, co-founder of Oculus, recently stepped down from his leadership role, so the keynote ended with a 20-minute fireside chat between Iribe and Unity CEO John Riccitiello about the future of VR. The bottom line: they both agree that we are still a ways out from VR being something we use daily. The Facebook team came promoting their new social app, Facebook Spaces, which I personally was not too impressed with. In addition to the avatar designs feeling ripped off from Xbox Live, I did not think this was something that was necessary or all that useful (a social space where you interact in social-media-like ways in VR). Sony tried this with PlayStation Home, and we all know what happened there. But Oculus is still getting a lot of hype, so I suppose they need to keep building out their app store, and the Snapchat generation will probably eat this stuff up. Still, this whole “simulate everything in your mundane life” movement freaks me the hell out.

Breakout Sessions — Day 1

There were only a couple of sessions on Day 1 that I chose to attend. This is due to the fact that A) I was exhausted from traveling cross-country (plus sitting through a 2+ hour keynote), B) many of the Day 1 sessions were highly technical (i.e., aimed at developers in the midst of real projects), and C) there were hundreds of things to check out in the Expo hall, and with limited time I wanted to dedicate some of it to that. The first session I went to on Monday was called “BiG AR Talk: 3 Innovators Discuss Unlocking AR’s Potential,” and it was really informative, because I got to learn about something called “light field” technology. In summary, light field is a mixed-reality display approach that enables the visualization of objects at multiple focal planes. It makes virtual objects that typically feel fake or pre-rendered (when shown in AR under normal circumstances) appear more realistic. This is due to technology that can detect light fields in the environment and therefore render more accurate depth of field on a 3D model. It blew me away. If you want to learn more about light field, I recommend checking out this video: https://www.youtube.com/watch?v=LrNwP_eTC9c

Out on the expo floor, I got to check out some neat AR toys being developed by Merge VR and Vuforia. Merge has a really cool concept called the Holocube that has not yet been released. It’s geared mainly towards kids and uses AR VuMarks applied to the faces of a six-sided cube. The results are pretty neat:

https://mergevr.com/assets/img/cube/grid/girlcube.mp4

https://mergevr.com/assets/img/cube/grid/cubemove2.mp4

https://mergevr.com/assets/img/cube/grid/cubemove.mp4

It was also great to see classic toys like the View-Master being kept alive and reinvented thanks to this modern technology. I also checked out a really cool VR comic book experience called Nanite Fulcrum, created by Spiraloid Workshop. Spiraloid was founded by Bay Raitt, a veteran 3D artist whose credits include the Lord of the Rings series, as well as Valve’s Source Filmmaker and TF2. The experience was really immersive: the project is part game, part graphic novel, and part treasure hunt. The 3D art was fantastic, and they are really pushing the boundaries of what you can do to make engaging, cinematic, story-driven content in VR. And the overall experience is not too long, which is essential when you are presenting written content inside a VR headset.

Bay Raitt at the Keynote

In the afternoon, I attended another great session, this time on location-based VR, aka “how to create immersive experiences in a public setting and not lose your f&*king mind.” This one was helmed by three groups working full time in this space: 2-Bit Circus out of LA, VRStudios out of Seattle, and XStudios out of Orlando. I was really floored by the work all these guys are doing with some really high-profile clients, and by how challenging the projects seemed. They have all taken on work blending VR with immersive physical spaces, including projects with the Rio Olympics, the NFL, Knott’s Berry Farm, and Universal Studios Halloween Horror Nights (to name a few). Creating huge themed immersive experiences with tiny teams brings several challenges. For one, as developers you don’t have the luxury of time to explain exactly how things work; it’s literally a timed, ticketed experience, so the user needs to know exactly what to do the moment they pick up the headset. There’s also the issue of live interactors, and many of the same pain points iMyth faced in getting the virtual and real-world experiences to sync up and work (near) seamlessly. All of these companies share the same end goal: blurring the lines of what real-time, physically based entertainment can be.

2 Bit Circus (land of pink mohawks)

Breakout Sessions — Day 2

I began the second day by attending a session titled “XR for Kids,” which featured representatives from Swapbots, SpinMaster, and Mattel. It was really interesting to hear them field questions about designing AR experiences for the consumer toy market, how they need to adjust for user interface problems (aka small hands), and what some of the concerns are about children getting into this technology (AR/VR) in general. As the parent of an almost-2-year-old, this was important to hear.

After that I attended a session called “Vive Tracker: Best Practices.” I found this talk to be one of the most useful to me personally, as we are in the midst of transitioning the FIEA mocap stage into more of a mixed-use, dual-purpose VR/AR training facility (with the HTC Vive and its companion trackers as the focal point). The first part of the talk was mostly about how game development has led the way with innovation in the VR market, but how other verticals are now opening the floodgates for innovation, such as product dev, medical, defense, MECs, etc. One thing that is great about the Vive Tracker is that it is a low-cost solution for rapid prototyping in VR, and it is also a free, open platform for development. The talk transitioned to a show-and-tell when the product guy from HTC was joined on stage by members of the LA-based design studio Master of Shapes. MOS has been successfully building tracker-based content for the OpenVR community. One of the coolest things they showed was a virtual reality graffiti concept, where they built an actual working spray-can controller using a tracker, a homemade haptic device, and an empty spray paint can. Think of it as an inner-city Tilt Brush, but you won’t get busted by the cops if you use this setup.

If you are interested in building your own, they’ve posted detailed instructions as well as all the project files over on their blog https://masterofshapes.com/thelab.
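For a sense of how little code a tracker-driven prop can need, here is a minimal Unity C# sketch of the general idea, assuming the SteamVR Unity plugin is in the project and its SteamVR_TrackedObject component is driving the prop’s transform from the Vive Tracker pose. This is not Master of Shapes’ actual implementation; the SprayCanTracker name, the particle-system stand-in for paint, and the keyboard debug input are all hypothetical placeholders.

```csharp
using UnityEngine;

// Minimal sketch of a tracker-driven "spray can" prop.
// SteamVR_TrackedObject (from the SteamVR plugin) updates this GameObject's
// transform from the Vive Tracker's pose, so this script only decides when to spray.
[RequireComponent(typeof(SteamVR_TrackedObject))]
public class SprayCanTracker : MonoBehaviour
{
    public ParticleSystem paintSpray;             // hypothetical particle effect standing in for paint
    public KeyCode debugSprayKey = KeyCode.Space; // placeholder input; a real rig would read the can's trigger hardware

    private void Update()
    {
        // Position/rotation are already handled by SteamVR_TrackedObject,
        // so we just toggle the paint effect on and off.
        if (Input.GetKey(debugSprayKey))
        {
            if (!paintSpray.isPlaying) paintSpray.Play();
        }
        else if (paintSpray.isPlaying)
        {
            paintSpray.Stop();
        }
    }
}
```

In a real rig you would swap the keyboard input for the can’s trigger and haptics, but the broader point stands: the tracker handles the pose, so the prop logic stays tiny.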

In the afternoon I attended two different sessions on VR on the web. I am intrigued by this, as I feel VR won’t become omnipresent until the content can be easily accessed away from high-end, expensive computers. I thought the work Facebook is doing with React VR was intriguing, and companies like JanusVR and Sketchfab seem to be really pushing the limits of what 3D content will ultimately look like on the web. There is also a lot of R&D going into holographic technologies and how these could become more accessible on lower-end devices that support VR web streaming. But are we any closer to the Metaverse becoming a reality? Eh, who knows.


Takeaways

Overall I was glad I attended the Unity Vision Summit. Although I am not actively developing an AR or VR project, it was good to get out and see some of the companies and players that are pushing this industry ahead. I think the opportunities this field is going to open up for content creators are staggering. Several people I talked to came from more traditional backgrounds (architecture, the aircraft industry, restaurant/retail, education); it was definitely a mixed crowd engaged with the technology, not just games and film industry people. It feels like the dawn of a new technological era, sort of like when the smartphone first appeared and everybody started taking notice. With things like SLAM (simultaneous localization and mapping) being baked into the next generation of smartphone cameras (via 3D depth sensors), we will soon start to see more and more of this technology implemented in our daily lives. Working in the education field, you can already see lots of practical applications for AR/VR. As a lifelong digital media consumer and technology enthusiast, that is surely something to be excited about.

360 view outside of Breakout room 2