Immersion at Scale: SIGGRAPH VR Theater 2019

Carlos Diaz-Padron
11 min read · Aug 6, 2019


Entrance to the SIGGRAPH 2019 VR Theater

For the past couple of years, the annual ACM SIGGRAPH graphics conference has hosted a lineup of some of the best narrative virtual reality experiences of the year in its VR Theater program. As part of this program, a jury selects a set of shows from submissions to present in sequence to conference attendees, giving them a taste of the latest in immersive storytelling. Past programs have slowly scaled up to about 30 “seats” per showing.

This year, we had 54 seats. This is how we did it.

Statistics

First, some stats to give you an idea of the scale we achieved this year, the scope of the program, and its unmatched ability to introduce large groups of people at SIGGRAPH (many of whom had never tried VR) to the experience line-up.

Mainstage

This year, on the Mainstage, we had 5 narrative experiences plus a networked multiplayer “lobby” experience that controlled the play order of the show, for a total of 6 VR experiences.

There were 54 seats in the theater, in a semicircle layout facing the center of the circular space, each featuring an Alienware PC with an Nvidia RTX 2070 graphics card, a brand-new Oculus Rift S headset, and Bose noise-cancelling headphones. The space also featured a circular particle projection, lighting, and custom music and sound design for ambiance when entering and leaving the space.

We hosted a total of 30 showings, each with 50 minutes of content, over 4.5 days of the conference. Every showing sold out within 30 minutes of ticket sales opening each morning at 8am. That’s 1,350 hours of VR content served to 1,620 attendees over 4.5 days.

Kiosks

VR Theater Kiosks in the Immersive Pavilion at SIGGRAPH 2019

In addition to the Mainstage, we hosted 10 additional curated experiences in 6 separate VR Theater Kiosks.

The Kiosks saw an average of 170 people per day, for another ~765 attendees total (give or take some overlap in attendees).

In total, this year we served 10,485 narrative VR experiences to ~2,385 people in 4.5 days (~12% of total conference attendees), making for the largest location-based narrative VR experience on the planet. In short, at present, VR Theater is the best way to get your narrative VR piece seen by as many people as possible, very quickly.

The Theater Space

The outside of the 2019 VR Theater, humans for scale
The inside, looking from the entrance

The “real world” theater space consisted of the 54 seats set up in a semicircle inside a tall, circular, velvet-curtain-wrapped theater space (a towering presence, easily visible from anywhere in the experience hall or exhibition), with an interior circular particle projection and ambient music. The desks were designed to mostly hide the computers and monitors from attendees so they wouldn’t get too distracted by the implementation and could instead get immersed in the world they were about to enter.

You really can’t miss the VR Theater space, wherever you are in the experience/exhibition hall

As attendees entered the space, welcome music and a voice-over played, and attendees were helped into their seats by student volunteers.

Entrance sequence
A SIGGRAPH Student Volunteer helping an attendee get their headset on. We had 26 volunteers on the Mainstage and 6 in the Kiosks for every showing. This would not have been possible without them.

As the experience ended, another track and animation played to thank attendees for coming, and preparations began for the next wave. The showings ran back to back every day.

Scaling Setup

For 2019, VR Theater Chair Maxwell Planck wanted to increase the number of seats significantly, by 50%, since VR Theater had always sold out early every morning (the most common complaint from attendees) and we wanted more people to get to see the experiences. He also wanted our “lobby” experience to be networked multiplayer.

Attendees lined up at 8am to get tickets; they would sell out by 8:30am.

This had the potential to truly elevate the VR Theater experience, but it also presented some interesting new challenges. One of those challenges was scale. Virtual reality devices and software are rather complex, with a lot of setup prerequisites and calibration that need to be done. Additionally, a few weeks before the conference, we found out we would be getting new Oculus Rift S devices instead of the previous-generation Rift. These were easier to set up, but also an unknown for our team since they were so new.

Therefore, we needed to automate as much of the experience as possible in order to allow for operational bandwidth for troubleshooting, re-calibration of headsets, and helping attendees who may never have used virtual reality before. In addition, many of our experiences in the line-up were real-time rendered / interactive binaries, each with their own operational modes, adding to the run-through complexity.

All machine setup was done remotely over the LAN via automation scripts. This included installing the Oculus software, updating Nvidia drivers, updating USB drivers, and installing our experience line-up binaries and our lobby experience binary. Any time a new setup step was needed on one computer, we deployed it to the rest of the machines with automation, so we never needed to repeat most setup steps ourselves. This also made swapping a bad machine for a new one easy: all you needed to do was plug it in and set the computer name to a convention (SEAT21, for example), and the automation software would take care of the rest of the setup and activate the machine automatically once it was ready.
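To make the deployment pattern concrete, here is a minimal sketch of the “push a setup step to every seat” idea. It is not the exact tooling we ran onsite; it assumes WinRM is enabled on the seat machines and uses the pywinrm library, and the credentials, file paths, and PowerShell snippet are placeholders.

```python
# Minimal sketch: push one setup step to every seat machine over the LAN.
# Assumes WinRM is enabled on the seats and uses the pywinrm library.
# Hostnames follow the SEAT<number> convention described above; the
# credentials and the PowerShell step below are placeholders.
import winrm

SEATS = [f"SEAT{n}" for n in range(1, 55)]  # SEAT1 .. SEAT54

SETUP_STEP = r"""
# Example step: copy an experience build into place (paths are hypothetical).
Copy-Item -Path \\FILESERVER\builds\lobby\* -Destination C:\VRTheater\lobby -Recurse -Force
"""

def deploy(step: str) -> None:
    for seat in SEATS:
        session = winrm.Session(seat, auth=("vrtheater", "********"))
        result = session.run_ps(step)
        print(f"{seat}: {'ok' if result.status_code == 0 else 'FAILED'}")

if __name__ == "__main__":
    deploy(SETUP_STEP)
```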

The only manual operations we needed to do were setting up Windows on fresh machines (if we had imaged them with Fog as planned, this would have been unnecessary, but that fell through onsite) and logging in to Oculus Home; for the latter, installing TightVNC servers via automation and logging in from the master computer helped speed up the process.

We used this orchestration capability for maintenance tasks as well, such as cleaning up processes left running between showings and shutting down the machines at night. At the end of the conference, we ran a teardown script that removed all of our installations in ~30 seconds, letting us start striking right away and finish the full teardown in about 3 hours (minus the space itself and rigging, which were handled by conference contractors).
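The maintenance and teardown tasks followed the same remote-execution pattern as the setup steps. A sketch, with hypothetical process names, paths, and credentials:

```python
# Sketch of the maintenance side of the same orchestration: a cleanup step
# run between showings, a nightly shutdown, and the end-of-conference
# teardown. All names and paths here are hypothetical placeholders.
import winrm

SEATS = [f"SEAT{n}" for n in range(1, 55)]

STEPS = {
    "cleanup": 'Stop-Process -Name "VRTheaterLobby" -Force -ErrorAction SilentlyContinue',
    "shutdown": "Stop-Computer -Force",
    "teardown": r"Remove-Item -Path C:\VRTheater -Recurse -Force",
}

def run_everywhere(step_name: str) -> None:
    for seat in SEATS:
        winrm.Session(seat, auth=("vrtheater", "********")).run_ps(STEPS[step_name])
```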

The Lobby

Last year’s VR Theater featured a “lobby” experience that attendees could sit in until the experiences were kicked off by student volunteers. For this year’s lobby, our chair wanted the experience to be networked, so attendees could see each other in our virtual world while they waited for us to launch them all into the experience line-up at once.

The lobby from an attendee’s VR perspective

We built our VR Theater Lobby in Unreal Engine 4.21.2 and used the brand-new Niagara particle system to design a 1:1 replica of the actual VR Theater space in virtual reality. That way, attendees could feel like they were entering a parallel world in the same environment before being whisked away into the individual story worlds. We also gave them interactive particle controllers to play with while they waited for the experience to start. Each lobby client placed itself in the correct location relative to the physical world using a computer name convention mapped to the seat number (e.g. SEAT1, SEAT24), which made swapping in a new machine for a seat an easy process.
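The placement logic amounted to parsing the seat number out of the machine name and mapping it to a fixed spot on the semicircle. The real implementation lived inside the Unreal project; the sketch below just illustrates the idea in Python, with made-up radius and arc values.

```python
# Sketch of the seat-placement convention: the machine name encodes the seat
# number, and each seat maps to a fixed position/orientation in the
# semicircular layout, facing the center. Radius and arc are hypothetical.
import math
import re
import socket

NUM_SEATS = 54
RADIUS_M = 6.0            # hypothetical semicircle radius in meters
ARC_RADIANS = math.pi     # seats span half a circle

def seat_number(hostname: str) -> int:
    match = re.fullmatch(r"SEAT(\d+)", hostname.upper())
    if not match:
        raise ValueError(f"Unexpected machine name: {hostname}")
    return int(match.group(1))

def seat_transform(seat: int) -> tuple[float, float, float]:
    """Return (x, y, facing_angle) for a seat, oriented toward the center."""
    t = (seat - 1) / (NUM_SEATS - 1)   # 0..1 along the arc
    angle = t * ARC_RADIANS
    x = RADIUS_M * math.cos(angle)
    y = RADIUS_M * math.sin(angle)
    facing = math.atan2(-y, -x)        # look back toward the origin
    return x, y, facing

if __name__ == "__main__":
    print(seat_transform(seat_number(socket.gethostname())))
```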

Master Control interface on a machine at the center of the theater space

Since the experience was networked, everyone could see each other’s heads and hands as particle balls, and the run-through into the line-up was centrally controlled from a “master control” interface. Using this interface, we could start the experience line-up on all computers at once, monitor the status of each seat, and even see each miniature attendee play around with their particles.

Lobby Master Control Interface

Once everyone was in their headsets, we started the experience on all machines simultaneously from the master control, and the lobby handled the rest, automating the pass-off to each experience as the previous one ended.
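The control flow itself was simple: the master control tells every seat to go, and each seat then walks through the line-up locally. The real system did this through the networked Unreal lobby; the standalone sketch below just illustrates the shape of it with a UDP broadcast on a made-up port.

```python
# Sketch of the "start everything at once" control flow, illustrated with a
# plain UDP broadcast. The port number and message are made up; the actual
# lobby used Unreal Engine networking for this.
import socket

CONTROL_PORT = 47001

def master_start() -> None:
    """Run on the master control machine to kick off all seats."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(b"START_LINEUP", ("255.255.255.255", CONTROL_PORT))

def seat_wait_for_start() -> None:
    """Run on each seat machine; blocks until the start message arrives."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", CONTROL_PORT))
    while sock.recvfrom(1024)[0] != b"START_LINEUP":
        pass
```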

Since several of the experiences were interactive and had variable play lengths, we also programmed a configurable maximum duration for each experience, so the program could move along more or less in sync and keep things running on schedule.
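In other words, each seat ran its line-up like a playlist with a cap per title: launch the binary, wait for it to exit, and force-stop it if it reaches its configured maximum. A minimal sketch of that loop, with hypothetical paths and durations:

```python
# Sketch of the per-experience maximum-duration idea: each title is launched
# and force-stopped if it hasn't exited by its cap, keeping all seats roughly
# in sync. The binary paths and durations are hypothetical.
import subprocess

LINE_UP = [
    # (path to experience binary, maximum duration in seconds)
    (r"C:\VRTheater\experiences\piece_one.exe", 8 * 60),
    (r"C:\VRTheater\experiences\piece_two.exe", 12 * 60),
]

def run_line_up() -> None:
    for exe, max_seconds in LINE_UP:
        proc = subprocess.Popen([exe])
        try:
            proc.wait(timeout=max_seconds)   # experience ended on its own
        except subprocess.TimeoutExpired:
            proc.kill()                      # hit the cap; keep the show moving
            proc.wait()

if __name__ == "__main__":
    run_line_up()
```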

When attendees finished up, a “reset” button in the master control would orchestrate the shutdown and restart of the lobby experience on every machine, in preparation for the next wave. This allowed the student volunteers to concentrate more on wiping down the VR gear and re-calibrating headsets if needed.

Notes for Future Experience Developers

I’ve worked in narrative VR, and we observed over two thousand people watch these experiences this year, so we have gathered a lot of data on what works and what doesn’t when showing these experiences to mass crowds. I want to share a few of those learnings here for future narrative VR developers who want to participate in programs like this. (Views are my own.)

Prefer shorter, experimental works as opposed to longer pieces

This year, our submission criteria included a time limit for pieces. Based on feedback, we wanted a single line-up of about 50 minutes that we could show at every showing. This seems to be about as long as conference-goers want to spend in this kind of experience. Anecdotally, I’ve also found that I’m growing more and more impressed with the 5–15 minute pieces that push the boundaries of interaction in story VR, and that shorter play length is a big factor in allowing that kind of investment in new techniques. It’s also just practically easier to demo to people, especially if you aren’t able to show it to 50+ people at once like we were!

Always allow the experience to be started from outside of VR

I cannot convey just how many times people completely new to VR got stuck at the very beginning of a narrative VR experience because they didn’t understand they needed to click a “Start” button inside VR. To make matters more complicated, since SIGGRAPH is an international conference, there were times when these attendees had trouble understanding us as we tried to help them from outside the headset, because we didn’t speak their primary language and couldn’t “show” them what to do.

It’s not their fault. I strongly urge developers to include the ability for a volunteer/usher to start the experience from outside VR, especially in this nascent stage. I understand the argument against it (I’ve made it myself in the past). You might say “they’re going to need to use the controllers later anyway, so they should have to use them at the start.” I’m telling you from experience: that isn’t helpful and only causes headaches with your experience.

Avoid DRM or any need to connect to the internet at all

I understand a studio’s urge to implement some DRM to protect their work from theft, especially with many coming from the piracy-rife film and games industries, but requiring internet access is a recipe for disaster. Besides the fact that internet access in a conference setting is very unreliable, there is no guarantee the machines themselves (usually donated, sometimes last minute) will have a sane setup for network adapters and such. You do not want your experience to be the one we have to skip over because the DRM stopped working.

If it makes you feel any better, the people shelling out $1,000 for VR equipment are not exactly price-sensitive. Chances are no one is trying to pirate your work, given the tiny relative market size VR still operates in. Optimize for ease of use and mass visibility, not a non-existent VR piracy problem.

Notes for Headset Developers

VR headsets have come a long way toward making the setup and maintenance experience much easier than even a year ago, but there are a few things that are easy to miss if you haven’t run a mass-VR setup of this complexity before, and I’d like to point them out. (Views are my own.)

Allow people to disable the “home” screen button(s) from the outside-headset settings

By far, the most common problem when demoing VR to newcomers is them accidentally hitting the menu button on their controller. To make matters worse, it often takes a while for the usher/volunteer to even notice that is what’s going on, since we can’t actually see the menu they are seeing from outside of VR. Oculus offers a setting to require a long press to access the home menu, but that setting seems to be available only in the inside-VR settings menu.

Please imagine for a second our team having to put on 54 headsets one by one, while we are busy setting up a hundred other things, to turn on a setting inside the VR menu. These kinds of settings need to be available in the outside-VR control interface for your headset, and please allow the ability to fully disable that home menu button. VR demoers will love you, and attendees won’t walk away thinking your software is still “not stable.”

If you have SLAM inside-out tracking, allow for a seated mode where calibration is not required

Most of these mass-scale VR experiences are seated, even if the experiences are 6DoF, mostly because of space constraints and complexity. Given SLAM’s ability to recognize the floor dynamically, it should not be necessary to reset the floor every time the headset loses tracking of the world. The “center” will always be the place where the headset is put on, and we will not set up a play area since attendees are seated the entire time. It would be great if we could tell the headset that, so those assumptions can be made and we don’t have to re-calibrate the headsets constantly, often after the attendee has already put one on.

That said, the advent of inside-out tracking is in general a godsend for these mass experiences. The setup complexity is significantly reduced compared to the outside-sensor setups of old, especially when you have such tight play areas overlapping with several others. Even with the need for constant re-calibration (not helped by our dim lighting, mind you), the overall setup time is far less than in previous years, since that calibration only takes a few seconds.

Conclusions

The SIGGRAPH 2019 VR Theater was a great success and achieved its goal of expanding the scale of the program, nearly to its limit with current technology. Even with the 50% increase in capacity, every showing still sold out within 30 minutes of ticket sales opening. We were, however, able to accommodate many people from the walk-in waitlist thanks to a waitlisting app and some savvy organization by subcommittee members and student volunteers. Overall, we were able to get about 12% of conference-goers into the theater this year.

VR Theater is the most efficient and wide-scale way to present narrative virtual reality storytelling experiences to large audiences of graphics-interested SIGGRAPH attendees. If you have an experience you’d like to submit to the program, keep an eye out for the SIGGRAPH 2020 VR Theater submissions for the next conference in Washington, D.C.!

Thanks to all the people who worked hard putting together the program over the past year, including the VR Theater Subcommittee, 2019 Conference Chair Mikki Rose, VR Theater 2019 Chair Maxwell Planck, SIGGRAPH GraphicsNet, the studios that submitted experiences, Freeman, and the SIGGRAPH Student Volunteers.


Carlos Diaz-Padron

Software Engineer, SIGGRAPH VR Theater 2019 Subcommittee. Formerly Twilio, Penrose Studios.