Coming Soon to a Mixed Reality Theatre Near You

Peering into the future of storytelling at Tribeca Immersive 2017

Haejin Lee
Microsoft Design

--

It was a bright spring evening in New York, and the long shadows of the city stretched across an even longer line of people on Varick Street. I stood in the middle of that line, invitation in hand, waiting to enter the Tribeca Immersive exhibit, a showcase of stories told with emerging media. In the queue, I began to anticipate what sort of wonders might await inside this mundane grey brick building.

At ten minutes past 7 p.m., the start time of my three-hour session ticket, the line started to move. After getting a wrist badge, taking an elevator to the 5th floor, and walking through a long white corridor bathed in neon-purple ambient light, I finally arrived at the entrance, where an eerie green glow framed the black edges of an aluminum door.

“Sign up for the film you want to see. The list fills up quite quickly,” shouted a staff member at the entrance. I then realized I had to rush to each venue to put my name on a waiting list, and that those lists were filling up fast. By the time I got in, some of the experiences I most wanted to check out, including “Draw Me Close” and “Tree,” had already reached capacity. My heart went into FOMO overdrive as I scrambled to secure a few key spots among the 30 available offerings. While I was frustrated by the structure of the event, I still managed to experience four pieces that made my trip from Seattle to New York worthwhile.

Spoiler alert: here’s what I saw

Life of Us

#Embodying beyond human #Social

Life of Us is a story of evolution on Earth, told by embodying a series of different characters. This was one of the pieces I was eager to try after listening to the Voices of VR interview with its creator, Aaron Koblin. The experience was a pure joy ride. I giggled, yelled, and laughed the entire time, so much so that my sides hurt.

One of the key differences between VR storytelling and conventional flat cinema is VR’s ability to turn the audience into the protagonist through the use of an avatar — not just another human but sometimes another creature.

In “Life of Us,” I started my life as a single-celled organism, then evolved into a tadpole, a lizard, a flying dinosaur, an ape, a human, and finally a cyborg. It was exhilarating to breathe bubbles and fire through my mouth and to control the wings of a massive flying dinosaur with my arms. Yet because the story was fast paced and user control was limited, I wasn't able to truly feel ownership of each avatar I embodied. Still, I definitely felt that I was no longer my normal self but something different, thanks in part to the ingenious voice modulation. My voice sounded as if I had inhaled helium. Funnier still, I found myself tweaking my real voice to match the sound I heard as mine: I was modifying myself to match the character I was enacting. I also spat out a lot of gibberish just to hear my new voice before the helium magic disappeared.
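For the technically curious: that helium effect is, at its heart, real-time pitch shifting of the microphone signal. Here is a minimal sketch of the idea, assuming a mono buffer of audio samples; the naive resampling approach and the 1.5 ratio are my own illustrative choices, not the actual implementation in “Life of Us.”

```python
import numpy as np

def pitch_shift(samples: np.ndarray, ratio: float) -> np.ndarray:
    """Naive pitch shift by resampling: reading the buffer faster
    (ratio > 1) raises the pitch, a crude version of the 'helium
    voice' effect. A real-time system would do this on small
    overlapping chunks so playback duration stays unchanged."""
    read_pos = np.arange(0, len(samples) - 1, ratio)  # fractional read positions
    lo = read_pos.astype(int)
    frac = read_pos - lo
    # Linear interpolation between neighboring samples.
    return (1.0 - frac) * samples[lo] + frac * samples[lo + 1]

# Example: one second of a 220 Hz tone shifted up a fifth.
sr = 44100
t = np.linspace(0.0, 1.0, sr, endpoint=False)
voice = np.sin(2 * np.pi * 220.0 * t)
helium = pitch_shift(voice, 1.5)  # sounds near 330 Hz when played back at sr
```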

Embodying virtual bodies that are quite different from our own is not a new concept. Jaron Lanier first explored it at VPL Research in the 1980s, where he found that humans can learn to control bodies with extra limbs, such as a lobster's. He named this ability “homuncular flexibility.” The Stanford Virtual Human Interaction Lab, led by Jeremy Bailenson, has also been actively exploring the concept, having people inhabit nonhuman characters such as a cow, a coral reef, and a bird to see whether doing so increases empathy for the creatures being embodied. They even created a humanoid avatar with three arms to see if it would improve task performance. Yes, the future of our presence in virtual worlds may be slightly weird, but it is definitely liberating and eye-opening.

Finally, I experienced “Life of Us” with my husband. We saw each other's evolving bodies and communicated throughout the story, and this social aspect significantly increased my sense of presence in the virtual space. Because the fictional world was shared with someone who perceived and reacted to me, it felt more real. I almost forgot that other attendees at the festival were watching me.

Treehugger: WAWONA

#Invisible visible #Sensory soup

Marshmallow Laser Feast (MLF) is one of the most inspiring creative studios in the UK, extending human perception by visualizing perspectives inaccessible to our senses. In 2015, they created “In the Eyes of the Animal,” an artistic interpretation of a forest as seen by three creatures: an owl, a dragonfly, and a frog. In 2016, they began showcasing their newer piece, “Treehugger,” which visualizes a tree and its vascular system of flowing water; it was one of the Tribeca 2017 VR arcade pieces.

What distinguished the Treehugger booth from the others was a giant black foam tree placed in the center of the space. The size and placement of this foam tree matched the VR experience, so when I reached out for the tree in VR, I received tactile feedback from the foam tree. This alignment of a physical surface with the virtual tree tricked my brain further into thinking the virtual world was real. I believe that in the near future this kind of setup will be adopted by commercial VR arcades and amusement parks to increase realism and give people more reason to experience things on site instead of at home.
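Getting that trick right is a registration problem: the virtual tree must sit at exactly the pose of its physical twin. One common approach (my assumption about how such a setup could work, not MLF's documented method) is to touch a few known points on the prop with a tracked controller and solve for the rigid transform that maps the model onto the measurements, for example with the Kabsch algorithm:

```python
import numpy as np

def register_prop(model_pts: np.ndarray, measured_pts: np.ndarray):
    """Kabsch algorithm: find rotation R and translation t so that
    R @ p_model + t is close to p_measured for corresponding (N, 3)
    point sets, e.g. marked spots on the virtual tree model and the
    same spots touched on the foam prop with a tracked controller."""
    m_mean = model_pts.mean(axis=0)
    s_mean = measured_pts.mean(axis=0)
    H = (model_pts - m_mean).T @ (measured_pts - s_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against mirror solutions
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = s_mean - R @ m_mean
    return R, t

# Three points on the virtual trunk vs. where the controller actually
# touched them in room coordinates (illustrative numbers only).
model = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.0, 1.2, 0.0]])
true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
measured = model @ true_R.T + np.array([2.0, 0.0, 1.0])
R, t = register_prop(model, measured)  # recovers the 90° rotation and offset
```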

Touching the foam tree while wearing a SubPac on my back

Besides this physical feedback, MLF used the SubPac, an audio device that transmits bass directly to the bones and muscles of the body, to create a more immersive VR experience. It was interesting to feel haptic feedback on my back, but I am unsure how much it helped me feel more immersed. In fact, when the SubPac was very active, it broke the illusion because I became more aware of wearing the device in real life. I can see the SubPac becoming more powerful for loud concert simulations or FPS games, where feeling vibration is more natural and anticipated.
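Conceptually, a device like the SubPac is driven by the low end of the soundtrack: low-pass the audio, take its envelope, and use the result as vibration intensity. The sketch below shows that idea; it is my simplification, not SubPac's actual signal chain.

```python
import numpy as np

def bass_envelope(samples: np.ndarray, sample_rate: int = 44100,
                  cutoff_hz: float = 80.0) -> np.ndarray:
    """One-pole low-pass filter followed by rectification: a crude
    low-frequency envelope a tactile transducer could follow."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    low = 0.0
    env = np.empty_like(samples)
    for i, x in enumerate(samples):
        low += alpha * (x - low)  # keep only content below ~cutoff_hz
        env[i] = abs(low)         # rectify to get an intensity signal
    return env                    # map these values to vibration strength
```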

Tiny smell-diffusing device mounted under the headset

Another interesting addition was smell. MLF attached a tiny device under the headset that diffused the scents of tree, forest, and rain throughout the experience, which made me feel even more like I was in a forest. The first time I encountered smell in VR was at the Japan pavilion at the SVVR conference in 2017. A small startup called VAQSO had created a short demo in which I could smell coffee when I held a coffee cup, chocolate when I picked up a chocolate bar, and (a bit weirdly) perfume when I got close to a Sailor Moon-like manga character. I was surprised that I could smell the difference immediately after picking up different objects and bringing them to my nose. I think smell is a very compelling component for increasing immersion, and I look forward to what comes next for this sense.

Despite all these additions, what created the strongest sense of presence was still the visual feedback: clouds surrounding the sequoia and particles representing nutrients, which changed their movement and color in reaction to my hand position. These particles made me feel as if I were looking at the matter of this world at the atomic level, atoms in perpetual motion to maintain life, and I was mesmerized by the sheer beauty of it. Although the experience is fantastic as a VR piece, the real potential lies in visualizing this invisible layer on top of an actual tree using AR technology.
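Reactive particle effects like this usually come down to a simple distance falloff: each particle measures how close the hand is and scales its response accordingly. Here is a toy version of the per-frame update; the radius and push constants are invented for illustration:

```python
import numpy as np

def react_to_hand(positions, velocities, hand_pos,
                  radius=0.5, push=0.02):
    """Nudge particles away from the hand and return a per-particle
    0..1 'excitement' value that could drive color changes.
    positions, velocities: (N, 3) arrays; hand_pos: (3,) array.
    radius (meters) and push are illustrative constants."""
    offset = positions - hand_pos
    dist = np.linalg.norm(offset, axis=1, keepdims=True)
    influence = np.clip(1.0 - dist / radius, 0.0, 1.0)  # 1 near the hand
    velocities += push * influence * offset / np.maximum(dist, 1e-6)
    return influence.ravel()  # e.g. blend each particle's color by this
```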

To Be with Hamlet

#Mixed reality #Live performance

(Left) A scene from the VR experience (source). (Right) Live performers enacting Hamlet and the ghost in a remote location (source).

“To Be with Hamlet” was an experimental piece that fused VR and live theater to recreate theatrical intimacy in VR. It presented a slice of Hamlet, the scene where Hamlet meets the ghost of his murdered father, and used motion-capture data from live performers in Brooklyn to control the movement and voices of the characters.
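Architecturally, that means streaming joint transforms from the Brooklyn stage to each headset with as little latency as possible. The receiving side might look something like the sketch below; the UDP transport and the 21-joint packet layout are my invention, not the production's actual protocol:

```python
import socket
import struct

# Hypothetical frame: actor id (uint8) + 21 joints x (x, y, z) float32.
NUM_JOINTS = 21
FRAME = struct.Struct("<B" + "fff" * NUM_JOINTS)

def receive_mocap(port: int = 9000):
    """Yield (actor_id, joints) tuples as mocap frames arrive; a
    renderer would apply each joint position to the avatar skeleton,
    typically interpolating between frames to hide network jitter."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, _addr = sock.recvfrom(FRAME.size)
        if len(data) != FRAME.size:
            continue  # drop malformed packets
        fields = FRAME.unpack(data)
        actor_id, flat = fields[0], fields[1:]
        joints = [flat[i:i + 3] for i in range(0, len(flat), 3)]
        yield actor_id, joints
```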

I tried the experience with three other people in a tiny dark room and could see the other audience members as floating headset avatars, along with the actors. It was a bit like Sleep No More, the experimental site-specific theater piece in New York, in that I could roam the theatrical environment with other audience members, all of us wearing masks. Unlike Sleep No More, though, the actors could not see the audience, because they were in a different location and were not wearing headsets. This limitation blocked interaction between audience and actors and made me question what distinguishes a recorded performance from a live one if the performers cannot see whom they are performing for. I believe adding some minor interactions between audience and actors, like a performer grabbing an audience member's hand to guide them somewhere, or the simple exchange of eye contact that creates intimacy in Sleep No More, would add more liveliness to the experience.

Despite this limitation, the idea of mixing VR with live theater and multiple audience members still sounds novel and refreshing. I heard that “To Be with Hamlet” is still in a very early stage of development, and I am excited to see how the piece will evolve.

“Draw Me Close” (image source)

Besides “To Be with Hamlet,” “Draw Me Close” was the other piece at Tribeca involving live performance. Unfortunately it was at capacity, but it still sounds fascinating. “Draw Me Close” is a memoir of creator Jordan Tannahill's relationship with his mother in the wake of her terminal cancer diagnosis. In this piece, the audience member becomes the main character, five-year-old Jordan, and the mother is played by a live performer who shares the physical space with them. The experience starts with the mom (the live performer) hugging Jordan (the audience member), and then they do activities together, like opening a window and drawing on the floor. Since the illustrated room in VR is aligned with the staged room, the audience gets full physical, tactile feedback on top of real human touch. I've never heard of anything like this before, and I really appreciate creators like Jordan Tannahill pushing the boundaries of what this new medium can offer.

The Protectors: Walk in the Ranger’s Shoes

#Guided story #Documentary

“The Protectors” is a story about a group of rangers protecting endangered elephants in a national park in the Congo. The film immensely heightened my awareness of the dangers these African rangers and elephants face. I learned that poachers come well armed, prepared to kill not just elephants but rangers too, if needed. I found out later that 21 rangers were killed in the park over the last decade while protecting elephants. The park is now estimated to have fewer than 2,000 elephants, down from 20,000 in the 1960s. Although the video resolution and text rendering were not the best quality, the story felt true, and my heart ached when I virtually witnessed the carcass of an elephant killed by poachers. The full VR film is available here.

Image from National Geographic / Photo by Ian Doss

VR lets audiences form an intimate connection with other people's situations by providing a firsthand, immersive view of events. Because of this affordance, people have been calling VR an “empathy machine.” While I'm not a big fan of the term, I really do appreciate creators capturing parts of the world that are not easily accessible and that deserve people's attention. I also have no doubt that VR is a powerful tool for transporting viewers into the situation a film depicts. Yet I think it is a bit simplistic to claim that more immersive technology will automatically induce more empathy. No matter how immersive the technology, if the story is not well crafted and does not resonate with people, it will not cultivate deeper emotional engagement.

The “Protectors” arcade booth set the scene before audiences watched the film

Back to reality

10 p.m. My three-hour ticket had lost its magic, and I found myself back out on Varick Street. In contrast to the bustle inside, the street was dark, quiet, and slightly chilly: I was back to reality. Yet my mind continued to replay the pieces I had seen, wonder about the ones I didn't get to see, and ponder the future of storytelling. Through Tribeca Immersive 2017, I glimpsed storytelling built on mixed reality, live performance, embodied social presence, and multi-sensory output. What will Tribeca Immersive 2018 bring?

Perhaps stories that use artificial intelligence as a character who guides the narrative and dynamically reacts to users? Or more interactive stories in which the user has full agency beyond choosing a story path with their gaze? I am thrilled to see how artists will continue to push this new medium to reveal novel concepts and create new ways of experiencing hidden, complex, and believable realities through storytelling.

I’d love to know what you thought of these projects, or what other stories you’ve seen in the immersive realm.

To stay in-the-know with Microsoft Design, follow us on Dribbble, Twitter and Facebook, or join our Windows Insider program. And if you are interested in joining our team, head over to aka.ms/DesignCareers.
