Tomorrow, a VR experience about the evolution of language
Hello, world. Hello.
We wanted this to be a formal presentation, but we are terrible at those. So we’re inviting you to sit next to the fire with us. We hope that you feel like listening, because we’re going to tell you a story. Our story.
Once upon a time, there was a movie director who met a technical 3D artist. Those two were many things, but one above all: creators. And as creators, they created Future Lighthouse, an adventure in the form of a creative studio for virtual reality. They started playing, trying new things, imagining. The hype grew and grew: there was so much to do, so many possibilities. Virtual Reality had arrived as a new language, and now it was time to learn it and play with it. It was story time.
When they shared their vision with friends, family, clients and investors, it worked. It made sense. Everyone wanted more. But… gosh, how hard it is to describe "color" to a blind person! And how hard it was to explain virtual reality to people who had never tried it. They explained, over and over, what they had in mind, using shadow plays, mimicry, presentations, acrobatics and demonstrations. But they could never fully convey what they wanted: there was no VR experience to show as an example. So when they got their first financing (from great family and friends), they set aside €25,000 to produce their first original piece, which would be used to explain where we were, where we were going, and what on Earth that Virtual Reality thing was.
Nicolás was at Singularity University at the time, where he wrote his first script. It was an immersive Shots of Awe. So they picked up the phone and asked Jason Silva if he wanted to play. "Play what?" Well, play at traveling through the inside of a brain along its neural connections, flying alongside a flock of eagles and getting into their skin, being run over by a train, walking on water, and going back to historical moments. Jason was game, of course.
[At this point, I stopped the interview with Nicolás and asked him what on Earth he was thinking: that was incredibly difficult to make. He laughed back and pointed at Roberto. "He told me it could be done."]
Sure enough, six months and many frustrations later, they decided to stop. Leave it. Take a breath. The experience wasn't advancing; the shoot with Silva kept getting postponed. Other projects that required all our attention came in. The team was tired. "It got out of hand. But we liked the story so much we couldn't leave it like that. I reworked the pitch and we used the scenarios that were already done." And so, we continued.
"The main problem was that we didn't understand our production capacity. I didn't understand the animation process. We didn't keep order, since I like working in a more anarchic way, and I didn't know how to lead the team or how to teach it to work well. We made a huge mistake: we outsourced work instead of hiring full-time people for it. We didn't prototype quickly enough. In between, we learned how to be a business and what VR was…"
One year, 25 employees and a second funding round later, we released Tomorrow at VRLA. And we couldn't be happier. Tomorrow is a six-minute experience about the evolution of language. The fantastic voice of Saras Gil guides us on a beautiful journey through time, from prehistoric caves to the summit of the Himalayas, passing through the Amazon rainforest. Tomorrow was developed in Unreal Engine 4, generating 3D graphics on a computer in real time. Part of the experience was filmed in live action, with a 360º stereoscopic camera rig (Sony A7s).
We decided to release the experience for free, but accepting donations. Just to see what would happen. Tomorrow was, from the beginning, a prototype that has helped us learn how to create virtual reality. Thanks to this, we now have a pipeline of productions that will blow your minds: Ray, Melita, Time Heist, Hidden Tears… Wait for it.
And why not share our story? We don't want you to just watch. We want you to make it yours. That's why we'll release the assets. There are people out there trying, testing, creating on their own. The VR community is open source at heart, and we want to play our part: maybe incredible new experiences will be born with a little help from our assets.
And after this introduction, let’s get more technical…
When technical difficulties change the whole script
One year ago, Tomorrow used to start in a completely different way. You had a body and were in a cave. You moved towards a huge rock wall, where you found a giant ice block in which you could see your reflection and read an inscription: FIND YOUR GUARDIAN ANIMAL.
Showing your reflection meant having a camera that captured your real body. That required far too many resources to make sense, so we removed the reflection and kept only the inscription, which glowed with a special shader when you stared at it.
When you looked at the sentence, other things started to shine: abstract animal drawings. As you looked at them, they came alive in a small animation. That wasn't so complicated: we arranged the animations as flipbooks, placing one frame in each cell. This is a very cheap 2D animation technique that works very well. Afterwards, the paintings left the walls and started surrounding you. The cave ruptured and you ended up in a kind of limbo universe.
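A flipbook packs every frame of an animation into one texture atlas and slides a UV window across it each tick, so the GPU only ever swaps texture coordinates, never textures. A minimal sketch of the UV math in plain Python (Unreal does this in a material graph; the function and parameter names here are our own illustration):

```python
def flipbook_uv(time_s, fps, rows, cols):
    """Return the (u, v) offset and (w, h) scale of the current
    flipbook frame inside a rows x cols texture atlas."""
    total = rows * cols
    frame = int(time_s * fps) % total   # looping frame index
    row, col = divmod(frame, cols)      # which atlas cell holds this frame
    w, h = 1.0 / cols, 1.0 / rows       # size of one cell in UV space
    return (col * w, row * h), (w, h)

# Frame 0 of a 4x4 flipbook sits in the top-left cell of the atlas.
offset, scale = flipbook_uv(0.0, fps=12, rows=4, cols=4)
```

A shader then samples the atlas at `offset + uv * scale`, which is why the technique is so cheap: one texture bind, one extra multiply-add per pixel.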
Interaction was an issue: we couldn't afford the resources to optimize it. So we removed it. The paintings became pure decoration, and you reached a group of Neanderthals waiting for you. Then, amid chants, you start floating, leaving your body behind: your soul separates and an astral journey begins.
However, the body itself was a problem. To this day, we haven't found a studio that has designed a virtual body that doesn't feel weird and break immersion. One thing was especially hard: what happened when you turned around? Did the digital body have to turn too? What if you only turned 45 degrees? We settled on letting the body sway slightly to the sides depending on where you looked, and if you turned 180 degrees, the body turned with you. It wasn't a bad solution, but it didn't feel natural. Getting it right would have taken so long that we dropped the body altogether.
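The compromise above (slight sway for small head turns, a full-body turn once the head swings far enough) boils down to a simple yaw rule. A hypothetical sketch of the idea, not the studio's actual code; the gain and snap threshold are made-up values:

```python
def body_yaw(body, head, sway_gain=0.1, snap_at=120.0):
    """Follow the head yaw with a slight sway; snap the whole body
    around once the head-body difference reaches the snap angle.
    All angles in degrees."""
    # signed smallest difference from body to head, in (-180, 180]
    diff = (head - body + 180.0) % 360.0 - 180.0
    if abs(diff) >= snap_at:
        return head                 # head turned far: bring the body around
    return body + sway_gain * diff  # otherwise sway gently toward the head

# A small 45-degree head turn only sways the body a little;
# a 170-degree turn snaps the body to face the same way as the head.
```

Run once per frame, this converges the body toward the head smoothly, which is exactly why it "wasn't a bad solution": the motion is continuous, even if it never quite reads as natural.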
The cave was also complicated, but for artistic reasons. Since the process wasn't done properly, there was no final concept art to use as a reference for modeling, so we had to build several different caves, improvising to find out which one worked best for the experience (small or big? snowy or icy?).
When your characters are creepy: the uncanny valley
The cave assets, such as stairs, tools and the characters, were handled by an outsourcing company we didn't get along with very well. Once again we lacked a defined style, and we didn't know what to do with the animations. In the end, we fell straight into the uncanny valley.
In 1970, Masahiro Mori proposed the theory that, as an anthropomorphic figure looks more and more human, people's response grows more empathetic, up to a point where it suddenly flips into strong revulsion. If the figure keeps getting more similar, until it is indistinguishable from a real human, the response turns positive again. That dip is called "the uncanny valley", and in it you'll find creepy Japanese robots, the Tin Toy baby from Pixar, and… our Neanderthals.
Let's get nostalgic for a moment and remember a scene we loved but had to cut: the eagle
This was going to be the ending of Tomorrow: you become an eagle and fly between mountains. "Giving the user a feeling of vertigo" turned into something more like "instant puke" (also known as cybersickness). Diving mid-flight isn't as fun as it seemed. Oh well: we had already modeled and textured the eagle from an asset, so we stored it in a drawer. As a nod, you can hear an eagle's cry in the mountain scene. RIP.
However, it wasn't in vain: a few months later we worked on a project for Fly Emirates that included a scene with a hawk, so we reused the asset and finally gave it a chance to fly free. More or less.
We had another scene that didn't survive the schedule: the Reichstag building. We wanted to recreate the moment when Yevgeny Khaldei took the famous picture, Raising a Flag over the Reichstag, on 2 May 1945. The picture symbolizes the Allies' victory over Germany.
Back to the experience: we leave the cave behind and cross the forest (an exercise in tree optimization, reducing every tree to its minimal geometric expression, scattering triangles along the snowy valley, giving them the right textures, and hunting for snow and ice shaders that didn't look like clay) until we arrive at a place we called "The Monolith".
We made a mistake with the natural settings: the meshes were too heavy. We trusted the engine to handle all that geometry, but it couldn't; nothing was optimized. When we realized we had to lower the polygon count for the engine to run it, we exported the terrains to 3ds Max and used ProOptimizer to reduce the meshes while preserving the texture UV coordinates. Afterwards, we repositioned and welded the vertices so everything fit together perfectly. Along the way, we also removed all the background geometry you couldn't see, to make the scene lighter.
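Welding (fusing) vertices means collapsing points that end up within a tiny distance of each other after decimation, so neighbouring triangles share vertices again instead of leaving hairline cracks. A toy sketch of the idea using spatial-grid snapping; this is our own illustration, not the ProOptimizer algorithm:

```python
def weld_vertices(vertices, epsilon=1e-3):
    """Merge vertices closer than roughly epsilon by snapping each one
    to a grid cell; returns the welded list and an old->new index remap."""
    welded, remap, seen = [], [], {}
    for v in vertices:
        key = tuple(round(c / epsilon) for c in v)  # grid cell id
        if key not in seen:                         # first vertex in this cell
            seen[key] = len(welded)
            welded.append(v)
        remap.append(seen[key])                     # where this vertex went
    return welded, remap

# Two nearly identical corners collapse into one shared vertex:
verts = [(0.0, 0.0, 0.0), (0.0004, 0.0, 0.0), (1.0, 0.0, 0.0)]
welded, remap = weld_vertices(verts)  # welded has 2 entries; remap == [0, 0, 1]
```

The remap table is then used to rewrite the triangle indices, which is also the moment you can drop any triangle whose three indices became identical.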
The monolith: how we faked 2D videos to not make Unreal angry
The monolith was complicated from the start: we didn't know what we wanted. We knew it had to be a floating geometric shape with flat faces, but that was about it. We went through different designs (converging pyramids, etc.) until we found a shape that was perfect for the job.
For a while, we left the monolith there, floating alone on the lake, with a mirror texture that reflected nothing (we didn't want to mess with the lights until we knew what we wanted). Later it got a shiny black color, like onyx, until we decided it had to show videos on each face. Panic. Terror. Unreal and videos with different frame rates don't get along, and we knew it.
At first, we mapped each large face and put the videos on them; the smaller sides stayed black. Then we tried playing a video on every face, but we couldn't figure out how to cover the whole monolith in video. We tried mapping the monolith completely, all faces together, and creating one video with all the clips playing at once. But the textures had to be huge for all of them to be visible, and we hit a compression problem: we couldn't export a 4096x4096 video. We tried the H.265 codec, but with the Unreal version we were using there was no way, and we couldn't wait.
After hitting that wall, we started over: the videos would play on each face, but the monolith and the videos would be independent, with a layer over the monolith where the video would play. The videos were still a problem, though, so we did the same as with the wall paintings: flipbooks.
Every face got a special shader hosting a 16x16 flipbook containing the video frames. Funnily enough, virtual reality has had us using basic, archaic techniques that nobody uses anymore because modern PCs can handle far heavier things without breaking a sweat. The same happened with mobile phones: whenever we take a big leap, we have to step back a few paces to keep going.
The thing is, this didn't look good either: the video looked like a sticker. Even after blurring the edges and adjusting the opacity, it still looked rough. So we applied a bump offset that made the video appear to sit inside the Monolith rather than on its surface, and that also faked its projection depending on the camera. The result was good.
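Bump offset (classic parallax mapping) shifts the texture sample along the tangent-space view direction, scaled by a height value, so a flat face reads as having interior depth that slides correctly as the camera moves. A simplified single-step version in plain Python (Unreal exposes this as the BumpOffset material node; the function and parameter names here are ours):

```python
def bump_offset_uv(u, v, view_dir, height=0.5, depth_scale=0.05):
    """Shift UVs along the tangent-space view direction, scaled by a
    height sample, to fake a surface sitting below the geometry."""
    vx, vy, vz = view_dir              # tangent-space view vector, vz > 0
    offset = depth_scale * height      # how far "inside" this pixel sits
    # classic parallax: offset the lookup by view.xy / view.z * offset
    return u + vx / vz * offset, v + vy / vz * offset

# Looking straight on (view along +z) produces no shift, so the
# effect only appears as the camera moves off-axis.
uv = bump_offset_uv(0.5, 0.5, (0.0, 0.0, 1.0))
```

Because the shift depends on the per-frame view vector, the flipbook frame appears anchored below the monolith's surface instead of stuck on it like a decal, which is the "fake projection" described above.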
Unreal, a complicated relationship
Unreal hasn’t been a headache: it has been a migraine. This project could have been done in Unity, but it would have looked a lot worse and we would have learned less. This project made us Unreal experts.
To be honest, we learned Unreal as we went, with everything that implies: we made script decisions we never validated technically, such as relying on video playback in Unreal.
The problems began when we decided to go to the laurisilva forest in the Canary Islands to shoot part of the experience. We had filmed The Ministry of Time in stereoscopy, and the result had been so good we wanted to repeat it.
So there we were, a team of ten in the middle of the Tenerife jungle, under torrential rain, between impossible slopes and flooded roads, enduring deadly cold with determined actors who followed our directions half-naked.
The immersion is incredible: the sounds of the jungle feel close enough to touch. Our native looks right through you, as if searching for something. An arrow pierces the foliage. All of it was worth it. But then we had to get that footage into the experience.
Since we couldn't use the videos the way we wanted, we had to render a final video of the whole experience, and that's what we've been showing. We never thought we'd have to do something like that: record Tomorrow itself in 360.
In fact, we weren't able to enjoy the real-time experience until a year later. We found the Blink plugin, which fixed the issue… for €13,000 (well out of our budget for 30 seconds of video). So we tried a different, free plugin. Tomorrow will soon be on Steam, running in real time, for HTC VIVE and Oculus Rift.
Another issue we've had with Unreal is version changes over time. We started on 4.8 and ended up on 4.12, and in between, each version improved 5 things and broke 10. For example, the lighting felt different, or part of a scene stopped working, or performance dropped to 5 fps…
Optimization in Unreal was hard, but that was partly our fault. We had an absurd workflow: we prototyped with everything in, and if performance was bad, we removed things to optimize. We should never have abandoned the concept art, because it is the guide the artist needs to reflect the director's vision.
VR, the language of tomorrow
After this whole adventure, Tomorrow is finished and available on Samsung Gear VR and Google Cardboard: www.futurelighthouse.com/tomorrow.
Thank you all!