Making the “Inception effect” in Unity 3D with a few lines of code

Shahriar Shahrabi
Published in Realities.io
May 13, 2019 · 5 min read
https://twitter.com/IRCSS/status/1114444496287272960

Have you ever wondered how your brain would react if you bent the space around you in VR? Or changed the field of view to create a vertigo effect? Well, I did, so I started writing a series of shaders which deform the space around me using matrices, and ended up with an Inception-looking effect.

I am Sha, a Graphics Programmer at Realities.io, and I love messing around with shaders.

I am going to do a short breakdown of how these effects are achieved, along with my thoughts on the process. The implementation is really simple. Here are the shaders I used for these effects. You can apply this deformation to any mesh, BUT to get a similar result it needs to be high-poly enough, like the meshes I used, or be tessellated. All the effects have been done in real time in Unity. The mesh used was a Realities.io test mesh.

I wanted to change the field of view in VR and see how that feels. Unity locks the FOV if you have a headset connected. Feeling clever, I decided to adjust the Projection Matrix calculation in the Vertex Shader of the geometry.

If you don’t know what a Vertex Shader is or how projection matrices work and are interested, I highly recommend reading up on the topic; for example, this video gives a simple explanation of the basics.

For the purposes of this post, it is enough to know that to show a 2D image of a 3D scene, we first need to project the 3D geometry onto the 2D surface of our camera’s plane. This is achieved by multiplying the vertices of the 3D geometry with the Projection Matrix, which accounts for things such as the camera’s orientation and position as well as its field of view. The Vertex Shader is the program responsible for performing these calculations.

There are three steps which need to be calculated (a sketch in shader code follows the list):

  1. First the vertex position needs to be converted from object space to world space; this calculation varies from object to object.
  2. Then the world space positions need to be moved to View space.
  3. And finally to Clip space. Steps two and three vary from camera to camera. Unity combines everything in one step by creating one matrix per object which holds all these transformations: the MVP matrix.
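As a minimal sketch (assuming a standard unlit Unity shader with the usual appdata/v2f structs), splitting Unity’s single MVP multiplication back into its three steps looks something like this:

```hlsl
v2f vert (appdata v)
{
    v2f o;

    // Step 1: object space -> world space (per object)
    float4 worldPos = mul(unity_ObjectToWorld, v.vertex);

    // Step 2: world space -> view space (per camera)
    float4 viewPos = mul(UNITY_MATRIX_V, worldPos);

    // Step 3: view space -> clip space (per camera)
    o.vertex = mul(UNITY_MATRIX_P, viewPos);

    // Equivalent to the usual one-liner: o.vertex = UnityObjectToClipPos(v.vertex);
    return o;
}
```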

For my purposes, I needed to split these calculations back into three separate steps, mainly because I want to change the intermediate vectors which result from them. One simple thing to try out is to change the w component of the vector in clip space (after the MVP calculation). The w component is what the perspective divide divides by; enlarging this component will result in some funny stuff in VR. Here is the result of that experiment. I am changing the w component based on the world positions of the vertices.
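Inside the same vertex function as above, the w experiment is roughly this (a sketch only; _Strength is a made-up material property, and the actual shaders use their own curves):

```hlsl
float4 worldPos = mul(unity_ObjectToWorld, v.vertex);
o.vertex = UnityObjectToClipPos(v.vertex);

// Enlarging w shrinks the vertex toward the center of the view after the
// perspective divide; driving it with the world-space height warps the scene.
o.vertex.w += worldPos.y * _Strength;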

Here is another one: using the w component and also changing the z component in view space, I get a vertigo effect in VR.
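A sketch of that vertigo experiment, again under the same assumptions (_ZoomFactor is a hypothetical property, not from the original shaders):

```hlsl
float4 worldPos = mul(unity_ObjectToWorld, v.vertex);
float4 viewPos  = mul(UNITY_MATRIX_V, worldPos);

// Stretch or squash the scene along the view axis before projecting,
// which changes how depth relates to the apparent field of view.
viewPos.z *= _ZoomFactor;

o.vertex = mul(UNITY_MATRIX_P, viewPos);
```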

The effects are very smooth in VR, mainly because photogrammetry meshes are typically very high-poly. Hijacking the projection matrix and multiplying the intermediate result with other matrices doesn’t do anything special compared to displacing the vertices with a map: you would still need to tessellate the model to get smooth displacement, since this is a per-vertex calculation. I decided to continue with my experimentation and see what else I could get out of changing the components of the position vector along these calculation steps. I won’t show you every experiment, since some are hard to stomach even on a flat screen, but here are two which led to interesting results.

https://twitter.com/IRCSS/status/1114191029136252929
https://twitter.com/IRCSS/status/1113733441273593856

In the above examples, I modify the x and y components in view space, based on the world positions of the vertices.
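A rough sketch of that idea (the bending/“Inception” look), with made-up parameters _Frequency and _BendAmount standing in for whatever curves the real shaders use:

```hlsl
float4 worldPos = mul(unity_ObjectToWorld, v.vertex);
float4 viewPos  = mul(UNITY_MATRIX_V, worldPos);

// Offset the view-space x and y as functions of the world position:
// a sideways wave plus a curl of the ground plane upwards with distance.
float d = length(worldPos.xz);
viewPos.x += sin(worldPos.z * _Frequency) * _BendAmount;
viewPos.y += d * d * _BendAmount;

o.vertex = mul(UNITY_MATRIX_P, viewPos);
```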

This method looks very interesting in VR, and in its subtle form can have actual uses in VR experiences. The main limitation of this way of deforming is that it is dependent on the head orientation and position. Meaning: if you define a deformation, that deformation changes as you move your head. For simple things such as a plain translation, you can compensate for that by taking the head movement into account and adding your headset position to the input of your functions; however, for the more complicated stuff, this becomes harder. My tip is to base the input of your function on the world position of things as much as possible, as sketched below.
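To illustrate the head-dependence issue (a sketch only; _WorldSpaceCameraPos is Unity’s built-in camera/headset world position, while _Frequency and _BendAmount are made-up parameters):

```hlsl
float4 worldPos = mul(unity_ObjectToWorld, v.vertex);
float4 viewPos  = mul(UNITY_MATRIX_V, worldPos);

// Driving the offset with the view-space position changes the deformation
// whenever the headset moves:
//     viewPos.y += sin(viewPos.z * _Frequency) * _BendAmount;
// Driving it with the world position keeps it anchored to the scene:
viewPos.y += sin(worldPos.z * _Frequency) * _BendAmount;

// For a simple translation you could also fold the headset position into the input:
//     viewPos.y += (worldPos.z - _WorldSpaceCameraPos.z) * _BendAmount;

o.vertex = mul(UNITY_MATRIX_P, viewPos);
```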

Thanks for reading. In case you have any questions, or there is factually wrong information in the article, please contact me. The shaders are available here. If you need a mesh to experiment with, you can download one of the many meshes which my colleague Azad has on Sketchfab. If you do cool things with it, tag us and let us know; we like cool stuff.

Shahriar Shahrabi is a Graphics Programmer at Realities.io

Follow him on Twitter to keep up with all things Shaders and Photogrammetry

Download the FREE Realities app on Steam

Explore the results of our photogrammetry workflow in beautiful and detailed environments from around the world in VR.

Download from Steam for Oculus Rift / HTC Vive.

Follow the Realities.io Team

The Realities.io team travels around the world, capturing its most inaccessible wonders. Follow our story on YouTube and Twitter.
