Codevember breakdowns Part 2: Depth Texture to World Position

林逸文 Yi-Wen LIN
Jan 9, 2018

Continuing from the first part, this second part is an extension of the shadow mapping technique. It still involves creating a camera and calculating the shadow matrix; however, instead of one camera I create two, so that I can roughly capture the shape of the object and its position in two textures. With these I can then create effects such as constraining objects to stay within the model, etc.

Before we start, this technique requires slightly more knowledge about render to texture, instancing and depth textures. If you are not familiar with them, please have a look at these techniques first. And again I am using my WebGL tools for this, but most of the work is in the math and the shaders, which should be easy to convert to any library. For the math, I am using the amazing gl-matrix.

In this breakdown I will go through how I created the voxel look effect in the image above. Here is the source code and a live demo.

The first step is creating a bunch of cubes that cover the whole model. I'm using instancing for the cubes: I first create one basic cube, then add an offset attribute for each cube (aPosOffset).

// create cubes, using instancing
this.cubeSize = 0.1;
this.meshCube = Geom.cube(this.cubeSize, this.cubeSize, this.cubeSize);
const w = 0.8;
const h = 1.8;
const d = 1.2;
const positions = [];
for(let x = -w; x <= w; x += this.cubeSize) {
  for(let y = -h; y <= h; y += this.cubeSize) {
    for(let z = -d; z <= d; z += this.cubeSize) {
      positions.push([x, y, z]);
    }
  }
}
this.meshCube.bufferInstance(positions, 'aPosOffset');
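If you're not using my tools, bufferInstance roughly corresponds to uploading the offsets as a vertex attribute that advances once per instance. A minimal sketch of that in plain WebGL 1 with the ANGLE_instanced_arrays extension (the prog program object and the exact buffer layout here are assumptions, not my library's internals):

const ext = gl.getExtension('ANGLE_instanced_arrays');
// flatten the [x, y, z] offsets into one typed array
const instanceData = new Float32Array(positions.flat());
const offsetBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, offsetBuffer);
gl.bufferData(gl.ARRAY_BUFFER, instanceData, gl.STATIC_DRAW);
const loc = gl.getAttribLocation(prog, 'aPosOffset');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 3, gl.FLOAT, false, 0, 0);
// advance aPosOffset once per instance instead of once per vertex
ext.vertexAttribDivisorANGLE(loc, 1);
// then draw the cube with ext.drawElementsInstancedANGLE(...), passing positions.length as the instance count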

In the vertex shader, I add the position offset to the vertex position to get the final position:

void main(void) {
  vec3 position = aVertexPosition + aPosOffset;
  gl_Position = uProjectionMatrix * uViewMatrix * uModelMatrix * vec4(position, 1.0);
}

Using instancing not only gives us better performance; more importantly, this 'aPosOffset' is also the center point of each cube. Later on we are going to test whether this point is inside the model or not in order to resize the cube.

The next step is to set up 2 shadow maps, one from the front and the other from the back. After this we will use the depth textures to recreate the world position. With the world position we can test whether each cube is inside the model or not.

I am using an orthographic camera here (without perspective), and then it's the usual way to get the shadow matrix:

// normalise the value from -1 ~ 1 to 0 ~ 1 for texture sampling
this._biasMatrix = mat4.fromValues(
  0.5, 0.0, 0.0, 0.0,
  0.0, 0.5, 0.0, 0.0,
  0.0, 0.0, 0.5, 0.0,
  0.5, 0.5, 0.5, 1.0
);
this._shadowMatrix0 = mat4.create();
this._shadowMatrix1 = mat4.create();
// setup the 2 cameras
const y = .5;
this.pointSource0 = vec3.fromValues(0, y, 7);
this.pointSource1 = vec3.fromValues(0, y, -7);
this._target0 = vec3.fromValues(0, y, 4);
this._target1 = vec3.fromValues(0, y, -4);
let s = 1.5;
// using an orthographic camera (no perspective):
// mat4.ortho(out, left, right, bottom, top, near, far)
this._cameraLight0 = new CameraOrtho();
this._cameraLight0.ortho(-s, s, -s, s, 1, 50);
this._cameraLight0.lookAt(this.pointSource0, [0, y, 0]);
this._cameraLight1 = new CameraOrtho();
this._cameraLight1.ortho(-s, s, -s, s, 1, 50);
this._cameraLight1.lookAt(this.pointSource1, [0, y, 0]);
mat4.multiply(this._shadowMatrix0, this._cameraLight0.projection, this._cameraLight0.viewMatrix);
mat4.multiply(this._shadowMatrix0, this._biasMatrix, this._shadowMatrix0);
mat4.multiply(this._shadowMatrix1, this._cameraLight1.projection, this._cameraLight1.viewMatrix);
mat4.multiply(this._shadowMatrix1, this._biasMatrix, this._shadowMatrix1);

Now we are set with the cameras; next we are going to use them to render the model to textures:

_getDepthMaps() {
  // setup render to texture
  const size = 512;
  this.fboModel0 = new FrameBuffer(size, size);
  this.fboModel1 = new FrameBuffer(size, size);

  // bind the frame buffer
  this.fboModel0.bind();
  GL.clear(0, 0, 0, 0);
  // set up the front camera
  GL.setMatrices(this._cameraLight0);
  // draw the model
  this._draw3DModel();
  // unbind the frame buffer
  this.fboModel0.unbind();

  // bind the frame buffer
  this.fboModel1.bind();
  GL.clear(0, 0, 0, 0);
  // set up the back camera
  GL.setMatrices(this._cameraLight1);
  // draw the model
  this._draw3DModel();
  // unbind the frame buffer
  this.fboModel1.unbind();
}

With this you should get these textures: (front, back, depth front, depth back)
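A small note on where the depth textures come from: they are the depth attachments of the framebuffers above. If your framebuffer wrapper doesn't expose one, this is roughly how a depth texture can be attached in plain WebGL 1 (it needs the WEBGL_depth_texture extension; the variable names are only for illustration):

gl.getExtension('WEBGL_depth_texture');
const depthTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, depthTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, size, size, 0, gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);
// with the frame buffer bound, attach the texture as its depth attachment
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTexture, 0);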

By the way, it's very important to understand that this method doesn't work with every 3D model. If there's a hole in the model then it will probably fail. It also only roughly captures the shape: there will be some glitches on the sides of the model because we are only using 2 cameras. Using 6 would probably improve the quality, but it's going to be heavy on the shader, so it'll be up to you to decide how you are going to use it. In my case the front and back cameras are enough, so I go with just 2.

What is valuable to us are the depth textures; we can use them to recreate the world position using this:

vec3 WorldPosFromDepth(float depth) {
  float z = depth * 2.0 - 1.0;

  vec4 clipSpacePosition = vec4(TexCoord * 2.0 - 1.0, z, 1.0);
  vec4 viewSpacePosition = projMatrixInv * clipSpacePosition;

  // Perspective division
  viewSpacePosition /= viewSpacePosition.w;

  vec4 worldSpacePosition = viewMatrixInv * viewSpacePosition;

  return worldSpacePosition.xyz;
}

In order to do this you'll need the inverse of the projection and view matrices:

this.projInvert0 = mat4.create();
mat4.invert(this.projInvert0, this._cameraLight0.projection);
this.viewInvert0 = mat4.create();
mat4.invert(this.viewInvert0, this._cameraLight0.matrix);
this.projInvert1 = mat4.create();
mat4.invert(this.projInvert1, this._cameraLight1.projection);
this.viewInvert1 = mat4.create();
mat4.invert(this.viewInvert1, this._cameraLight1.matrix);
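How exactly you upload these depends on your setup. As a rough sketch with raw WebGL calls rather than my tools (the prog program object, the texture objects and the texture units are assumptions; the uniform names match the shader below):

gl.useProgram(prog);
// shadow and inverse matrices (gl-matrix mat4s are Float32Arrays, so they can be passed directly)
gl.uniformMatrix4fv(gl.getUniformLocation(prog, 'uShadowMatrix0'), false, this._shadowMatrix0);
gl.uniformMatrix4fv(gl.getUniformLocation(prog, 'uShadowMatrix1'), false, this._shadowMatrix1);
gl.uniformMatrix4fv(gl.getUniformLocation(prog, 'uProjInvert0'), false, this.projInvert0);
gl.uniformMatrix4fv(gl.getUniformLocation(prog, 'uViewInvert0'), false, this.viewInvert0);
gl.uniformMatrix4fv(gl.getUniformLocation(prog, 'uProjInvert1'), false, this.projInvert1);
gl.uniformMatrix4fv(gl.getUniformLocation(prog, 'uViewInvert1'), false, this.viewInvert1);
// uCubeSize presumably relates to this.cubeSize
gl.uniform1f(gl.getUniformLocation(prog, 'uCubeSize'), this.cubeSize);
// the two depth textures on texture units 0 and 1
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, depthTexture0);
gl.uniform1i(gl.getUniformLocation(prog, 'depth0'), 0);
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, depthTexture1);
gl.uniform1i(gl.getUniformLocation(prog, 'depth1'), 1);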

Now we are all set to render our cubes. Make sure you pass in both depth textures and all the matrices as uniforms (roughly as sketched above), then we can head to the shader:

float getSurfacePosition(mat4 shadowMatrix, vec3 position, mat4 invertProj, mat4 invertView, sampler2D textureDepth) {
  // get the shadow coord
  vec4 vShadowCoord = shadowMatrix * vec4(position, 1.0);
  vec4 shadowCoord = vShadowCoord / vShadowCoord.w;
  vec2 uv = shadowCoord.xy;

  // reconstruct the world position from the depth buffer
  float depth = texture2D(textureDepth, uv).r;
  float z = depth * 2.0 - 1.0;
  vec4 clipSpacePosition = vec4(uv * 2.0 - 1.0, z, 1.0);
  vec4 viewSpacePosition = invertProj * clipSpacePosition;
  viewSpacePosition /= viewSpacePosition.w;
  vec4 worldSpacePosition = invertView * viewSpacePosition;

  return worldSpacePosition.z;
}

void main(void) {
  vec3 posOffset = aPosOffset;

  // get the surface position from the front and back depth buffers
  float z0 = getSurfacePosition(uShadowMatrix0, posOffset, uProjInvert0, uViewInvert0, depth0);
  float z1 = getSurfacePosition(uShadowMatrix1, posOffset, uProjInvert1, uViewInvert1, depth1);

  // check if the center of the cube is within the z range
  // if it is, keep the original size; if not, scale down
  float scale = 1.0;
  if(posOffset.z < z0 + uCubeSize && posOffset.z > z1 - uCubeSize) {
    scale = 1.0;
  } else {
    scale = 0.05;
  }

  vec3 position = aVertexPosition * scale + aPosOffset;
  gl_Position = uProjectionMatrix * uViewMatrix * uModelMatrix * vec4(position, 1.0);

  vTextureCoord = aTextureCoord;
  vNormal = aNormal;
}

And then it's done!

The rest of what I did for my experiment was adding noise to animate the cubes and an interaction to make the cubes disappear around my cursor.
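I won't break down the cursor part here, but the rough idea is to pass a hit position in world space as a uniform and shrink any cube whose center is close to it. A hypothetical couple of lines for the cube vertex shader, placed before position is computed (uHit and uRadius are made-up names):

// shrink the cubes near the cursor hit point
float distToCursor = distance(posOffset, uHit);
scale *= smoothstep(0.0, uRadius, distToCursor);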

Again, this is just a simple way to capture the position of a model and it has its limits. The good thing about this method is that it's not heavy, so you can do it at runtime and even with animated models. It is a bit tedious because of all the inverse matrices you need to create and then pass in as uniforms. If your project can afford to use float32 textures, it could be simplified to just rendering the world position into the float32 texture, so you don't need to do the whole depth-buffer-to-world-position conversion. If you can't use float textures but you can use depth textures, then this could come in handy.
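For reference, the float32 variant mentioned above would just be a fragment shader in the model pass that writes the world position straight into a floating point colour texture. A minimal sketch, assuming a vWorldPosition varying computed in the vertex shader as (uModelMatrix * vec4(aVertexPosition, 1.0)).xyz:

varying vec3 vWorldPosition;

void main(void) {
  // store the world position directly, no depth reconstruction needed
  gl_FragColor = vec4(vWorldPosition, 1.0);
}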

If you are looking for a more precise way, you should check out Edan's work:

During Codevember I kept reusing this in several experiments, for example:

I force the particles to move on the surface of the model (by setting the z) and make the pillars align with the surface normal. To get the surface normal, just render your model's normals to a texture, then use the same shadow coord to look up the normal from that texture. In this one you can clearly see the glitch on the left / right side of the model that I mentioned.
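A sketch of that normal lookup, assuming the model's normals were rendered into a textureNormal with the same camera (encoded as vNormal * 0.5 + 0.5 so they fit in a regular colour texture):

// reuse the same shadow coord as in getSurfacePosition
vec4 shadowCoord = uShadowMatrix0 * vec4(posOffset, 1.0);
vec2 uv = (shadowCoord / shadowCoord.w).xy;
// decode the normal back from the 0 ~ 1 range
vec3 surfaceNormal = texture2D(textureNormal, uv).xyz * 2.0 - 1.0;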

And in this one:

The way I constrain the particles inside the model is by pushing a particle back along the inverse of the surface normal whenever it moves outside the model.
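In the particle update shader that boils down to something like this (a sketch only: position is the particle's world position, z0 / z1 and surfaceNormal come from the lookups above, and uPushStrength is a made-up uniform):

// if the particle has left the model, nudge it back along the inverted surface normal
if(position.z > z0 || position.z < z1) {
  position -= surfaceNormal * uPushStrength;
}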

And a couple more, including this, this and this. This technique really helped me a lot during Codevember; I probably couldn't have finished it without this trick. I consider it more of a hacky way to achieve some effects quickly. If the flaws mentioned don't matter to your project, then it's a quick way to create effects.

So that's it for part 2, I hope you enjoyed it. Stay tuned for the next part!
