Camera Game

Nathan Gordon
Dec 20, 2015

Case Study

Link to game.

For this year’s Christmas Experiments I made a small (tiny) camera game. Creating it was a fun experience that took a week and a half of very late nights after work.

I’ve dreamt of making a proper camera game for a long time — with the only real example being Pokémon Snap, plus a couple of games that featured cameras, like The Legend of Zelda and Beyond Good and Evil. I think it could be just as fun as a first-person shooter, but without having to kill anybody (which is pretty warped if you think about it). The ultimate goal would be to recreate the psychological reaction of achieving a ‘head shot’, which makes you feel awesome, but with a camera. One day, one day… This little experiment is at least one tiny step.

In order to make the most of the experience, I had a few goals:

  1. Make the entire UI using WebGL shaders (not HTML/CSS)
  2. Create the scene in a 3D program and render it (not real-time)
  3. Make it work on phones to harness Device Orientation

Even though I fell a bit short of my initial idea — which was to make a difficult, Where’s Wally (Waldo, Charlie) style game — I still managed to more or less achieve my basic goals.

Making the UI using shaders

Everything on the site, apart from the text (and the little arrows in the tutorial), is made using WebGL shaders. I thought it would be a great challenge, and would help me find out the pros and cons of doing so. In the end I’m pretty sold on the idea!

This next part is aimed at people who are interested in shaders but don’t have much (or any) experience, to show them how simple and fun they actually are. If you are experienced, you can probably just skip through the images and understand what’s going on.

Wait a sec. What’s a shader?

The Book Of Shaders has a great, in-depth explanation here, but basically speaking, a shader is a short piece of GLSL code that runs millions of times per second on your GPU to display the pixels on your screen. The kind I’ll be talking about (a fragment, or pixel, shader) runs once per pixel, per frame.

And why do this?

Using shaders, you can create effects that would be impossible to achieve at an acceptable frame rate in JavaScript or other code run on the CPU. They’re highly optimised graphics code.

I’m going to run through how to make this little Focus display, as it was pretty challenging for me.

To break down my thought process when writing a shader (and I’m no expert): a shader is basically a function that runs for each pixel and outputs a single colour, which is just 4 numbers ranging from 0 to 1. For example, (1, 1, 1, 1) is solid white and (0, 0, 0, 1) is solid black (the last number is the opacity/alpha).

Something to keep in mind is that each pixel doesn’t know about the others — as far as it’s concerned, it’s rendering a 1x1 image.

The really helpful information comes in the inputs, which for this example include:

  • gl_FragCoord — the location of the pixel, e.g. x: 236, y: 100
  • resolution — the size of the entire image, e.g. x: 1440, y: 900

The first of these is a built-in variable; the second needs to be passed into the shader manually (as a uniform).

Using these two, we can tell where the pixel is relative to the entire image — and this is all we need to place and draw our elements on the page.
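To make this concrete, here’s that calculation translated into plain Python (a sketch for illustration; it’s not code from the game, and the pixel values are made up):

```python
def relative_position(frag_coord, resolution):
    """Normalise a pixel coordinate into the 0-1 range,
    like frag_coord.xy / resolution in GLSL."""
    x, y = frag_coord
    width, height = resolution
    return (x / width, y / height)

# The example pixel above, (236, 100) on a 1440x900 canvas,
# sits roughly 16% across and 11% up the image.
print(relative_position((236, 100), (1440, 900)))
```

Once a position is normalised like this, the same layout code works at any resolution.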

First step: creating the vertical lines.

float verticalLines(float thickness, float gutter) {
    return step(gutter, mod(gl_FragCoord.x, thickness + gutter));
}

void main() {
    vec3 color = vec3(verticalLines(2.0, 5.0));
    gl_FragColor = vec4(color, 1.0);
}

This code uses two important methods, ‘step’ and ‘mod’.

‘Step’ returns 1 or 0, depending on whether the second value is higher or lower than the first.

‘Mod’ (modulo) wraps a number into a looping range. For example, mod(12, 10) would return 2. This would be written as ‘12 % 10’ in JavaScript.
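To make step and mod concrete, here’s the vertical-lines logic rewritten in plain Python (again just a sketch for illustration, not code from the project):

```python
def step(edge, value):
    """GLSL step(): 0.0 if value is below edge, otherwise 1.0."""
    return 0.0 if value < edge else 1.0

def mod(a, b):
    """GLSL mod(): wrap a into the range [0, b)."""
    return a % b

def vertical_lines(x, thickness=2.0, gutter=5.0):
    """1.0 (white) inside a line, 0.0 (black) inside a gutter."""
    return step(gutter, mod(x, thickness + gutter))

# With thickness 2 and gutter 5 the pattern repeats every 7 pixels:
# x = 0..4 fall in the gutter (0.0), x = 5..6 land on a line (1.0).
print([vertical_lines(x) for x in range(10)])
# [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0]
```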

This site is a very helpful glossary explaining all of the GLSL functions. I refer to it more often than I probably should.

So as the x coordinate increases, it keeps wrapping back into my range, and step returns 0 or 1 depending on whether a line should be drawn there, resulting in a black or white colour.

‘gl_FragColor’ is where we set the colour that is displayed for the pixel in question.

Next is to make the curve.

Start by making a function return 0 to 1 for a specific range.

clamp((gl_FragCoord.x - resolution.x * 0.4) / (resolution.x * 0.1), 0.0, 1.0);

‘Clamp’ restricts the first value to the second (min) and third (max).

Then we’re going to use this range to create a smooth curve by multiplying this number by itself, creating a quadratic curve.

float curve(float range, float height) {
    return step(gl_FragCoord.y, range * range * height);
}

We display this using the step function, comparing our curve with the y coordinate. So if the pixel’s y coordinate is below the result of the curve equation (which uses the x coordinate), step returns 1, otherwise 0.

I’ve also added a height variable, to make the curve taller than 1 pixel.
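Simulating the shader math in Python makes the behaviour easy to poke at (the resolution and height values here are hypothetical, not the game’s actual numbers):

```python
def clamp(value, low, high):
    """GLSL clamp(): restrict value to the range [low, high]."""
    return max(low, min(high, value))

def step(edge, value):
    """GLSL step(): 0.0 if value is below edge, otherwise 1.0."""
    return 0.0 if value < edge else 1.0

def curve(x, y, resolution_x=1440.0, height=60.0):
    """1.0 when the pixel at (x, y) sits below the quadratic ramp, else 0.0."""
    r = clamp((x - resolution_x * 0.4) / (resolution_x * 0.1), 0.0, 1.0)
    return step(y, r * r * height)

# Left of the ramp the range is still 0, so nothing is drawn;
# further right the ramp has risen to the full height of 60.
print(curve(576.0, 1.0))   # 0.0
print(curve(720.0, 30.0))  # 1.0
```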

We then create a second curve that is the exact same but just flipped horizontally. This is done just by changing the range value.

1.0 - clamp((gl_FragCoord.x - resolution.x * 0.5) / (resolution.x * 0.1), 0.0, 1.0);

Then we take the values from the two curves (each either 0 or 1) and multiply them together. This performs a boolean intersect operation, leaving 1 only where both curves overlap, and 0 elsewhere.
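Because each mask is strictly 0 or 1, multiplication acts exactly like a boolean AND. A quick Python sanity check (with made-up mask values):

```python
# Multiplying two 0/1 masks keeps 1 only where both are 1, like an AND.
left = [0.0, 1.0, 1.0, 1.0]    # mask from the first curve
right = [1.0, 1.0, 1.0, 0.0]   # mask from the flipped curve
overlap = [a * b for a, b in zip(left, right)]
print(overlap)  # [0.0, 1.0, 1.0, 0.0]
```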

And then we do a second version of this, flipped vertically and shifted a couple of pixels.

Then we can do another intersect by multiplying them together.

Let’s bring back our lines, and intersect with them as well.

Close. However, the way the lines are clipped against the curve doesn’t look great — it would be better if they all had flat tops.

We can create a stepped curve instead of a smooth one by using modulo inside of our range function.

clamp(((gl_FragCoord.x - mod(gl_FragCoord.x, 10.0)) - resolution.x * 0.45) / (resolution.x * 0.1), 0.0, 1.0);

This is the same as the first example, but with the modulo of the x coordinate subtracted from the coordinate itself.

Modulo is your friend; you can do some pretty amazing stuff with it very simply. I needed to visualise what it was actually doing to really understand it, though.

Here’s a modulo curve at its most basic:

y = mod(x, 1)

And if we subtract this from a linear curve (y = x) it creates a stepped curve.

y = x - mod(x, 1)

So this is what’s happening to our range. And this creates a nicer clipping.
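In Python terms (illustrative values only; the shader above uses a 10-pixel step), subtracting mod(x, step_size) snaps x down to the nearest multiple of the step size:

```python
def quantize(x, step_size):
    """Snap x down to the nearest multiple of step_size: x - mod(x, step_size)."""
    return x - (x % step_size)

# Every value inside a 10-pixel band collapses to the band's left edge,
# so the curve rises in flat steps instead of a smooth ramp.
print([quantize(x, 10.0) for x in [0, 4, 9, 10, 17, 23]])
# [0.0, 0.0, 0.0, 10.0, 10.0, 20.0]
```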

That’s all there is to it. It was the most complicated UI element on the site, and was actually pretty fun to figure out.

This example would arguably have been simpler to create using separate divs and animating their individual heights, but I wanted to stick to my initial goal and keep everything inside of shaders.

Here is a Shadertoy of the technique that you can play with.

Creating the scene

I finished the functionality of the entire game in a couple of days, thanks largely to the in-house toolset developed at my current workplace, Active Theory. The rest of the time was spent creating the scene from scratch, through which I learnt a lot of new techniques.

The aim was to render out two equirectangular (spherical panorama) images, which would be wrapped around a sphere that the user could explore. The two images included one for the scene itself, and another for the depth data.

The depth image was used as an input to control the amount of blur applied to the image. By changing the focus, I could dolly along these depth values and shift the point of focus.

This is the simplest version of post-process depth of field, and it has many caveats, such as the blurred background bleeding into the foreground. However, even in this basic form it still gives the illusion of 3D depth, which was the aim: to show a rendered, complex scene that would be impossible to create in real-time, but still feels 3D thanks to this interaction.
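As a rough sketch of that idea (a Python illustration with hypothetical parameter names and values, not the game’s actual code), the blur amount for a pixel grows with its distance from the focal plane:

```python
def blur_amount(depth, focus, strength=4.0, max_blur=8.0):
    """Blur radius grows with distance from the focal plane; 0 means sharp.
    depth and focus are both in the 0-1 range read from the depth render."""
    return min(abs(depth - focus) * strength, max_blur)

# A pixel exactly at the focus depth stays sharp;
# distant pixels blur more, up to the cap.
print(blur_amount(0.5, 0.5))  # 0.0
print(blur_amount(1.0, 0.0))  # 4.0
```

Sliding the focus value back and forth is what produces the rack-focus effect described above.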

Here are a few shots of the modelled elements. I would have loved to spend more time on the scene itself, but I only just finished in time, and it was already taking more than 3 hours to render on my poor little MacBook (it churned through the night before submission).

Fin

For the second year in a row, I had a great time making an entry for Christmas Experiments. If it interests you, you should get amongst it! There’s always next year.

Personally, I find the experience more immersive on mobile as it harnesses the Device Orientation to make it feel more like a real camera.

Thanks a lot for reading. In case you missed it, here’s a link to the game itself.
