Ray Tracing Adventures Part III: Multisampling

Doruk B.
7 min readNov 30, 2022

--

In the previous post we improved the performance of our raytracer. Now it’s time to add new fancy functionality and make it slow once more. To start with, we will add multisampling support. Once we have that, it’s not too difficult to implement distribution raytracing to support rough materials, depth of field, motion blur and soft shadows!

Let’s start with multisampling, as it will be needed for the others. Strictly speaking, only soft shadows require it, but without it the results would be too noisy.

Multisampling

Rendering a scene by ray tracing corresponds, to some extent, to learning about the actual light distribution by sampling it with a fixed number of rays. Previously, we rendered each scene by sending a single primary ray through the center of each pixel, meaning we had single sampling. This approach is prone to aliasing artifacts, and multisampling is a technique that aims to mitigate the aliasing problem.

Sampling a signal has a rich in-depth theory behind it that is crucial to how computers and electronics work, but we will spare you the mathematical details here and focus on the implementation.

Credits: Ahmet Oguz Akyuz, CENG 795 — Advanced Ray Tracing

First off, how do we select the different samples inside a pixel?

There are many approaches, some quite simple, such as just picking a random point inside the pixel borders. However, those approaches might introduce rendering artifacts if we happen to get a really bad random distribution, such as all samples clumping in one corner. One might think the chance of that is low, but allow me to remind you that we are dealing with millions of pixels, so the expected number of such pixels is not so small anymore.

One relatively easy and good enough method is called Stratified Random Sampling.

Credits: Ahmet Oguz Akyuz, CENG 795 — Advanced Ray Tracing

The implementation is as simple as this:

Mersenne-Twister random number generator
Computing Jittered sample offsets inside a pixel.
Cast it away!
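The steps captioned above can be sketched as follows. This is a minimal illustration, assuming an n-by-n grid of cells per pixel; the `Vec2` type and `jitteredSamples` name are illustrative, not taken from the original code.

```cpp
#include <random>
#include <vector>

// A sample offset inside a pixel, in [0,1) x [0,1) pixel-local coordinates.
struct Vec2 { double x, y; };

// Stratified (jittered) sampling: divide the pixel into an n-by-n grid of
// cells and pick one uniformly random point inside each cell, so samples
// cannot all clump in one corner.
std::vector<Vec2> jitteredSamples(int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> uniform01(0.0, 1.0);
    std::vector<Vec2> samples;
    samples.reserve(static_cast<size_t>(n) * n);
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            // Cell (i, j) spans [i/n, (i+1)/n) x [j/n, (j+1)/n).
            samples.push_back({ (i + uniform01(rng)) / n,
                                (j + uniform01(rng)) / n });
        }
    }
    return samples;
}
```

With n = 10 this yields the 100 samples per pixel used in the examples below; each offset is then added to the pixel corner before the primary ray is cast.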

Now we have multiple rays, each originating at a slightly different location inside a pixel and each returning a color value. Say we have 100 samples per pixel: what do we do with those 100 color values?

Well, in the end we will write a single color value to our image buffer for that pixel, so we need to combine all those samples into one value. This is where filtering comes into play.

In our case, filtering, or reconstruction, is the process of creating a final color after we have obtained multiple samples for a single pixel.

The simplest approach that might have occurred to you is to sum up the samples per color channel and divide by the sample count; in short, take the average value for each channel!

It turns out this is theoretically the worst filter, yet quite decent in practice. Still, a more interesting option is to use a 2D Gaussian curve, which is what we will implement next.

Gaussian filtering is based on the idea of giving more weight to samples that are closer to the center of a pixel. The value sampled from the curve is used as a weight that multiplies the color value of that sample.

Credits: Ahmet Oguz Akyuz, CENG 795 — Advanced Ray Tracing

One thing to note here is that we are free to “choose” whatever standard deviation we want, but there is a smarter choice that uses a handy property of the Gaussian curve: about 99.7% of the total area under the curve falls within 3𝜎 of the mean.

Credits: Ahmet Oguz Akyuz, CENG 795 — Advanced Ray Tracing

Time for some code. Noting that part of the curve is a constant once the standard deviation is set, we can cache it.

Gaussian2D Struct
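A minimal sketch of such a struct could look like this. The field and struct names are illustrative; the sigma choice in the usage note follows the 3𝜎 observation above, assuming a pixel width of 1.

```cpp
#include <cmath>

// 2D isotropic Gaussian centered on the pixel center. Both the
// normalization factor 1/(2*pi*sigma^2) and the exponent scale
// -1/(2*sigma^2) depend only on sigma, so they are computed once
// in the constructor and cached.
struct Gaussian2D {
    double norm;     // cached 1 / (2 * pi * sigma^2)
    double expScale; // cached -1 / (2 * sigma^2)

    explicit Gaussian2D(double sigma) {
        const double pi = 3.14159265358979323846;
        norm = 1.0 / (2.0 * pi * sigma * sigma);
        expScale = -1.0 / (2.0 * sigma * sigma);
    }

    // Weight for a sample at offset (dx, dy) from the pixel center.
    double weight(double dx, double dy) const {
        return norm * std::exp(expScale * (dx * dx + dy * dy));
    }
};
```

With a pixel width of 1, choosing sigma = (1/2)/3 = 1/6 places the pixel edge at 3𝜎, so about 99.7% of the curve’s weight falls inside the pixel.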

Once we have that, applying it is quite simple. We divide by the sum of the weights to make sure the total weight is 1, so no extra brightness is introduced; the result is just a more clever “average”.

Gaussian Filter Applied to samples
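A self-contained sketch of the weighted reconstruction might look like this (the `Sample` layout and names are assumptions for illustration). Note that the constant factor 1/(2𝜋𝜎²) cancels when dividing by the weight sum, so the bare exponential is enough here:

```cpp
#include <cmath>
#include <vector>

// One sample: its offset from the pixel center and the color its ray returned.
struct Sample {
    double dx, dy;
    double r, g, b;
};

struct Color { double r, g, b; };

// Gaussian reconstruction: scale each sample's color by its Gaussian weight,
// then divide by the sum of weights so the weights effectively sum to 1 and
// no brightness is gained or lost.
Color gaussianFilter(const std::vector<Sample>& samples, double sigma) {
    double expScale = -1.0 / (2.0 * sigma * sigma);
    Color out{0.0, 0.0, 0.0};
    double weightSum = 0.0;
    for (const Sample& s : samples) {
        double w = std::exp(expScale * (s.dx * s.dx + s.dy * s.dy));
        out.r += w * s.r; out.g += w * s.g; out.b += w * s.b;
        weightSum += w;
    }
    out.r /= weightSum; out.g /= weightSum; out.b /= weightSum;
    return out;
}
```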

That was all for multisampling, now we are actually ready to implement the fancy stuff.

Distribution Ray tracing

Distribution ray tracing, or stochastic ray tracing (named as such due to its use of Monte Carlo methods), is a way of rendering “soft” phenomena. The main idea and requirement is to send multiple rays when computing a quantity such as color, instead of a single ray. We already have multiple rays per pixel thanks to multisampling, so we are more or less there. We will implement rough materials with glossy reflections, motion blur, depth of field and soft shadows. Each feature requires slightly different things, so let’s investigate them case by case.

Glossy Reflections

The main idea is to slightly perturb the computed “perfect” reflection direction. This follows from the fact that no surface is perfectly flat in reality; they all have irregularities that naturally result in slightly different reflection directions. Simulating that improves the realism of our materials.

Credits: Ahmet Oguz Akyuz, CENG 795 — Advanced Ray Tracing

Ultimately, we want to compute an offset vector and add it to the perfect direction. Not all offsets will work correctly, though; we have to restrict our offsets to certain directions. Otherwise, it’s not hard to end up with reflection directions that are completely wrong.

This is where the idea of an orthonormal basis (ONB) comes into play. We are basically going to construct a plane whose normal is the original reflection direction, and all our offset vectors will be contained in that plane.

For this, the slides are great.

Let n = original reflection direction for our case.

Credits: Ahmet Oguz Akyuz, CENG 795 — Advanced Ray Tracing

Once the ONB is ready, we need uniform random numbers to create uniformly distributed offset vectors.

Uniform random numbers

Then the modified direction r’ is given by:

Credits: Ahmet Oguz Akyuz, CENG 795 — Advanced Ray Tracing
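Putting the ONB construction and the formula for r’ together, a sketch might look like this. The basis is built with the common trick of overwriting the smallest-magnitude component with 1 to get a non-parallel helper vector; names and the [-0.5, 0.5] offset range are assumptions for illustration.

```cpp
#include <cmath>
#include <random>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Perturb the perfect reflection direction r (assumed unit length) inside
// the plane orthogonal to it, scaled by the material roughness.
Vec3 glossyReflect(const Vec3& r, double roughness, std::mt19937& rng) {
    // Build an ONB (u, v) around r: copy r, overwrite its smallest-magnitude
    // component with 1 to get a vector guaranteed not to be parallel to r.
    Vec3 rp = r;
    double ax = std::fabs(r.x), ay = std::fabs(r.y), az = std::fabs(r.z);
    if (ax <= ay && ax <= az) rp.x = 1.0;
    else if (ay <= az)        rp.y = 1.0;
    else                      rp.z = 1.0;
    Vec3 u = normalize(cross(rp, r));
    Vec3 v = cross(r, u);

    // Uniform random offsets along u and v.
    std::uniform_real_distribution<double> uniform(-0.5, 0.5);
    double xi1 = uniform(rng), xi2 = uniform(rng);

    // r' = normalize(r + roughness * (xi1 * u + xi2 * v))
    return normalize(r + (u * xi1 + v * xi2) * roughness);
}
```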

With zero roughness we again obtain the perfect mirror result; with a roughness of 0.1 in a particular scene setup, this is the result:

Roughness = 0.1
Roughness = 0.3

Another scene, with a high sample count of 400 at 800x800 resolution. That’s another way of saying 256,000,000 rays.

Motion Blur

To implement motion blur, we will compute a transformation (only a translation in our case, to keep things simple) and interpolate that transformation by a uniform random time value for each primary ray.

Credits: Ahmet Oguz Akyuz, CENG 795 — Advanced Ray Tracing

The motion blur transform is applied right after the ray is inverse-transformed into the object’s local space.

Motion blur applied after regular inverse transform
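The two pieces above can be sketched as follows: each primary ray gets a uniform random time in [0, 1], and in local space we shift the ray by the interpolated translation instead of moving the object (names and the motion representation are illustrative):

```cpp
#include <random>

struct Vec3 { double x, y, z; };

// Each primary ray carries a uniformly random time in [0, 1).
double sampleRayTime(std::mt19937& rng) {
    std::uniform_real_distribution<double> uniform01(0.0, 1.0);
    return uniform01(rng);
}

// Interpolated translation: at t = 0 the object sits at its original
// position, at t = 1 it has moved by the full motion vector. Rather than
// moving the object, we shift the ray origin by -t * motion in local space,
// right after the regular inverse transform.
Vec3 applyMotionBlur(const Vec3& localOrigin, const Vec3& motion, double t) {
    return { localOrigin.x - t * motion.x,
             localOrigin.y - t * motion.y,
             localOrigin.z - t * motion.z };
}
```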

There is some apparent undesired noise that we will try to solve next time, but here is the result of this:

100 samples per pixel, 351 seconds to render @800x480

Soft Shadows and Area Lights

Next up is soft shadows, and to achieve them we first need area lights. A point light cannot cast a soft shadow no matter how many rays we sample with, since it can only ever produce one unique sample: its position.

Credits: Ahmet Oguz Akyuz, CENG 795 — Advanced Ray Tracing

We then simply take a light sample for each shadow ray during shading to obtain the following result.
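Assuming a rectangular light defined by a corner point and two edge vectors (an illustrative representation, not necessarily the original one), sampling a point on it is a couple of lines:

```cpp
#include <random>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

// A rectangular area light spanned by two edge vectors from a corner.
// Every shadow ray samples a different point on the surface, which is
// exactly what produces the soft penumbra.
struct AreaLight {
    Vec3 corner, edgeU, edgeV;

    Vec3 samplePoint(std::mt19937& rng) const {
        std::uniform_real_distribution<double> uniform01(0.0, 1.0);
        return corner + edgeU * uniform01(rng) + edgeV * uniform01(rng);
    }
};
```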

Depth of Field

This effect lets us simulate real camera lenses by slightly bending the directions of our initial rays. As a result, we can achieve in-focus and out-of-focus objects and render scenes such as this one:

The derivation is a bit more involved for this one, but here is an overview.

And the implementation is rather simple.

The random samples are generated by a single generator configured to produce random numbers in the range [-1, 1] for this purpose.
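The overall idea can be sketched with a thin-lens model: jitter the ray origin on the aperture using samples in [-1, 1], then aim the new ray at the point where the original pinhole ray pierces the focal plane, so objects on that plane stay sharp. All names and the square-aperture assumption are illustrative:

```cpp
#include <cmath>
#include <random>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

struct Ray { Vec3 origin, dir; };

// Jitter the ray origin on the aperture (using random numbers in [-1, 1]
// along the camera's right and up vectors), then aim at the focal point so
// that the focal plane remains in perfect focus.
Ray depthOfFieldRay(const Vec3& eye, const Vec3& pinholeDir,
                    const Vec3& right, const Vec3& up,
                    double apertureSize, double focusDistance,
                    std::mt19937& rng) {
    std::uniform_real_distribution<double> uniform(-1.0, 1.0);
    Vec3 lensPoint = eye + right * (uniform(rng) * apertureSize * 0.5)
                         + up    * (uniform(rng) * apertureSize * 0.5);
    // Where the original (pinhole) ray crosses the focal plane;
    // pinholeDir is assumed to be unit length.
    Vec3 focalPoint = eye + pinholeDir * focusDistance;
    return { lensPoint, normalize(focalPoint - lensPoint) };
}
```

With an aperture size of 0 this degenerates back to the ordinary pinhole camera, which is a handy sanity check.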

Behold the focusing dragons!

We added some cool effects, but our raytracer is once more slower than a turtle, thanks to multisampling and the not-so-efficient implementations of these features. We also noticed some undesired noise in certain scenes. Let’s hope to fix those issues by next time.

Thank you for reading.
