Real-time rendering of water caustics

In this article, I present an attempt at generalizing caustics computation in real-time using WebGL and ThreeJS. The word attempt is important: finding a solution that works well in all cases and runs at 60fps is difficult, if not impossible. But as you will see, we can get pretty decent results using this technique.

What are caustics?

Caustics are patterns of light that occur when light is refracted and reflected from a surface, in our case an air/water interface.

Due to the reflection and refraction occurring on water waves, water acts as a dynamic magnifying glass which creates those light patterns.


In this post we focus on caustics due to the light refraction, so mainly what happens underwater.

In order to get stable 60fps, we need to compute them on the graphics card (GPU), so we will compute them entirely using shaders written in GLSL.

To compute them, we need to:

  • compute the refracted rays at the water surface (which is straightforward in GLSL as a built-in function is provided for that)
  • compute where those rays are hitting the environment with an intersection algorithm
  • compute the caustics intensity by checking where rays are converging
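The first step, computing the refracted ray, maps directly to the GLSL built-in refract. Here is a plain-JavaScript sketch of that built-in (following the formula from the GLSL specification), so the refraction step can be followed outside of a shader; eta is the ratio of refraction indices (about 1.0 / 1.33 for air to water):

```javascript
// Plain-JavaScript sketch of the GLSL built-in refract().
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

// I: normalized incident direction, N: normalized surface normal,
// eta: ratio of refraction indices.
function refract(I, N, eta) {
  const d = dot(N, I);
  const k = 1.0 - eta * eta * (1.0 - d * d);
  if (k < 0.0) return [0, 0, 0]; // total internal reflection
  const f = eta * d + Math.sqrt(k);
  return [eta * I[0] - f * N[0], eta * I[1] - f * N[1], eta * I[2] - f * N[2]];
}

// A ray going straight down through a flat surface is not bent:
console.log(refract([0, -1, 0], [0, 1, 0], 1.0 / 1.33)); // [0, -1, 0]
```

In the real shader this runs per vertex of the water surface mesh, with the wave normals driving the bending.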

The well-known WebGL-water demo

I was always amazed by this demo by Evan Wallace, showing visually convincing water caustics using WebGL.


I really recommend reading his Medium article, which explains how to compute them in real-time using a light-front mesh and the partial-derivative GLSL functions. His implementation is blazingly fast and looks great, but it has some drawbacks: it only works with a cubic pool and a sphere in that pool. You cannot put a shark underwater and expect the demo to work, simply because the shaders hard-code the fact that the object underwater is a sphere.

The reason for putting a sphere underwater is that computing the intersection between a refracted light ray and a sphere is straightforward and involves very simple math.

All of this is fine for a demo, but I wanted a more general solution for caustics computation, so that any kind of unstructured mesh could lie in the pool, like a shark.


Now, let's get to our approach. In this article, I will assume you already know the basics of 3D rendering using rasterization, and how the vertex shader and the fragment shader work together to draw primitives (triangles) on the screen.

Working with GLSL limitations

In shaders written in GLSL (OpenGL Shading Language), you can only access a limited amount of information about the scene, such as:

  • Attributes of the vertex you are currently drawing (position: 3D vector, normal: 3D vector, etc.). You can pass your own attributes to the GPU, but they need to have a GLSL built-in type.
  • Uniforms, which are constant for the entire mesh you are currently drawing, at the current frame. A uniform can be a texture, the camera projection matrix, a light direction, etc. It has to have a built-in type: int, float, sampler2D for textures, vec2, vec3, vec4, mat3, mat4.

But there is no way to access the other meshes present in the scene.

This is the exact reason why the webgl-water demo could only be made with a simple 3D scene: it is easy to compute the intersection between the refracted ray and a very simple shape that can be represented using uniforms. A sphere can be defined by a position (3D vector) and a radius (float), so this information can be passed to the shaders as uniforms, and the intersection calculation involves very simple math that can be performed easily and quickly in a shader.
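To see just how simple that math is, here is a hedged JavaScript sketch (not Evan Wallace's actual shader code) of a ray/sphere intersection: the whole sphere fits in two uniforms (center and radius), and the test needs only dot products and a square root:

```javascript
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }

// Returns the distance t along the ray to the first hit, or null on a miss.
// origin/dir: the refracted ray (dir normalized); center/radius: the sphere.
function raySphere(origin, dir, center, radius) {
  const oc = sub(origin, center);
  const b = dot(oc, dir);
  const c = dot(oc, oc) - radius * radius;
  const disc = b * b - c;            // discriminant of the quadratic
  if (disc < 0) return null;         // the ray misses the sphere
  const t = -b - Math.sqrt(disc);    // nearest of the two solutions
  return t >= 0 ? t : null;          // a hit behind the origin doesn't count
}

// A ray shot straight down from y=10 hits a unit sphere at the origin at t=9:
console.log(raySphere([0, 10, 0], [0, -1, 0], [0, 0, 0], 1)); // 9
```

An arbitrary mesh like a shark has no such closed-form test, which is the whole difficulty.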

Some ray-tracing techniques performed in shaders pass meshes through textures, but this is out of scope for real-time rendering using WebGL in 2020. We have to keep in mind that we want to compute 60 images per second, using a good number of rays in order to get a decent result. If we compute the caustics using 256x256 = 65,536 rays, that means running a huge number of intersection calculations each second (which also depends on the number of meshes in the scene).

We need to find a way to represent the underwater environment as uniforms and compute the intersection, while keeping decent performance.

Creating an environment map

When it comes to computing dynamic shadows, a well-known technique is shadow mapping. It is commonly used in video games: it looks good and it is fast.

Shadow mapping is a technique which is performed in two passes:

  • The 3D scene, seen from the light's point of view, is first rendered into a texture. Instead of the fragments' colors, this texture contains each fragment's depth (the distance between the light source and the fragment). This texture is called the shadow map.
  • The shadow map is then used when rendering the 3D scene. When drawing a fragment on the screen, we can tell from the shadow map whether another fragment lies between the light source and the current fragment. If so, we know our fragment is in shadow and we should draw it a bit darker.
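The second pass boils down to a single depth comparison per fragment. A minimal sketch, using a tiny hand-written 1D array as a stand-in for the shadow-map texture:

```javascript
// Pass 1 stored, for each texel, the depth of the closest surface seen from
// the light. Texel 1 has an occluder at depth 2; the floor is at depth 5.
const shadowMap = [5.0, 2.0, 5.0];

// texel: where the fragment projects in the shadow map,
// depth: distance from the light to the fragment.
// The small bias avoids self-shadowing from depth quantization ("shadow acne").
function inShadow(texel, depth, bias = 0.001) {
  return depth - bias > shadowMap[texel]; // something closer blocks the light
}

console.log(inShadow(1, 4.0)); // true:  the occluder at depth 2 is in the way
console.log(inShadow(0, 4.0)); // false: nothing between light and fragment
```

A real implementation projects each fragment into light space with the light's view-projection matrix before the lookup, but the comparison itself is this simple.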

You can read a bit more about shadow mapping and find nice illustrations in this excellent OpenGL tutorial.

You can also find a live example using ThreeJS here (press "t" to display the shadow map in the bottom-left corner).

This technique works just fine in most cases, and it can handle any kind of unstructured mesh in the scene.

My first idea was to apply a similar approach to the water caustics: first render the underwater environment into a texture, then use this texture to compute the intersection between the rays and the environment. Instead of rendering only the fragments' depths, I also render the fragments' positions in the environment map.

This is the environment map result:

Env map: the RGB channels store the XYZ position, the alpha channel stores the depth

How to compute ray/environment intersection

Now that I have the underwater environment map, I need to compute the intersection between the refracted rays and the environment.

The algorithm works as follows:

  • Step 1: Start from the point of intersection between the light ray and the water surface
  • Step 2: Compute refraction using the refract function
  • Step 3: Move from the current position in the direction of the refracted ray, by one pixel of the environment map texture.
  • Step 4: Compare the recorded environment depth (stored in the current environment texture pixel) with your current depth. If the environment depth is bigger than the current depth, we need to go further, so we apply step 3 again. If the environment depth is smaller than the current depth, the ray hit the environment at the position you read from the environment texture: you found the intersection.
current depth smaller than environment depth: you need to go further
current depth bigger than the environment depth: you found the intersection
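The four steps above can be sketched in plain JavaScript, marching along the refracted ray through a 1D slice of the environment map (each texel holds the environment depth recorded during the environment-map pass; the array values here are made up for illustration):

```javascript
// A step in the floor: shallow texels at depth 10, a ledge at depth 4.
const envDepth = [10, 10, 10, 4, 4, 4, 4, 4];
const MAX_ITER = 8; // in GLSL the loop bound must be known at compile time

// start: [texel, depth] at the water surface; dir: refracted ray direction
// expressed as [texels per step, depth per step].
function findIntersection(start, dir) {
  let [x, depth] = start;
  for (let i = 0; i < MAX_ITER; i++) {
    // Step 3: advance by one texel along the refracted ray.
    x += dir[0];
    depth += dir[1];
    // Step 4: compare our depth with the recorded environment depth.
    if (depth >= envDepth[Math.floor(x)]) {
      return Math.floor(x); // the ray hit the environment at this texel
    }
  }
  return null; // gave up: no intersection found within MAX_ITER steps
}

console.log(findIntersection([0, 0], [1, 1])); // 4: the ray hits the ledge
```

The real shader does the same thing in 2D over the environment texture, reading the XYZ position out of the texel once the depth test passes.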

Caustics texture

Once the intersection is found, we can compute the caustics intensity (and a caustics intensity texture) using the technique explained by Evan Wallace in his article. The resulting texture looks like the following:

Caustics intensity texture (note that the effect of caustics is less pronounced on the shark, because it’s closer to the water surface, which reduces light convergence)
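Evan Wallace's article estimates the intensity from how much a small patch of the light front shrinks between the water surface and the hit point, using the GLSL partial derivatives dFdx/dFdy. Here is a hedged 1D JavaScript sketch, with a finite difference between two neighboring rays standing in for those derivatives:

```javascript
// surfA/surfB: positions of two neighboring rays at the water surface,
// hitA/hitB:   positions of the same rays where they hit the environment.
function intensity(surfA, surfB, hitA, hitB) {
  const oldArea = Math.abs(surfB - surfA); // patch size at the surface
  const newArea = Math.abs(hitB - hitA);   // patch size on the environment
  return oldArea / newArea; // > 1 where rays converge: a bright caustic
}

console.log(intensity(0.0, 1.0, 0.25, 0.75)); // 2:   rays converge, brighter
console.log(intensity(0.0, 1.0, 0.0, 2.0));   // 0.5: rays diverge, dimmer
```

This matches the intuition in the caption: rays refracted near the shark have less distance to converge, so the patch shrinks less and the caustic is fainter.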

This texture contains the light intensity information for each point of the 3D space. We can then read this light intensity from the caustics texture when rendering the final scene.


You can find the implementation of this technique in the following GitHub repository. Give it a star if you like it!

Try it live!

You can try this demo if you want to see the result of the caustics computation live.

About this intersection algorithm

This solution depends a lot on the environment texture resolution: the bigger the texture, the better the precision of the algorithm, but the longer it takes to find the intersection (there are more pixels to read and compare before finding it).

Also, reading from a texture in shaders is fine as long as you don't do it too many times; here we are running a loop that keeps reading new pixels from the texture, which is not recommended.

Furthermore, while-loops are prohibited in WebGL (for a good reason), so we need to turn our algorithm into a for-loop that can be unrolled by the compiler. This means the loop needs an end condition that is known at compile time, typically a "maximum iterations" value, which forces us to stop looking for the intersection if we have not found it after a maximum number of attempts. This limitation produces wrong caustics when the refraction is too strong.

Our method is not as fast as Evan Wallace's simplified setup, yet it is far more tractable than a full-blown ray-tracing approach and can be used for real-time rendering. However, speed remains dependent on conditions like the light direction, the refraction intensity, and the environment texture resolution.

Finalizing the demo

This article is focused on the water caustics computation, but there are other techniques used in this demo.

Concerning the water surface rendering, we used a skybox texture and cube mapping to get some reflection. We also applied refraction on the water surface using simple screen-space refraction (see this article about screen-space reflection and refraction); this technique is not physically correct, but it is visually appealing and fast. Furthermore, we added chromatic aberrations for more realism.
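The chromatic-aberration trick can be sketched as follows (the helper names here are hypothetical, not from the demo's code): since red, green and blue do not refract by exactly the same amount, the refracted scene is sampled at a slightly different screen-space offset per channel:

```javascript
// Stand-in for reading the refracted scene color at a screen-space offset;
// a real shader would fetch from the rendered scene texture instead.
function sampleScene(offset) {
  return 100 + offset; // dummy gradient in place of a texture fetch
}

// baseOffset: the screen-space refraction offset for this fragment.
function aberratedColor(baseOffset, strength = 0.01) {
  return {
    r: sampleScene(baseOffset * (1 - strength)), // red bends slightly less
    g: sampleScene(baseOffset),                  // green uses the base offset
    b: sampleScene(baseOffset * (1 + strength)), // blue bends slightly more
  };
}

console.log(aberratedColor(10)); // r, g, b sampled at slightly spread offsets
```

The per-channel spread produces the subtle color fringes at refraction boundaries.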

We still have some ideas for further improvements including:

  • Chromatic aberrations on caustics: we currently apply chromatic aberrations on the water surface, but this effect should also be visible on the underwater caustics.
  • Light scattering through the water volume.
  • As suggested by Martin Gérard and Alan Wolfe on Twitter, we could improve performance by using hierarchical environment maps (which would act as quadtrees for the intersection search). They also suggested rendering the environment maps from the point of view of the refracted rays (assuming the water is completely flat), which would make performance independent of the light incidence.


This work on real-time and realistic visualization of water is led at QuantStack and funded by ERDC.

About the author


My name is Martin Renou, I am a Scientific Software Engineer at QuantStack. Before joining QuantStack, I studied at the aerospace engineering school SUPAERO in Toulouse, France. I also worked at Logilab in Paris, France and Enthought in Cambridge, UK. As an open-source developer at QuantStack, I work on a variety of projects, from xtensor and xeus-python in C++ to ipyleaflet and bqplot in Python and Javascript/TypeScript.
