Ray Marching Metaball in Unity3D
Introduction
I’ve been following ray marching sketches on Shadertoy lately, and I’m amazed by the smoothness of their rendering compared with polygons. Among many sketches, this metaball example by Inigo Quilez caught my attention, simply because of the aesthetics of the metaballs’ organic deformations, amplified by the smooth, high-quality surface shading that ray marching provides.
Since Shadertoy is based only on the fragment shader, the blobs’ positions are recomputed for every pixel in this example. I wondered how much performance could be gained if all calculations except shading were taken out of the fragment shader and done beforehand.
But above all, I wanted to give ray marching a shot by implementing it myself to understand how it works, since this technique can be applied to other useful techniques such as volume rendering, another big interest of mine these days. I thought getting used to ray marching would be a good start.
1. Ray marching and Distance Field
Ray marching is a 3D rendering technique. Unlike ray tracing, which uses explicit polygons and analytic functions to find intersections, ray marching casts a ray from the camera through each pixel and marches along its direction until it finds a point close enough to the surface.
The distance field gives the closest distance between the marching ray’s current point and the surface of the scene, which is constructed from mathematical formulas called distance functions. This value is used as the marching step size, which drastically reduces the number of marching steps and also lessens overshooting compared with marching at a constant step size.
Ray marching, more specifically ray marching with distance fields, has been popularized and advanced by Inigo Quilez, who is the author of this metaball example on Shadertoy and also a co-creator of Shadertoy. It is capable of creating high-quality visuals with significantly smaller executables than other techniques; for example, the image below, a screenshot of his real-time demo, was created in 4 kilobytes. To see more of his work, visit his demo archive.
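Before breaking down Inigo’s code, here is a minimal sketch of the core loop in Python (my own names and a plain sphere, not the example’s code), just to show how the distance field drives the step size:

```python
# Minimal sphere-tracing sketch: march a ray toward a unit sphere at the
# origin, stepping each time by the distance the field reports, and stop
# when we are within `precis` of the surface.
import math

def sdf_sphere(p, radius=1.0):
    # Signed distance from point p to a sphere centered at the origin.
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - radius

def march(ro, rd, precis=1e-4, max_dist=10.0, max_steps=128):
    t = 0.0
    for _ in range(max_steps):
        p = (ro[0] + rd[0]*t, ro[1] + rd[1]*t, ro[2] + rd[2]*t)
        h = sdf_sphere(p)          # closest distance to the surface
        if h < precis:             # close enough: treat as a hit
            return t
        if t > max_dist:           # ray escaped the scene
            return -1.0
        t += h                     # safe step: we cannot overshoot
    return t

# Camera 5 units away on +Z, looking down -Z: the hit is ~4 units away.
t_hit = march((0.0, 0.0, 5.0), (0.0, 0.0, -1.0))
```

Because each step advances by exactly the distance the field reports, the ray can never pass through the surface, and it converges in very few steps when it heads straight at the geometry.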
2. Metaball
If I borrow words from Wikipedia,
Metaballs are, in computer graphics, organic-looking n-dimensional objects. The technique for rendering metaballs was invented by Jim Blinn in the early 1980s.
Or some others said,
Metaball is an isosurface in 3D space (or isoline in 2D space).
Sometimes we all find things that are hard to explain with words alone, don’t we? But I believe you have seen it at least once in animation, games, or motion graphics, because it’s been around for a long time and is a fairly popular effect in computer graphics.
Maybe the most similar example you can find in real life would be mercury.
Metaballs act like mercury when two or more of them get close enough, as you can see in the video below. This is also the demo I’ll break down in this post.
Process
1. Breaking down the example
a. Setup rays
Since ray marching casts rays from the camera through each pixel, we need to know the Ray Origin, which is where your camera is, and the Ray Direction, which is the direction from the ray origin toward each pixel on the image plane. We’ll get into the details by examining Inigo’s example.
Here is the part where the ray direction is set up in Inigo’s example:
*Note that I took out some code irrelevant to calculating the ray direction.
// move camera
float an = 0.5*time;
vec3 ro = vec3(5.0*sin(an),2.5*cos(0.4*an),5.0*cos(an));
vec3 ta = vec3(0.0,0.0,0.0);

// image plane
vec2 p = -1.0 + 2.0 * fragCoord.xy / iResolution.xy;
p.x *= iResolution.x/iResolution.y;

// camera matrix
vec3 ww = normalize( ta - ro );
vec3 uu = normalize( cross(ww,vec3(0.0,1.0,0.0) ) );
vec3 vv = normalize( cross(uu,ww) );

// create view ray
vec3 rd = normalize( p.x*uu + p.y*vv + 2.0*ww );
Now we’ll break it down part by part:
// move camera in world space
float angle = 0.5*time;
vec3 ray_origin = vec3(
    5.0 * sin(angle),
    2.5 * cos(0.4 * angle),
    5.0 * cos(angle));

// this will be used for calculating the lookat matrix below
vec3 lookat_target = vec3(0.0, 0.0, 0.0);
Set the ray_origin (the camera) in world space. It circles around the Y axis of the world root (0, 0, 0) while moving up and down. The lookat_target will be used later.
// image plane construction (-1 ~ 1)
vec2 pixel = -1.0 + 2.0 * fragCoord.xy / iResolution.xy;
pixel.x *= iResolution.x/iResolution.y;
Then we construct the image plane from the pixel coordinates fragCoord.xy, which range from the bottom left (0, 0) to the top right (window.width, window.height). Since each pixel’s position needs to be in world-space coordinates, we scale the plane down to 2 units in width and height and offset the origin to the center.
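As a quick illustration, here is the same mapping in Python (hypothetical helper name), showing how a pixel coordinate becomes a point on the 2-unit-wide image plane:

```python
# Map fragCoord in [0, width] x [0, height] to roughly [-1, 1] on Y,
# with X scaled by the aspect ratio so pixels stay square.
def pixel_to_plane(frag_x, frag_y, width, height):
    px = -1.0 + 2.0 * frag_x / width
    py = -1.0 + 2.0 * frag_y / height
    px *= width / height   # aspect correction
    return px, py

# The center of a 1920x1080 window maps to the plane origin,
# and the top-right corner maps to (aspect, 1).
center = pixel_to_plane(960, 540, 1920, 1080)
corner = pixel_to_plane(1920, 1080, 1920, 1080)
```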
Now we get a direction vector from the camera toward the lookat_target we defined above by subtracting ray_origin from lookat_target and normalizing the result. We can treat this vector as the forward vector of our camera. With the forward vector, we can get the left and up vectors by taking cross products with perpendicular vectors. With these 3 axes, we can construct the camera’s lookat rotation matrix. We use it to align the image plane so it faces along the view direction, transforming each pixel by multiplying it with this matrix. The code is the following:
// camera lookat rotation matrix toward lookat_target
vec3 world_up = vec3(0.0, 1.0, 0.0);
vec3 cam_forward = normalize( lookat_target - ray_origin );
vec3 cam_left = normalize( cross(cam_forward, world_up) );
vec3 cam_up = normalize( cross(cam_left, cam_forward));
Since the image plane needs to be expressed in the camera’s frame in order to construct the field of view, the following code does that:
// original code
vec3 ray_direction = normalize( pixel.x*cam_left + pixel.y*cam_up + 2.0*cam_forward );

// breakdown
// #1
float focal_length = 2.0;
vec3 image_plane = vec3(pixel, focal_length);

// #2
image_plane = vec3(
    image_plane.x * cam_left +
    image_plane.y * cam_up +
    image_plane.z * cam_forward);

// #3
image_plane = ray_origin + image_plane;

// #4
vec3 ray_direction = normalize(image_plane - ray_origin);
This part needed a bit more breakdown. I annotated the process in the code above, and I’ll explain it step by step.
- #1 : By giving a Z value to the image plane, the plane is placed in world space, 2 units away (0, 0, 2) from the root (0, 0, 0) along the Z axis.
- #2 : By multiplying each pixel’s position on the image plane with the lookat rotation matrix, the plane is always aligned with the camera’s orientation.
- #3 : By adding the position of the ray origin, the plane now moves along with the ray origin while staying 2 units in front of it. We can treat this as the world position of the pixels on the image plane.
- #4 : Since image_plane = ray_origin + image_plane, we can confirm that normalize(pixels_in_world_space - ray_origin) is identical to the original code normalize( pixel.x*cam_left + pixel.y*cam_up + 2.0*cam_forward ).
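To convince myself the two formulations really are identical, here is a small numeric check in Python (a stand-in for the GLSL, with my own helper names):

```python
# Compare pixel.x*left + pixel.y*up + 2*forward against building the
# world-space pixel position and subtracting ray_origin.
import math

def norm(v):
    l = math.sqrt(sum(c*c for c in v))
    return tuple(c/l for c in v)

def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

ro = (5.0, 2.5, 5.0)                     # ray origin (camera)
ta = (0.0, 0.0, 0.0)                     # lookat target
fwd = norm(tuple(t - o for t, o in zip(ta, ro)))
left = norm(cross(fwd, (0.0, 1.0, 0.0)))
up = norm(cross(left, fwd))

px, py, focal = 0.3, -0.2, 2.0           # an arbitrary pixel on the plane

# Original one-liner
rd_a = norm(tuple(px*l + py*u + focal*f for l, u, f in zip(left, up, fwd)))

# Breakdown: world-space pixel position, then subtract the origin
plane_world = tuple(o + px*l + py*u + focal*f
                    for o, l, u, f in zip(ro, left, up, fwd))
rd_b = norm(tuple(p - o for p, o in zip(plane_world, ro)))
```

The two directions agree to floating-point precision, which is exactly what step #4 claims.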
b. March rays
// A small number used to decide if the ray is close enough to the surface.
const float precis = 0.01;

// ro - ray_origin
// rd - ray_direction
vec2 intersect( in vec3 ro, in vec3 rd )
{
    // Maximum distance - if the ray goes further than this, we assume there's nothing
    float maxd = 10.0;

    // Marching step size from the distance field - the closest distance between the current ray position and the surface
    float h = precis * 2.0;

    // Total travel distance of the ray - the distance between the ray origin and the surface
    float t = 0.0;

    // In Inigo's shaders this is usually a material id; here it only tells whether the ray hit anything (1.0) or went out of range (-1.0)
    float m = 1.0;

    // Number of marching steps - the more you iterate, the more precision you get, but also the more computation cost
    for( int i = 0; i < 75; i++ )
    {
        // When the ray is close enough to the surface, or the total travel distance exceeds the maximum, break the loop
        if( h < precis || t > maxd ) break;

        // Update the total travel distance with the marching distance from the previous loop
        t += h;

        // Update the marching distance from the distance field
        // ro + rd * t - current position of the ray
        // map(p) - returns the closest distance between the point and the scene
        h = map( ro + rd * t );
    }

    // Update m if the ray travels further than the maximum distance.
    // This value is used to decide whether we render the background or the metaball in this example.
    if( t > maxd )
        m = -1.0;

    return vec2( t, m );
}
This function marches the ray for a number of iterations to find a point close enough to the surface of the scene. I think the code explains itself, so I just left comments above to briefly explain the details.
c. Distance Field
As I briefly mentioned above, the distance field gives you the closest distance between your ray’s current position and the scene. In this example we only render the metaball, so the distance field function is equivalent to the metaball’s distance function, as you can see in the code below. So let’s move on to the metaball’s distance function.
// Distance Field function. it's also called Distance Map or Distance Transform
float map( in vec3 p )
{
return sdMetaBalls( p );
}
*If you are curious about how to construct a distance field with various distance functions, I recommend reading Modeling with Distance Functions by Inigo Quilez.
d. Render Metaball
The following is the distance function of the metaball in this example:
vec4 blobs[numballs];

float sdMetaBalls( vec3 pos )
{
float m = 0.0;
float p = 0.0;
float dmin = 1e20;
float h = 1.0; // track Lipschitz constant
for( int i=0; i<numballs; i++ )
{
// bounding sphere for ball
float db = length( blobs[i].xyz - pos );
if( db < blobs[i].w )
{
float x = db/blobs[i].w;
p += 1.0 - x*x*x*(x*(x*6.0-15.0)+10.0);
m += 1.0;
h = max( h, 0.5333*blobs[i].w );
}
else // bouncing sphere distance
{
dmin = min( dmin, db - blobs[i].w );
}
}
float d = dmin + 0.1;
if( m>0.5 )
{
float th = 0.2;
d = h*(th-p);
}
return d;
}
*I should mention that the metaball breakdown is based on quick research I did while writing this post, so I had to rely on assumptions in many places. Please bear with me, and do not hesitate to point out any mistakes.
So… let me take a deep breath... and here we go.
We can divide this metaball function into 2 big parts:
- Bounding sphere
- Metaball field
d-1. Bounding sphere
For performance reasons, we skip the metaball field calculation until the ray hits one of the bounding spheres of the blobs. The radius of each bounding sphere is defined by its blob’s radius blobs[i].w
// minimum distance between the current ray and the closest blob's surface. start with a big enough number.
float dmin = 1e20;

for( int i=0; i<numballs; i++ )
{
    // distance between the current ray and the current blob's center
    float db = length( blobs[i].xyz - pos );

    // if the distance is shorter than the current blob's radius
    if( db < blobs[i].w )
    {
        // the ray is in the blob
        // do the metaball field calculation
    }
    else
    {
        // the ray is still outside the current blob's boundary
        // update dmin with the closest distance between the current ray and the surface of the blobs
        dmin = min( dmin, db - blobs[i].w );
    }
}

// add just enough to push the ray into the blob when it hits the bounding sphere
float d = dmin + 0.1;

// return the updated distance for the next marching step
return d;
Let me give more detail on the reason for adding 0.1 to the closest distance. For simplicity and better understanding, let’s assume we only have 2 blobs in 2D space. See the image below:
Let’s say the ray is now almost at the surface of blob P1. On the next iteration it will most likely get closer, and the distance may become smaller than the precision = 0.01 value in the ray marching function. That means the ray marching function would stop the ray and return the current distance, which is the closest distance between the ray and the bounding sphere (not the metaball). We only need the bounding spheres as an entry point for the ray into the metaball field (let’s just say, the inside of the blob), so we add just enough to fool the marching function and let the ray jump inside the bounding sphere. If we zoom into the image:
The darker grey curve is the edge of the bounding sphere, the red dot is the current ray position, the blue line is the closest distance between the ray and the surface, the green line is that closest distance applied along the ray direction, and the red line is the little push d = dmin + 0.1 that lets the next step go inside the bounding sphere and prevents the ray marching loop from terminating. Again, without the little push, the ray marching function would just return the closest distance to the bounding sphere. Note that as soon as we jump inside the bounding sphere, we don’t need this small push anymore. I’ll point it out again in the next part.
So, we’re in the blob. Let’s metaball them.
d-2. Metaball field
float m = 0.0;
float p = 0.0;
float dmin = 1e20;
float h = 1.0; // track Lipschitz constant
for( int i=0; i<numballs; i++ )
{
// bounding sphere for ball
float db = length( blobs[i].xyz - pos );
if( db < blobs[i].w )
{
float x = db/blobs[i].w;
p += 1.0 - x*x*x*(x*(x*6.0-15.0)+10.0);
m += 1.0;
h = max( h, 0.5333*blobs[i].w );
}
else // bouncing sphere distance
{
dmin = min( dmin, db - blobs[i].w );
}
}
float d = dmin + 0.1;
if( m>0.5 )
{
float th = 0.2;
d = h*(th-p);
}
return d;
Note that the bounding-sphere lines (the dmin update and the little push d = dmin + 0.1) are not used from this point anymore, because we’re inside the bounding sphere. Like I mentioned, the little push is overwritten by this line: d = h * (th - p).
Before I explain the code directly, let’s start with the case where the ray is inside only one blob:
For simplicity, I set the ray’s direction aligned with the blob’s center, so we don’t need to re-project the distance along the ray’s direction.
Since we’re in the blob, we know db is less than the blob’s radius, so we can normalize it with x = db/blobs[i].w. By doing this, we can describe the current distance between the ray and the blob’s center on a relative 0–1 scale. So if a ray is halfway between the center and the surface of a blob, we can say it is 0.5 units from the blob’s center in that blob’s own coordinates:
In graph form :
Using this normalized distance, we calculate the metaball’s falloff with this inverted polynomial equation:
p += 1.0 - x*x*x*(x*(x*6.0-15.0)+10.0);
Note that we invert it because we need to march the ray into the blob, not the other way around, so the distance should be measured from the surface:
Now the direction runs from the surface to the center. Even though it covers the same range, its gradient is smoothed by the equation, as you can see in the graph:
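For reference, here is the falloff curve evaluated in Python; it runs from 1 at the blob’s center (x = 0) to 0 at its surface (x = 1), with zero slope at both ends:

```python
# The inverted smoothstep-like quintic used for the falloff:
# p(x) = 1 - (6x^5 - 15x^4 + 10x^3)
def falloff(x):
    return 1.0 - x*x*x*(x*(x*6.0 - 15.0) + 10.0)

at_center  = falloff(0.0)   # full contribution at the blob center
at_half    = falloff(0.5)   # halfway: exactly 0.5 by symmetry
at_surface = falloff(1.0)   # no contribution at the surface
```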
So now we have a smooth curve for the metaball’s falloff. Next, we decide the metaball’s volume with a threshold and a weight in the following code:
if( m>0.5 )
{
float th = 0.2;
d = h*(th-p);
}
The h is the weight for the metaball’s falloff; it also scales the normalized distance back to world coordinates. It is decided in the following code:
h = max( h, 0.5333*blobs[i].w );
To be honest, I still don’t fully understand why it has to be multiplied by this number. Without the multiplication, I saw some artifacts on the metaball’s surface. My best guess at the moment is that it calibrates the metaball’s falloff from the polynomial equation, based on his comment in the original code. I’ll follow up on this as soon as I have a better understanding. I leave his original comment for reference.
The quintic polynomial p(x) = 6x^5 - 15x^4 + 10x^3 has zero first and second derivatives at its corners. The maximum slope happens in the middle, x = 1/2, where p''(x) = 0, and its value is p'(1/2) = 15/8. Therefore the minimum distance to a metaball (in metaball canonical coordinates) is at least 8/15 = 0.533333
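We can verify that comment numerically; the derivative of p(x) = 6x^5 - 15x^4 + 10x^3 factors as p'(x) = 30x^2(x - 1)^2, which peaks at x = 1/2:

```python
# Derivative of the quintic, in factored form.
def dp(x):
    return 30.0 * x*x * (x - 1.0)**2

peak = dp(0.5)          # maximum slope of the falloff: 30 * 0.25 * 0.25
inv_peak = 1.0 / peak   # 8/15, the 0.5333... factor used in the code
```

So the field can change by at most 15/8 per unit of normalized distance, and scaling by 8/15 of the blob’s radius keeps the marching step conservative.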
Back to deciding the metaball’s volume. As we can see in the code, if the falloff is bigger than the threshold th = 0.2, the distance becomes negative. That means the ray is inside the metaball volume, and the ray marching function will return the current travel distance, since the distance is now less than precision = 0.01.
If we apply this to the case where the ray is inside the intersection of 2 blobs’ bounding spheres, the total p value is the sum of p1 and p2:
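A quick numeric illustration of that summing (my own toy setup, 1D positions): in the overlap region the combined falloff can cross the threshold th = 0.2 even though each blob alone stays below it, which is exactly why two nearby blobs merge:

```python
# Same falloff polynomial as in the shader.
def falloff(x):
    return 1.0 - x*x*x*(x*(x*6.0 - 15.0) + 10.0)

radius = 1.0
centers = (-0.7, 0.7)    # two blobs whose bounding spheres overlap
point = 0.0              # a point midway between the centers

# Each blob contributes falloff(normalized distance) at the point.
contribs = [falloff(abs(point - c) / radius) for c in centers]
total_p = sum(contribs)  # the summed field value at the point
```

Each contribution is about 0.163, below the 0.2 threshold, but the sum is about 0.326, so the point is inside the merged surface.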
In 3D space, the distance map of the metaballs from this process looks like :
e. Surface geometry and normal
Since we know the distance from the ray origin to the scene surface, we can construct the virtual surface geometry from that information:
surface_geometry = ray_origin + distance_field * ray_direction
Now we need the surface normal in order to apply lighting techniques. With polygons, we can compute the surface normal by interpolating a triangle’s vertex normals, but here we have no triangles; we only know the position (surface_geometry) of the surface hit by the current ray. So we estimate the surface normal by taking the gradient between 2 points per axis, one just behind and one just in front of the current position. We sample these 2 points with a very small step and take the direction from behind to front; that direction is almost perpendicular to the surface, so we can treat it as the surface normal. In code:
vec3 epsilon = vec3(precis, 0.0, 0.0);

vec3 surface_normal = normalize( vec3(
    map(pos+epsilon.xyy) - map(pos-epsilon.xyy),
    map(pos+epsilon.yxy) - map(pos-epsilon.yxy),
    map(pos+epsilon.yyx) - map(pos-epsilon.yyx) )
);
However, in this example the metaball’s distance function contains a for loop over the blobs array, so this is a bit inefficient: we would need to run that loop 6 more times. So Inigo uses an analytic approach based on the polynomial:
vec3 nor = vec3( 0.0, 0.0001, 0.0 );
for( int i=0; i<numballs; i++ )
{
    float db = length( blobs[i].xyz - pos );
    float x = clamp( db/blobs[i].w, 0.0, 1.0 );
    float p = x*x*(30.0*x*x - 60.0*x + 30.0);
    nor += normalize( pos - blobs[i].xyz ) * p / blobs[i].w;
}
return normalize( nor );
My best guess at the moment is that each metaball’s spherical surface normal can be calculated with normalize( pos - blobs[i].xyz ), and the polynomial (the derivative of the falloff) weights each blob’s contribution. I’ll follow up on this as soon as I have a better understanding after more research as well.
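As a sanity check on the finite-difference approach, here is the same central-difference estimate in Python against a plain sphere SDF, where the analytic normal is simply the position normalized:

```python
# Estimate the surface normal of a unit sphere SDF by central differences
# along each axis, then compare against the analytic normal pos / |pos|.
import math

def sdf(p):
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0

def estimate_normal(p, eps=1e-4):
    n = (
        sdf((p[0]+eps, p[1], p[2])) - sdf((p[0]-eps, p[1], p[2])),
        sdf((p[0], p[1]+eps, p[2])) - sdf((p[0], p[1]-eps, p[2])),
        sdf((p[0], p[1], p[2]+eps)) - sdf((p[0], p[1], p[2]-eps)),
    )
    l = math.sqrt(sum(c*c for c in n))
    return tuple(c/l for c in n)

pos = (0.6, 0.8, 0.0)        # a point on the unit sphere
n = estimate_normal(pos)     # should be very close to (0.6, 0.8, 0.0)
```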
2. Implementing in Unity3D
2–1. Implementing in Unity3D — Compute blobs’ attributes
Based on the example, in order to ray-march the metaball in the fragment shader, we need the values of each blob’s attributes, such as position and radius, before rendering starts in the fragment shader.
So I decided to use a compute shader to calculate them, export them into textures, and then pass those textures to the fragment shader.
a. Render Texture setup
I use pairs of render textures to write and read the blobs’ attributes. For simplicity, the dimensions of the textures decide the number of blobs (texture.width * texture.height). The following is my render texture setup.
RenderTexture create_cs_out_texture(int _w, int _h)
{
    RenderTexture _out = new RenderTexture(_w, _h, 24);
    _out.format = RenderTextureFormat.ARGBFloat;
    _out.filterMode = FilterMode.Point;
    _out.wrapMode = TextureWrapMode.Clamp;
    _out.enableRandomWrite = true;
    _out.Create();
    return _out;
}

for (int i = 0; i < 2; i++)
{
    m_cs_out_pos_and_life[i] = create_cs_out_texture(tex_w, tex_h);
    m_cs_out_vel_and_scale[i] = create_cs_out_texture(tex_w, tex_h);
}

num_particles = tex_w * tex_h;
Note that :
- Why pairs?
- I designed them like a force-based particle system, so all attributes are updated based on their previous values.
// pass the render textures updated in the previous loop to the compute shader
cs.SetTexture(kernel_id, "out_pos_and_life", tex_a[cur_frame^1]);
cs.SetTexture(kernel_id, "out_vel_and_scale", tex_b[cur_frame^1]);

// update the current render textures
cs.Dispatch(kernel_id, num_cs_threadGroup, num_cs_threadGroup, 1);

// swap index between 0-1
cur_frame ^= 1;
- 32-bit float ARGB texture format
- I need 8 channels (2 x ARGB) in order to store each blob’s position (float3), lifespan (float), velocity (float3), and scale (float) at higher precision.
- Point filter
- No filter interpolation, so I read back exactly the values I wrote.
- EnableRandomWrite
- Allows the compute shader to write into arbitrary locations of the textures (see the Unity3D compute shader documentation).
b. Compute Shader setup
The compute shader has 2 kernels: one for setup and the other for update. As I mentioned above, the compute shader calculates the blobs’ attributes and writes them into 2 render textures.
out_pos_and_life[global_id.xy] = float4(m_pos, m_life);
out_vel_and_scale[global_id.xy] = float4(m_vel, m_scale);
The basic rules for the blobs’ attributes are the following:
- The blobs’ attributes are initialized from 3D simplex noise gradients with different seeds. I use the Noise Shader Library for Unity by Keijiro Takahashi. The following functions are used for the setup.
float4 reset_pos_and_life(in float3 _seed)
{
    float4 _data = snoise_grad(_seed * 5678.9012);
    float3 pos = _data.xyz;
    float life = abs(_data.w * 3. + 1.);
    return float4(pos, life);
}

float4 reset_vel_and_scale(in float3 _seed)
{
    float4 _data = snoise_grad(_seed * 1234.5678);
    float3 vel = _data.xyz * 2.;
    float scale = length((vel * .1 + .9) * .9);
    return float4(vel, scale);
}

[numthreads(8, 8, 1)]
void cs_init_buffers(uint3 id : SV_DispatchThreadID)
{
    float4 seed = reset_pos_and_life(float3(id));
    out_pos_and_life[id.xy] = seed;
    out_vel_and_scale[id.xy] = reset_vel_and_scale(seed.xyz);
}
- These functions are also used for re-initialization when a blob dies, by calling the following function. Note that the seed (_pos) is updated after reset_pos_and_life(_pos) is called.
void revive(inout float3 _pos, inout float _life, inout float3 _vel, inout float _scale)
{
float4 _data = reset_pos_and_life(_pos);
_pos = _data.xyz; //<- seed is updated
_life = _data.w;
_data = reset_vel_and_scale(_pos);
_vel = _data.xyz;
_scale = _data.w;
}
- After the setup, the blobs’ attributes are updated in the update kernel. The rules are basically:
. Fetch the previous attribute values from the previous textures mentioned in the render texture setup above.
. Velocity and life ease out (fast-in to slow-out) every loop by multiplying by a value less than 1.
. Scale is updated with a jello-ish bouncing motion, and once life goes below 1 it starts shrinking until the blob disappears.
. Position is updated by adding the velocity to it after the velocity is updated.
. Blobs stay inside a cube of a certain size in order to emphasize the metaball effect by keeping them touching each other.
. When blobs die, they are revived by calling the function I mentioned above.
[numthreads(8, 8, 1)]
void cs_update_buffers(uint3 global_id : SV_DispatchThreadID, uint3 local_id : SV_GroupThreadID)
{
    // unpack data to vars
    float4 _data = u_p_pos_and_life.Load(int3(global_id.xy, 0));
    float3 m_pos = _data.xyz;
    float m_life = _data.w;

    _data = u_p_vel_and_scale.Load(int3(global_id.xy, 0));
    float3 m_vel = _data.xyz;
    float m_scale = _data.w;

    // update events
    if (m_life < .001)
        revive(m_pos, m_life, m_vel, m_scale);
    stay_in_cube(m_pos, m_scale, m_vel);

    // update data
    clamp_vel(m_vel);
    m_vel *= .96;
    m_life *= .96;

    float _rad = u_time_delta * u_time * 3.14159/180.;
    m_scale += 0.06 * sin((_rad + m_life) * 30.) / sqrt((_rad + m_life)) * (1. - sqrt((_rad + m_life)));
    m_scale *= min(m_life * 15., 1.);
    m_scale = max(m_scale, 0.);

    m_pos += m_vel * u_time_delta;

    // out data
    out_pos_and_life[global_id.xy] = float4(m_pos, m_life);
    out_vel_and_scale[global_id.xy] = float4(m_vel, m_scale);
}
c. Debug Blobs’ attributes with instance mesh
Now that I have the values of the blobs’ attributes, I render them with instanced meshes to see if they work as intended before I start ray marching with them.
I use DrawMeshInstancedIndirect to render instanced meshes with a custom shader and the render textures I just updated. Basically, I followed Unity’s documentation to set up the .cs script and shader, but the documentation uses a Compute Buffer instead of the Render Texture I’m using, so looking up values by InstanceID is slightly different. See the comparison below:
// in documentation
StructuredBuffer<float4> buf;
float4 data = buf[unity_InstanceID];

// my case
Texture2D tex;
float tex_w;

// since the number of instances equals tex.w * tex.h
int3 _coords = int3(
    unity_InstanceID % (int)tex_w,
    (int)((float)unity_InstanceID / tex_w),
    0);
float4 data = tex.Load(_coords);
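The index arithmetic is easy to check in isolation (hypothetical helper name): instances are laid out row-major across the texture, so the column is id % width and the row is id / width:

```python
# Map a flat instance id to (column, row) texel coordinates for a
# texture that stores one blob per texel, row-major.
def instance_to_texel(instance_id, tex_w):
    return (instance_id % tex_w, instance_id // tex_w)

# For an 8x8 texture: instance 0 -> (0, 0), instance 10 -> (2, 1).
a = instance_to_texel(0, 8)
b = instance_to_texel(10, 8)
```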
2–2. Implementing in Unity3D — Render metaball in fragment shader
a. Setup .cs script for ray marching
Unity3D provides predefined functions that give you access to specific stages of the rendering pipeline. Since ray marching is fragment shader based, I chose to implement it in OnRenderImage, which is the last stage of screen rendering. The structure of the function is the following:
void OnRenderImage(RenderTexture _src, RenderTexture _dst)
{
    // pass the updated textures from the compute shader and other uniforms here
    Graphics.Blit(_src, _dst, raymarching_material);
}
The function gives you a render texture (_src), a frame buffer that contains the rendered image of the scene as a texture. This is passed to the material as the built-in uniform sampler2D _MainTex when you run Graphics.Blit; the ray marching shader attached to the material then processes it and outputs the result into the render texture given by the function (_dst), which is the final frame buffer rendered on the screen.
*Note that this .cs script needs to be attached to the camera you render.
b. Setup shader for ray marching
Moving on to the ray marching. I use a custom Image Effect Shader, which gives you a template so you can start writing fragment shader code for Blit right away.
Basically, I just translated the code from Shadertoy (GLSL) to Unity3D (HLSL), so the basic structure is the same with slightly different syntax. I won’t explain the entire code, only a couple of things that are implemented differently.
b-1. Setup rays
Remember how complex it was to set up the rays in the example? Fortunately, it’s a lot simpler in Unity3D because the engine provides tons of useful predefined functions and variables. The following is how I set up the rays:
// Setup rays

// Image plane in NDC space - UNITY_NEAR_CLIP_VALUE gives you the near plane value
// for the platform you are working on, e.g. Direct3D-like platforms use 0.0 while OpenGL-like platforms use -1.0.
// https://docs.unity3d.com/Manual/SL-BuiltinMacros.html
float4 m_pixels = float4(_uv * 2. - 1., UNITY_NEAR_CLIP_VALUE, 1);

// to view space
m_pixels = mul(unity_CameraInvProjection, m_pixels);

// perspective division
m_pixels.xyz /= m_pixels.w;

// to world space
// you can get the inverted view matrix in .cs via cam.cameraToWorldMatrix
m_pixels = mul(u_inv_view, m_pixels);

// camera position in world space
float3 m_rayOrigin = _WorldSpaceCameraPos;

// compute ray direction
float3 m_rayDir = normalize(m_pixels.xyz - m_rayOrigin);
It’s slightly different from how we did it in the example. There, we manually defined the camera’s behavior, such as the field of view and the transform matrix. In Unity3D, however, we can use the properties of the camera object provided by the engine, so we just need to undo the matrix transformations in the opposite order.
As you can see, I use some predefined variables provided by the engine: UNITY_NEAR_CLIP_VALUE, unity_CameraInvProjection, and _WorldSpaceCameraPos. You can find more information in the official shader documentation.
To see if I set up the rays correctly, I rendered ray-marched spheres on top of the instanced cubes. We can see they are aligned.
Since the ray marching happens after the mesh rendering, it overwrites the pixels on top of the frame buffer, as you can see above. However, it’s possible to composite the ray-marched object with real meshes by using the depth buffer. I haven’t tried it myself in this project yet, but you can see how it’s done in the blog post Raymarching Distance Fields: Concepts and Implementation in Unity by Adrian Biagioli.
b-2. Ray marching metaball
The distance function is the same as in the example, except that I fetch the blobs’ attributes from the textures:
for (int i = 0; i < 8; i++)
{
for (int j = 0; j < 8; j++)
{
int3 _texel_coords = int3(i, j, 0);
float3 blob_pos = u_tex_pos_life.Load(_texel_coords).xyz;
float blob_scale = u_tex_vel_scale.Load(_texel_coords).w;

// bounding sphere for ball
float db = length(blob_pos - _point);
if (db < blob_scale)
{
float x = db / blob_scale;
p += 1.0 - x * x*x*(x*(x*6.0 - 15.0) + 10.0);
is_in_sphere = 1.0;
h = max(h, 0.5333 * blob_scale);
}
else
{
dmin = min(dmin, db - blob_scale);
}
}
}
b-3. Shading and Lighting
I borrowed a lighting equation from another of Inigo’s Shadertoy sketches, Angels. The blobs’ color and illumination are defined by the blobs’ direction and speed. I decided to cover shading and lighting in ray marching in another post, because this one has already gotten too long. In case you are curious, I’ve attached the project source to this post.
Resources
Project Source
References
Ray marching metaball and surface Shading
- Raymarching metaball — Inigo Quilez’s raymarching metaball
- Raymarching surface rendering — Inigo Quilez’s Angels
Ray marching
- Raymarching Distance Field by Inigo Quilez
- nvscene 2008 — Rendering Worlds With Two Triangles by Inigo Quilez
- Ray Marching Distance Field by Simen Haugo
- Distance Transform Wikipedia
- Volumetric Rendering: Raymarching by Allan Zucconi
- Raymarching Distance Fields: Concepts and Implementation in Unity by Adrian Biagioli
- Ray Marching and Signed Distance Functions by Jamie Wong
Metaball
- Metaballs (also known as: Blobs) By Ryan Geiss
- Metaball Wikipedia
Compute Shader
- Unity Compute Shader — Part 1 by Claire Blackshaw
- Unity Compute Shader — Part 2 by Claire Blackshaw
Matrix Transformation / Camera Frustum / Ray Casting
- Ray-Tracing: Generating Camera Rays by Scratchapixel 2.0
- Computing the Pixel Coordinates of a 3D Point by Scratchapixel 2.0
- The Perspective and Orthographic Projection Matrix by Scratchapixel 2.0
- Viewing Frustum Wikipedia
- OpenGL Transformation by Song Ho Ahn
- OpenGL Projection Matrix by Song Ho Ahn
- Homogeneous Coordinates by Song Ho Ahn
- OpenGL Lookat to Axis by Song Ho Ahn
Development Environment
Windows 10 / GeForce GTX 970
Unity3D 2017.4.0f1