LuxMancer: Mastering Light and Shadow - MINIRT: PART 3

B.R.O.L.Y
7 min read · Jan 23, 2024


0 — Authors: A Collaborative Endeavor:

This blog post is the result of a collaborative effort between RIDWANE EL FILALI and MOHCINE GHALMI. Together we navigated the intricacies of vector mathematics, graphics, and ray tracing; the synergy of our insights and expertise brings you this exploration into the world of vectors and their applications.

Feel free to connect with Mohcine Ghalmi on Medium to explore more of his contributions and insights.

1 — Introduction :

In the second chapter of this walkthrough we saw how the parsing of the map is done and which structs and objects are used. Keep in mind that the goal of this course is not to copy the code but to get the idea into your mind; you can do much better. Just make sure to send us some pictures of your art!

2 — Multithreading in rendering:

Let’s chat about why the whole multithreading thing in ray tracing is a game-changer. The code we’re looking at isn’t just throwing fancy tech words around; it’s doing some heavy lifting to make your ray tracing experience smoother and faster.

Imagine trying to paint a massive wall by yourself. It's going to take ages, right? Now imagine a bunch of friends helping out. Each friend takes a section of the wall, and suddenly things move way faster. That's pretty much what's happening here with the threads. The analogy breaks down with real humans, though, because people are complicated and everyone paints in their own style. Threads are more like clones of the same painter: ask them each to paint a section and they all work identically, finishing the whole wall in the shortest time possible.

Take the picture above as the image of our scene. The point is that every thread is responsible for a band of several rows, and to track the progress of the rendering we make the last thread responsible for the logging.

You can check the logic:

void	render_scene(struct s_carried *w)
{
	...
	int	color;
	int	n;
	int	i;
	int	j;

	n = yres / NUM_THREADS;
	j = n * tid;
	while (j < n * (tid + 1))
	{
		i = 0;
		while (i < xres)
		{
			color = pixel_color(...);
			px_img[pixel_position(i, j)] = color;
			i++;
		}
		if (tid == NUM_THREADS - 1)
			printf("\rRendering scene... [%d%%]", 100 * (j % n) / n);
		j++;
	}
	if (tid == NUM_THREADS - 1)
		printf("\rRendering scene... [100%%]\n");
}

Now that we've got the method out of the way, let's take a look at how to determine the color of each pixel.

3 — Supersampling Anti-Aliasing (SSAA):

The problem that we encountered, and that you will most likely encounter too, is that the shapes you just displayed are pixelated. Even the sphere! Isn't that a joke, huh? Don't stress, I'll explain why.

The screen is laid out as a grid of squares that we call pixels. To represent a sphere, we just calculate its projected circle, and if one of the pixel rays intersects that circle we color the pixel. But that leads us to a shape like this:

Now you see the reason: it looks like a bunch of squares crushed together rather than a sphere because that is literally what it is. The other reason is that the image is zoomed in. To solve this problem and get a smooth-looking render, we have to implement supersampling.

Now you see the difference. Beautiful, huh? To understand what happens, let's zoom in a little closer to the objects.

We can see that the sharpness of the edge is reduced. That's done by giving the pixels that only partially intersect, or are merely touched by, an object a reduced intensity of that same color, through a bunch of calculations that we'll see shortly. In normal rendering we just shoot the ray straight through the middle of the pixel, and if the ray intersects an object we color the whole pixel, no questions asked.

That's all good, but we want a good-looking image. So here's the scaling idea: we divide each pixel into 9 small squares, and we shoot rays, based on the position of the pixel on the screen, through the center of each small square inside the pixel.

The dots in the image represent the rays that are being shot; we'll see what we get at the intersections.

The image above shows what happens and how we get the colors based on the intersections. Now, the thing you need to know, fellow reader, is that we do not shoot 9 rays for every pixel. That would turn your PC into a furnace (just joking), but it really would cost a lot of computation time. Instead we shoot only the necessary rays: for the sake of time complexity we use a buffer to store the edge colors of the previous row and reuse them for the current pixel, just as we reuse the colors of the previous pixel. This makes things faster, and since neighbouring samples are very close to each other, it also makes the image smoother.

int	*sample_pixel(int *line_edge_color, int last_colors[2], ...)
{
	int	*color;

	if (i == 0)
		color = sample_first_column(line_edge_color, last_colors, ...);
	else if (i == xres - 1)
		color = sample_last_column(line_edge_color, last_colors, ...);
	else
		color = sample_centered_pixel(line_edge_color, last_colors, ...);
	return (color);
}

int	*sample_first_column(int *line_edge_color, int last_colors[2], ...)
{
	int	*color;

	color = (int *)malloc(sizeof(int) * 4);
	if (!color)
		error_message("Error malloc failure in sample first column\n");
	if (j == yres / NUM_THREADS * tid)
	{
		color[0] = calc_ray(0, rss, w);
		color[1] = calc_ray(2, rss, w);
		color[2] = calc_ray(6, rss, w);
		color[3] = calc_ray(8, rss, w);
		last_colors[0] = color[3];
		last_colors[1] = color[1];
		line_edge_color[0] = color[2];
	}
	else
	{
		color[0] = line_edge_color[0];
		color[1] = line_edge_color[1];
		color[2] = calc_ray(6, rss, w);
		color[3] = calc_ray(8, rss, w);
		last_colors[0] = color[3];
		line_edge_color[0] = color[2];
	}
	return (color);
}
...

Now we repeat the procedure for every pixel of the screen. But keep in mind that adaptive supersampling can be faulty sometimes, so to fix the color difference when it goes wrong we apply extra supersampling between the pixel colors: we compare the colors, and if they differ from each other by a large margin we have to super-supersample the pixel.

To achieve this super-supersampling when the colors differ, we compare the colors two by two by extracting the RGB values of each color. Let's first look at how a color is structured:

color = RGB

That means a color has three components. Doesn't that remind you of something? Yes, it behaves like a point in three-dimensional space, so to calculate the difference between two colors we calculate the Euclidean distance:

d = sqrt((r1 - r2)^2 + (g1 - g2)^2 + (b1 - b2)^2)

The threshold, i.e. the maximum distance before you decide to supersample, is up to you; for me, I used 1000. If the colors are far away from each other we apply the super-supersampling: we take the color of each edge together with the center of the pixel, merge them, and recompute the color. Keep in mind that you have to set a limit on this procedure or you'll run into an infinite loop. If the colors are not that far from each other we just average them normally, depending on which part of the execution you're in: if the colors are close to each other you average the four colors directly, but if they're far apart you use the two-color average, after merging the center with the edges.

And that's the color that will be displayed.

With this we finish part 3, my friends. But don't worry, we are still far from finished, and we'll meet in the next one. Bye!


My name is RIDWANE EL FILALI but you can call me B.R.O.L.Y