LuxMancer: Mastering Light and Shadow - MINIRT: PART 4

B.R.O.L.Y
6 min read · Jan 25, 2024


0 — Authors: A Collaborative Endeavor

This blog post is the result of a collaborative effort between RIDWANE EL FILALI and MOHCINE GHALMI. Together we navigated the intricacies of vector mathematics, graphics, and ray tracing, and the synergy of our insights and expertise brings you this exploration into the world of vectors and their applications.

Feel free to connect with Mohcine Ghalmi on Medium to explore more of his contributions and insights.

1 — Introduction

Hi, it’s me again, back with the 4th chapter. If you want, you can check the 3rd chapter to familiarise yourself with the project and how it works, but this chapter will talk about the camera in more depth and how Blender and other 3D engines simulate its view.

2 — Camera

In ray tracing, the camera is the source from which we shoot rays, but it’s not just a ray RPG: it’s a tool that lets us take an image of the scene and modify how we see it, whether we zoom in, zoom out, or rotate. If you remember, the second chapter gave a glimpse of the camera, but this image will make you understand better. Let’s see.

now let's talk about Normalized Device Coordinates(NDC) When working with computer graphics and virtual cameras, the concept of Normalized Device Coordinates (NDC) is commonly used. NDC is a standardized coordinate system that maps the visible space to a cube where each side has a length of 2 units, and the cube is centered at the origin (0, 0, 0). This cube is often defined with corners at (-1, -1, -1) and (1, 1, 1).

The transformation of pixel coordinates to the range of -1 to 1 is part of mapping the 2D pixel space onto this normalized cube. To understand this further, let’s say we have an image with a resolution of (800, 600) and the pixel that we want to identify is (1, 1). To lift these 2D coordinates into 3D, we apply the NDC mapping and remap the pixel coordinates to the range [-1, 1].
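
Here is a minimal sketch of that remapping in C, using the (800, 600) resolution and pixel (1, 1) from the example; the +0.5 offset aims through the center of the pixel, and the y-axis is flipped because screen coordinates grow downward:

#include <stdio.h>

int main(void)
{
    double width = 800.0;
    double height = 600.0;
    double px = 1.0;
    double py = 1.0;
    /* remap [0, width) x [0, height) to [-1, 1] x [-1, 1] */
    double ndc_x = 2.0 * (px + 0.5) / width - 1.0;
    double ndc_y = 1.0 - 2.0 * (py + 0.5) / height;

    printf("ndc: (%f, %f)\n", ndc_x, ndc_y); /* (-0.996250, 0.995000) */
    return (0);
}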

Now, understand that to shoot the ray from the right place we have to treat the image as a grid of pixels and aim through the center of each one (the +0.5 offset in the sketch above). Also keep in mind that the camera view depends on the FOV (field of view) and the aspect ratio, which are given in the scene file.

After finding the pixel coordinates we have to align them with the local coordinates of the camera to shoot the ray into the enemy (in our case, the pixel), as sketched below.
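
Here is a rough sketch of the whole pixel-to-ray pipeline, assuming a simple t_vec3 type and a camera basis made of right, up, and forward vectors; the names here are illustrative, not necessarily the project’s actual API, and fov_deg is treated as the vertical field of view:

#include <math.h>

#ifndef M_PI
# define M_PI 3.14159265358979323846
#endif

typedef struct s_vec3
{
    double x;
    double y;
    double z;
}   t_vec3;

static t_vec3 vec3_scale(t_vec3 v, double s)
{
    return ((t_vec3){v.x * s, v.y * s, v.z * s});
}

static t_vec3 vec3_add(t_vec3 a, t_vec3 b)
{
    return ((t_vec3){a.x + b.x, a.y + b.y, a.z + b.z});
}

static t_vec3 vec3_normalize(t_vec3 v)
{
    double len = sqrt(v.x * v.x + v.y * v.y + v.z * v.z);

    return (vec3_scale(v, 1.0 / len));
}

/* Build a primary ray direction for pixel (px, py): map to NDC, scale
** by FOV and aspect ratio, then express it in the camera's local basis. */
t_vec3 ray_direction(double px, double py, double width, double height,
        double fov_deg, t_vec3 right, t_vec3 up, t_vec3 forward)
{
    double aspect = width / height;
    double scale = tan(fov_deg * 0.5 * M_PI / 180.0);
    double x = (2.0 * (px + 0.5) / width - 1.0) * aspect * scale;
    double y = (1.0 - 2.0 * (py + 0.5) / height) * scale;
    t_vec3 dir;

    /* dir = x * right + y * up + forward, then normalized */
    dir = vec3_add(vec3_add(vec3_scale(right, x), vec3_scale(up, y)), forward);
    return (vec3_normalize(dir));
}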

Now that we are done setting up the camera, let’s take a look at tracing the rays. Imagine a scene where we shoot a ray and it intersects with 5 objects that are aligned somehow. Which color are we choosing? Logic says the nearest one: just like our eyes, we cannot see objects that are hiding behind other objects.

Now, as we can see in the image above, the objects are aligned one behind the other and the ray intersects with all of them in different places. To solve the problem we have to calculate every intersection and keep the nearest one; that is the one we will consider. Once we have the closest intersection distance, we multiply it by the ray direction and add the origin of the rays (the camera in our case) to get to the intersection point:

intersection_point = vector_add(origin, scaler_x_vector(closest_intersection, direction));
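
In code, the nearest-hit search can look like the following sketch; t_object and hit_object() are hypothetical stand-ins for however the scene and the per-object intersection routines are actually organized, and t_vec3 is the type from the camera sketch:

#include <math.h>

/* hit_object() is assumed to return the distance t along the ray,
** or a negative value when there is no intersection */
typedef struct s_object t_object;
double hit_object(const t_object *obj, t_vec3 origin, t_vec3 dir);

double closest_hit(t_object *objects, int count, t_vec3 origin,
        t_vec3 dir, int *hit_index)
{
    double closest = INFINITY;
    double t;
    int i = 0;

    *hit_index = -1;
    while (i < count)
    {
        t = hit_object(&objects[i], origin, dir);
        if (t > 0.0 && t < closest)
        {
            closest = t;
            *hit_index = i;
        }
        i++;
    }
    return (closest);
}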

And now that we have the intersection point, let’s calculate the normal vector. But first, let’s see what it is.

The normal vector is a crucial concept used to describe the orientation or direction perpendicular to a surface at a specific point. Understanding the normal vector at an intersection point on a surface is particularly important in ray tracing and other rendering techniques. Here’s why we need the normal vector and how it is used:

3 — The Normal Vector

1. Surface Orientation:

  • The normal vector at a point on a surface indicates the direction that is perpendicular to the surface at that point. It defines the local orientation of the surface.

2. Light Interaction:

  • Normals play a central role in lighting calculations. The angle between the incoming light and the surface normal affects how much light is reflected or refracted. It helps determine the amount of illumination at a given point on the surface (see the sketch after this list).

3. Shading:

  • In shading models, such as the Phong reflection model, the normal vector is used to calculate the specular reflection component. The smoothness of the reflection depends on the angle between the view direction, light direction, and the normal.

4. Reflection:

  • For reflective surfaces, the normal vector is used to calculate the reflection direction of incoming light. The reflection direction is determined by reflecting the view direction about the surface normal (also sketched after this list).

5. Refraction:

  • In materials with transparency, the normal vector is crucial for calculating the refracted direction of light. This is important for simulating effects like transparency, glass, and water.

6. Shadow Calculations:

  • The normal vector is used in shadow calculations. It helps determine whether a point on a surface is in shadow by comparing the angle of the surface normal with the direction of the light source.

7. Surface Intersections:

  • When a ray intersects an object, the normal vector is used to understand how the ray interacts with the surface. It aids in determining how the ray is reflected or refracted.

8. Texture Mapping:

  • Normals are used in texture mapping to ensure that textures are applied correctly to surfaces, taking into account the local orientation of the surface.

9. Bump Mapping:

  • Bump mapping relies on perturbing the normal vector to simulate surface irregularities. This technique adds fine details to surfaces without altering the underlying geometry.

10. Phong Shading:

  • In the Phong shading model, which is a widely used shading model in computer graphics, the normal vector is used to calculate the specular reflection component, contributing to the appearance of highlights on surfaces.
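
To make points 2 and 4 above concrete, here is a small sketch of the two places the normal shows up most often in shading, reusing the t_vec3 type and helpers from the camera sketch (vec3_dot() and vec3_sub() are filled in here for completeness):

static double vec3_dot(t_vec3 a, t_vec3 b)
{
    return (a.x * b.x + a.y * b.y + a.z * b.z);
}

static t_vec3 vec3_sub(t_vec3 a, t_vec3 b)
{
    return ((t_vec3){a.x - b.x, a.y - b.y, a.z - b.z});
}

/* Lambert diffuse factor: how directly the light hits the surface.
** to_light must be normalized; a negative dot product means the light
** is behind the surface, so it contributes nothing. */
double diffuse_factor(t_vec3 normal, t_vec3 to_light)
{
    double d = vec3_dot(normal, to_light);

    if (d < 0.0)
        return (0.0);
    return (d);
}

/* Mirror reflection of an incoming direction about the normal:
** R = D - 2 * (D . N) * N */
t_vec3 reflect_dir(t_vec3 dir, t_vec3 normal)
{
    return (vec3_sub(dir, vec3_scale(normal, 2.0 * vec3_dot(dir, normal))));
}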

Now, to calculate the normal vector in the case of the sphere, it is the normalized direction from the center of the sphere to the intersection point. Just consider that there is a case where the ray can be initiated from inside the sphere; in that case we just have to multiply the normal by -1.
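
A minimal sketch of that sphere normal, including the flip for the inside case:

/* Normal of a sphere at intersection point p with center c: the
** normalized direction from the center to the point. If the ray starts
** inside the sphere, the normal is flipped to face back toward it. */
t_vec3 sphere_normal(t_vec3 p, t_vec3 center, t_vec3 ray_dir)
{
    t_vec3 n = vec3_normalize(vec3_sub(p, center));

    if (vec3_dot(n, ray_dir) > 0.0)
        n = vec3_scale(n, -1.0);
    return (n);
}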

For the other objects the calculation follows the same idea, except we do not have to normalize the result.
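
For a plane, for example, the normal is read straight from the scene file, so (assuming the file already provides it as a unit vector) the only work left is flipping it to face the incoming ray; a sketch under that assumption:

/* Plane normal: taken as-is from the scene description, flipped only
** when it points away from the incoming ray. */
t_vec3 plane_normal(t_vec3 n, t_vec3 ray_dir)
{
    if (vec3_dot(n, ray_dir) > 0.0)
        return (vec3_scale(n, -1.0));
    return (n);
}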

In the next chapter we’ll talk about texture mapping, lights, and the reflection and refraction of rays.
