Meet the Shaders : Vertices, Polygons and Meshes

Get up close and intimate with the Shaders. They’re going to rock your (game’s) world!

Binigya Dahal
13 min read · Mar 27, 2018

Your Favourite Game

For a moment, remember your favourite video-game. Now imagine you’re playing it. What do you like about it the most? The fun gameplay, the tantalizing soundtrack, that addictive story, or the mesmerizing graphics?

Great game. Played it as soon as I turned 18. Anyone who tells you otherwise is lying.

It takes all of those things to make a great game. But one aspect of any game gets noticed first and makes the initial impression: yes, the Graphics.

Graphics play a huge role in Video games, and with the advent of new powerful GPUs, they are more vital to a game than ever. Graphics may or may not be more important than the story or the gameplay, but here’s the thing: Graphics matter. A lot.

From the colorful and glorious Pokémon to the realistic GTA V, we all remember how these games looked and how they made us feel. And, as we humans are "visual" creatures, we process images more naturally than text or audio. So, it is natural that the graphics of a game get much more attention than its other elements.

So, how exactly are these graphics made and presented into the game?

Furthermore, different games each have their own unique and tasteful graphics. How is that so? Read on, and find out!

In video game development, graphics are a vital component to pay attention to. Artists create graphics as art assets, using 3D modeling software like Blender or Maya for 3D models, and image manipulation tools like Photoshop or GIMP for 2D images.

The thing you must have noticed is that every game has a different look and feel. For instance, a car in a GTA game will look different from one in an NFS game. The graphics of a game give it its unique brand, a unique experience for its players to enjoy and lose themselves in.

Cozy, Colorful and Fun. Each game has its own brand of Graphics.

So, what makes each game look and feel different?

The obvious answer is that they are made by different people, with different art styles and environments, but there are also little programs known as "Shaders" that account for much of the difference.

Enter the Shader :

In its simplest definition, a shader is a computer program, usually run on the GPU, that tells each and every pixel how it should look in the game. Shaders are the secret sauce that sets a game apart from the rest, by dictating and manipulating how the art assets should look in the game.

Shaders tell the GPU how to render all those art assets onto the screen, specifying how light should behave on the surfaces of the 3D objects, what colour they should be, and a whole host of other details.

A loose representation of how the 3D models you see in your game are made and drawn on the screen is given by the following diagram:

From an Artist’s brush onto your screen : The life of a 3D model (Source)

We will get into the details of the terms in the diagram shortly. For now, just understand that after an artist makes the game's models (cars, people, trees, etc.), the models are imported into the game engine. After that, shaders are applied to them via their materials, which ultimately control how they look in your game.

Back to the basics : What is a 3D Model?

(You can skip this section, if you know about meshes, vertices, polygons and materials.)

That shiny car in GTA, the realistic players in FIFA, that explosive crate: those are all 3D models. 3D models are the objects that fill up your game's world and make it enjoyable and fun.

If you are a gamer, you already know what to do :)

Now, to truly understand shaders and to appreciate their magic, we must be aware of the construction of a 3D model.

First, look at this:

Every young gamer’s secret crush.

What would you answer, if I asked you what she was made of? If this was Biology 101, your answer would've been bones, nerves, organs, and all. But this is Shader 101, and our answer is going to be a little more mathematical.

A 3D model is essentially a mathematical object: vertices connected by edges, forming a mesh of polygons, modified visually via materials and textures. Whew! Let me break it down for you:

A 3D model has two elements: Its Construction and its Appearance.

What’s on the Inside : The Construction

A 3D model is constructed of:

  1. Vertices :

"Vertices" is mathematical lingo for points. As we are talking about 3D models, we need to define 3 properties: length, height and depth. This is achieved by using a 3-coordinate system: X for length (horizontal), Y for height (vertical), and Z for depth, like this:

So, vertices denote X, Y, Z coordinates: they define a point in 3D space. They are the smallest parts of any 3D model.
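Since a vertex is just a point, it can be modeled as a plain (X, Y, Z) triple. A minimal sketch in Python (the `Vertex` class is illustrative, not from any engine):

```python
from dataclasses import dataclass

@dataclass
class Vertex:
    """A single point in 3D space."""
    x: float  # length, along the horizontal axis
    y: float  # height, along the vertical axis
    z: float  # depth

origin = Vertex(0.0, 0.0, 0.0)  # the centre of the coordinate system
corner = Vertex(1.0, 2.0, 3.0)  # one unit right, two up, three deep
```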

2. Edges:

Edges are what connect the vertices. They help to define the shape of the 3D model. And, by modifying the edges, one can also perform different transformations on a 3D model.

3. Faces:

The area enclosed by multiple connected vertices is called a face. Faces fill up the area within the edges and make it visible.

Here are these 3 concepts, in a picture:

It’s clear now, isn’t it?

And, on a 3D model it can be seen as :

Vertex, Edges and Faces in action. (Source)

4. Polygons:

When you take multiple vertices, connect them with edges to get a shape, and add a face to make it visible, you've got a polygon.

So, basically: Polygons = vertices + edges + faces

Now, polygons are of great importance in 3D games, as they are the most basic construction units of every 3D model. A similar analogy would be the cells that make up our bodies. Ergo, Polygons are the cells (the building blocks) of most 3D models in video games.

Thus, examining a 3D model from a video game will show you that it is, in fact, made up of lots of polygons that are kind of "glued together".

Here, like this:

Stick and stones won’t break my bones, ’cause I’m made of Polygons!

And, another thing you need to know about Polygons is that they are normally 3 sided, or 4 sided.

The 3-sided ones are called tris (after triangles) and have 3 vertices and 3 edges; they are generally simpler and easier to render.

And the 4-sided ones are called quads (after quadrilaterals), and, you guessed it, they have 4 vertices and 4 edges. They are a bit more complex, but give good results.

And finally, we have the poly-count. As you'd guess, it is the number of polygons a 3D model has. The higher this number, the higher the quality, and the more system resources needed to draw the model on the screen.

What a difference a few thousand polys make.

5. Meshes:

A mesh is a collection of vertices, edges, and faces that describe the geometrical shape (note: shape only, not the color) of the 3D object.

i.e,

Mesh = collection of Polygons = collection of (vertices + edges + faces)

A pretty mesh!

So the mesh is pretty much what describes the structure of the 3D object itself.
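The relationships above map neatly onto how meshes are typically stored: a flat list of vertex positions, plus a list of triangles given as indices into that list, so shared vertices are stored only once. A toy sketch in Python, assuming a triangles-only mesh (the variable names are made up for illustration):

```python
# Vertex positions of a unit square, stored once and shared.
vertices = [
    (0.0, 0.0, 0.0),  # v0
    (1.0, 0.0, 0.0),  # v1
    (1.0, 1.0, 0.0),  # v2
    (0.0, 1.0, 0.0),  # v3
]

# Each polygon is three indices into `vertices`; its edges and face
# are implied by the triangle. Note the quad is split into two tris.
triangles = [
    (0, 1, 2),
    (0, 2, 3),
]

poly_count = len(triangles)  # the poly-count of this tiny mesh
```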

So, to recap:

From vertices to surfaces, this is how a 3D model is constructed. (Source)

Lookin’ Good : The Appearance

Now that we have covered what a 3D model is made of, let’s see how they are depicted visually.

I said earlier that the looks are controlled by a Material, which applies the texture and the shader to your model. Let's examine each one of them:

  1. Material:

A Material is, essentially, what defines how to draw the surface (the faces of the polygons) of the 3D model on the screen. It does so by applying the information present in the shader, and in the textures that can be attached to the material.

According to the Official Unity Documentation, a Material defines:

a) Which shader to use for rendering (i.e. drawing on the screen) the material.

b) The specific values for the shader’s parameters — such as which texture maps, the colour and numeric values to use.

So, we can loosely say that:

Materials = Properties of (Shader) + 2D Textures

Materials play an essential part in defining how your object is displayed.

A Typical Material, as inspected by the Unity Inspector

If you look closely at a Material in Unity, you can see that it has a whole range of options (which it gets from the shader) to choose from, affecting how your 3D model is displayed in the game.

2. Texture:

Imagine a gift you’re buying for someone. Now, you want to wrap it. Would it be easier to draw and make a piece of the wrapping paper individually for each side, or would you just rather get a wrapping paper and wrap the whole gift at once?

And unless you're into performing painstaking and meaningless tasks, you'll probably say the second one, right?

The case is similar with our 3D model (its mesh). Textures are the flat images that are applied over the mesh surface to give it more detail. A texture carries information about where it should apply itself on a 3D object so that it fits in the right place; this whole process is called UV Mapping. (More on this later!)

So, just think of texture as some good ol’ wrapping paper for your 3D model.

This should make it more clear:

This is a texture, being applied to a 3D model.
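The "wrapping" works because each vertex carries a (U, V) coordinate, both in the 0 to 1 range, that pins it to a point on the flat texture image. Sampling the texture is then just a lookup into that image. A toy nearest-neighbour sampler in Python (the 2x2 "texture" and the function name are made up for illustration):

```python
# A tiny 2x2 texture: rows of (R, G, B) texels.
texture = [
    [(255, 0, 0), (0, 255, 0)],      # top row:    red, green
    [(0, 0, 255), (255, 255, 255)],  # bottom row: blue, white
]

def sample_nearest(tex, u, v):
    """Return the texel nearest to UV coordinate (u, v), each in [0, 1]."""
    height, width = len(tex), len(tex[0])
    x = min(int(u * width), width - 1)   # clamp so u == 1.0 stays in range
    y = min(int(v * height), height - 1)
    return tex[y][x]
```

Real GPUs do the same lookup in hardware, usually with filtering (blending neighbouring texels) instead of this crude nearest-texel pick.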

3. Shader:

Ahh, the Shader. Our main man. Our Linchpin. The Don. Okay, I’ll stop now.

The Shader, as I've said before, comprises instructions that tell the GPU how to make each pixel look. The instructions are programs, written in Cg/HLSL (C for Graphics / High-Level Shading Language), that run on the GPU.

In this context, let’s refer to Unity Documentation and look at what it says about Shaders:

A Shader defines:

a) The method to render an object. This includes code and mathematical calculations that may include the angles of light sources, the viewing angle, and any other relevant calculations. Shaders can also specify different methods depending on the graphics hardware of the end user.

b) The parameters that can be customised in the material inspector, such as texture maps, colours and numeric values.

Simply, it means that the shader is what tells the GPU exactly how to draw the object on the screen, and it defines all the calculations related to that. The things the shader defines can be customized through the material it is attached to, via the inspector.

And the things that the shader defines are textures, colors and other properties, which we will be diving into shortly.
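Conceptually, the per-pixel part of a shader is just a pure function: given the inputs for one pixel (here, the surface normal and the light direction), it returns a colour. A CPU-side sketch in Python of the classic Lambert diffuse calculation (real shaders express this in Cg/HLSL and the GPU runs it in parallel for every pixel; the function name is illustrative):

```python
def shade_pixel(normal, light_dir, base_color):
    """Lambert diffuse: colour scales with the cosine of the angle
    between the surface normal and the light direction (unit vectors)."""
    cos_angle = sum(n * l for n, l in zip(normal, light_dir))
    intensity = max(0.0, cos_angle)  # surfaces facing away get no light
    return tuple(int(c * intensity) for c in base_color)

# A surface facing the light head-on keeps its full colour;
# one facing directly away goes black.
lit = shade_pixel((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (200, 100, 50))
dark = shade_pixel((0.0, 0.0, 1.0), (0.0, 0.0, -1.0), (200, 100, 50))
```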

So, that was all that you need to know about 3D models, before starting shader programming.

Now, take another look at the image that was shown a while ago, and hopefully, this time it’ll make some more sense to you now:

The same picture. I hope you kinda understand it now.

So, a 3D model is made up of vertices and other mathematical data (like normals and UV data, which will be covered in upcoming lessons). The 3D model has a material, which applies the textures and the shader to define the look of the model. And the shader is made up of code, written in either Cg (C for Graphics) or HLSL (High-Level Shading Language). There can also be an alternative (fallback) version of the shader for older GPUs that do not support the default shader code.

(And by the way, as both Cg and HLSL target GPUs, their syntax is largely similar, and knowledge gained in one is easily transferable to the other.)

The journey of a Pixel : The Rendering Pipeline

(This section of the tutorial, and its images, are based on this excellent article, which I have summarized in my own words here.)

Before we get into defining and writing Shaders, there’s one final thing that we should be aware of : The Graphics Rendering Pipeline.

The rendering pipeline is the set of tasks that have to be performed on a 3D model's data before it is displayed on the screen. The rendering pipeline describes the process of getting pixels onto the screen, and thus defines how your GPU renders an image.

(By the way, a pixel is the smallest unit of your computer's screen. Thousands upon thousands of pixels together make up your screen's resolution. So, to display something on the screen, we need to modify the required pixels of the screen.)

Also, here’s an informative piece from Wikipedia:

Once a 3D model has been created, in a video game, the graphics pipeline is the process of turning that 3D model into what the computer displays.

Because the steps required for this operation highly depend on the software and hardware used and the desired display characteristics, there is no universal graphics pipeline suitable for all cases.

However, graphics APIs such as Direct3D and OpenGL were created to unify similar steps and to control the graphics pipeline of a given hardware accelerator.

These APIs primarily abstract the underlying hardware and keep the programmer away from writing code to manipulate the graphics hardware accelerators (AMD/Intel/NVIDIA).

So, here is a general overview of the Rendering Pipeline:

A high level overview of how objects are displayed on your screen

Now, before the GPU, there was the CPU. In the old days, when no hardware advancements had yet produced a separate unit for rendering graphics, rendering was done entirely by the CPU. Old computers used software calculations to render their graphics.

Now, this process used to be really intensive for the CPU, and thus resulted in poor performance and little flexibility when it came to rendering images.

Oh, the tragedy!

Then, with the introduction of graphics cards, a new graphics pipeline was introduced: the Fixed-Function Pipeline. It was strictly fixed and sequential; it was impossible to modify the rendering process, but it was far better than the old software-based rendering.

Let’s study this in a bit more detail, as it will get us introduced to some of the elementary concepts of rendering:

The Pre-2001 Rendering Pipeline : Fixed and Firm.

Now, in this pipeline, the graphics data had to pass through the following stages before being drawn onto the screen:

  1. Input Data :

This is simply the data about the image that is to be drawn on the screen. It contains the data about each individual vertex and its various properties, like position, color, normals, etc.

2. Transformation and Lighting :

This involves the geometrical operations of transformation (move, scale, rotate) that calculate the position of the object, as well as the lighting operations calculated for each vertex. It generally comprises operations that simulate a light on the surface of the object.
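The "transformation" half of this stage is plain arithmetic on vertex coordinates (engines bundle it into 4x4 matrices, but the effect is the same). A sketch of the three basic operations in Python, with hypothetical helper names:

```python
import math

def translate(v, dx, dy, dz):
    """Move a vertex by the given offsets."""
    x, y, z = v
    return (x + dx, y + dy, z + dz)

def scale(v, s):
    """Scale a vertex uniformly about the origin."""
    x, y, z = v
    return (x * s, y * s, z * s)

def rotate_y(v, angle):
    """Rotate a vertex around the vertical (Y) axis by `angle` radians."""
    x, y, z = v
    c, s = math.cos(angle), math.sin(angle)
    return (x * c + z * s, y, -x * s + z * c)

v = (1.0, 0.0, 0.0)
v = scale(v, 2.0)                # now (2.0, 0.0, 0.0)
v = translate(v, 0.0, 1.0, 0.0)  # now (2.0, 1.0, 0.0)
```

Applying the same functions to every vertex of a mesh moves, resizes, or spins the whole model.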

3. Primitive Setup: This is a process of triangulation, where vertices are combined into triangles or quads to make up the polygons.

4. Rasterization: This stage works out which pixels of the screen each primitive covers, gathering for each covered pixel the information we have about the model so far.

5. Pixel Processing: This takes the per-pixel data from rasterization and actually computes the pixels' colors.

6. Frame Buffer Blend:

The frame buffer is a structure in memory that holds the information for every pixel: its position, color, lighting, every piece of data we have calculated so far. The contents of the frame buffer are what is displayed on the screen. The frame buffer blend stage blends the newly computed pixel data into the frame buffer, whose contents finally display our image object. (It can be a 3D model or just a 2D texture.)

(The frame buffer is a pretty big deal. We will discuss it, and other buffers, in further detail later on.)
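At the heart of stages 3 and 4 above sits one small test: deciding whether a screen pixel falls inside a triangle. A toy 2D edge-function check in Python (this is the standard trick real rasterizers build on, but the function names here are made up):

```python
def edge(a, b, p):
    """Signed area of triangle (a, b, p): positive when p lies to the
    left of the edge running from a to b (2D screen coordinates)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covers(p, a, b, c):
    """True if pixel p lies inside counter-clockwise triangle (a, b, c)."""
    return edge(a, b, p) >= 0 and edge(b, c, p) >= 0 and edge(c, a, p) >= 0
```

A rasterizer runs a heavily optimized version of this test for every candidate pixel of every triangle, to decide which pixels the later stages should colour.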

So, here's the thing you should know about this rendering pipeline model: it's fixed, i.e. not programmable. All of the lighting (simulation of light in the 3D scene, like the sun in the real world) and texturing work was hard-coded.

Shaders were introduced later on to remove this restriction and make the whole process programmable, more flexible and dynamic.

And as the graphics cards evolved, the processing of vertices and pixels became programmable (by the shader!). Here is the more recent Graphics Rendering Pipeline, followed by the modern GPUs:

The spiky boxes represent programmable stages.

Now, you may have noticed three new components:

a) Vertex Shader: Vertex shaders can manipulate properties such as position, color and texture coordinates, enabling powerful control over the details of position, movement, lighting, and color in any scene involving 3D models.

b) Geometry Shading: This is performed by geometry shaders. A geometry shader can create new geometry (shapes) on the fly, using the output of the vertex shader as input. So, it can draw new lines, vertices, or other geometric shapes as required.

c) Pixel Shading: This is done by the pixel shaders (or fragment shaders), which calculate the color and other attributes of each "fragment", the technical term for a candidate pixel (the fragments that survive all later tests become the pixels you see).
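Chained together on the CPU for illustration: a vertex-stage function runs once per vertex, then a pixel-stage function runs once per covered pixel. A deliberately tiny sketch in Python (geometry shading omitted; the functions are illustrative stand-ins, not a real API):

```python
def vertex_stage(position, offset):
    """Per-vertex work: here, just move the model in the world."""
    return tuple(p + o for p, o in zip(position, offset))

def pixel_stage(base_color, tint):
    """Per-pixel work: here, multiply the colour by a tint."""
    return tuple(int(c * t) for c, t in zip(base_color, tint))

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
placed = [vertex_stage(v, (5, 0, 0)) for v in verts]  # triangle moved right
color = pixel_stage((200, 100, 50), (0.5, 1.0, 1.0))  # red channel halved
```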

So, this is the rendering pipeline: how the data of an image or 3D model makes it onto the screen. All this may seem a bit too much to take in right now, but with time, you will begin to get it as you start writing your own shaders and interacting with these stages. For now, just be aware that this is what happens before the graphics are displayed on your monitor, and you're good to go.

Now that you have developed a solid concept of 3D models and how they work, along with the rendering process, let's pick up the pace and go to the next part of the series. We'll be writing our very first shader there!

Read on!

Get shading on Part 2, Inside Out : A Shader’s Anatomy.


Binigya Dahal

Video gamer, Game Programmer and Cute dogs enthusiast.