WebGL… I don’t even…

Steve Kane
Feb 5, 2015


A quick word

Learning to use the WebGL API will enrich your web programming life and unlock an enormous world of creative expression.

This letter focuses on concepts and a mental model rather than the syntax or specifics of the APIs. The goal is to give you context for learning WebGL in detail by explaining, through metaphor and prose, three “big ideas” essential to developing your understanding of WebGL:

  1. The memory model
  2. The state machine
  3. The pipeline

After digesting these high-level concepts you should feel empowered to dig into the details of WebGL and unlock the power, satisfaction, and fun of high-performance graphics programming.

The big ideas

The GPU has its own memory

If you want the GPU to draw something, you will need to send the relevant data to the GPU to be stored in its own memory. This is unusual compared with most other paradigms of web programming (save perhaps for Web Workers), but it is critical to understand. When you send data to the GPU you get back a “handle”, which is something vaguely like a pointer into the GPU address space. You can refer to the data you have sent to the GPU through this handle from within your JavaScript application.

Commonly, you will send three types of data to the GPU for storage:

  1. Arrays of data. This is most typically going to be a list of vertices.
  2. Textures.
  3. Matrices (don’t freak the fuck out)

The GPU cannot use ANY data in its computations that doesn’t live in its own memory. This means you will be copying blocks of data from the memory used by your JavaScript program over to the GPU. There are a few APIs for doing this, one for each use case, but they are all doing very similar things. Each has its idiosyncrasies, but this mental model should transcend them.
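
The third item on that list, matrices, sounds scarier than it is: a matrix is just a flat array of 16 numbers. Here is a sketch in plain JavaScript of a 4x4 translation matrix stored column-major (the layout WebGL’s uniformMatrix4fv expects) and a small transform helper of my own, not part of WebGL, that shows what the GPU does with it:

```javascript
// A 4x4 translation matrix stored column-major, the layout WebGL's
// uniformMatrix4fv expects. This one moves points by (2, 3, 0).
const translation = new Float32Array([
  1, 0, 0, 0,   // column 0
  0, 1, 0, 0,   // column 1
  0, 0, 1, 0,   // column 2
  2, 3, 0, 1,   // column 3: the translation amounts
])

// Multiply a point [x, y, z, 1] by a column-major 4x4 matrix.
// This mirrors what a vertex shader does when you write `matrix * position`.
function transform(m, p) {
  const out = [0, 0, 0, 0]
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * p[col]
    }
  }
  return out
}

transform(translation, [1, 1, 0, 1]) // the point (1, 1, 0) lands at (3, 4, 0)
```

You would never run this multiplication in JavaScript for real work; you copy the Float32Array to the GPU and let a shader do it, millions of times, in parallel.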

The GPU is a state machine

When you water plants with one of those hoses that can start and stop water flow, your process might look like this:

  1. Point hose at Hibiscus
  2. Squeeze trigger to start water flow
  3. Count to five
  4. Release trigger to stop water flow
  5. Repeat steps 1–4 for all other plants

Here is a block of pseudo-code showing a common pattern when interacting with the GPU. The ellipses indicate parameters in the actual API that are just noise for the purpose of this illustration.

//create plants and water...  damn you metaphors!
let hibiscus = gl.createBuffer()
let sunflower = gl.createBuffer()
let waterForHibiscus = new Float32Array([1,1,1,1,1,1])
let waterForSunflower = new Float32Array([1,1,1,1,1,1,1,1,1,1,1])
//point at the hibiscus
gl.bindBuffer(..., hibiscus)
//shoot some water at the hibiscus
gl.bufferData(..., waterForHibiscus, ...)
//point at the sunflower
gl.bindBuffer(..., sunflower)
//shoot some water at the sunflower
gl.bufferData(..., waterForSunflower, ...)

The key thing to understand here is that there is NOT an instruction for “shoot water at hibiscus”. There ARE instructions for pointing at a target and for shooting water. Therefore, if you want to “shoot water at the hibiscus” you need to first aim at it and then shoot water. Brilliant!
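
If it helps, the whole bind-then-operate pattern can be modeled in a few lines of plain JavaScript. This is a toy of my own invention, not the real API, but it captures the essential behavior: operations apply to whatever target is currently bound.

```javascript
// A toy model of WebGL's bind-then-operate state machine. NOT the real API:
// just plain JavaScript showing that bufferData has no "which buffer"
// argument. The data goes to whatever buffer is currently bound.
function makeToyGL() {
  let boundBuffer = null
  return {
    createBuffer: () => ({ data: null }),
    bindBuffer: (buffer) => { boundBuffer = buffer },
    bufferData: (data) => { boundBuffer.data = data },
  }
}

const gl = makeToyGL()
const hibiscus = gl.createBuffer()
const sunflower = gl.createBuffer()

gl.bindBuffer(hibiscus)
gl.bufferData([1, 1, 1])
gl.bindBuffer(sunflower)
gl.bufferData([2, 2])

hibiscus.data // → [1, 1, 1]: the first bufferData call went to the hibiscus
```

The real GL context keeps many such “currently bound” slots at once — one per buffer target, texture unit, and so on — but each of them works exactly like this one.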

The pipeline

Below is the distilled essence of the flow of data from your JavaScript program to the screen. The main thing to understand here is HOW data moves through the system; you can dig into the particulars of each step on your own. We will draw a triangle on the screen for the purposes of illustration.

  1. Create vertices for the triangle in your JavaScript program
  2. Copy the vertices to the GPU
  3. Tell the GPU to draw
  4. Vertex Shader runs 1 time for every vertex
  5. (done for you) Rasterizer decomposes your triangle into pixels
  6. Fragment Shader runs 1 time for every pixel that needs to be colored
  7. (done for you) Output bitmap of pixel colors is sent to the display
  8. (OPTIONAL) The bitmap output from the fragment shader may itself be further processed by another fragment shader. This is called “post-processing”.
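
Steps 1 through 3, in the same abbreviated pseudo-code style as before. This is heavily abbreviated: a real program would also compile shaders and wire the buffer to a vertex attribute, but the shape of the flow is this:

```
//steps 1 and 2: create vertices and copy them to the GPU
let triangle = new Float32Array([0, 1, -1, -1, 1, -1])
let buffer = gl.createBuffer()
gl.bindBuffer(..., buffer)
gl.bufferData(..., triangle, ...)
//step 3: tell the GPU to draw 3 vertices. Steps 4-7 happen on the GPU from here.
gl.drawArrays(gl.TRIANGLES, 0, 3)
```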

Vertex and Fragment shaders are written in a language called GLSL. It’s frankly pretty easy to learn and looks a lot like C.

Vertex Shader: Determines where vertices should be drawn. In a typical 3D app, calculations like the 3D-to-2D transformation, skeletal animation, and distortion are done here.

Fragment Shader: Determines what color a fragment/pixel should be. In a typical 3D app, calculations like lighting, reflection, and post-processing are done here.
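
For a taste of GLSL, here is roughly the smallest shader pair that could draw our triangle. These are two separate programs shown together, and the attribute name is my own choice:

```glsl
// Vertex shader: runs once per vertex and decides where it lands on screen.
attribute vec2 position;
void main() {
  gl_Position = vec4(position, 0.0, 1.0);
}

// Fragment shader: runs once per pixel and decides its color.
precision mediump float;
void main() {
  gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // opaque red
}
```

See? It really does look like C.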

Closing remarks

Understanding the GPU memory model, the state machine, and the processing pipeline is critical to continued learning about WebGL. The internals and fine details of working effectively with the GPU may take months or years to master, but this foundation should give you the confidence and context to approach those challenges. I wish you the best of luck.

If you found this useful, please share it with others and send me feedback on Twitter @stv_kn. I’ll add more articles in the near future describing various parts of the API in more granular detail.
