Creative Coding in Blender: A Primer

This tutorial aims to encourage creative coders to consider Blender as a platform for creating 3D artworks. Blender can be daunting to learn, so this primer is written for those who’ve tried their hand at creative coding before and wish to expand their repertoire. We’ll write some Python scripts to animate geometry and conclude with Open Shading Language to add texture to those models.

This tutorial was written with Blender version 2.79.

Configuring Blender for Scripting

Unlike environments focused on creative coding, such as Processing, Blender is a Swiss Army knife. Animators, sculptors and texture artists will each configure Blender differently to suit their work. For any given task, there are usually two or three ways to accomplish it, whether by hotkey, menu, mouse click or script. The steps that follow are not strictly necessary to begin, but will improve quality of life once we’re in the thick of Python scripting.

Run Blender From The Command Line

Run Blender from Windows PowerShell.

When debugging a script, if we wish to read the diagnostic info supplied by print, we’ll need to open Blender from the command line. On PC, we can use Command Prompt or PowerShell; on Mac, Terminal.

A command-line interface (CLI) takes some getting used to for those accustomed to a graphical user interface (GUI). Relevant to our purposes are the command ls, which lists the items in a directory (folder), and cd, which changes directory (cd .. moves up one directory). Pressing Tab after cd cycles through auto-completions of available files and folders within the current directory. This should be enough to get us to where we’ve installed Blender. We can add Blender and Python to our system path later to avoid this step.
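A PowerShell session on Windows might look like the following (the install path is an assumption; adjust it for your machine):

```shell
# List the contents of the current directory.
ls
# Change into Blender's install directory (path will vary).
cd "C:\Program Files\Blender Foundation\Blender"
# Launch Blender so that print output appears in this window.
.\blender.exe
```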

Scripting Layout

Select the Scripting Layout.

Blender allows us to switch between editor panels by selecting from the drop-down menus in the lower left corner of each. Several presets are available in the menu on the top info bar. Scripting includes the Console and Text Editor, as well as the 3D View to visually confirm the script’s results.

Tools To Learn the Blender API

To better understand what happens when we use Blender’s GUI we can do two things. First, if we go to File > User Preferences, under the Interface tab, we check Python Tooltips. This will display the name of the API method behind each button. Second, if we pull down the Info Editor, we can see our function call history.

Left: turn on Python Tooltips in User Preferences. Middle: interactive console. Right: pull down the info bar to see the command history.

In the image above, right, the call history indicates that we’ve selected all the items in the scene, deleted them, created a cube, added a subdivision surface modifier and changed a property of that modifier.
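Replayed as a script, that history might look like the sketch below (to be run inside Blender’s Python, where the bpy module is available; the modifier setting is an assumption):

```python
import bpy

# Select everything in the scene, then delete it.
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# Create a cube and add a subdivision surface modifier to it.
bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 0.0))
mod = bpy.context.object.modifiers.new(name='Subsurf', type='SUBSURF')

# Change a property of that modifier.
mod.levels = 2
```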

The interactive console allows us to explore the Blender Application Programming Interface (API) with auto-completion. By typing in a keyword and then hitting Ctrl-Space, we can see available methods for any class we’re curious about, the parameter list of a method, and even a brief description. In the example above, middle, information for adding a new light to the Blender scene is displayed. Note: the browser-based reference can also be downloaded from the above link.

Choosing and Customizing a Text Editor

Printing a message to the command-line.
Using a template script.

By dragging on the + on the upper left side of the text editor, pressing Ctrl-T while the panel is in focus, or going to View > Properties, we’ll find more options to customize the built-in text editor: line numbering, highlighting, etc. Clicking the Run Script button on the lower right, or pressing Alt-P, will execute the script. In the top-left example, a string is printed to the command-line.

The Templates menu allows us to open example scripts, providing guidance on recommended practices. In the example, a cube is created.

If we require further customization of the text editor’s appearance, we can return to User Preferences, click on the System tab, and change the Mono-Space Font. In the Themes tab, by selecting Text Editor from the left-hand column, we can change the syntax highlighting.

Editing a script in an external editor.

Alternatively, we could select an external editor, such as Visual Studio Code or Atom. When a script opened in Blender has been externally edited, a white and red life preserver appears in the lower-left corner. Clicking on this will let us resolve the conflict. For these workflows, it’s worth researching extensions that support Python syntax highlighting and linting.

Syntactic Quirks of Python

As with any development environment, we must simultaneously learn a specific API and the programming language on which it sits. As with Processing and Java, Unity and C#, or Three.js and JavaScript, the more we know about the underlying language, the faster and more effectively we can use the API.

When porting our scripting practice to Python, there are a few syntactic quirks to watch out for:

  • A line-break, not a semicolon, concludes a statement.
  • Blocks of multiple statements are not enclosed by curly braces {}; instead, a block is introduced by a colon and identified by its indentation.
  • Like JavaScript, Python does not require a data type to be specified when declaring a new variable.
  • Comments are initiated by a hash #, not a double forward slash //; in Python, // denotes floor division (5 // 2 yields 2).
  • Regarding abbreviations and capitalization: elif is used, not else if. Booleans are True or False.
  • Like JavaScript, lists (Python’s analog to arrays) are created with square brackets [], and can be added to with append.
  • Plain language is more common for Boolean operators: or instead of ||, and instead of &&. In addition to == and !=, there are is and is not; the two pairs are not equivalent.
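A few of these quirks in action, as a standalone sketch runnable outside Blender:

```python
# Floor division, not a comment marker.
print(5 // 2)  # 2

# elif and capitalized Booleans.
x = 3
if x < 0:
    sign = -1
elif x == 0:
    sign = 0
else:
    sign = 1
print(sign)  # 1

# Lists are created with square brackets and grown with append.
vals = [1, 2]
vals.append(3)
print(vals)  # [1, 2, 3]

# == compares values; is compares identity. The pairs are not equivalent.
a = [1, 2, 3]
print(a == vals)  # True: same contents
print(a is vals)  # False: different objects
```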

More details on Python can be found in “Learning Python: From Zero to Hero.” A last note: should we define a function of our own, Python not only accepts default arguments but also allows parameters to be matched to arguments explicitly by name. In the test case below, the function foo
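The original listing is not reproduced here; one reconstruction consistent with the output described below, assuming defaults of a=4 and b=9, is:

```python
def foo(a=4, b=9):
    total = a + b
    print(total)
    return total

foo()          # both defaults: 4 + 9
foo(2)         # a=2, b=9
foo(b=5)       # b matched explicitly by name: a=4, b=5
foo(-7, 4)     # both positional: a=-7, b=4
foo(b=-3)      # a=4, b=-3
```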

prints 13, 11, 9, -3 and 1. To explore Python’s many libraries beyond those bundled with Blender, we can use interactive help on the command line. After navigating to where Blender’s bundled Python is installed, we launch the interpreter, type help(), then modules to browse the available libraries.

Python’s help functionality accessed through the Command-Line Interface.

In the example depicted above, we dig into the statistics library.

Static Geometry

With an orientation under our belt, we can try creating geometry.

Grid

We begin with a grid of cubes.

We must separate a cube’s abstract place within the grid from its location in real-world coordinates. For example, a cube may sit in the 2nd row, 3rd column and 7th layer of a grid, but be located at (50, 25, 100) in the scene, depending on the grid’s translation, rotation and scale. To convert from abstract to real-world coordinates, we express the abstract location as a percent, multiply by the real-world range’s upper bound minus its lower bound (extents - (-extents), or extents * 2), then add the lower bound (-extents). When for-loops nest within for-loops, we should minimize redundant calculation. For example, we can compute the z-coordinate of a cube in the outer i loop, even though we won’t use it until the cube is created in the innermost k loop.
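As a standalone sketch (the function and variable names are my own, not from the original script), the conversion from abstract grid index to world coordinate might read:

```python
def grid_to_world(i, count, extents=8.0):
    """Convert an abstract grid index in 0 .. count - 1
    to a world coordinate in -extents .. extents."""
    # Convert the index to a percent in 0 .. 1 ...
    percent = i / (count - 1)
    # ... then scale by the range's span and add the lower bound.
    return -extents + percent * (extents * 2)

count = 5
print([grid_to_world(i, count) for i in range(count)])
# [-8.0, -4.0, 0.0, 4.0, 8.0]
```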

A key distinction: in Blender, transforming an object’s translation, rotation and scale is separate from transforming the vertices, edges and faces of which it is composed. We’ll create separate objects for now, but keep in mind that creating a single object is an alternative. We can always convert our grid of cube objects into one object after the fact by selecting them all and pressing Ctrl-J to join them.

Sphere

Instead of arranging points on a Cartesian coordinate system, we next use a spherical coordinate system. And instead of converting spatial position to Red-Green-Blue (RGB) color, we map longitude to hue.

This coordinate system requires us to import some trigonometric functions — pi, sin and cos — from the math library. To support conversions between RGB and Hue-Saturation-Value (HSV) color, we can use the colorsys library. We orient the cube to the sphere’s surface with Euler angles. Keeping in mind that the z-axis is up, we change the pitch of each cube to match the sphere’s latitude; we change the yaw of each cube to match the sphere’s longitude.
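Outside Blender, the core math can be sketched in plain Python (the function names are illustrative, not from the original script):

```python
from math import pi, sin, cos
import colorsys

def sphere_point(latitude, longitude, radius=1.0):
    """Convert spherical coordinates to Cartesian, with the z-axis up."""
    x = radius * cos(latitude) * cos(longitude)
    y = radius * cos(latitude) * sin(longitude)
    z = radius * sin(latitude)
    return (x, y, z)

def longitude_to_rgb(longitude):
    """Map longitude in -pi .. pi to hue, then convert HSV to RGB."""
    hue = (longitude + pi) / (2.0 * pi)
    return colorsys.hsv_to_rgb(hue % 1.0, 1.0, 1.0)

print(sphere_point(0.0, 0.0))   # (1.0, 0.0, 0.0): on the equator
print(longitude_to_rgb(0.0))    # cyan: halfway around the hue wheel
```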

Animated Geometry

Once we’re ready to script animations, we may benefit from customizing our layout. For example, in the left screenshot, we add the graph editor and dope sheet. The former lets us visualize and tweak the sine wave motion created through script. The latter gives us an overview of all the key-frames placed on the properties for the cubes. Not pictured above, but helpful for scrubbing through the scene, is the timeline editor. Even without the timeline, we can start and stop an animation by pressing Alt-A. The start and end range can be changed in the Properties editor.

Sine Wave

To demo how animations familiar from other creative coding environments can be accomplished in Blender, we’ll port over a cube wave in p5.js by Daniel Shiffman.

A big difference between this workflow in Blender and interactive, real-time engines is that we typically work with a set number of frames. Within that range, we insert keyframes to mark a transformation (for example, change in translation, rotation or scale). In the frames between keyframes, Blender interpolates the intermediate values for a given property.

We treat time as a fourth dimension to be represented by a for-loop nested within the x and y loops. We place a keyframe at the first frame of the scene. After each iteration through the loop, we add fincr to evenly distribute keyframes through the range of frames.
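A minimal sketch of the keyframing pattern, to be run inside Blender (the frame range, keyframe count and wave amplitude are assumptions):

```python
import bpy
from math import pi, sin

frame_start = 1
frame_end = 60
key_count = 10

bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 0.0))
cube = bpy.context.object

# fincr evenly distributes keyframes through the range of frames.
fincr = (frame_end - frame_start) / (key_count - 1.0)
curr_frame = frame_start
for k in range(key_count):
    t = k / (key_count - 1.0)
    # One period of a sine wave over the animation.
    cube.location.z = sin(t * 2.0 * pi)
    cube.keyframe_insert(data_path='location', frame=curr_frame)
    curr_frame += fincr
```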

Rotating Sphere

Once we understand how to animate one object property with keyframes, we can animate many. In the image, left, the center of the sphere oscillates between two end points; each point on the sphere rotates by an angle around an axis. The axis itself rotates based on the longitude. We generate this by adding an animation loop to the sphere code from earlier.

As mentioned in another tutorial, animating rotations is complex, requiring multiple representations: vector, quaternion, Euler, and/or matrix data types. If we wish, we can define a custom function, in this case rotating a vector, using the def keyword. Note that, when constructing vectors, an array of values is supplied to the constructor.
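Inside Blender, mathutils.Vector and mathutils.Quaternion handle this; as a self-contained illustration of the def keyword, here is a vector rotation via Rodrigues’ rotation formula (a sketch, not the tutorial’s original function):

```python
from math import cos, sin, sqrt, pi

def rotate(v, axis, angle):
    """Rotate vector v around axis by angle (radians),
    using Rodrigues' rotation formula."""
    ax, ay, az = axis
    # Normalize the axis.
    mag = sqrt(ax * ax + ay * ay + az * az)
    ax, ay, az = ax / mag, ay / mag, az / mag
    x, y, z = v
    c, s = cos(angle), sin(angle)
    d = ax * x + ay * y + az * z          # dot(axis, v)
    cx = ay * z - az * y                  # cross(axis, v)
    cy = az * x - ax * z
    cz = ax * y - ay * x
    return (x * c + cx * s + ax * d * (1.0 - c),
            y * c + cy * s + ay * d * (1.0 - c),
            z * c + cz * s + az * d * (1.0 - c))

# Rotating the x-axis a quarter-turn around z yields the y-axis.
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), pi / 2))
```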

The above classes contain conversions, such as to_euler; we can also change the rotation mode of an object with bpy.context.object.rotation_mode = "QUATERNION" or "AXIS_ANGLE" (the default being Euler angles in XYZ order). This makes it easier to check our Python script against the graph editor. To accomplish this via the GUI, drag out the properties panel on the right side of the 3D View, select View > Properties, or press N.

Left: Change rotation mode in the GUI. Middle: rotation represented as an axis-angle. Right: as a quaternion.

We add a bevel modifier for a bit of polish. A suggested process is to first test modifiers on a single cube via Blender’s GUI, then copy the function calls into the script. When adding multiple modifiers, the order in which they are added is important, and will yield different results.

Metaballs

To suggest a more organic form, we next create metaballs instead of cubes. Like globules of mercury or clay, metaballs cohere when within a given proximity. What differentiates clay from mercury is a scalar, stiffness.

As this discussion on Blender StackExchange clarifies, we don’t create multiple metaballs; rather, we create one, then add elements to that ball.

For variety’s sake, we reverse the cohesive force every so often, making the forcefield around an element repellent instead of attractive. Since cohesion is expensive to calculate, Blender lets us adjust the resolution of this substance in the 3D view and for a render.
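A sketch of the pattern in Blender 2.79’s API (the element positions, radii and stiffness values are assumptions):

```python
import bpy

# One metaball datablock holds many elements.
mb = bpy.data.metaballs.new('MetaBall')
mb.resolution = 0.25          # resolution in the 3D View
mb.render_resolution = 0.1    # finer resolution for renders

obj = bpy.data.objects.new('MetaBall', mb)
bpy.context.scene.objects.link(obj)

# An attractive element ...
elem = mb.elements.new()
elem.co = (0.0, 0.0, 0.0)
elem.radius = 1.0
elem.stiffness = 2.0

# ... and a repellent one: its forcefield pushes the surface away.
neg = mb.elements.new()
neg.co = (1.5, 0.0, 0.0)
neg.use_negative = True
```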

If our ultimate goal is not an animation but geometry, we can select a frame in our animation, adjust the resolution, go to Object > Convert To > Mesh from Meta and then refine it with modifiers and other sculpting tools.

Easing Between Multiple Values

Let’s return to a humble cube to look at more complex easing. Suppose we want this cube to tour the scene, pausing at each stop before changing direction to visit the next. We could create an array of stops and match the number of keyframes to the length of that array.

Blender would take care of the interpolation between each stop for us, since each stop coincides with a keyframe. But what if the number of stops in our array doesn’t match the number of keyframes? For example, we may want to ease between arrays of 8 stops, 3 scales and 6 rotations, all over the same number of keyframes. We can create a routine to ease through an array, converting a global step to a step between the appropriate elements of the array, no matter its length.
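In plain Python, such a routine might look like this (smootherstep is Perlin’s quintic ease; the function names are my own):

```python
from math import floor

def smootherstep(t):
    """Perlin's quintic easing curve; flat at both ends."""
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0)

def ease_array(arr, t):
    """Convert a global step t in 0 .. 1 to an eased value between
    the appropriate neighboring elements, whatever the array's length."""
    if t <= 0.0:
        return arr[0]
    if t >= 1.0:
        return arr[-1]
    scaled = t * (len(arr) - 1)
    i = int(floor(scaled))
    frac = smootherstep(scaled - i)
    return arr[i] + (arr[i + 1] - arr[i]) * frac

stops = [0.0, 10.0, -5.0, 2.0]
print(ease_array(stops, 0.0))   # 0.0: first stop
print(ease_array(stops, 0.5))   # 2.5: halfway between 10.0 and -5.0
print(ease_array(stops, 1.0))   # 2.0: last stop
```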

No matter which easing function we use from one keyframe to the next (in this case, smootherstep for vectors, normalized lerp for quaternions), Blender still eases the frames in between with a default interpolation, Bezier. If we’d like to change the default, we can do so in User Preferences. We can also change interpolation on a case-by-case basis in the Graph Editor. By playing with the ratio of scripted keyframes and the interpolation type of each transformation, we can tweak the cube animation to evince more character.

Changing An Existing Mesh

Instead of building from the ground up, we could take an existing mesh and morph its vertices. This provides an opportunity to introduce Blender’s noise functions.

We could develop this by morphing the Suzanne model into a sphere then deforming it with noise. Depending on which flavor of noise we use, the function may give us a vector, which we could add to a point, or a scalar value, by which the point could be multiplied.

By passing in a scalar value grit to the noise function, we amplify how much we traverse the noise field, leading to a rockier look.

For comparison, left is an example of mesh deformation by adding vectors to vertex locations. Furthermore, the nesting order of loops merits exploration. Unlike the above examples, we loop through our keyframes, then loop through the vertices in an inner loop.

With the noise libraries introduced, it is important to keep in mind Dan Shiffman’s advice from The Nature of Code:

[W]e could just as easily fall into the trap of using Perlin noise as a crutch. How should this object move? Perlin noise! What color should it be? Perlin noise! How fast should it grow? Perlin noise! […] The point is that the rules of your system are defined by you, and the larger your toolbox, the more choices you’ll have as you implement those rules.

Noise allows us to be as surprised by the artwork as our audience; we can quickly generate work beyond our initial intention, then curate and shape the work later. However, the first tool we pull from the toolbox to add variety should not be the last or only tool.

Even more advanced functions for manipulating meshes are available through the BMesh library. For animation, we could consider creating or importing skeletal rigs, composed of bones, then building the mesh around the pose.

Shaders

Solid colors will not suffice for bigger projects, so we next turn to shader scripting. Assuming we are working with the Cycles renderer — it should be indicated on the info bar on the top of Blender — we can add a material made of nodes to an object. By opening the node editor and ensuring that the use nodes button is pressed in the material properties, we can create shaders with a visual programming language. In the 3D View, we can preview the results by switching from Solid to Material shading, or in the properties editor itself by opening the Preview section.

In the abstract this scripting flows from left to right, although we can arrange nodes visually as we wish. On the very left are inputs — such as the position of a vertex in a model; on the very right are outputs, such as a color. The nodes between, inserted with the Add menu, accept inputs on the left and issue an output on the right. These are color-coded by data type with points and/or vectors in purple, colors in yellow, scalars in gray, and so on. In the last example, a Musgrave Texture determines the scale of a brick texture. As with geometric transformations earlier, properties of these nodes can be animated by right-clicking and inserting a keyframe.

If we script a custom shader, we’ll be working in the Open Shading Language. We have to notify Blender of this by ticking the OSL check box in the properties editor under render settings. As the tool-tip parenthetical notes, OSL works only with CPU rendering, meaning it uses the computer’s sequential Central Processing Unit (CPU) to render images rather than the parallel Graphics Processing Unit (GPU). For those who rely on powerful GPUs to speed up the creation of complex Blender files, this can be a big sacrifice.

Blender’s internal text editor contains templates for shaders as well as Python. Since support for OSL is not as widespread as for other shading languages, we’ll likely be scripting without syntax highlighting or proper error checking. Furthermore, not all the features mentioned in the OSL specification linked above are available in Blender.

Unlike other shading languages, such as GLSL, OSL functions can have multiple outputs, which we place in the function signature. Furthermore, all function parameters in the signature should have a default argument.

OSL treats vectors, points and normals as separate, though similar, “point-like” data types. The components of these structures are accessed with array subscripts; for example, vec[1] is equivalent to vec.y in other languages. There is no distinction between 2-, 3- and 4-D vectors; this matters when porting from other shader languages, where texture coordinates are distinguished from world coordinates. Functions like distance and length will return unexpected values when a z component is non-zero. color variables do not store alpha values. As a shortcut, we can assign one value to all components of a vector or color: Color1 = 0.5; stands for Color1 = color(0.5, 0.5, 0.5);.

If we add a Script node in the node editor, select External and open the file, our node will look like so:

Color1 will fill the outside of the circle; Color2, the inside. Next, we add noise for some variety. We can skirt the earlier admonition not to overuse noise by making it optional. When the scalar NoiseWeight is 0.0, we see a regular circle; when it is 1.0, we see globs. The mix function is OSL’s go-to for linear interpolation.
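A sketch of such a shader follows; the parameter names mirror the text, while the thresholds and noise scale are assumptions:

```osl
shader noisy_circle(
    point UV = P,
    color Color1 = color(1.0, 1.0, 1.0),
    color Color2 = color(0.1, 0.1, 0.1),
    float Radius = 0.0625,
    float NoiseWeight = 0.0,
    output color Col = color(0.0, 0.0, 0.0))
{
    // Squared distance from the center of the texture coordinates.
    vector diff = UV - point(0.5, 0.5, 0.0);
    float distsq = dot(diff, diff);

    // Blend the plain distance field with a noise field.
    float n = noise("uperlin", UV * 8.0);
    float step_val = mix(distsq, n * Radius * 2.0, NoiseWeight);

    // Color1 fills the outside of the circle; Color2, the inside.
    Col = step_val > Radius ? Color1 : Color2;
}
```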

This also allows us to animate the noise contribution. The input vector to the noise function is multiplied by the float Jaggedness to increase the detail of the distortion. The lower this value, the smoother the noise.

There are separate functions for different kinds of noise, but for convenience we can use one noise function, to which we supply a string specifying the type, among them: “perlin”, “uperlin”, “simplex”, “usimplex”, “gabor” and “cell”.

We could make this string an input to the shader; however, we’d have to remember to retype it in the Blender node whenever we refresh the script. Cell noise, illustrated below on the sphere, is handy for creating textures similar to those in Minecraft, as it generates a crenelated look.

Creating shapes in shaders is trickier than in canvas APIs, like that of JavaScript. We benefit by treating the shader as a glorified graphing calculator. Desmos Graphing Calculator, Inigo Quilez’s Graph Toy and others are helpful tools for prototyping the shapes created by math functions before we author a full-fledged Blender shader. If we wish to create a grid texture, we can use the modulus function.
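In that prototyping spirit, the modulus logic can be tried out in Python before committing it to OSL (the function name and line thickness are assumptions):

```python
def grid_mask(u, v, divisions, thickness=0.1):
    """Return 1.0 on a grid line, 0.0 inside a cell.
    A plain-Python prototype of the modulus-based shader."""
    su = (u * divisions) % 1.0
    sv = (v * divisions) % 1.0
    return 1.0 if su < thickness or sv < thickness else 0.0

print(grid_mask(0.5, 0.5, 4))     # 1.0: on a grid line
print(grid_mask(0.55, 0.55, 4))   # 0.0: inside a cell
```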

The Divisions number indicates the number of splits we want in the texture. We try an int, color-coded in dark green, to illustrate a disadvantage of using them. In the animation above, the divisions of the texture skip from 1 to 2 to 3; since intermediate values like 2.125, 2.25 and 2.5 are not available, the int cannot be smoothly eased.

OSL’s rotate function accepts the point to be rotated (the UV coordinate in this case), the angle in radians by which to rotate, and, instead of a rotation axis, two point-like values which are subtracted and normalized to create the axis. Since we’re working with 2D texture coordinates in the range (0.0, 0.0) to (1.0, 1.0), we supply the center (0.5, 0.5, 0.0) and the point directly above it on the z-axis (0.5, 0.5, 1.0).

For floats, modulo is more complex than the % operator suggests: fmod may return a negative number, while mod, defined as a - b * floor(a / b), returns only non-negative numbers for a positive b. This distinction is helpful to know, as we often work in ranges of either -1 .. 1 or 0 .. 1. The disadvantage of ints when animating, the lack of intermediate values, can be an advantage elsewhere. floor, ceil, mod and other functions that return integer values given float inputs allow for sharp, high-contrast patterns.
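The difference is easy to verify in Python, whose math.fmod matches OSL’s fmod and whose % operator behaves like OSL’s mod:

```python
from math import floor, fmod

def mod(a, b):
    """OSL-style mod: a - b * floor(a / b)."""
    return a - b * floor(a / b)

print(fmod(-0.25, 1.0))   # -0.25: fmod keeps the sign of a
print(mod(-0.25, 1.0))    #  0.75: mod wraps into 0 .. 1
print(-0.25 % 1.0)        #  0.75: Python's % behaves like mod
```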

As seen in the node graph above, we pass the color of our custom shader on to a more complex shader, the Principled BSDF, to create a glossy look. We also pass the Fac value to the material output’s Displacement. This creates the illusion that the grids are stenciled.

Closures

Up to now, we’ve written shaders which output colors; we sent these to shader nodes with outputs named BSDF (Bidirectional Scattering Distribution Function). These outputs are not fixed values, but rather are functions called closures; their use by OSL is one of the language’s defining features. Blender lists available closures in the manual. Since explanations of these functions are sparse, and there is not always a one-to-one correlation between built-in Blender nodes and these functions, research is typically required. For example, multiple algorithms may exist to simulate the qualities of glass, and may need to work in concert to create a glass shader.

First, we modify our original OSL script, using the built-in distance function to find the 3D distance between two points rather than the 2D distance-squared; furthermore, we add a middle else if clause to give the circle a border. Since many closures require information about the surface normal of our geometry, we add Normal to our inputs. For both this and the UV input, we use the predefined global variables N and P as defaults.

If the step is greater than the radius and the coordinate lies outside the circle, we assign an Oren-Nayar reflectance function to the BSDF output. This model simulates small bumps on a surface that scatter light. We multiply this function by our desired color, a simple gradient made by finding the sine of the coordinate’s y component. Since the range returned by sine is -1 .. 1, we multiply by 0.5, then add 0.5 to shift the range to 0 .. 1.

In the middle case, for a coordinate on the border, we choose to emit light. While we could have multiplied this emission by a color, we instead accept a Temperature in degrees Kelvin and convert it to a range of color realistic for lights with the blackbody function. Lower temperatures, around 1,500K, have an orange hue; higher temperatures, around 20,000K, take on a pale blue. For contrast, we show on the left how this could be accomplished by wiring nodes: a grid of light-emitting squares with a temperature gradient across its surface.

In the third case, when the coordinate lies within the circle, we mix a glass (microfacet_beckmann) and a reflection function. Since the mix function doesn’t accept closures, we blend them manually with the Reflectivity scalar and its complement.

Since glass allows the viewer to see through the translucent surface of the material, we also output a Volume from this shader. We have two basic options: color is either absorbed by the volume or scattered.

The henyey-greenstein function addresses volumetric scattering. As the OSL documentation explains, the argument given to the function represents “the anisotropy factor, ranging from -1 to 1, with positive values indicating predominantly forward-scattering, negative values indicating predominantly back-scattering.”

Were we to use an out-of-the-box Glass BSDF with Volume Scatter, we’d expect a volume tinted the opposite hue of the input scatter color. In the example above, cyan (0.0, 1.0, 1.0) leads to a red-tinted volume. Depending on our goals, we may wish to find the complement of an input color. For this, we have functions like color("hsv", 0.0, 0.0, 1.0); and transformc("rgb", "hsv", clr); (the first string names the source color space, the second the target).
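A quick Python check of the hue-complement logic those OSL conversions mirror, using the standard colorsys module:

```python
import colorsys

def complement(r, g, b):
    """Rotate a color's hue by half the wheel to find its complement."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb((h + 0.5) % 1.0, s, v)

print(complement(0.0, 1.0, 1.0))   # (1.0, 0.0, 0.0): cyan's complement is red
```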

Lastly, while this script de-clutters our node editor by consolidating every option in one spot, the script itself is opaque. For example, it may be easier to let the color assigned to our rough surface be determined outside the shader rather than inside.

Conclusion

The basic scripts in this tutorial are only the beginning. There are numerous add-ons for Blender to facilitate procedural mesh generation and animation. Given the increasing quality of animated gifs, support for 3D models in the browser, and growing communities, many possibilities open up from here.