Blender Geometry Nodes: Create Stylized Scenes

Shahriar Shahrabi
14 min read · Feb 27, 2023

In this post I will cover some of the core Blender Geometry Nodes concepts I used in my stylized scene generator and its procedural geometry: ray casting to create trees, extruding faces to create cliffs, working with splines to get roads, and passing information on to the shader for procedural materials.

You can view the final result on my Sketchfab. I have two demo scenes, an Autumn scene and a Winter one.

As usual you can find the code on my Github: https://github.com/IRCSS/GeometryNodes-Stylized-Scene-Generator

I will focus on the basic core tools, and it is up to you to create what you want out of them.

Spawning

The main way to spawn stuff in your scene with GN is Instance on Points.

The points in this context are a series of positions that have some data associated with them (things like normals and tangents). All geometries in Blender have points in them anyway, so if you pass in a mesh, the node takes its vertices and spawns what you want there.

You can also pass in actual points (a series of points that are unconnected to each other). You can generate these points, for example, using Distribute Points on Faces. You pass a geometry to this node and it generates points for spawning. This is very useful because the density of these points is independent of how high poly the underlying geometry you spawn on is. This means you can model the base mesh (extrude faces around or move them) and the density of the forest you spawn on top stays visually homogeneous.
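To make this concrete, here is a minimal bpy sketch of the whole spawn chain, assuming the Blender 3.x Python API (the group-interface calls changed in 4.0) and a hypothetical object named "Tree" to instance. It is a sketch of the technique, not the exact graph from the project.

```python
import bpy

# Build a tiny scatter group: distribute points on the incoming mesh
# and instance a tree object on them.
ng = bpy.data.node_groups.new("ScatterSketch", "GeometryNodeTree")
ng.inputs.new("NodeSocketGeometry", "Geometry")    # the ground mesh from the modifier
ng.outputs.new("NodeSocketGeometry", "Geometry")

nodes, links = ng.nodes, ng.links
n_in   = nodes.new("NodeGroupInput")
n_out  = nodes.new("NodeGroupOutput")
n_dist = nodes.new("GeometryNodeDistributePointsOnFaces")   # density independent of topology
n_tree = nodes.new("GeometryNodeObjectInfo")                # geometry to spawn
n_inst = nodes.new("GeometryNodeInstanceOnPoints")

n_tree.inputs["Object"].default_value = bpy.data.objects.get("Tree")  # hypothetical object name
n_dist.inputs["Density"].default_value = 2.0

links.new(n_in.outputs["Geometry"], n_dist.inputs["Mesh"])
links.new(n_dist.outputs["Points"], n_inst.inputs["Points"])
links.new(n_tree.outputs["Geometry"], n_inst.inputs["Instance"])
links.new(n_inst.outputs["Instances"], n_out.inputs["Geometry"])
```

Attach the group to a Geometry Nodes modifier on the ground object and the spawned trees follow the surface no matter how you remodel it.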

Moving on to the Selection input. This is a typical input you find in a lot of nodes. You can use it to “cull” some of the spawns. For example, you could use vertex color to only spawn where you paint, or use the normal attribute to avoid spawning trees on steep surfaces and cliffs. You can also use the Random Value node to set a probability for how likely it is for a point to be used as a spawn point.
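Here is a hedged sketch of such a selection mask in bpy (Blender 3.x node identifiers; the 0.8 slope threshold and 60% probability are arbitrary): keep a point only if the surface is near-flat and a random boolean passes.

```python
import bpy

ng = bpy.data.node_groups.new("SelectionSketch", "GeometryNodeTree")
nodes, links = ng.nodes, ng.links

n_normal = nodes.new("GeometryNodeInputNormal")
n_sep    = nodes.new("ShaderNodeSeparateXYZ")
n_flat   = nodes.new("FunctionNodeCompare")        # Normal.Z > 0.8 means "not a cliff"
n_flat.operation = 'GREATER_THAN'
n_flat.inputs[1].default_value = 0.8               # float B
n_rand   = nodes.new("FunctionNodeRandomValue")    # keep roughly 60% of the candidates
n_rand.data_type = 'BOOLEAN'
n_rand.inputs["Probability"].default_value = 0.6
n_and    = nodes.new("FunctionNodeBooleanMath")
n_and.operation = 'AND'

rand_bool = next(s for s in n_rand.outputs if s.type == 'BOOLEAN')
links.new(n_normal.outputs["Normal"], n_sep.inputs["Vector"])
links.new(n_sep.outputs["Z"], n_flat.inputs[0])    # float A
links.new(n_flat.outputs["Result"], n_and.inputs[0])
links.new(rand_bool, n_and.inputs[1])
# n_and.outputs["Boolean"] is the mask to plug into a Selection socket
```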

The Instance input is where you pass in the actual geometry you want to spawn. This can be something you have modeled, or something you have procedurally created in Geometry Nodes. You can also model a bunch of variations of the same thing, pack them in a collection and pass that collection to this input. The node then randomly picks one of the models in the collection for each point.

I used this, for example, for spawning trees that have slightly different shapes. These shapes were created with Geometry Nodes themselves but were very expensive to calculate because they were using booleans. I created them in batches, separated them into individual objects, packed them in a collection and used it for spawning instances.

The last two inputs, Rotation and Scale, are where you orient and scale the instances the way you want. For example, you can align spawned grass to the normal of the surface you spawn it on! To align the spawn with a certain direction, you can use Align Euler to Vector. The direction goes into the Vector input, and the output Rotation goes into the spawn node. You can choose which of the local axes to align with the given input, and if there is a rotation you want to start with you can plug that in too.
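A minimal sketch of that wiring (Blender 3.x names assumed):

```python
import bpy

ng = bpy.data.node_groups.new("AlignSketch", "GeometryNodeTree")
nodes, links = ng.nodes, ng.links

n_normal = nodes.new("GeometryNodeInputNormal")          # evaluated on the spawn points
n_align  = nodes.new("FunctionNodeAlignEulerToVector")
n_align.axis = 'Z'                                       # align the instance's local Z

links.new(n_normal.outputs["Normal"], n_align.inputs["Vector"])
# n_align.outputs["Rotation"] goes into Instance on Points -> Rotation
```

Note that Distribute Points on Faces also offers a ready-made Rotation output that is already aligned to the normal; Align Euler to Vector is the general tool when you want to align with any direction you computed yourself.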

I am spawning guard rails around the paths. The wooden posts are oriented randomly within a certain constraint (the end point stays within a cone). Here is how that is done with the setup described above.

Instances

So you spawn instances; what does that mean? Instancing is an optimization trick. You take the same object, transform it around and render it a bunch of times! This means instances need to be copies of the exact same thing. What you input into Instance on Points is an actual mesh, and you can do whatever you want with it. Once you spawn these, they become instances, which you can’t individually edit anymore. This means you can’t move their vertices around without applying that change to all copies of that object in the world, because everything is a copy of the same blueprint.

What you can do is translate, rotate and scale instances. Still, it is easier to rotate your instances in Instance on Points, because at that point you still have access to all the attributes of the mesh you are spawning on. If you want to align the grass to the surface normal, at this point you have access to the normal info; once the spawning is done, you don’t have access to this info anymore. You can still get to the original mesh’s attributes by using Store Named Attribute (more on this later), or by ray casting or the Sample Nearest node.

Realizing Instances and Spaces

What if you spawned a bunch of instances and you wanted to move their vertices individually, or act on each of them as if they were individual meshes? Then you need to Realize Instances and turn them into actual meshes.

I used this commonly for branches. You spawn a bunch of curves on a curve. If you pack this in a node group, you can theoretically call it again and again on the results to get branches. But you always have to make sure to realize the instances, because the spawning can only happen on real geometry.
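To make the recursion idea concrete, here is a minimal bpy sketch of one "branch level" group, assuming the Blender 3.x API. It instances a stand-in curve on the points of the incoming curves and realizes the result, so the output can be fed straight back into the same group.

```python
import bpy

ng = bpy.data.node_groups.new("BranchLevelSketch", "GeometryNodeTree")
ng.inputs.new("NodeSocketGeometry", "Geometry")          # parent branches as real curves
ng.outputs.new("NodeSocketGeometry", "Geometry")
nodes, links = ng.nodes, ng.links

n_in      = nodes.new("NodeGroupInput")
n_out     = nodes.new("NodeGroupOutput")
n_child   = nodes.new("GeometryNodeCurvePrimitiveLine")  # stand-in for a child branch
n_inst    = nodes.new("GeometryNodeInstanceOnPoints")    # spawn on the parent's points
n_realize = nodes.new("GeometryNodeRealizeInstances")    # back to real geometry

links.new(n_in.outputs["Geometry"], n_inst.inputs["Points"])
links.new(n_child.outputs["Curve"], n_inst.inputs["Instance"])
links.new(n_inst.outputs["Instances"], n_realize.inputs["Geometry"])
links.new(n_realize.outputs["Geometry"], n_out.inputs["Geometry"])
```

Because the output is realized, you can chain this group after itself for the next level of branches.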

In maths a position or a rotation is only defined within a frame of reference. That means you need a coordinate system (forward, right and up directions plus the position of the origin) for your positions to make sense.

The same point will be represented through different numbers when the coordinate systems change.

Before you realize an instance, the Position node gives you back the position in the local space of the mesh. After realizing the instances, it does the same. What has changed is the local space! You go from every vertex being defined in the local space of the object you put into the Instance on Points node, to each being defined in a common coordinate system.

Unfortunately Blender GN neither exposes a matrix data type with an appropriate helper library, nor provides information on the matrix of the object within which your positions are defined. Still, whenever you are doing things like realizing, ray casting, or reading attributes across meshes, keep spaces in mind. It may be that you need to convert between the various local spaces in order to get comparable values.

Named Attributes

A 3D model needs a parametrisation. This is a fancy way of saying you need a way to define what the surface of that model looks like and what properties it has. For our standard triangle-based mesh, this means a series of vertices that have attributes such as position, normal, tangent, vertex color, UVs, etc. A 3D engine like Unity or Unreal keeps its attributes mostly at the level of vertices, because that best conforms to how the GPU processes data.

3D editing applications like Blender or Maya, however, save data in a more generalized way. You can have attributes per face, per edge, per face corner, per instance, per object, per mesh sub-segment, etc. Not to mention that applications like Blender use various forms of parametrisation besides triangulated meshes: ngons, curves, metaballs, volumes and so on.

Geometry Nodes has its own terminology for the concepts introduced above. That is where all the talk of domains and fields, geometry and components comes in. It can get very confusing very fast, but for advanced use of Geometry Nodes it is necessary to understand which domain an attribute is saved on.

In the standard game engine case, the domain is usually at the level of points (vertices). When engines use instancing, they also save information at the level of Instances.

You can store information per vertex at one point in your node tree and use it later to do other things. For example, after spawning trees, I store the distance of each vertex of the ground to the closest tree, and use this later in the material editor for adding drop shadows.

The attribute is saved under the name Proximity to tree. This is used for shading in the material.
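As a rough illustration of that setup (not the exact graph from the project), here is a bpy sketch, assuming the Blender 3.x API, a hypothetical "Trees" object holding the spawned tree geometry, and a hypothetical attribute name proximity_to_tree: Geometry Proximity measures the per-vertex distance to the trees and Store Named Attribute writes it out for the material.

```python
import bpy

ng = bpy.data.node_groups.new("ProximitySketch", "GeometryNodeTree")
ng.inputs.new("NodeSocketGeometry", "Geometry")          # the ground mesh
ng.outputs.new("NodeSocketGeometry", "Geometry")
nodes, links = ng.nodes, ng.links

n_in    = nodes.new("NodeGroupInput")
n_out   = nodes.new("NodeGroupOutput")
n_trees = nodes.new("GeometryNodeObjectInfo")            # stand-in for the spawned trees
n_prox  = nodes.new("GeometryNodeProximity")
n_store = nodes.new("GeometryNodeStoreNamedAttribute")
n_store.data_type = 'FLOAT'
n_store.domain = 'POINT'
n_store.inputs["Name"].default_value = "proximity_to_tree"              # hypothetical name
n_trees.inputs["Object"].default_value = bpy.data.objects.get("Trees")  # hypothetical object

# several inputs share the name "Value"; pick the float one explicitly
val_float = next(s for s in n_store.inputs if s.name == "Value" and s.type == 'VALUE')

links.new(n_trees.outputs["Geometry"], n_prox.inputs[0])  # target geometry to measure against
links.new(n_in.outputs["Geometry"], n_store.inputs["Geometry"])
links.new(n_prox.outputs["Distance"], val_float)
links.new(n_store.outputs["Geometry"], n_out.inputs["Geometry"])
```

In the material, an Attribute node with the same name reads the stored distance back.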

This type of usage is pretty straightforward. The more advanced usage is when you calculate and store attributes on one domain, but you need to use that information on a different domain. The most common use case is something like the one I had with instanced trees. As mentioned before, everything in the scene is procedurally textured. I need a unique ID per instance in the shader to be able to give each individual tree unique attributes like color. The ID is a per-instance thing, however, but I need it per vertex. To do this I simply capture the ID as a named attribute on the Instance domain.

After I realize the instances, I access this information per point and not per instance. Blender interpolates the information so that it is available per point (as mentioned in the documentation). In the case of the Instance to Point domain, the interpolation simply means that every vertex of the instance will hold the exact same copy of all the attributes that were on that instance before it was converted to a real mesh. In our case, the ID will be copied over as a vertex attribute for the shader to use.
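A minimal sketch of that flow in bpy (Blender 3.x API, hypothetical attribute name tree_id): the instance index is stored as a named attribute on the Instance domain, and after Realize Instances every vertex of a given tree carries that tree's ID.

```python
import bpy

ng = bpy.data.node_groups.new("InstanceIdSketch", "GeometryNodeTree")
ng.inputs.new("NodeSocketGeometry", "Geometry")            # the instances from Instance on Points
ng.outputs.new("NodeSocketGeometry", "Geometry")
nodes, links = ng.nodes, ng.links

n_in      = nodes.new("NodeGroupInput")
n_out     = nodes.new("NodeGroupOutput")
n_index   = nodes.new("GeometryNodeInputIndex")            # instance index used as the ID
n_store   = nodes.new("GeometryNodeStoreNamedAttribute")
n_realize = nodes.new("GeometryNodeRealizeInstances")

n_store.data_type = 'INT'
n_store.domain = 'INSTANCE'                                # write on the Instance domain
n_store.inputs["Name"].default_value = "tree_id"           # hypothetical name
val_int = next(s for s in n_store.inputs if s.name == "Value" and s.type == 'INT')

links.new(n_in.outputs["Geometry"], n_store.inputs["Geometry"])
links.new(n_index.outputs["Index"], val_int)
links.new(n_store.outputs["Geometry"], n_realize.inputs["Geometry"])
links.new(n_realize.outputs["Geometry"], n_out.inputs["Geometry"])
```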

A very useful tool for knowing what is going on is the spreadsheet together with a Viewer node. You can see below how I use the Viewer node to confirm that I do indeed have the correct ID information saved per instance before realizing it, and how that info is copied over per vertex after realizing the trees.

This is interpolating between different domains on the same geometry (topology). If you wish to access attributes of a different topology, you either need to ray cast, or use Sample Nearest in combination with the Sample Index node to read whatever information of whatever geometry on whatever domain you wish. A common example of where you could use this is stylized tree shading: for each leaf, instead of its own noisy normals, you can read the normals of a bounding sphere and use those for shading.
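Here is a hedged sketch of that idea (Blender 3.4+ node names, hypothetical "LeafHull" object): Sample Nearest finds, for each leaf point, the index of the closest point on the hull, and Sample Index reads the hull normal at that index.

```python
import bpy

ng = bpy.data.node_groups.new("HullNormalSketch", "GeometryNodeTree")
nodes, links = ng.nodes, ng.links

n_hull    = nodes.new("GeometryNodeObjectInfo")       # the bounding sphere / hull
n_nearest = nodes.new("GeometryNodeSampleNearest")    # closest hull point per leaf point
n_sample  = nodes.new("GeometryNodeSampleIndex")      # read the hull normal at that index
n_normal  = nodes.new("GeometryNodeInputNormal")
n_sample.data_type = 'FLOAT_VECTOR'
n_hull.inputs["Object"].default_value = bpy.data.objects.get("LeafHull")  # hypothetical

val_vec = next(s for s in n_sample.inputs if s.name == "Value" and s.type == 'VECTOR')
links.new(n_hull.outputs["Geometry"], n_nearest.inputs["Geometry"])
links.new(n_hull.outputs["Geometry"], n_sample.inputs["Geometry"])
links.new(n_normal.outputs["Normal"], val_vec)        # evaluated on the hull geometry
links.new(n_nearest.outputs["Index"], n_sample.inputs["Index"])
# the Value output of n_sample now carries the hull normal; store it as a
# named attribute on the leaves for the shader to pick up
```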

Raycasting

For my trees I had the works of Ira Sluyterman van Langeweyde as references. One of the things about her trees is that the branches seem to grow only as far as an outer hull.

Visual reference, work by Ira Sluyterman van Langeweyde

Doing the branches themselves is quite simple. As mentioned before, you spawn curves, deform them a bit, turn them into 3D meshes by using a circle profile, and adjust the radius so that they get thinner the further you go along the curve.
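A quick sketch of that branch-to-mesh step, assuming Blender 3.x names and an arbitrary base radius of 0.1: the radius tapers with the spline Factor, then a circle profile is swept along the curve with Curve to Mesh.

```python
import bpy

ng = bpy.data.node_groups.new("BranchMeshSketch", "GeometryNodeTree")
ng.inputs.new("NodeSocketGeometry", "Geometry")      # the branch curves
ng.outputs.new("NodeSocketGeometry", "Geometry")
nodes, links = ng.nodes, ng.links

n_in     = nodes.new("NodeGroupInput")
n_out    = nodes.new("NodeGroupOutput")
n_param  = nodes.new("GeometryNodeSplineParameter")  # Factor: 0 at the root, 1 at the tip
n_taper  = nodes.new("ShaderNodeMath")               # 1 - Factor
n_taper.operation = 'SUBTRACT'
n_taper.inputs[0].default_value = 1.0
n_scale  = nodes.new("ShaderNodeMath")               # radius = 0.1 * (1 - Factor)
n_scale.operation = 'MULTIPLY'
n_scale.inputs[1].default_value = 0.1
n_radius = nodes.new("GeometryNodeSetCurveRadius")
n_circle = nodes.new("GeometryNodeCurvePrimitiveCircle")
n_mesh   = nodes.new("GeometryNodeCurveToMesh")

links.new(n_param.outputs["Factor"], n_taper.inputs[1])
links.new(n_taper.outputs["Value"], n_scale.inputs[0])
links.new(n_in.outputs["Geometry"], n_radius.inputs["Curve"])
links.new(n_scale.outputs["Value"], n_radius.inputs["Radius"])
links.new(n_radius.outputs["Curve"], n_mesh.inputs["Curve"])
links.new(n_circle.outputs["Curve"], n_mesh.inputs["Profile Curve"])
links.new(n_mesh.outputs["Mesh"], n_out.inputs["Geometry"])
```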

To get the branches to go up to the outer hull of the tree and no further, you need to ray cast! You shoot a ray from some point along your branch and find out where it meets the outer hull. Then you spawn a curve that goes from your starting position up to the hit point on the hull.

Here for example I am ray casting from the snow layer at the top to the ground in order to create the trunks for the bushes.

The Ray Cast node (and some others) can be a bit confusing to use. These nodes are context dependent. The Target Geometry input is the mesh you want to ray cast against; in my case that is the ground below the bush. So where is the source mesh?

The node being context dependent means that the source mesh is derived from wherever you use this node. For better readability, I use the Capture Attribute node to save the hit positions right after the ray cast is done. The Capture Attribute node needs a Geometry input, and what you plug into it is what is actually used as the source mesh for ray casting. It is confusing!
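Here is roughly what that looks like scripted with bpy (Blender 3.x API; "Ground" is a hypothetical object name). The geometry plugged into Capture Attribute is what the rays are shot from, while the ground goes into Target Geometry.

```python
import bpy

ng = bpy.data.node_groups.new("RaycastSketch", "GeometryNodeTree")
ng.inputs.new("NodeSocketGeometry", "Geometry")      # the snow layer: this is the "source"
ng.outputs.new("NodeSocketGeometry", "Geometry")
nodes, links = ng.nodes, ng.links

n_in      = nodes.new("NodeGroupInput")
n_out     = nodes.new("NodeGroupOutput")
n_ground  = nodes.new("GeometryNodeObjectInfo")
n_ray     = nodes.new("GeometryNodeRaycast")
n_capture = nodes.new("GeometryNodeCaptureAttribute")
n_capture.data_type = 'FLOAT_VECTOR'
n_capture.domain = 'POINT'

n_ground.inputs["Object"].default_value = bpy.data.objects.get("Ground")  # hypothetical
n_ray.inputs["Ray Direction"].default_value = (0.0, 0.0, -1.0)            # straight down
n_ray.inputs["Ray Length"].default_value = 100.0

# Source Position is left unconnected, so it defaults to the position of
# whatever geometry the field is evaluated on, i.e. the captured geometry.
val_vec = next(s for s in n_capture.inputs if s.name == "Value" and s.type == 'VECTOR')
links.new(n_ground.outputs["Geometry"], n_ray.inputs["Target Geometry"])
links.new(n_in.outputs["Geometry"], n_capture.inputs["Geometry"])
links.new(n_ray.outputs["Hit Position"], val_vec)
links.new(n_capture.outputs["Geometry"], n_out.inputs["Geometry"])
# n_capture's Attribute output now holds, per point of the snow layer,
# the position on the ground directly below it.
```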

I use ray casting for a bunch of stuff, including placing objects, or skin wrapping the road mesh I create to the surface of the ground.

Extruding Faces

As shown in the video, I control the whole scene by extruding simple faces to create cliffs and hills. Then trees and bushes are spawned on top of that. Extruding in Geometry Nodes is very easy. You simply use the Extrude Mesh node.

The main question is: what should you extrude? You can extrude all faces, but then you might get a lot of unneeded geometry. In my case, I wanted to create the cliff side of the hill, so I actually only wanted to extrude the outer edge of the hill down, to a certain capped length.

To do this, you need to select the non-manifold edges of the mesh and only extrude those edges. A useful trick for choosing the boundary edges is to check the number of faces neighbouring each edge. If an edge has only one face neighbouring it, then it is a boundary edge and should be extruded to create the cliff side.
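A minimal sketch of that selection, assuming Blender 3.x names (the downward direction and the depth of 3.0 are arbitrary): Edge Neighbors gives the face count per edge, a Compare node keeps the edges with exactly one neighbouring face, and Extrude Mesh in Edges mode pushes only those edges straight down.

```python
import bpy

ng = bpy.data.node_groups.new("CliffSketch", "GeometryNodeTree")
ng.inputs.new("NodeSocketGeometry", "Geometry")
ng.outputs.new("NodeSocketGeometry", "Geometry")
nodes, links = ng.nodes, ng.links

n_in   = nodes.new("NodeGroupInput")
n_out  = nodes.new("NodeGroupOutput")
n_nbrs = nodes.new("GeometryNodeInputMeshEdgeNeighbors")   # faces touching each edge
n_eq   = nodes.new("FunctionNodeCompare")                  # == 1 means boundary edge
n_eq.data_type = 'INT'
n_eq.operation = 'EQUAL'
n_down = nodes.new("FunctionNodeInputVector")              # extrude direction
n_down.vector = (0.0, 0.0, -1.0)
n_ext  = nodes.new("GeometryNodeExtrudeMesh")
n_ext.mode = 'EDGES'
n_ext.inputs["Offset Scale"].default_value = 3.0           # cliff depth

a_int = next(s for s in n_eq.inputs if s.name == "A" and s.type == 'INT')
b_int = next(s for s in n_eq.inputs if s.name == "B" and s.type == 'INT')
b_int.default_value = 1

links.new(n_nbrs.outputs["Face Count"], a_int)
links.new(n_in.outputs["Geometry"], n_ext.inputs["Mesh"])
links.new(n_eq.outputs["Result"], n_ext.inputs["Selection"])
links.new(n_down.outputs["Vector"], n_ext.inputs["Offset"])
links.new(n_ext.outputs["Mesh"], n_out.inputs["Geometry"])
```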

Connecting the Subcurves in a Spline

I wanted to have a simple drawing tool to draw fences. The idea here is that you take a curve, resample it so that there are control points at regular intervals, spawn a pole on each control point, then connect the control points. So how do you go about doing that?

The resampling part is simple: just take the Resample Curve node, set it to the Length option, and you are good to go. The tricky part is connecting the control points.

Control points are saved as a sequence. So in theory you should be able to sample the position of point 1 and the position of point 2, and spawn a line that has its start at 1 and its end at 2.

Of course we can’t do that one pair at a time in GN, because there are no loops (yet). What we can do is spawn a line on each point (Instance on Points), except the very last one. For each point we read the position of the point and set it as the start of a line, and the end is the position of the control point at an index one higher than the one we are currently spawning on!

Here we use a very useful node called Sample Index. With it you can sample whatever attribute on whatever domain at whatever index. Remember that the index here is context dependent: since Instance on Points runs per point, the index will be the index of the control point you are spawning a line on. The rest is simple maths.
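As a sketch of just that field (Blender 3.4+ names; the spline-boundary guard discussed next is left out): Sample Index reads the Position attribute of the resampled curve at Index + 1, which you can then use for the end point and orientation of the connecting log.

```python
import bpy

ng = bpy.data.node_groups.new("NextPointSketch", "GeometryNodeTree")
ng.inputs.new("NodeSocketGeometry", "Geometry")     # the resampled curve
nodes, links = ng.nodes, ng.links

n_in     = nodes.new("NodeGroupInput")
n_index  = nodes.new("GeometryNodeInputIndex")
n_add    = nodes.new("ShaderNodeMath")              # current index + 1
n_add.operation = 'ADD'
n_add.inputs[1].default_value = 1.0
n_pos    = nodes.new("GeometryNodeInputPosition")
n_sample = nodes.new("GeometryNodeSampleIndex")     # Position at Index + 1
n_sample.data_type = 'FLOAT_VECTOR'
n_sample.domain = 'POINT'

val_vec = next(s for s in n_sample.inputs if s.name == "Value" and s.type == 'VECTOR')
links.new(n_in.outputs["Geometry"], n_sample.inputs["Geometry"])
links.new(n_pos.outputs["Position"], val_vec)
links.new(n_index.outputs["Index"], n_add.inputs[0])
links.new(n_add.outputs["Value"], n_sample.inputs["Index"])
# the Value output of n_sample is the position of the next control point
```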

There is a problem with the above setup, which is already solved in the node setup pictured. If you draw several splines within the same curve object, Blender packs each unconnected spline as a different sub-segment, but all of them are still part of the same curve. Their indices simply continue from one to the other. This means in the example posted, you won’t end at 4; the beginning of the next spline will be 5! We have to somehow make sure we don’t connect poles that are not part of the same sub-segment.

To the rescue comes the Spline Length node. Its Point Count output tells you how many control points exist in that sub-segment. So if you are on control point 4 and you try to read the position of control point 5, the point count can inform you that there are only 4 points in this sub-segment, so you don’t need to spawn a connecting log there.

Procedural Texturing

I just wanted to highlight one aspect of the procedural texturing: using an empty and its local space in the shader to create an area where a certain thing can happen. I use this to create the farmlands on the ground.

In Blender you can reference any object in your shader. You can also use the Texture Coordinate node and its Object output to know where the fragment you are rendering right now is positioned in the object space of the object you referenced. You can use this coordinate system for procedural textures or any shader effect you want (cutting objects using transparency, for example).

This is very useful, because working in the local space means you can move, rotate and scale that object and practically have a decal effect you can move around in your scene.
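As a small illustration of the idea (not the project's actual material; "FarmlandEmpty" is a hypothetical object name): the Texture Coordinate node references an empty, and its Object output gives each fragment's position in that empty's local space.

```python
import bpy

mat = bpy.data.materials.new("FarmlandSketch")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links

n_coord = nodes.new("ShaderNodeTexCoord")
n_coord.object = bpy.data.objects.get("FarmlandEmpty")   # move/rotate/scale this empty
n_noise = nodes.new("ShaderNodeTexNoise")                # stand-in for the farmland pattern

# positions are expressed in the empty's local space, so transforming the
# empty slides the pattern around like a decal
links.new(n_coord.outputs["Object"], n_noise.inputs["Vector"])
```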

Everything in the scene is textured procedurally. This means a lot of maths and a bunch of passing data from Geometry Nodes to the material system. All the colors are defined through a single material node group, which means there is a central place where you can easily adjust the entire scene’s color and feel by tweaking 5–6 key colors. Though that would all make for an interesting read, it is a topic for another time!

Thanks for reading. You can follow me on various socials, head to my website https://ircss.github.io/ for more info.
