Building Your Assets, in a Game Engine

Jack Spira
9 min read · Nov 14, 2019

Hi, I’m Jack Spira and I exist here.

I’ve been making a game engine in my spare time for a game called The Clockwork. I’ve been loving it, but recently have decided that I need to rework the build process. Let me walk you through that here, show you ways to hook into Cargo, Rust’s build system, and the pros and cons of my approach.


First off, the goals: right now we want to compile shaders and we want to pack our sprites, if they’ve been changed or added to. In the future, we’ll want to do more things like compressing audio and managing models, so we also want to keep an eye on our code’s structure so that we can extend it with some ease. Most of the code that you’re going to be seeing was already written in some form or another before the build pipeline was written. I think this is the way it should be done — this is classic “premature optimization is the root of all evil” stuff.

This is what we want to avoid — we’re doing a lot of work at runtime!

Before I started this journey, I compiled my shaders and I packed my sprites at runtime (gasp! shock!) because it was easier to implement. In fact, I would still be doing that, but the number of shaders and the number of sprites has grown to the point where I get a five-second or so delay on startup, which has slowly become enough of an annoyance that it’s time to fix it.

So how do we fix it? It’s simple — instead of doing that work at runtime, we do that work at compile time. This is what a “build system” is — sometimes people call this an “asset manager”.

We could make it a separate executable which we have to remember to run before we compile, but then we’ll probably forget to run it, and running multiple applications tends to scare off non-programmers. In my experience, artists and designers just don’t have the time to be learning our engineering tools, and they’re right not to.

As a Rust project, we use Cargo, the built-in package manager/build system, and wouldn’t you know it, if you include a build.rs file in your project, Cargo will automagically compile and run it for you during the compilation process, but not include it in your final executable. Absolutely perfect! This is going to be easy. (Narrator: it wasn’t easy)

This is what we want! All we have to do at runtime is read the assets in, and, uh, the whole video game.

Factoring Out The Build System with Cargo

We immediately run into problems, because, of course we do. Our rendering backend needs our shaders in SPIR-V, which I am led to believe is an IR shader language, but which sounds like a type of Axe body spray; we, however, write our shaders in GLSL. We have a compiler, which we got through crates.io, the package registry for Rust, but those dependencies are only linked to the actual program, and the build.rs which will become the basis for our build script can’t link to our actual program directly (it’s effectively its own executable). Luckily, Cargo saves us here. We edit our Cargo.toml to add a [build-dependencies] header, and the dependencies we’ll be using throughout the build process:
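The screenshot of the dependency list is gone. The [build-dependencies] header is Cargo’s real mechanism for build-script-only dependencies, but the specific crates and versions below are my guesses, reconstructed from the tasks described in this post:

```toml
[build-dependencies]
# Crate names/versions are guesses based on the work described below.
shaderc = "0.6"        # GLSL -> SPIR-V compilation
image = "0.22"         # reading/writing PNGs for the sprite packer
serde = { version = "1", features = ["derive"] }
serde_yaml = "0.8"     # the metadata manifest
log = "0.4"
log4rs = "0.8"         # logging from the build script
```

Anything listed here is available to the build script but is not linked into the game itself.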

I’ll go over some of these dependencies, but a lot of these are specific to my own needs.

Also, by default, Cargo will be looking for a build.rs file in your project’s main directory, next to src and Cargo.toml. To me, this is an awkward place, so while we’re in our Cargo.toml, add this to your [package] section:
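The snippet screenshot is missing; relocating the script uses Cargo’s documented build key under [package]. The exact path below is my reconstruction from the folder layout described next:

```toml
[package]
# ...name, version, edition, etc...
build = "build/build_scripts/main.rs"
```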

Ah yes, block2. The greatest of the blocks.

Note that we’re in a sub-directory of the folder build, which is inside our project’s main directory. I'll show you why that's useful in a second, but for now, build will be a folder empty of anything but build_scripts. My project directory now looks like this:
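The directory screenshot didn’t survive; here’s a sketch of the layout being described (the project name and any file names other than build, build_scripts, src, and Cargo.toml are illustrative):

```
my_game/
├── assets/              # source art + generated output
├── build/
│   └── build_scripts/
│       └── main.rs      # the relocated build script
├── src/
└── Cargo.toml
```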

The “assets” folder will have both the source files and our generated textures!

Notice that we have a build folder now! Okay, so we're off to a good start! We have a nice looking folder to host all our build code, and we have a way to add any code dependencies. Nice!

Creating the Build Folder

Now, time to actually write some code. We have three “routines” we need to go through right now, though there will be more later. Here’s the main of our build script written out, showing you exactly what we'll be doing:
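The code screenshot didn’t make it, so here is a minimal sketch of what a three-routine build-script main could look like. Every name here is a placeholder of mine, not the author’s code, and the bodies are stubs:

```rust
// In the real script these would live in sibling files,
// pulled into scope with `mod logging;`, `mod shaders;`, etc.

fn initiate_logging() -> Result<(), String> {
    // set up a file-based logger (the console isn't available here)
    Ok(())
}

fn compile_shaders() -> Result<(), String> {
    // walk the shader directory, compile changed GLSL to SPIR-V
    Ok(())
}

fn pack_textures() -> Result<(), String> {
    // repack the sprite sheet if any source PNG changed
    Ok(())
}

fn main() {
    initiate_logging().expect("logging setup failed");
    compile_shaders().expect("shader compilation failed");
    pack_textures().expect("texture packing failed");
}
```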

Note how simple this function is — like all mains, barring trivial exceptions, you want a main to be extraordinarily simple. This is partially a style question, but it goes hand-in-hand with code modularity.

Luckily, rustc will see our file like any other main file (as far as I can tell, Cargo is just compiling it as another executable), so we can make other files as long as we bring them into scope using mod!

Here’s what our build_scripts look like:

Immediately, as you write code in here, you’ll run into a problem. There’s no way for a build script to print to the console, and YMMV with any debuggers (LLDB wouldn’t pick up any breakpoints for me). So we need a way for the program to talk back to the programmer. For that, as you may have noticed from the Cargo.toml, we'll be using log4rs, a surprisingly complex logging crate. I add a function called initiate_logging and it looks like this:

As with all of this, there may be a better way to handle this, but it’s working well enough that it can stay.

With that, we can now log with standard log macros: info! error! and the like, and it will print to the file we indicated with LOG_LOCATION. In our case, that's a build.log file under the build super-folder.

Final Challenges

Okay, we’re getting there. I ran into three more challenges while I was making my build scripts. Let’s start with the most important: sharing code between your build pipeline and your actual program.

Sharing Code

In our sprite packer, we create a sprite_sheet.png, which is just all our separate PNGs mashed up together, and some metadata in a separate yaml file. We need to create that metadata struct in the script and we need to read it at game startup — so the game and the build script are going to need to share the code. See the diagrams above for the difference!

The solution I’ve found is only so-so. I suspect a multi-project workspace might be the better solution, but for now, here’s what I did:

I ran the following:

Run in your favorite shell, but only if it’s bash

Now we have, essentially, a place to put any shared structs or procedures between our build script and our actual program. In this case, we have two structs which need to be shared, so we simply declare them in a file (or refactor into cleaner individual files/folders) with the pub keyword. Here's what my "shared" project folder looks like:

You can also see the build.log here! Notice too, we made *_build_shared a lib, not a binary.

Now we need to get our build script and our game to actually talk to this crate. For that, navigate to the main program's Cargo.toml and add in a path to your shared library. If, like me, you're used to using Cargo.toml exclusively to grab crates off crates.io, don't worry, the process is very simple:
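The screenshot is missing; a path dependency in Cargo.toml looks like the sketch below. The crate name and path are placeholders; point them at wherever your shared library actually lives. Note that the build script needs the same crate declared separately under [build-dependencies], which is exactly the duplication complained about next:

```toml
[dependencies]
# Name and path are hypothetical.
game_build_shared = { path = "build/game_build_shared" }

[build-dependencies]
game_build_shared = { path = "build/game_build_shared" }
```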

I wish there was a way in the cargo ecosystem to define shared dependencies between main and build scripts, but for now, this is fine.

Don’t Actually Run Too Much

Okay, so we wanted to reduce waiting time, but we’ve actually made no real difference in our wait times. Previously, we compiled on game startup — now, we compile on build time. Since we generally build the game, run it, edit code, build/run, etc, we’re still doing roughly the same amount of work. We want to reduce that workload even more.

The way to do this is simple: we only want to recompile shaders, repack textures, or do any other work, if any asset files have been added or changed.

Cargo has a built-in way to handle this, but from my experience, it’s janky and very black-boxed. I don’t recommend it personally, especially since Rust’s std ships with more than enough for you to write your own simple memoization.
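For reference, the built-in mechanism being dismissed here is the `cargo:rerun-if-changed` directive: a build script prints specially formatted lines to stdout, and Cargo reruns the script only when the named paths change. A minimal sketch (the asset path is hypothetical):

```rust
// Build the directive string Cargo understands.
fn rerun_directive(path: &str) -> String {
    format!("cargo:rerun-if-changed={}", path)
}

fn main() {
    // Emitted from the build script, this tells Cargo to rerun it
    // only when this particular file changes.
    println!("{}", rerun_directive("assets/shaders/sprite.vert"));
}
```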

Essentially, we’re going to serialize some SerializedMetaData about our work and save it to a manifest.yaml file. We'll be saving two things: file creation date, and file modification date. This means that if a file is modified, we'll be able to tell if our old generated data (the compiled shaders, packed textures) is still valid.

Our manifest.yaml will just be a serialized HashMap<String, SerializedMetaData>. The String key is just the name of each file. SerializedMetaData is a simple struct which looks like this:
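The struct screenshot is missing. The field names below are my guesses (the real struct would also derive serde’s Serialize/Deserialize so it can round-trip through the YAML manifest), but the standard library really does hand us both dates via fs::metadata:

```rust
use std::fs;
use std::time::SystemTime;

// Sketch with guessed fields; the real version also derives
// serde's Serialize/Deserialize for the YAML manifest.
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct SerializedMetaData {
    pub created: SystemTime,
    pub modified: SystemTime,
}

// Pull both timestamps for a file off the filesystem.
fn metadata_for(path: &str) -> std::io::Result<SerializedMetaData> {
    let meta = fs::metadata(path)?;
    Ok(SerializedMetaData {
        created: meta.created()?,
        modified: meta.modified()?,
    })
}
```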

Every time we try to build, we do two passes over our textures. (We do something similar for our shaders, but slightly differently: it’s possible to recompile a single shader and leave the rest alone, but if we add a single new texture, we need to repack all the textures. This was very difficult to get Cargo to understand using the janky built-in methods.)

First, we check if the texture exists in our manifest (if it doesn’t, we repack), and if it does and the creation date or modification date have changed, we repack. If none of those are true, we don’t repack.
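That decision procedure can be sketched with std’s HashMap; the metadata struct here is a stand-in with guessed fields:

```rust
use std::collections::HashMap;
use std::time::SystemTime;

// Stand-in for the manifest entry; field names are my guesses.
#[derive(PartialEq, Clone, Copy)]
struct SerializedMetaData {
    created: SystemTime,
    modified: SystemTime,
}

// A texture needs repacking if it's new, or if either date changed.
fn needs_repack(
    manifest: &HashMap<String, SerializedMetaData>,
    name: &str,
    current: &SerializedMetaData,
) -> bool {
    match manifest.get(name) {
        None => true,                // not in the manifest: new file, repack
        Some(old) => old != current, // dates differ: file changed, repack
    }
}
```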

Our logging system we set up above helps a lot here — we log every time we repack, and, equally importantly, every time we don’t repack. It’s just as important to note when you don’t do work, as this is essential for bug fixing later on.

Here’s what our logs look like in a build where we do some work:

And here’s what a build log looks like with no work being done at all:

Nice! This means that even though we’ll be compiling a lot, as long as our assets don’t change, we’ll be paying a tiny cost (essentially, reading the names of files). This is the meat of the savings we’ll be getting from switching to a build script. If you’re not seeing any time savings, make sure to check your logs! They’ll tell you what’s happening!

However…when you do check those logs, you might notice one very weird thing.

Getting cargo check To Stop Building

We can now build things at compile time, but we actually compile…a lot. In fact, we’re constantly compiling! If you’re like me and you’re running Rust Analyzer or the Rust Language Server in your text editor of choice, it’s probably running cargo check constantly, which, you guessed it, triggers the build scripts. Because we’re doing some basic IO in our build script, even though it’s not much, if we’re running it constantly, we’ll be getting some serious CPU churn.

At the top of my build script, I add this glorious piece of code:
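The screenshot is gone, but from the description that follows, the trick is an early return keyed off an environment variable. BUILD_ENABLED is the author’s actual variable name; the rest is my sketch:

```rust
use std::env;

// True only when the wrapper script explicitly opts in;
// cargo check never sets this, so it skips the asset work.
fn build_enabled() -> bool {
    env::var("BUILD_ENABLED").map(|v| v == "true").unwrap_or(false)
}

fn main() {
    if !build_enabled() {
        return; // bail out before doing any expensive asset work
    }
    // ...shader compilation, texture packing, etc...
}
```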

This is not a good solution, but as best as I can tell, it’s the only one. Luckily, I invoke all my builds using a shell script (I need to specify some graphics features at compile time as well) so adding this to the shell script is simple, but if you’re the type to open a terminal and type cargo run every time you need to, this is going to make that more difficult.

My shell script for building looks like this (with some specifics to my game stripped out for clarity):
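The script screenshot didn’t make it; stripped down, the wrapper amounts to setting the variable and then invoking Cargo (a sketch, minus the author’s graphics-specific flags):

```shell
#!/usr/bin/env bash
# Opt in to the asset work, then build and run as usual.
export BUILD_ENABLED=true
cargo run
```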

The glory of environment variables!

cargo check won't be setting BUILD_ENABLED to true (only we will), so our build script will early-exit on cargo check. I hope in the future the Cargo team can find the time to add a flag to cargo check so it doesn't invoke the build script.

Finishing Up

And that’s it! That’s how I build things in my game engine. There are still some things I wish were better — I really dislike the final trick we have to do to stop cargo check, and I wish that I had a better memoization tool. I did try hashing before, but the speed tradeoff was just too great. Finally, all of this rests on Cargo, and sometimes, Cargo just decides to not run the build script if it doesn’t think there’s been a change. This is, as you might imagine, quite frustrating! I’m not exactly sure when those situations happen, but if I find out, I’ll give an update.

Thanks so much for reading! You can follow me on twitter for other gamedev stuff!

Originally published at




I make video games and love to talk about design and engineering! All my opinions are strongly typed.