Clayxels, my journey into making a game-designer-friendly creation tool.

Andrea Interguglielmi
7 min read · Dec 9, 2019


Hi, my name is Andrea Interguglielmi and I make games. This lengthy post is about my quest to come up with an intuitive tool to make 3d assets, quick and dirty. I’ll explain how this journey got me into something more convoluted than the 3d apps I tried to avoid in the first place, but also how it all led to an unexpected result that I call Clayxels.

Clayxels displayed inside Unity’s editor.

Clayxels (a made-up word) are very tiny voxels that can render clay-like volumetric solids, either as tiny dots in camera view or as a plain, standard polygon mesh. This stuff has been around for ages in various forms and contexts, from early-2000s demoscenes to metaballs, ray-marching, signed distance fields, marching cubes, and a whole lot of other exotic names. At the heart of all these techniques are mathematically defined solids used to create shapes and seamlessly blend them together. And best of all, no polygons are strictly needed to display them.
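To make that a bit more concrete, here is a tiny sketch of the kind of math involved, written as plain C# for readability (my own minimal rendition, not the actual Clayxels code): two classic signed distance functions and the smooth-minimum blend that lets solids melt into each other instead of intersecting with hard edges.

```csharp
using UnityEngine;

// Minimal sketch of the math behind these solids (not the actual Clayxels code).
// A signed distance function (SDF) returns how far a point is from a surface:
// negative inside, zero on the surface, positive outside.
public static class SdfSketch
{
    // Distance from point p to a sphere of given radius centered at the origin.
    public static float Sphere(Vector3 p, float radius)
    {
        return p.magnitude - radius;
    }

    // Distance from point p to an axis-aligned box with half-extents b.
    public static float Box(Vector3 p, Vector3 b)
    {
        Vector3 q = new Vector3(Mathf.Abs(p.x) - b.x, Mathf.Abs(p.y) - b.y, Mathf.Abs(p.z) - b.z);
        Vector3 outside = Vector3.Max(q, Vector3.zero);
        float inside = Mathf.Min(Mathf.Max(q.x, Mathf.Max(q.y, q.z)), 0.0f);
        return outside.magnitude + inside;
    }

    // Smooth minimum: blends two distances so the solids melt together
    // instead of meeting with a hard edge. k controls the blend radius.
    public static float SmoothUnion(float d1, float d2, float k)
    {
        float h = Mathf.Clamp01(0.5f + 0.5f * (d2 - d1) / k);
        return Mathf.Lerp(d2, d1, h) - k * h * (1.0f - h);
    }
}
```

In a real renderer this runs on the GPU for huge numbers of points, but the principle stays the same: the whole scene is just a function that returns a distance.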
As of late, these techniques are getting back into the spotlight. Some very talented folks like Inigo Quilez, Alex Evans, and Sebastian Aaltonen are leading the way, trying to cram these crazy complex computations into consumer hardware with the ultimate goal of making games.
I’m no mathematician, so I get by reading a lot of what these folks kindly share, as well as by trying to decipher those few papers that are written in human-comprehensible English.

Media Molecule pushing voxels to unthinkable quality with Dreams on PS4 hardware.

My results with Clayxels so far seem very encouraging, which is why I decided to share some of the process that led to where I am now. I think this tech could end up being a nice tool for quick and dirty asset creation, as well as unlock some cool clay-like game-design possibilities at run-time. As for the rather bumpy road that led to Clayxels, I’ll need to roll back to where my pursuit of the ideal custom-tailored 3d app began, with a little tool I call Galumph.

Speed doodle in Galumph, my 3d paint app.

It’s 2017 and I’m working with a company that has a nice pool of talent coming from 2D fine art; problem is, they want to make 3D games.
That’s when I notice some amazing VR apps that let 2d artists make 3d art without much technical knowledge at all. Namely, Google’s Tiltbrush, Oculus Medium, and AnimVR. These tools use a headset and spatial controllers to let artists draw free-handed, just as they would with familiar painterly gestures, while in fact crafting 3d assets.
The idea proved effective: our lead artist made some really cool assets for the game without any 3d modeling knowledge. But I didn’t like having a production completely locked behind VR tools and hardware. I wanted a fire-exit app to make some of those 3d sketches on a desktop without VR, just in case. So after some head scratching, I came up with an effective workflow to make 3d drawings, in 2d.

Galumph sketch, made after I got decent at using my own software (exported and visualized in Sketchfab).

That production didn’t last long before I was back on my own, working as a solo indie dev. But Galumph kept occupying my weekends. Drawing a 3d model with Galumph is simple, and yet kinda weird. It works like this: you trace a stroke, rotate the view, specify a depth in 3d by snapping to any of your existing strokes or to other solids used for reference only, and then repeat from there. These reference primitives, the ones you draw on top of, became crucial for drawing complex sketches. I wanted to push this part further, and to do that, simple solids like spheres and cubes were not enough. I needed more complex shapes, and I did not want to use external meshes or, worse, come up with yet another polygon sculpting tool.
So I ended up hacking together something based on that ray-marching technique I mentioned earlier. My code became very complex very soon, but the user interface stayed intuitive, and the workflow to define these reference shapes and then draw on them was simple and playful.
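For the curious, this is roughly what that technique boils down to; a generic sphere-tracing sketch in C#, not Galumph’s actual C++ code: step along each view ray by the distance to the nearest solid until you hit a surface or give up.

```csharp
using UnityEngine;

// Generic sphere-tracing sketch (not Galumph's actual code): for one view ray,
// step forward by the distance to the nearest solid until a surface is hit.
// In a real renderer this runs once per pixel, on the GPU.
public static class RayMarchSketch
{
    public delegate float SceneSdf(Vector3 p); // distance to the closest solid in the scene

    public static bool March(Vector3 origin, Vector3 dir, SceneSdf scene,
                             out Vector3 hit, int maxSteps = 128, float maxDist = 100.0f)
    {
        float t = 0.0f;
        for (int i = 0; i < maxSteps; i++)
        {
            Vector3 p = origin + dir * t;
            float d = scene(p);                       // every solid in the scene gets evaluated here
            if (d < 0.001f) { hit = p; return true; } // close enough: surface hit
            t += d;                                   // safe step: nothing is closer than d
            if (t > maxDist) break;                   // ray escaped the scene
        }
        hit = Vector3.zero;
        return false;
    }
}
```

The catch, as I found out later, is that evaluating the scene distance means touching every solid, at every step, for every pixel.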

Clayxels’ first iteration, used in Galumph to draw strokes on top of organic shapes.

Mashing together solids to create various shapes and then drawing on them worked quite well; Galumph was capable of making virtually any kind of 3d drawing. But it was all coded in C++, and the GPU code soon started showing incompatibilities with some hardware. The idea of porting all of this to a simpler platform like Unity was starting to creep into my head. On top of that, these clay-like shapes were so immediate and easy to manipulate that they made the stroke workflow feel way slower in comparison. At this point Galumph was promising, but impossible to release to a wider audience without going crazy over hardware-related bugs and workflow improvements. I could have stuck to what I had, it was enough to allow me to make my own games. But those sweet clay-like solids were too much fun to pass on.
So, scrap all of it, let’s go back to the core of this whole idea, let’s make it all based on clay-like solids. Enter Clayxels, for Unity.

Clayxels running in Unity; even just using cubes, there’s a lot that can be done.

There are plenty of ray-marching resources for Unity, and they are very fun to use, but they struggle to perform well once there are many solids in the scene, and they all suffer from being slow on high-res viewports. That’s the price of tracing one ray per pixel and checking hits against every single solid in the scene. Not every ray will hit a solid, but every ray will need to check every solid, for every pixel. It’s a lot of stuff to check! So, scrap ray-marching, let’s go voxels. Voxels can get much closer to the surfaces much faster compared to shooting rays from the camera. Also, they don’t depend on the viewport resolution, so they scale better.
Wait, Alex Evans did all of that already in 2015?!? (Have a look at his talk on Dreams for PS4, “Learning From Failure”.) OK, let’s watch his talk and read his SIGGRAPH paper, fifteen thousand times. Cool, maybe I understand some of that. Definitely the best way to go about this is to start with a 3d grid that is very coarse and progressively gets refined, until it hits the solids at micro-level size (an adaptive grid, or octree; sorry for the shady details, I promise the next post will be all about the geeky stuff). Underneath it all, this system still uses the same mathematical solids, capable of seamlessly blending together like butter. Let’s use Unity’s compute shaders, where I have amazing GPU abstractions to avoid having to deal directly with any of that Nvidia/AMD/Intel/GL/HL/Metal craziness.
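To give a feel for the coarse-to-fine idea, here is a heavily simplified sketch (my own naming and structure, nowhere near the real implementation): evaluate the distance field on a coarse grid, keep only the cells the surface could pass through, subdivide those, and repeat.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Simplified coarse-to-fine sketch (my own naming, far from the real thing):
// evaluate the distance field on a coarse grid, keep only the cells that could
// contain the surface, subdivide those, and repeat. Empty space gets discarded
// early instead of being probed pixel by pixel.
public static class AdaptiveGridSketch
{
    public delegate float SceneSdf(Vector3 p);

    public struct Cell { public Vector3 center; public float size; }

    public static List<Cell> Refine(SceneSdf scene, Vector3 rootCenter, float rootSize, int levels)
    {
        var cells = new List<Cell> { new Cell { center = rootCenter, size = rootSize } };
        for (int level = 0; level < levels; level++)
        {
            var next = new List<Cell>();
            foreach (Cell c in cells)
            {
                float half = c.size * 0.5f;
                // Split the cell into 8 children (an octree-style subdivision).
                for (int x = -1; x <= 1; x += 2)
                for (int y = -1; y <= 1; y += 2)
                for (int z = -1; z <= 1; z += 2)
                {
                    var child = new Cell
                    {
                        center = c.center + new Vector3(x, y, z) * (half * 0.5f),
                        size = half
                    };
                    // The surface can only cross this cell if the distance at its
                    // center is smaller than the cell's bounding radius (half diagonal).
                    float boundingRadius = child.size * 0.8660254f;
                    if (Mathf.Abs(scene(child.center)) <= boundingRadius)
                        next.Add(child);
                }
            }
            cells = next;
        }
        return cells; // the finest surviving cells hug the surface
    }
}
```

Compared to the ray-marching sketch earlier, empty space gets thrown away in the first few passes instead of being probed over and over for every pixel.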
All of this for Galumph 2.0 clay-studio edition? Maybe.

What Galumph’s new cross-platform UI could look like.

Now that Clayxels is running in Unity, it can be used as a modeling tool that leverages in-editor viewport manipulation, hierarchy editing, and inspectors, and it can even export FBX meshes using the existing tools. It’s an in-editor asset-crafting toolkit out of the box. But the best and most exciting part is that it can be used from C# and run in game. This opens up a whole new kind of use case I did not see initially: clever use of this stuff might make for cool puzzle games, or sandboxes.
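Just to give a taste of what driving this from gameplay code could look like, here is a generic run-time sketch (not the actual Clayxels API; the struct layout, buffer name, and kernel name are made up for illustration): a component animates a handful of blend-solids and pushes them to the GPU every frame through a ComputeBuffer, where a compute shader would re-evaluate the clay.

```csharp
using UnityEngine;

// Generic run-time sketch (not the actual Clayxels API): gameplay code moves a
// few blend-solids around and uploads them to the GPU each frame. The struct
// layout, buffer name and kernel name are made up for illustration.
public class ClaySolidDriver : MonoBehaviour
{
    struct SolidData
    {
        public Vector3 position; // center of the solid
        public float radius;     // size of the solid
        public float blend;      // how softly it melts into its neighbours
    }

    public ComputeShader clayCompute; // whatever compute shader evaluates the field
    public int solidCount = 4;

    ComputeBuffer solidsBuffer;
    SolidData[] solids;
    int kernel;

    void Start()
    {
        solids = new SolidData[solidCount];
        // 5 floats per solid: position (3) + radius + blend.
        solidsBuffer = new ComputeBuffer(solidCount, sizeof(float) * 5);
        kernel = clayCompute.FindKernel("CSMain"); // hypothetical kernel name
    }

    void Update()
    {
        // Gameplay logic just animates parameters; the GPU does the heavy lifting.
        for (int i = 0; i < solidCount; i++)
        {
            solids[i].position = new Vector3(i, Mathf.Sin(Time.time + i), 0.0f);
            solids[i].radius = 0.5f;
            solids[i].blend = 0.3f;
        }
        solidsBuffer.SetData(solids);

        clayCompute.SetBuffer(kernel, "solids", solidsBuffer);
        clayCompute.SetInt("solidCount", solidCount);
        clayCompute.Dispatch(kernel, 8, 8, 8); // thread group counts depend on the grid setup
    }

    void OnDestroy()
    {
        if (solidsBuffer != null) solidsBuffer.Release();
    }
}
```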

Clayxels can run in game; it could be used for some cool gameplay gimmicks.

So many possibilities. And that’s where I’m at with this journey. Like any good journey, it started from a predictable place and ended up in totally unexpected wilderness. I might still keep pursuing my spare-time ideal 3d app with Galumph, but maybe there are also other ways to get Clayxels out in the wild. Could it become a toolkit for other creators? Whatever happens, it’s addictive, so I’ll definitely share more about this from my Twitter account: https://twitter.com/andreintg.

If you want to dig deeper, here are more resources on 3d-drawing apps, voxels, and ray-marching techniques from people who have been into this stuff way before me:
