Texture Synthesis and Remixing from a Single Example
A big part of a game developer’s time is spent translating creative ideas into representations that software can understand. And that requires a good grasp of how the software works, making game development difficult for most people to master.
At Embark, we believe it should be as easy to create as it is to play. And that pushes us to rethink the way we create today. I’m a procedural artist here at the studio, and I spend my days thinking a lot about these sorts of challenges from a content creation perspective.
So for example, let’s say I want to build a 3D model of a bridge. In essence, what I have to do is to convey that intent via a sequence of tool actions, like extrude, bevel, transform, etc. Often the final combination of steps is not immediately obvious. And even before I can start tackling this puzzle, I need to learn the language of the software. What is that tool called? Where is it located in the UI? How do I do “X”? Will tool A + B allow me to do a C-like transformation?
With proceduralism, the aim is to automate parts of that software translation by building a higher abstraction level on top. For example, we could express building a bridge as a spline and a couple of semantically meaningful sliders — a representation that is much closer to our “raw” creative intent — and let the algorithm handle the rest.
Designing these specialized procedural algorithms, however, requires a lot of knowledge that is orthogonal to content creators’ skills. Even for an expert, it’s time-consuming and the result is usually suitable for a narrow subset of use cases (for instance, I can’t generate houses with a bridge algorithm). In practice, we end up with a separation between users and tool creators.
But what if we could automatically extract rules from examples, allowing anyone to make their own tools simply by showing what they want? This is where example-based methods come in.
Above are some title slides I generated with example-based texture synthesis. Each took about 20 seconds to create. In total, I generated about 3,000 images by providing a library of roughly 150 images as style examples and six “guide maps” to dictate composition and contrast.
I could then cherry-pick results from the generated content. This ability to explore so many ideas and combinations quickly — and the thrill of finding combinations that worked and just seeing what would happen — was a lot of fun. It was like a visual brainstorm on steroids.
The main principle of the algorithm is simple — every pixel asks “if these are my neighbors, what is my color?”. The answer comes from the example image we provide: the algorithm finds the pixel in the example whose neighborhood looks most similar, and copies its color. So, the example image itself is a program that every pixel follows.
But hey! Does that mean we “program” by giving an example? The answer is yes. You are creating a unique rule set by showing what you want. That is why I think example-based synthesis is so interesting.
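To make that pixel-asks-its-neighbors idea concrete, here is a minimal, single-resolution sketch of neighborhood matching in Python. This is an illustration only, not the library’s implementation — the real algorithm is multiresolution, stochastic, and far more optimized — and all names here (`synthesize`, the toy checkerboard example) are made up for the demo:

```python
import random

def synthesize(example, out_w, out_h, radius=1, seed=0):
    """Fill an output grid pixel by pixel, in scanline order.

    Each new pixel looks at its already-decided neighbors (the pixels
    to its left and above), searches the example for the location whose
    surrounding pixels match those neighbors best (sum of squared
    differences), and copies that example pixel's value.
    """
    rng = random.Random(seed)
    eh, ew = len(example), len(example[0])
    out = [[None] * out_w for _ in range(out_h)]

    # Causal neighborhood: offsets pointing at pixels decided earlier.
    offsets = [(dx, dy)
               for dy in range(-radius, 1)
               for dx in range(-radius, radius + 1)
               if dy < 0 or dx < 0]

    for y in range(out_h):
        for x in range(out_w):
            best, best_cost = [], None
            for ey in range(eh):
                for ex in range(ew):
                    cost = 0
                    for dx, dy in offsets:
                        ox, oy = x + dx, y + dy
                        if 0 <= ox < out_w and 0 <= oy < out_h \
                                and out[oy][ox] is not None:
                            # Wrap the example toroidally at its edges.
                            sx, sy = (ex + dx) % ew, (ey + dy) % eh
                            cost += (out[oy][ox] - example[sy][sx]) ** 2
                    if best_cost is None or cost < best_cost:
                        best, best_cost = [(ex, ey)], cost
                    elif cost == best_cost:
                        best.append((ex, ey))
            # Break ties randomly — this is where variation comes from.
            bx, by = rng.choice(best)
            out[y][x] = example[by][bx]
    return out

# A tiny grayscale "texture": a 4x4 checkerboard of two values.
example = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [9, 9, 0, 0],
    [9, 9, 0, 0],
]
result = synthesize(example, 8, 8)
```

Note that every output pixel is copied from somewhere in the example — the synthesized image can only be made of colors (and local arrangements) that the example “program” contains, which is exactly why showing a different example yields a different rule set.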
We’d love for all of you to try out example-based texture synthesis for yourselves. If you head over to our GitHub repository, you’ll find a light API for Multiresolution Stochastic Texture Synthesis, which my colleagues Tomasz Stachowiak and Jake Shadle helped build in Rust.
This API will allow you to generate images on your own using our example-based algorithm. The repository also includes multiple code examples (along with test images) to get you started, as well as a compiled binary with a command-line interface.
Many of you have tried it already and achieved some fantastic results. Joakim Olsson used it to turn the Embark logo (!) into a piece of terrain, using heightmap data of Iceland, and then brought it all into Unreal.
And we’ve also seen people bringing the API into various software packages, like @JoseConseco3 who added it as a tool for Blender, or @luboslenco who used it as a plugin to make tiling images in a texture painting tool called ArmorPaint.
And there’s been interest in using it for DnD map creation (bring it on!).
When it comes to content creation, the games industry is very familiar with making content manually. We’re getting better at designing rules (manually) to automate making content, aka proceduralism. But the next step, in my opinion, is to go even further and ask how we can go directly from creative intent to content, by interpreting that intent and extracting the relevant rules that would create the content.
Of course, getting to that workflow is a long journey, and even if we have ideas on how to get there, we don’t have all the answers. Most likely, example-based synthesis won’t be the whole answer, but one piece of a larger puzzle, with texture synthesis as a small step towards that bigger vision. Ultimately, we believe that methods like these could be used to create full 3D interactive experiences.
Finally, if you want a more in-depth look at example-based texture synthesis and the various use cases we found for it at Embark, you can check out my talk from the Nordic Games conference.
P.S. We’re hiring!