How to create unique game assets with AI in a custom style

Daria Wind
Published in PHYGITAL
Jun 9, 2023 · 5 min read

Previously, we talked about how you can use 30+ neural networks to accelerate content creation. But while speeding things up, we shouldn't forget something even more important: recognizability. Today you will learn how AI can help you create game assets in your own unique style.

By training a model, you can make AI remember any concept (a person, an object, or a style) and then reproduce it again and again in any way that text2image models can. We have already made a free video tutorial about training on people, and in this article we will focus on training on style.

The whole process of training can be divided into several steps:

  • preparation,
  • training,
  • asset generation.

There are several ways of training Stable Diffusion on new concepts, including textual inversion and LoRA. Today we are focusing on DreamBooth; you can learn more about it in the 'Training' section.

Preparation

As an example we will take the game Project Winter, which has distinctive low-poly visuals. Since we are training the AI on style, you can use screenshots of any environments, surroundings, objects, or characters as visual references. The most important thing is that they represent the style well.

To get a successful training run, we first need to prepare the images. The recommended dataset sizes are:

  • people (10–15 photos)
  • objects (5–8 photos)
  • style (25+ references).

For training on style there are no strict requirements for the references; all you need to do is make the images square (the recommended resolution is 512x512 for DreamBooth on Stable Diffusion 1.5, and 768x768 for 2.1).

An example of a dataset for training on a game's style: screenshots from the game
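If you prefer to prepare the dataset with a script rather than by hand, here is a minimal sketch (our own illustration, not part of Phygital+) that center-crops each screenshot to a square and resizes it to 512x512 with Pillow; the folder names are placeholders.

```python
# Minimal dataset preparation sketch: center-crop screenshots to a square and
# resize to 512x512 for Stable Diffusion 1.5 (use 768 for 2.1).
# "screenshots" and "dataset_512" are placeholder folder names.
from pathlib import Path
from PIL import Image, ImageOps

SRC, DST, SIZE = Path("screenshots"), Path("dataset_512"), 512
DST.mkdir(exist_ok=True)

for path in SRC.glob("*.png"):
    img = Image.open(path).convert("RGB")
    img = ImageOps.fit(img, (SIZE, SIZE), Image.LANCZOS)  # center-crop + resize
    img.save(DST / path.name)
```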

Training

So, we have prepared our screenshots and are now ready to start training with DreamBooth.

What is DreamBooth? It is a fine-tuning technique that essentially tells the AI: 'here, take a look at this subject, remember how it looks, and remember what it's called'.

We start by adding the Import Files node and the DreamBooth node in the Phygital+ interface. Then we connect the sockets (the small colorful dots near Inputs and Outputs) from Import Files to DreamBooth.

In our interface we have kept only the most essential settings in the DreamBooth node, so you don't have to spend much time learning how to train; all you really need to do is give your model a unique name.

Note that you need to change Class images to 0. If you leave it at the default 100 images, the style will not be learned properly and the overall results will be worse.

The number of steps depends directly on the number of images. We recommend using a minimum of 20 dataset images, but the more references you have, the better. For 20 images, about 2,000 steps should be enough. One of our clients trained a model on 300 images, and the style only started to show in generations at around 5,000 steps. The SD community advises calculating the number with a simple formula: the number of dataset images x 100. We recommend starting by multiplying by 15–20 instead.
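For readers who want a script-based equivalent of the DreamBooth node, here is a rough sketch that applies the same step formula and launches the DreamBooth example script from the Hugging Face diffusers repository. This is only an illustration under assumptions: the paths, the model id, the style token, and the exact flag names (taken from the diffusers example script, which may change between versions) are not the Phygital+ node's internals. 'Class images = 0' corresponds to training without prior preservation, i.e. not passing any class images to the script.

```python
# A rough command-line equivalent of the DreamBooth node, assuming the
# Hugging Face diffusers train_dreambooth.py example script is available locally.
import subprocess

DATASET_IMAGES = 25                    # number of style references
MAX_TRAIN_STEPS = DATASET_IMAGES * 20  # start with x15-20; x100 is the community's upper bound

subprocess.run([
    "accelerate", "launch", "train_dreambooth.py",
    "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",
    "--instance_data_dir", "./dataset_512",                     # the prepared 512x512 screenshots
    "--instance_prompt", "in the style of ProjectWinterGame",   # the unique name of your style
    "--resolution", "512",
    "--train_batch_size", "1",
    "--learning_rate", "5e-6",
    "--max_train_steps", str(MAX_TRAIN_STEPS),
    "--output_dir", "./project-winter-style-model",
    # no --with_prior_preservation flag: this is the 'Class images = 0' case
], check=True)
```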

Now you can launch the node and wait for the result. You will get a custom model that will appear in the dropdown list in the Stable Diffusion node. After the training you will see a banner, and then you can start generating.

Generation

For generation we need to select the right model under My models in the Stable Diffusion node. It will have the same name you previously entered in the Subject field of the DreamBooth node.

Generating assets from text

To generate assets, type your idea in the prompt followed by your unique model name. In our case we added 'in the style of ProjectWinterGame'.

Here's an example of how the neural network generates a new character or restylizes a famous person.
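Outside the Phygital+ interface, the same generation step can be sketched with the diffusers library, assuming the trained model was saved in diffusers format (the path, prompt, and settings below are illustrative, not the article's exact setup):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the custom DreamBooth model (placeholder path) and generate an asset
# whose prompt ends with the unique style token.
pipe = StableDiffusionPipeline.from_pretrained(
    "./project-winter-style-model", torch_dtype=torch.float16
).to("cuda")

prompt = "a survivor character with a lantern, game asset, in the style of ProjectWinterGame"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("character_in_style.png")
```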

With this method you can also generate locations and surroundings.

Using the same method, you can generate icons by describing what you want to see and then removing the background with another node.
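If you are working outside Phygital+, one possible stand-in for that background-removal step is the open-source rembg library (an assumption on our side, not what the node necessarily uses under the hood):

```python
from PIL import Image
from rembg import remove

# Strip the background from a generated icon; rembg returns an RGBA image
# with a transparent background. The file names are placeholders.
icon = Image.open("icon_in_style.png")
icon_no_bg = remove(icon)
icon_no_bg.save("icon_transparent.png")
```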

Summing up: with Stable Diffusion and the DreamBooth node you can create unique new game assets in any style. Our product Phygital+ is currently in Open Alpha, and you can already start using new pipelines for your creative and business tasks today. If you have any questions or suggestions, reach out to us and we will help you with your creative projects :)
