Top 3 pipelines for creating textures with AI (MidJourney, SD XL, DreamBooth)

Daria Wind · Published in PHYGITAL · 7 min read · Oct 20, 2023

One of the greatest things AI can do is boost our existing workflows. And some tasks like searching for the right texture pack can be daunting. Here I suggest a few easy-to-follow pipelines for creating textures using neural networks.

The following cases were done using this software: Phygital+ (an AI node-based workspace with more than 20 AI tools available, including ChatGPT, SD XL, MidJourney and ControlNet), Blender and the web tool Seamless texture maker.

Case 1. Seamless textures from text

This use case is perfect for those who don’t want to spend time searching for the right texture pack and buying it, but would rather create one from their own idea. With AI you can get textures at up to 2K resolution and make them tileable.

Step 1. You need to generate the base for your texture. You can do this in two ways. The first option:

  1. Add an SD XL node and choose any style (Juggernaut or DynaVision recommended). Add ‘top view, ultra detailed’ to the prompt. You can also choose the preset prompt style ‘Texture’ for better results.
  2. Connect the image you like to an SD 1.5 node, turn on Tiling X and Y in the Advanced settings, and set Start Image skip (Denoising strength) to 0.4.

The second option: add a MidJourney node, type your prompt and append ‘top view, ultra detailed, --tile’ (see the code sketch below for a scripted equivalent of this step).
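For those who prefer to script this step, here is a minimal sketch of the same idea with the Hugging Face diffusers library. The checkpoint IDs and the circular-padding trick for tiling are my own assumptions about how a ‘Tiling X/Y’ option is commonly implemented, not a description of Phygital+ or MidJourney internals.

```python
# Case 1, Step 1 as a rough diffusers sketch (assumes a CUDA GPU).
# Checkpoint IDs are placeholders: swap in whichever SD XL / SD 1.5
# weights you actually have.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionImg2ImgPipeline

device = "cuda"
prompt = "cracked desert ground, top view, ultra detailed, texture"

# 1) Generate the base texture with SD XL.
sdxl = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to(device)
base = sdxl(prompt=prompt, num_inference_steps=30).images[0]

# 2) Refine it with SD 1.5 img2img, with every convolution switched to
#    circular padding so the result wraps around at the borders
#    (the role of "Tiling X and Y").
sd15 = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to(device)
for module in list(sd15.unet.modules()) + list(sd15.vae.modules()):
    if isinstance(module, torch.nn.Conv2d):
        module.padding_mode = "circular"

# strength=0.4 plays the role of "Start Image skip (Denoising strength)".
tileable = sd15(prompt=prompt, image=base.resize((512, 512)), strength=0.4).images[0]
tileable.save("texture_base.png")
```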

After getting the texture you can use it as it is or get a quick depth map to make bumps and displacements.

Step 2. Create a Depth Mask from Image node, connect the texture reference and launch.
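Outside the GUI, a monocular depth estimator gives you a comparable grayscale map. Here is a hedged stand-in using Hugging Face transformers; the DPT checkpoint is my choice, not necessarily what the Depth Mask from Image node uses under the hood.

```python
# A quick depth map from the generated texture (stand-in for the
# "Depth Mask from Image" node).
from PIL import Image
from transformers import pipeline

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")
texture = Image.open("texture_base.png")
result = depth_estimator(texture)
result["depth"].save("texture_depth.png")  # grayscale map for bumps/displacement
```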

Step 3. Now we need to remove seams. Go to Seamless texture maker, upload your image, set Parameter ‘Pre-averaging of dark and light areas of the image: Intensity’ to 20. Choose PNG format.
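As a quick complementary check (not a replacement for the web tool), you can roll the image by half its size so the former borders meet in the middle; any visible cross-shaped seam means the texture isn’t tileable yet. A small sketch with NumPy and Pillow:

```python
# Seam check: wrap the image around by half its width and height and
# inspect the result for visible seams.
import numpy as np
from PIL import Image

img = np.array(Image.open("texture_base.png"))
h, w = img.shape[:2]
shifted = np.roll(img, shift=(h // 2, w // 2), axis=(0, 1))
Image.fromarray(shifted).save("texture_seam_check.png")
```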

Step 4. Download the final result, go to your 3D software and turn the flat image into a volumetric texture. In our example we used Blender with the Displacement and Subdivision settings.
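If you like to script Blender, the same setup can be reproduced roughly like this (run it in Blender’s Scripting tab; object names, paths and the strength value are placeholders for your own scene):

```python
# Subdivision + Displace modifiers driven by the depth map.
import bpy

obj = bpy.data.objects["Plane"]                # the mesh receiving the texture
bpy.context.view_layer.objects.active = obj

# Enough geometry to displace.
subsurf = obj.modifiers.new(name="Subdivision", type='SUBSURF')
subsurf.subdivision_type = 'SIMPLE'
subsurf.levels = 6
subsurf.render_levels = 6

# Use the depth map as a displacement texture.
tex = bpy.data.textures.new("TextureDepth", type='IMAGE')
tex.image = bpy.data.images.load("/path/to/texture_depth.png")  # adjust the path

displace = obj.modifiers.new(name="Displace", type='DISPLACE')
displace.texture = tex
displace.strength = 0.2                        # tune to taste
```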

Use the Seamless basic textures template to recreate it!

Case 2. Seamless textures in unique style

You can now create seamless textures with AI using only Stable Diffusion; however, creating seamless textures in your own unique style can be challenging. Here’s what you need to do, step by step.

1. Collect references of the style (a minimum of 15–20 images, cropped to 512x512). In our example we used Project Winter as the game reference, and in that case screenshots are the way to go.

2. Then you need to train your model.

Go to the Train Panel. Press ‘Train new model’ and, in the top list of 3 images, select ‘Style’. In Type, choose ‘Game Style’. Now it’s time to give your model a unique name, one that is unknown to Stable Diffusion. You can easily check this by creating an SD 1.5 node and typing your game name or style into the Text prompt: if Stable Diffusion doesn’t know your style, it won’t produce authentic results, which means the name is safe to use. As for naming, it’s advised to use short tokens like Sks, Prwg and so on. In our case we kept it simple and named it ‘ProjectWinterGame’. Leave Base model set to SD 1.5 and keep Training method at ‘DreamBooth’.

At the next step, Settings, turn off the ‘Optimized’ toggle and set the parameters as follows. Subject: in the style of <your unique name>. Training steps: 1200 (for 31 images).

Proceed to the next step to upload your images and start training!
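For reference, roughly the same training run can be launched with the official diffusers DreamBooth example script (examples/dreambooth/train_dreambooth.py from the diffusers repository). The paths, learning rate and checkpoint ID below are assumptions; the instance prompt and the 1200 steps mirror the settings above.

```python
# Launch DreamBooth training via the diffusers example script
# (assumes the script and an SD 1.5 checkpoint are available locally).
import subprocess

subprocess.run([
    "accelerate", "launch", "train_dreambooth.py",
    "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",
    "--instance_data_dir", "./project_winter_refs",   # your 512x512 screenshots
    "--instance_prompt", "in the style of ProjectWinterGame",
    "--resolution", "512",
    "--train_batch_size", "1",
    "--learning_rate", "2e-6",
    "--max_train_steps", "1200",
    "--output_dir", "./projectwintergame-model",
], check=True)
```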

3. After the training is done, you will get an email notification. Then create an SD 1.5 node and choose your freshly trained model in My models (it will be shown under the unique name we gave it in the previous step).

– If you need a texture with bigger tiles, set Width and Height to 512x512

– If you need a texture with smaller tiles and more elements, set Width and Height to 1024x1024

Type your prompt: ‘<which texture>, texture, top down view, <subject>’. Set Tiling X and Tiling Y in the Advanced settings. We also advise choosing the PNG format and turning off Image compression.
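Again, as a hedged sketch rather than the Phygital+ node itself, here is how the freshly trained model could be used with diffusers, including the circular-padding trick for tiling and the 512 vs 1024 choice:

```python
# Generate a tileable texture with the DreamBooth-trained style model.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./projectwintergame-model", torch_dtype=torch.float16
).to("cuda")

# "Tiling X and Tiling Y": circular padding on every convolution.
for module in list(pipe.unet.modules()) + list(pipe.vae.modules()):
    if isinstance(module, torch.nn.Conv2d):
        module.padding_mode = "circular"

size = 512   # bigger tiles; use 1024 for smaller tiles with more elements
image = pipe(
    prompt="wooden planks, texture, top down view, in the style of ProjectWinterGame",
    width=size,
    height=size,
).images[0]
image.save("projectwinter_planks.png")   # PNG keeps it lossless
```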

After getting the texture, you can use it as it is or get a quick depth map to make bumps and displacements (the same pipeline as described in Case 1 above).

4. Create a Depth Mask from Image node, connect the texture reference and launch.

5. Now we need to remove seams. Go to Seamless texture maker, upload your image, set Parameter ‘Pre-averaging of dark and light areas of the image: Intensity’ to 20. Choose PNG format.

6. Download the final result, go to your 3D software and turn the flat image into a volumetric texture.

Use the Seamless stylized texture template to recreate it!

Case 3. Restyling an existing texture into a unique style: UV map

Having a UV map for a specific object makes creating stylized textures quick and simple. Let’s say you have a model of a small dog house, but it looks quite realistic, whereas your game is in a low-poly style (take Project Winter, for instance). You can train AI on your style and then use it across your generations!

1. Train your model. As we are still using Project Winter as the example, you can follow the same steps as in Case 2.

2. Import your UV map into the workspace.

3. Add a ControlNet node, choose the Edge type and select your trained model in My models. Connect the original UV map image to Start image. In the prompt, type what you want to get, for example ‘simple wooden planks texture in the style of ProjectWinterGame’ (a scripted sketch of this step follows after the list).

4. Press Start!

Note that you might need to generate several options to get the right level of detail and style transfer. You can easily do that by copying and pasting the node.

5. You can simply download the new texture and replace it in your 3D software to make a stylized asset!
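For those working outside Phygital+, here is a rough diffusers sketch of the ControlNet step above. The Canny thresholds, file names and model paths are assumptions; the edge map stands in for the node’s Edge type.

```python
# Restyle a UV map with ControlNet (canny edges) + the trained style model.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Edge map from the original UV map image.
uv = cv2.imread("doghouse_uv.png")
edges = cv2.Canny(uv, 100, 200)
edges = np.stack([edges] * 3, axis=-1)           # ControlNet expects 3 channels
control_image = Image.fromarray(edges)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "./projectwintergame-model", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

result = pipe(
    prompt="simple wooden planks texture in the style of ProjectWinterGame",
    image=control_image,
    num_inference_steps=30,
).images[0]
result.save("doghouse_texture_stylized.png")
```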

Use the Stylize texture from UV map template to recreate it!

Conclusion

These were some of the pipelines that we have already used in our workflows. Important note: yes, there are tools specifically tailored for creating textures with AI, and we have an awesome list of them in our AI Library.

However, most of them are focused on creating textures simply from text and do not offer that much control over the settings. One of the key features of Phygital+ is the ability to build your own pipeline and come back to any part of it at any time. You can play around with settings, train on your own style, use your own images as references and create textures in various styles (we have more than 80 Styles available for Stable Diffusion).

Hope you have found these pipelines helpful, and let me know if you want to see more workflows like this! We’re constantly updating our list of use cases and templates, so keep an eye on it to stay aware of the best workflows for your creative tasks :)

The described pipelines were created in collaboration with Artemiy Kalinin.
