Creating Design Assets using AI

Daria Wind · Published in PHYGITAL · 8 min read · Apr 12, 2024

In this brief article we will show you how to create a pack of unique design assets using AI. Use your final creations in your own experiments, and enhance or adjust them in Photoshop or any other software if needed. :)

The potential of AI for asset generation is mind-blowing: you can already create almost anything, from icon prototypes to abstract elements.

But we might face one problem: keeping a specific style consistent across generations. This issue is particularly noticeable when we need 50–100 elements in one style, or, say, a whole alphabet of letters: reusing the same bit of the prompt to replicate the style doesn’t always give the best results. Likewise, if we need to generate a specific object or element, using Start Image alone won’t give the same output.

In this case the easiest way is to train an AI model and then use Start Image to generate any images you want from a reference. You don’t even have to be good at prompting to do that!

So, let’s get started!

Here are some examples of design asset packs made with this method. All of them were created as part of #NeuroChallenge, a challenge organized by Phygital+ and Sholotch, the studio and school of contemporary digital design.

Works by 1) @sashabagova, 2) @deadinsaydick and 3) @614element

Step 1. Collecting references

Let’s imagine we need to generate design elements in the style of metallic blobs and transform objects into metallic, deformed assets. So, we started by creating references in Midjourney.

In Phygital+ you can use the best neural networks in one place: generate images in Midjourney, DALL-E 3 and Stable Diffusion, choose the best ones, and then use them for AI training, all in one interface.

Working with training and Midjourney in Phygital+ requires a subscription, but you can always try it with our free trial.

By using GPT-4 Vision you can also get a prompt idea from any of your references, which lets you create more images for training.

We recommend using a minimum of 15 images for training (more tips on AI training are in our AI wiki); here we’re going to use 18 of them.
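If you are assembling the reference set by hand, a short script helps normalize it first. Here is a minimal sketch in Python with Pillow, assuming the raw references live in a references/ folder (a placeholder name) and that 512×512 squares, the native resolution of Stable Diffusion 1.5, are a sensible target:

```python
from pathlib import Path
from PIL import Image, ImageOps

SRC = Path("references")   # your raw reference images (placeholder folder)
DST = Path("dataset")      # normalized training set
DST.mkdir(exist_ok=True)

# Center-crop and resize every reference to the 512x512 square
# that Stable Diffusion 1.5 is usually trained on.
for i, path in enumerate(sorted(SRC.glob("*.png")) + sorted(SRC.glob("*.jpg"))):
    img = Image.open(path).convert("RGB")
    img = ImageOps.fit(img, (512, 512), Image.Resampling.LANCZOS)
    img.save(DST / f"ref_{i:02d}.png")
```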

Step 2. Training in a few clicks

Let’s try Train Panel, the AI app in Phygital+ for training AI models. With its simple interface, you can train AI on anything from characters and people to styles, products and design elements.

All you have to do is come up with a name and upload references, all in a few clicks.

Let’s go to Train Panel and press Train New Model. In our case we are training on metallic blobs, so we select Object as the Category and Element as the Type.

Now we need to write a name for our model. The more unique the name, the better for training and generation, so it’s recommended to use a name unknown to the neural network. You can start with liquidmetal, one simple word.

Now we move on to the next step: settings. Since we provide presets with the most optimal settings for each category, you can easily skip this step and go to the next one :)
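If you are curious what such settings typically cover, a rough open-source analogue is DreamBooth-style fine-tuning with Hugging Face diffusers. The values below are common community defaults for Stable Diffusion 1.5, not Phygital+’s actual presets (which are not public):

```python
# Illustrative only: these mirror the arguments of the open-source
# DreamBooth training script in Hugging Face diffusers. They are
# community defaults, not Phygital+'s internal presets.
train_config = {
    "pretrained_model_name_or_path": "runwayml/stable-diffusion-v1-5",
    "instance_data_dir": "dataset",            # the 18 reference images
    "instance_prompt": "liquidmetal Element",  # unique name + Category
    "resolution": 512,
    "train_batch_size": 1,
    "learning_rate": 2e-6,
    "max_train_steps": 1200,                   # roughly 60-80 steps per image
}
```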

Let’s upload the references using drag and drop or by clicking Upload.

Now we can start training! You will see a message on the screen that training has started, and in ~20 minutes you will get an email notification that your model is ready to use.

Step 3. Generating Assets

Our model has been trained; now it’s time for some experiments! In Phygital+, let’s add a Stable Diffusion 1.5 node to the workspace. Select the blank option in Models to unlock My models, and pick our trained model from the dropdown list. The most recently trained models are at the top of the list.

As soon as we choose the model, the prompt needed to activate it is added automatically (in our case it’s liquidmetal Element: the unique name we gave it plus the chosen Category). Let’s press Start to generate. Since we didn’t add any extra keywords to the prompt, we get results similar to our dataset images, which confirms that the model has been trained correctly :)
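If you ever want to run the same sanity check outside Phygital+, a minimal sketch with the open-source diffusers library might look like this ("path/to/liquidmetal-model" is a placeholder for wherever your fine-tuned checkpoint is stored):

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder path: point this at your own fine-tuned SD 1.5 checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/liquidmetal-model", torch_dtype=torch.float16
).to("cuda")

# The trigger prompt alone (unique name + Category) should reproduce
# images close to the training set if the model trained correctly.
image = pipe("liquidmetal Element").images[0]
image.save("blob_check.png")
```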

Now let’s move on to the fun part — generating unusual forms, design assets, and transforming objects into blobs from references.

Method 1. Generating using ControlNet

ControlNet is a neural network that takes an initial image and, guided by the prompt, fits the generated objects into the edges and shapes of the reference. It doesn’t look at the colors of the image; it sees it in black and white (or as a depth or normal map).

That’s why with ControlNet we can transform the colors, materials and general look of any object.

To use ControlNet in our case, we follow the same steps and select our trained model in My models.

Let’s take the Phygital+ logo as an example, upload it to the workspace using Import Files and connect it to Start Image in ControlNet. Then we select Edge as the Type (the Canny model), which helps keep all the lines and edges of the original image.

We press Start and get a new logo in the style of metallic blobs from our references.
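For anyone reproducing this step outside Phygital+, here is a minimal sketch using diffusers and OpenCV. The Canny edge detector is the same "Edge" preprocessing described above; the checkpoint path and file names are placeholders:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Canny ControlNet corresponds to the Edge type described above.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "path/to/liquidmetal-model",  # placeholder for the fine-tuned checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Extract edges from the logo so generation keeps its lines and shapes.
logo = np.array(Image.open("logo.png").convert("RGB"))
edges = cv2.Canny(logo, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

result = pipe("liquidmetal Element", image=control_image).images[0]
result.save("metal_logo.png")
```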

Now you can experiment with any forms, for example creating letters that keep a certain font but gain a new style at the same time :)

Method 2. Using Start Image in Stable Diffusion 1.5

This method is more interesting, as here AI uses the shapes, form, composition and colors of the reference, and it lets you control how much the original object is changed. So we can experiment more with forms and create more abstract design assets in the needed style.

To transform any image, upload it to the workspace and connect it to Start Image in Stable Diffusion 1.5. Then set the Start Image Skip parameter to 0.6. You can also add more words to the prompt or leave it as it is. After applying Upscale, we got this wonderful metal rabbit.
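Outside Phygital+, this step maps roughly to img2img generation in diffusers, where (as far as we can tell) the strength parameter plays the same role as Start Image Skip; the paths below are placeholders:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "path/to/liquidmetal-model", torch_dtype=torch.float16  # placeholder path
).to("cuda")

init = Image.open("rabbit.png").convert("RGB").resize((512, 512))

# strength=0.6 keeps the composition recognizable; raise it towards 0.7+
# for more abstract results, as described below.
result = pipe("liquidmetal Element", image=init, strength=0.6).images[0]
result.save("metal_rabbit.png")
```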

And like that we can turn any object into metal!

For abstract forms and shapes, set Start Image Skip to 0.7: this will significantly transform the initial image.

For example, you can create a unique alphabet like that:

Turn any icon into an unusual asset:

Or melt any photo into a metal object:

The possibilities are endless!

During our challenge with Sholotch, participants trained their models on x-rays, on ink, and on the style of David Cronenberg’s movies. The results were fantastic; the only limit is your creativity.

Works by: 1) Katia Rakevich @hurujoi, 2) @at_skii, 3) @deadinsaydick

For boosting creativity

For finding inspiration, we recommend looking into our prompt collection with example prompts for several use cases and styles.

You can also use Describe Images and GPT-4 Vision if you have a reference, so you can generate similar images (for GPT-4 Vision, use this prompt: “describe this image in this way in one sentence: object, object’s features, specific style of image with brief description of color story. Dont use points only commas, dont use capital letters”).
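If you prefer to script this, here is a minimal sketch with the official openai Python client; the model name and image URL are placeholders, and any vision-capable GPT-4 model should work:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The exact prompt suggested above, passed verbatim.
PROMPT = ("describe this image in this way in one sentence: object, object's "
          "features, specific style of image with brief description of color "
          "story. Dont use points only commas, dont use capital letters")

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any vision-capable GPT-4 model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": PROMPT},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/reference.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```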

Or you can ask ChatGPT for ideas.

You can try it all yourself, including Midjourney, ChatGPT, DALL-E 3, and training in Phygital+ :)

Written by Daria Wind

An enthusiast inspired by technology, education and languages. Writing hobbyist. Automation and no-code learner.