Artificial Intelligence in Industrial Design — Using AI in your Design Process (Midjourney, Stable Diffusion, Vizcom.ai)

Akshay Bhurke
9 min read · Feb 13, 2023


AI has been taking over art, more and more every day. The speed at which datasets and outputs are developing is frankly shocking to anyone who has been paying even the slightest bit of attention.

Just in terms of art alone, check out what an AI program named Midjourney was capable of just under a year ago, versus today.

Midjourney’s progress in image generation from May 2022 to Jan 2023, and the staggering change in quality.

Now, if you’re an industrial designer, or any kind of creative professional, you probably weren’t too concerned at first. But seeing how far text-to-image programs have come in absolutely no time at all can be pretty worrying.

Whether you like it or not, AI is here to stay.

I think a better question to ask is how we can get acquainted and equipped with it. A quote I saw on LinkedIn summed it up pretty well.

“AI might not take your job, but someone using AI will.”

So, with that being said, I’ve played around with some free and easy-to-use tools that you, as an Industrial Designer, can potentially start to implement in your design process. The interesting thing about these tools is that they have fairly different strengths and weaknesses, which means they can fit into different parts of your design process in different ways.

Let’s start with Midjourney. (https://www.midjourney.com/app/)

Midjourney, as I’ve already shown, has come incredibly far in the quality of its outputs, and is, in my experience, the best text-to-image AI tool. I’m just constantly in awe as I see other people’s prompts come up in real-time on Discord.

My first test was to ask for a steampunk 6-cylinder engine, and it immediately gave me some fantastic results. Midjourney lets you choose any of the 4 variations it generates and either create further variations of it or upscale it to a higher quality.

“orthographic render, octane, 3ds max, 6 cylinder engine, steampunk, intricate, detailled, 4k, photorealistic”

So I chose my favourite and created some variations of it, and then some more, and then some more. It kept giving me slightly different yet equally valid versions of the image I’d chosen; here are some upscaled versions of the ones I liked. Already you can see that this is an almost scarily powerful tool for concept generation.

But it has a feature that I think is even more useful for us designers: it can take an image, or a group of images, that you input and create new images inspired specifically by those references, guided by a prompt for context.
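
If you haven’t used image prompts before, the mechanics are simple: upload your image to Discord (or host it anywhere publicly accessible), then paste its URL at the start of the /imagine command, followed by your text prompt. Something like this, where the URL and wording are just placeholders:

```
/imagine prompt: https://example.com/your-sketch.jpg a concept sketch of a product, detailed, 4k, studio lighting
```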

Original artwork by Artem Smirnov (left), Midjourney-generated images (right)

So I pulled an old sketch from my archive, popped it into Midjourney and gave it a small prompt:

“a detailed sketch of a sneaker, futuristic, 4k, studio lighting, pen and paper”

And they’re not bad at all — they’re definitely high-top sneakers, but pretty different from my original sketch. So if you’re just looking for variations that are closer to a reference sketch, this might not be for you. But there are better tools for that coming up, so stay tuned.

Next I tried the same exercise with the sketch render of the same sneaker design.

“shoe design, leather, dslr, studio lighting, sneaker”

Again, not too similar, but nonetheless pretty damn cool.

I tried the same exercise with a gaming mouse sketch.

“as a concept sketch, detailed, orthographic, gaming mouse, 4k, silver and orange”

I’m sure I could pick at least a few details from a sketch here or there that might be pretty interesting.

And finally for Midjourney, a sketch render of a gaming mouse as well.

“computer mouse, gaming, design sketch, photoshop, realistic, 4k, product design”

And I really like these results. Nothing too crazy, but they look great!

On to our next tool: Stable Diffusion. For this model, I’ll be using DreamStudio. (https://beta.dreamstudio.ai/dream)

DreamStudio works fairly similarly to Midjourney, except that it gives you a bit more control over the quality and quantity of the images you’re looking for.

DreamStudio UI

Stable Diffusion allows you to import an image reference as well, and you can type in a prompt to give it more context about what you’re looking for. What’s cool is that you can also dial in exactly how much you want the reference image to influence the output: lower image strengths make the results more loosely based on your image, and higher percentages keep them closer to it. Let’s start at 50% and work our way up to see the difference.

50% Image Strength

Okay, not terrible. I can see some sketches with some resemblance.

Now let’s try 60.

60% Image Strength

Already I’m seeing some interesting variations, and they definitely look closer to my original sketch.

To me, 70% has seemed like the sweet spot for variations on a sketch or image that already captures the overall look of what you’re trying to design. Somewhere between 60% and 70% seems to be where you’ll find a good area of exploration.

70% Image Strength

And finally, just to show you: 80% seems to be way too close to the original to offer any sort of new inspiration, but your results could definitely vary.

80% Image Strength
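
If you’d rather script this kind of exploration than click through DreamStudio, the same idea is available in the open-source Stable Diffusion ecosystem. Below is a minimal sketch using Hugging Face’s diffusers library; note that its strength parameter runs in the opposite direction to DreamStudio’s image-strength slider (higher strength means more deviation from your reference), and the checkpoint and file names here are just placeholder assumptions:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load an img2img pipeline (any Stable Diffusion checkpoint will do).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reference sketch, resized to the model's native resolution.
sketch = Image.open("sneaker_sketch.png").convert("RGB").resize((512, 512))

# diffusers' strength is roughly (1 - image strength), so 0.3 here is
# in the ballpark of the ~70% sweet spot discussed above.
images = pipe(
    prompt="shoe design, leather, dslr, studio lighting, sneaker",
    image=sketch,
    strength=0.3,
    guidance_scale=7.5,
    num_images_per_prompt=4,
).images

for i, img in enumerate(images):
    img.save(f"variation_{i}.png")
```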

I also wanted to point out that you could experiment further by trying out different models — such as switching between Stable Inpainting and Stable Diffusion’s newest model.

Changing the model used to generate images for different results.

Now let’s try something a little different. Stable Diffusion has another feature, inpainting, where you can take a reference image, erase parts of it, and let the software interpret and complete those areas based on the rest of the image and the prompt you’ve given. So if you’re looking for variations of more specific areas in an image, you can do that too. Taking the same image, I erased a few parts away, kept the prompt the same, and waited to see what I’d get.

Inpainting — Erasing part of image to be reinterpreted.
Inpainting results on sketch reference

It’s pretty funny to see just how much influence the Nike tick immediately had on a couple of the variations. But I hope this illustrates how Inpainting could also help with basic ideation and iteration.
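
The erase-and-regenerate workflow can be scripted in much the same way. Here’s a rough sketch using diffusers’ inpainting pipeline, assuming you’ve saved the erased region as a black-and-white mask image (white marks what should be regenerated); again, the file names are placeholders:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Inpainting needs a checkpoint trained for it, e.g. Stable Inpainting 2.0.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
).to("cuda")

sketch = Image.open("sneaker_sketch.png").convert("RGB").resize((512, 512))
# White pixels are regenerated; black pixels are kept from the original.
mask = Image.open("erased_areas_mask.png").convert("RGB").resize((512, 512))

images = pipe(
    prompt="a detailed sketch of a sneaker, futuristic, 4k, studio lighting, pen and paper",
    image=sketch,
    mask_image=mask,
    num_images_per_prompt=4,
).images

images[0].save("inpainted_sneaker.png")
```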

I did the same image-strength exercise to see how it handled a more fleshed-out reference image. Here are some results from the reference image by itself at 50% -

Sketch Render generations at 50%

60% -

70% -

And finally, 80% -

I also tried some Inpainting variations in Stable Diffusion 2.1 -

Inpainting results after erasing the sole of the design.

and Stable Inpainting 2.0 -

Pretty cool stuff.

Now, DALL-E is another very well-known AI tool, and for most people it’s probably the name that got all the virality around text-to-image generators started. Unfortunately, in my experience, it has been the least capable for the industrial design use cases I’ve been looking into.

I won’t spend too much time on it, but I’ll let the results speak for themselves.

DALL-E has inpainting as well, but the results are definitely not as crisp as what Stable Diffusion provided.

And the variations feature must’ve thought my sneaker sketch was pretty bad, considering the harsh reinterpretations it provided.

The variations, as well as the inpainting, for my sketch render were also not what I was hoping to see.
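
For completeness, DALL-E’s variations and edits (its version of inpainting) can also be reached through OpenAI’s API rather than the web UI, in case you’d rather batch these experiments. Here’s a minimal sketch with the pre-1.0 openai Python package; the file names are placeholders, images need to be square PNGs, and for edits the transparent areas of the mask are what gets filled back in:

```python
import openai

openai.api_key = "sk-..."  # your own API key

# Variations of an existing sketch (a square PNG under 4 MB).
variations = openai.Image.create_variation(
    image=open("sneaker_sketch.png", "rb"),
    n=4,
    size="1024x1024",
)

# Edits (inpainting): transparent pixels in the mask get regenerated.
edits = openai.Image.create_edit(
    image=open("sneaker_sketch.png", "rb"),
    mask=open("sole_erased.png", "rb"),
    prompt="shoe design, leather, dslr, studio lighting, sneaker",
    n=4,
    size="1024x1024",
)

for result in variations["data"] + edits["data"]:
    print(result["url"])  # temporary links to the generated images
```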

So finally, we’ll end with Vizcom.ai. (https://www.vizcom.ai/)

Vizcom is actually meant to be more of a fidelity enhancer for basic color-blocked forms. It tries to create a more fleshed-out, render-like image instead of a simple-looking sketch.

Vizcom has a Sketchbook-like interface that lets you draw something, provide a prompt for it, and dial in how much influence the sketch has on the output. You can then draw over the generated image, fix it up, and regenerate something that’s closer to what you want to see.

It’ll definitely take a lot more practice to get some better results — but for a couple of minutes of doodling, the outputs are pretty incredible.

From first step to final outcome, in less than 10 minutes.

This tool could really help visualize early concepts better, or even provide a useful underlay for what could become a more refined concept sketch.

You can still import images, but I haven’t had much success with that; perhaps they’d work best as a reference for your color blocking. I can potentially see this having an interesting role in a creative process, distinct from what the likes of Midjourney and Stable Diffusion are capable of.

SUMMARY —

Midjourney:

Pros —
Incredible Image Outputs
Multiple Image References

Cons —
Short Demo (~25 Images)
Lack of Privacy
Requires Discord Account

Premium Version —
Starts at $10/Month
‘Stealth’ Image Generation at $60/Month

Stable Diffusion:

Pros —
Lots of Free Generations
Quick Results
Lots of Control
Powerful Image Ref Generations
Full Functionality in Free Version

Cons —
Image Reference Erasing is Slightly Buggy
No High-Res Archive of Past Generations

Premium Version —
No Subscription, Only Credits
$10 Equates to ~5000 Images

Vizcom.ai:

Pros —
Quick Fidelity Upgrades
Art Creation Interface
Almost All Features in Basic Plan

Cons —
Slight Learning Curve

Premium Version —
$10/Month

As you use these tools, it’ll become pretty clear that they aren’t yet creative enough to understand context or translate exactly what you’re looking for. It’s also clear that, at best, they currently help with aesthetic ideation. And even within that, you as the creative thinker will still have to work out how a concept can be made producible in the real world.

And in a way, I think that’s a good thing. Our individual creativity and differences in problem-solving, aesthetics and so on are maintained.

How long do we have? Years? Months?

Hard to say. But the least we can do is get on the train before it leaves the station.
