AI-generated user interfaces

How to Use AI to Supercharge Your UI Design?

Best practices using Midjourney in product design processes

Joschka Wolf
Bootcamp


Close-up of AI-generated UI concept, resembling a well-known fast food chain, @aiui.lab, Midjourney

The recent advent of next-generation text-to-image generators like DALL-E 2 and Midjourney has raised burning questions about their impact on the design industry. The machines are evolving at exponential rates, so what does the future hold for UI and Product Designers?

Most attention centres on questions of transparency, inclusivity, the risk of deep fakes, and usage rights. Severely overlooked, however, is the incredibly disruptive potential that services like Midjourney already hold for design teams today.

Particularly valuable is their uncanny ability to rapidly accelerate the initial phases of digital design projects. When used rightly, Midjourney is remarkably capable of driving visual exploration in the early stages of translating brands into digital experiences — to some extent.

AI-generated UI design concepts; source: @aiui.lab, Midjourney

Strengths and limitations: The biggest strength of text-to-image generators, being entirely visual, is obviously also their biggest limitation. The stronger the visual cues of brand and product, the stronger the results. However, the initial prompt can only hint at the intent of a screen or brand: there will be no consideration whatsoever of human needs, business requirements, journeys, content, or functionality.

Weaving those visual inspirations into people-centric product design processes, and ultimately products, will continue to require the regular, human intelligence of XD experts.

Part I: How to design prompts for AI to design UI?

Note: I assume you’ve already got a Midjourney account and are familiar with the basics. If not: Midjourney is now in open beta, comes with a free trial, and there are plenty of easy instructions to get you on board.

Update: as of 26 Jul 2022, Midjourney has switched to their updated V3 algorithm, so some results may differ. Remember that you can always add the --version 2 modifier to your prompts to more closely replicate these results.

1) Get your interface concept across

You won’t be able to finely control what’s happening on screen, so keep things simple when describing it. Think of a visual breakdown of what you’d like to see in the interface (‘fingerprint’, ‘fried chicken’, ‘calculator’). The more clearly defined this concept is on the internet, the better.

AI-generated interaction concepts after multiple iterations; source: @aiui.lab, Midjourney

I’ve been doing well with straightforward concepts like ‘mobile application for ordering fried chicken KFC’.

2) Ensure you’ll end up with a screen

A good UI prompt will need a reference to ‘screen design’ or ‘iphone mockup’. I have compared various prompts referencing specific device models, but haven’t seen a great impact on the outcomes.
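For example, appending a screen reference to the concept from step 1 might look like this (an illustrative prompt of my own, not one taken from my experiments):

```
/imagine prompt: mobile application for ordering fried chicken KFC, screen design, iphone mockup
```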

3) Inspire the style of your UI

Include guidelines or platforms that feature screen design work to influence your style. For example, ‘material design’ or ‘dribbble’ (in case you’re into colourful gradients) should create a notable difference in your results. As said earlier, don’t expect results that ‘follow’ any guidelines.
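An illustrative prompt along these lines (my own wording; results will vary):

```
/imagine prompt: calculator mobile application :: iphone mockup :: material design, dribbble, colourful gradients
```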

AI-generated UI styles after multiple iterations; source:@aiui.lab, Midjourney

You can also consider mixing in iconic product designs (‘smeg fridge’), the art direction of movies (‘tron legacy’), or industrial designers (‘dieter rams’).

4) Make it on brand

The fun part! If you’ve got a prominent brand in mind, simply type it in (‘McDonalds’, ‘Tesla’) — they will all work reasonably well. The more visual references that can be found on the internet (e.g. logos, product photography, advertisements, app screenshots, third-party design concepts), the more distinct the results will be.

AI-generated UI concepts with brand resemblance; source: @aiui.lab, Midjourney

I suggest trying to include colouration (‘red white’, ‘dark green stripes’) or referencing certain themes (‘coffee’) or art styles (‘art deco’). This is especially important for less well-known brands or a less clearly defined visual identity, and in case you want to experiment and mix things up to evolve existing brands (‘KFC’ + ‘green’, anyone?).
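Putting brand, colouration, and art style together, a hypothetical remix prompt could look like this (my own example; not one from my experiments):

```
/imagine prompt: mobile application for ordering fried chicken KFC :: iphone mockup :: KFC logo, dark green stripes :: art deco
```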

5) Tip: Chaining prompts for more control

I started working with chained prompts (adding ‘::’ between inputs) to control the individual weight of each ingredient. You can turn various elements up or down if you’re not convinced by your results after a couple of iterations. Of course, this means starting over fresh — I have not started using a fixed --seed for consistency yet, but aim to do so in the future.
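As a sketch of the syntax (the weights and seed value here are illustrative, not ones I have tested): each ‘::’ separates an ingredient, a number directly after ‘::’ sets that ingredient’s weight, and a fixed --seed should keep re-rolls comparable:

```
/imagine prompt: mobile application for ordering fried chicken KFC::2 iphone mockup::1 dribbble, interface design::1 --seed 1234
```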

Part II: How to create stunning UI concepts with Midjourney?

6) Getting started

I suggest starting to experiment with prompts that cover well-known brands — aka those with a lot of images on the internet (logo, products, advertisements, app screenshots, design concepts, etc.) — and rather simple interface concepts (‘mobile application for ordering fried chicken KFC’).

Chained prompts to get to your initial image; source: @aiui.lab, Midjourney
/imagine prompt: mobile application for ordering fried chicken KFC :: iPhone 11 mockup :: Behance, design concept, interface design :: KFC logo, red, white :: fried chicken :: UI design, UX design, UI design trends, screendesign --version 2

Update: as of 26 Jul 2022, Midjourney has switched to their updated V3 algorithm. To replicate my results, use the --version 2 modifier in the prompt above.

7) Evaluate your initial image

Examples of suitable init images for UI design; source: @aiui.lab, Midjourney

Try selecting initial images that have one or two front-facing phone screens. If they already contain some UI elements: great! Don’t bother too much if they don’t. Give some of them 3–5 iterations and you’ll get a much better feeling for what your prompt has in store.

Selecting the right initial image is key; source: @aiui.lab, Midjourney

If you don’t get any screen-like objects in your initial image, consider re-rolling or adjusting your prompt (see Troubleshooting).

8) Iteration is key

Evolution across 29 iterations; source: @aiui.lab, Midjourney

We’re talking at least 10–20 recursions to selectively drive variants, in baby steps, towards ‘developing’ interface elements.

Over time, you’ll build a feeling for what can emerge out of a rather dull scene — or realise when you’ve driven it too far and need to take a couple of steps back. This method has turned out to be significantly more impactful than painstakingly fine-tuning prompts without any iterations.

9) Have fun — and patience

AI-generated UI concepts; source: @aiui.lab, Midjourney

Midjourney is evolving at light speed — and with every new update to their algorithm there will be new challenges and opportunities.

I’d love to hear about your experiences with AI generated UI:

  • What are your best practices and prompts?
  • Have you gotten great interface results with DALL-E-2?
  • Showcase your work in the comments, connect on LinkedIn, or drop me a message!

10) Troubleshooting

  • If after 5–10 variations no interface has formed, or your mockup forms start to dissolve, it might be best to start over and to adjust your prompt.
  • Sometimes, simply re-rolling your initial prompt will provide you with a significantly better base image.
  • Not enough interface or brand elements? Try adding additional weighted prompts (‘:: ui design, interface design, design concept’ or ‘:: iphone 11 mockup’), which help dial up desired characteristics.
  • Consider other prompt modifiers like --no or --stop to further influence your results.
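For instance, a hypothetical prompt combining both modifiers (the values are illustrative, not from my experiments): --no suppresses an unwanted element, while --stop halts generation early at a given percentage, leaving results intentionally rougher:

```
/imagine prompt: mobile application for ordering fried chicken KFC :: iphone mockup --no text --stop 80
```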

11) Tip: Save time and money — skip the upscale

You’ll be doing a lot of iterations to get outstanding results. And you might be as baffled as I was about how much detail can appear 10 or more variants down the road.

To make the most of your time, prioritise variants over upscales, and only start upscaling quite a few versions in. I have stopped using upscale-to-max almost entirely: I found the higher resolution often smoothed out elements that I liked better before, and I frequently ended up choosing the non-upscaled-to-max images.

Provided patience is one of your virtues, you’ll be able to run the above comfortably in relax mode, so that those experiments won’t exhaust your budget.

Closing Thoughts

As all of the above AIs are trained on data publicly available on the internet, every result can be considered a mashup of someone else’s intellectual property. There is, rightfully, a discussion arising around the originality of the works produced and the usage rights attached to them.

Close-up of AI generated UI concept, resembling a well-known fast food chain, @aiui.lab, Midjourney

Clearly, the results of the described process are neither suitable nor meant to be used 1:1 in commercial work, nor sold as they are. Rather, they will serve as an additional source of inspiration and rapid iteration. This is very similar to what current best practices already include: manually curating moodboards for internal purposes from any source imaginable (often with little regard for usage rights) to benchmark and illustrate a desired look and feel. The same goes for combining existing interface elements from various benchmarks into ‘frankenstein mockups’ to build upon for sketching and ideation.

In short, the emerging technologies give a 21st-century spin to a rather antiquated, laborious process, akin to working with a new team member with a fresh perspective — just a magnitude faster and without concerns about burning the midnight oil.

⚡⚡⚡

Note: You can find more AI/UI experiments on @aiui.lab on instagram.


Joschka Wolf
Bootcamp

Head of Experience Design 🚀 I envision, design, and build digital products and teams that change people's lives. Human-centric. Result-oriented. Brand-driven.