Embracing AI in Brand Design at Alan

Reinventing our Brand Experience

Édouard Wautier
8 min read · Apr 4, 2023

At Alan, we’ve always been excited about the potential of new technologies to redefine the way we approach design and communicate with our audience. Today, we’re sharing how we’ve been using AI-powered tools like Stable Diffusion and Midjourney to transform our brand and create content more efficiently, with a more aspirational and captivating visual language.

AI has been a topic of much debate within the design community. Some people are strong detractors, others are strong supporters. Regardless of where you stand, one thing is certain: it's changing design in deep ways, and the best thing to do is stay ahead of the curve. Here's a practical take on how we're using AI at Alan today to reinforce our brand.

Alan’s Brand Evolution

Since we started Alan, we've had a cute little animal along for the journey. Over the years, it has evolved from a simple vector logo to a furry, 3D indigo beast that represents the mysterious and friendly spirit of our company.

Alan's brand evolution since 2016

We love our mascot and always wanted to do more with it, but the math of resources, finances and great illustration work does not often add up in the reality of business. Long story short, we've always been frustrated by the limitations.

The Rise of AI Image Generation

About 12 months ago, our CEO and co-founder, Jean-Charles Samuelian, started playing with Dall-E and experimenting with creating marmots (yes, our mascot is originally a marmot).

He couldn't make anything "on brand", but he could make marmots in many contexts and situations. We suddenly had marmots punctuating our weekly reports to the company, popping up in Slack messages, and decorating the walls of our offices.

The early days of Dall-E

Then Charles Gorintin (our CTO and co-founder) took the lead on exploring how AI (all AI, not just image generation) could change Alan. Among other things, he started deploying Stable Diffusion and tinkering with it. Quickly, Dream Booth made it possible to train a custom model. Charles trained a first model based on images we had around (a few 3D rendered assets), and the results were interesting. Often terrible, sometimes not too far off, sometimes inspired.

(For the curious, here is a more technical paper written by Charles Gorintin.)

Early days using a Dream Booth trained model on Stable Diffusion

Putting AI to the Test

The only way to really try the tech was to use it for real.

Wonder what it looks like? Ground zero of UI right here!

I was giving a presentation later that month (here's the talk, fully in French, sorry about that) and decided to use Stable Diffusion and our first model to make all the illustrations for the presentation.

I really learned how to use the tool, its strengths and its limitations.

Generating a good image was not zero work: it took about 3 hours per image. But it was still a big step forward in quality and efficiency, and pretty impressive given how little we had invested at this point.

My very first attempt to make presentable “on brand” visuals!

Making Stable Diffusion a Production Tool

Convinced by this first test, we decided to try to make Stable Diffusion a production tool for us.

Building a Library of Images

So far, we had only fed the model images that we had lying around. This time, I built a set of images specifically for the training.

Using my limited skills as a 3D artist, I took our mascot's 3D rig (think of it as a virtual puppet you can manipulate to create new poses), built a library of multiple poses (variations on our mascot) and rendered 32 shots of them from multiple angles, respecting the specs for Stable Diffusion (squares of 512 × 512 pixels).
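If you're curious how that kind of preprocessing can be automated, here is a minimal sketch using Pillow; the folder names are hypothetical, and in practice you could also render the squares directly from your 3D software.

```python
from pathlib import Path
from PIL import Image

SRC = Path("renders/raw")        # hypothetical folder of mascot renders
DST = Path("renders/train_512")  # output folder for the training set
DST.mkdir(parents=True, exist_ok=True)

for i, path in enumerate(sorted(SRC.glob("*.png"))):
    img = Image.open(path).convert("RGB")
    # Center-crop to a square, then downscale to the 512 x 512 spec.
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img = img.resize((512, 512), Image.LANCZOS)
    img.save(DST / f"alanmarmot_{i:02d}.png")
```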

The training set of images

Training the Model

One of the essential aspects of training an AI model is finding the right number of training iterations. During the training process, the model learns to recognize the patterns, shapes, and styles present in the training images.

If the model goes through too many training iterations, it may become too rigid and only generate images that closely resemble the training set, resulting in a lack of creativity and diversity in the generated images. On the other hand, if the model goes through too few training iterations, it might not learn enough from the training images, producing images that are barely recognizable or inconsistent with the brand’s visual language.

This is a trial-and-error process: we generated several models from the same set of images with various numbers of training iterations and compared the results.
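We haven't shared our exact training setup, but to make the idea of such a sweep concrete, here is a rough sketch using the Hugging Face diffusers DreamBooth example script (train_dreambooth.py). The base checkpoint, custom token, step counts and hyperparameters below are illustrative assumptions, not our production values.

```python
import subprocess

# Sketch: train several DreamBooth models from the same 32 renders,
# varying only the number of training steps, then compare them.
# Assumes the diffusers DreamBooth example script is available locally.
BASE_MODEL = "runwayml/stable-diffusion-v1-5"  # assumed base checkpoint
DATA_DIR = "renders/train_512"

for steps in (400, 800, 1200, 2000):
    subprocess.run(
        [
            "accelerate", "launch", "train_dreambooth.py",
            "--pretrained_model_name_or_path", BASE_MODEL,
            "--instance_data_dir", DATA_DIR,
            "--instance_prompt", "a photo of alanmarmot",  # custom token
            "--resolution", "512",
            "--train_batch_size", "1",
            "--learning_rate", "5e-6",
            "--max_train_steps", str(steps),
            "--output_dir", f"models/alanmarmot_{steps}_steps",
        ],
        check=True,
    )
```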

We still want the model to be "free" enough to create this type of fun-looking image.

Testing the Model

We used the "X/Y/Z plot" render feature of the Stable Diffusion web UI to test various versions of the model against a set of prompts representing different situations. This allowed us to select the best models for our purposes.

Comparing different models with different prompts and different sampling methods.

Here are examples of the prompts I’ve been using as test prompts:

alanmarmot having a nap on a pile of leaf, (magic forest), autumn season, intricate red leafs, tress, calm, serenity, misty atmosphere, red orange yellow tones

alanmarmot having a nap on a pile of fresh snow , (top view), winter, cold, (magic forest), (blue sky), flying leaf, white blue tones

Full shot of alanmarmot dressed as a chemistry lab operator, playing with glass tubes, bright color chemical components, (mad scientist), smoke, inside a lab, neon light, bokeh effect
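The "X/Y/Z plot" lives in the web UI, but the same kind of comparison can also be scripted. Here is a rough sketch with the diffusers library, assuming the fine-tuned checkpoints live in local folders; the prompts are shortened versions of the ones above, and a fixed seed keeps the comparison fair.

```python
import torch
from diffusers import StableDiffusionPipeline
from PIL import Image

MODELS = ["models/alanmarmot_800_steps", "models/alanmarmot_1200_steps"]  # assumed paths
PROMPTS = [
    "alanmarmot having a nap on a pile of leaf, (magic forest), autumn season",
    "alanmarmot having a nap on a pile of fresh snow, (top view), winter",
]

tiles = []
for model_dir in MODELS:
    pipe = StableDiffusionPipeline.from_pretrained(model_dir, torch_dtype=torch.float16).to("cuda")
    for prompt in PROMPTS:
        generator = torch.Generator("cuda").manual_seed(42)  # same seed for every cell
        tiles.append(pipe(prompt, num_inference_steps=30, generator=generator).images[0])

# Assemble a models-by-prompts grid, one row per model.
w, h = tiles[0].size
grid = Image.new("RGB", (w * len(PROMPTS), h * len(MODELS)))
for i, tile in enumerate(tiles):
    grid.paste(tile, ((i % len(PROMPTS)) * w, (i // len(PROMPTS)) * h))
grid.save("model_comparison.png")
```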

And the result is…

Pretty good. In 20 to 60 minutes, we can create images like these:

After a couple of trials and errors, here are the results using our first stable model!

It’s not complete creative freedom. It’s more like working with an assistant that can be both genius and totally dumb.

Current limitations:

  • Limited creativity in poses (e.g., yoga poses or riding a horse are challenging)
  • Inaccurate details (e.g., incorrect number of fingers, missing or oversized tail)
  • Difficulty achieving specific compositions

Here is how I recommend using it today:

  • Maintain an open mind regarding the final image (composition and content). Experiment with different visual approaches for similar ideas.
  • The results partly depend on luck, so try multiple times, and try variations on the prompt as well as different sampling methods and settings.
  • Be specific in your choice of words and learn the vocabulary to describe an image (the framing, the composition, the pose, the lighting). Writing a prompt is a skill, and many people online write about it. For instance, @nickfloats regularly tweets about vocabulary and prompt-writing techniques. Some people use GPT to generate better prompts.
  • Set aside the keywords that generate a specific style and reuse them to create consistency between visuals. For instance, I frequently use the combination "octane render, cinematic, movie concept art, cinematic composition, ultra-detailed, realistic, hyper-realistic, volumetric lighting, ethereal, cinematic light". It works really well with the 3D-looking style of our mascot and helps create a common look across images.
  • The image is rarely perfect right out of "Text to image". You'll need to get creative and combine techniques: when you have a good direction, move to "Image to image" to generate variations and fine-tune, go back and forth with "Inpainting" to correct smaller mistakes, and finalize in Photoshop if necessary (see the sketch after this list). If you don't know what these mean, a quick Google search will tell you.
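To make that "Text to image", "Image to image", "Inpainting" loop concrete, here is a minimal sketch with the diffusers library. The checkpoint path, prompt and mask file are assumptions, and the inpainting pass uses a dedicated public inpainting checkpoint; the real work is in the iteration, not the code.

```python
import torch
from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionImg2ImgPipeline,
    StableDiffusionInpaintPipeline,
)
from PIL import Image

MODEL = "models/alanmarmot_1200_steps"  # hypothetical fine-tuned checkpoint
STYLE = "octane render, cinematic, movie concept art, ultra-detailed, volumetric lighting"
prompt = f"alanmarmot dressed as a chemistry lab operator, inside a lab, {STYLE}"

# 1. "Text to image": explore until a direction looks promising.
txt2img = StableDiffusionPipeline.from_pretrained(MODEL, torch_dtype=torch.float16).to("cuda")
draft = txt2img(prompt, num_inference_steps=30).images[0]

# 2. "Image to image": generate variations around the chosen draft.
#    A lower strength stays closer to the input image.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(MODEL, torch_dtype=torch.float16).to("cuda")
variation = img2img(prompt=prompt, image=draft, strength=0.5).images[0]

# 3. "Inpainting": regenerate only the white areas of a hand-made mask,
#    using a dedicated inpainting checkpoint for the touch-up pass.
inpaint = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
mask = Image.open("mask_fix_tail.png")  # hypothetical 512x512 mask
final = inpaint(prompt=prompt, image=variation, mask_image=mask).images[0]
final.save("marmot_chemist.png")
```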

ControlNet

A very recent addition, ControlNet has the potential to significantly improve the creation process. It allows us to generate visually appealing images by transforming hand-drawn or low-fidelity Illustrator silhouettes into polished representations of our mascot.

For instance, I hand-drew the first two silhouettes in 2 minutes, added a silhouette from a 3D render, and used ControlNet to create the rest.

A quick “ControlNet” render from drawings.

The two small silhouettes are slightly awkward, mostly because my drawing was not very polished. Otherwise, it looks pretty good for 2 minutes of work.

You can imagine a pretty efficient workflow: use a vector puppet to create very low-definition silhouettes and let the tool figure out the rest.
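For the technically curious, here is roughly what that silhouette-to-image step could look like with the diffusers library. The scribble ControlNet is a public model, while the fine-tuned checkpoint path and input file are assumptions about the setup.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# A scribble-conditioned ControlNet guides generation from a rough silhouette.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "models/alanmarmot_1200_steps",  # assumed fine-tuned checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# White-on-black silhouette, e.g. a 2-minute hand drawing or a flattened 3D render.
silhouette = load_image("silhouette_jumping.png")  # hypothetical input

image = pipe(
    "alanmarmot jumping with joy, octane render, cinematic, volumetric lighting",
    image=silhouette,
    num_inference_steps=30,
).images[0]
image.save("marmot_from_silhouette.png")
```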

Training the Team

We have a tool that gives good results; now we need the team to adopt it.

Today, this means:

  • Documentation on Notion (vocabulary, a description of the workflow, the keywords that create certain styles, and details on specific settings).
  • A 60-minute live training session going through my workflow, plus a "homework exercise", which is usually enough to bring people up to speed.

Conclusion

AI generation is still a very new tool at Alan. It’s very popular for internal use, and it’s becoming a tool for external communication.

It's a strong enabler: it lets us do what we could not do before because of limited resources and time. It widens the palette of expression for designers and non-designers alike, leading to more visual communication. It will let product teams integrate content at scale, and marketing and sales teams build custom visuals that reinforce our brand and its proximity to our users.

We’re eager to hear from you — how are you incorporating these technologies into your design practices? Share your experiences and let’s continue to push the boundaries of what’s possible in design and communication.

PS: Here are a few background goodies for the brave ones who've read to the end.


Édouard Wautier

Lead UX / UI Designer at Alan.eu, former Lead UX / UI Designer at Withings. More on me at edouardwautier.com.