Combining AI Tools and the Wes Anderson Trend

Jo Williamson
5 min read · May 28, 2023

A silly experiment with generative AI, Wes Anderson and Paddington Bear

Earlier in May, I attended a conference in Edinburgh called DiBi (Design It: Build It) with some of my colleagues from hedgehog lab. It was two days of inspiring talks about user research, designer-dev relationships, and stepping outside your comfort zone. Of course, it wouldn’t be a tech conference in 2023 without talks about AI.

Matt Garbutt gave a talk and demonstration about his journey with generative AI and how he harnesses its power in his creative process. In his talk, he used Midjourney (an AI image generator) and ChatGPT (an AI content generator) to create a collaborative campaign for New Balance and the London Marathon.

💡 The Idea

By pure coincidence, I had been exploring Midjourney on my train ride north to Edinburgh. I’ve previously written about my experiments with ChatGPT on this blog, but AI image generation was new to me.

The other important bit of context for this post is the Wes Anderson trend happening online. For those who don’t navigate the internet according to TikTok’s all-knowing algorithm: people have been sharing videos of their lives filmed in Anderson’s whimsical, deadpan, colourful and flat style.

So, combine conference inspiration, my desire to explore AI image generation, my enjoyment of the Wes Anderson trend, and a personal goal to expand my Figma prototyping skills, and what do you get? Obviously, a prototype for Paddington Bear’s online marmalade shop in the style of Wes Anderson — duh.

Here’s a brief look at my process and the outcome.

🌱 Getting a concept

I started off trying to get ChatGPT to come up with a brand collaboration on which I could base this work. However, a Pepsi Max collaboration with the Chelsea Flower Show didn’t inspire me. In the end (and without the help of ChatGPT), I landed on Paddington’s marmalade store. I wish I had a deep and insightful reason for choosing this theme, but really, I love the Paddington movies, and it felt silly and engaging enough to work. Take it as a win for humans: we’re still needed to steer the AI and make the good calls.

From there, I wanted AI to take over. ChatGPT gave me several options for the name of the online store, including Paddington’s Pantry, The Marmalade Bear, The Bear’s Breakfast and Aunt Lucy’s Marmalades. I also had it generate taglines for the business. At this point, I named the business Paddington’s Kitchen with the tagline “The taste of adventure in every jar.”

Prompt: Wes Anderson inspired logo for Paddington Bear’s Marmalade shop, no text, modern, minimal, vector style

Then it was straight into Midjourney to create a logo. Using Wes Anderson’s name was key in all prompts to get consistent styling. In the end, I went with a simple text logo, as such an illustrative approach didn’t fit with the website design. However, Midjourney’s output was impressive nonetheless.

✍️ Website text content

Then it was back to ChatGPT to outline the structure of the landing page, name the available marmalade flavours, write tasting notes, draft copy for every section, and even suggest hex codes for the brand colours.

As I mentioned in my last post, the thing with generative AI is that it’s a collaboration. It takes working with the software to finesse and improve the output, and there were a number of revisions, clarifications and reworks. It still requires a person to bring the creative idea; for instance, there were parts of the copy I had ChatGPT rewrite in Paddington’s voice.

🖼️ Generating images

Then it was over to Discord to start working with Midjourney. Here’s a sample of just some of the images I generated.

Prompt: wide shot of a family kitchen, warm colours, Wes Anderson style, realistic photography, film photography --ar 6:4
Prompt: Paddington 2 if directed by Wes Anderson, film photography --ar 6:4
Prompt: Photography of a jar of orange jam with a blank white label, on a white background, octane, 4K, intricate and detailed texture --ar 4:5
Prompt: wes anderson flat lay, postcard in centre ::3, oranges and kitchen equipment around, film photography --ar 4:6

A note on some keywords in the prompts:

--ar 4:6 sets the aspect ratio of the image. By default, images in Midjourney are 1:1 (square); adding “--ar” followed by a ratio changes this.

::3 (a double colon followed by a number) tells the system how much weight to give that part of the prompt.

octane pushes the output towards more realistic physical properties and more accurate lighting in images (it references the Octane render engine).
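
For example, a hypothetical prompt (not one I used in this project) combining all three of these keywords might look like this:

Prompt: a jar of marmalade on a wooden kitchen shelf ::3, oranges and toast around it, Wes Anderson style, octane, film photography --ar 4:5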

🖌️ Figma

Finally, it was time to take all of this generated content into Figma. In my day-to-day work, I have to work with clients, budgets and dev requirements. I love working within constraints; it makes you a better designer. However, it was refreshing to have free rein to explore some of Figma’s prototyping tools that I haven’t had a chance to play with. I was able to flex some different animations and triggers. Check it out here (best enjoyed with sound):

https://vimeo.com/830972838?share=copy

🤔 Final Thoughts

One of the most interesting use cases for Midjourney was generating a mockup scene to place the website design into. It took two minutes in Figma to drop the design onto the generated image (Photoshop would also have worked well for this). Going forward, this is a great option for demonstrating work in a realistic context.

Prompt: an iMac, with oranges and kitchen equipment around it, directed by wes anderson, wes anderson style, clean sharp focus, film photography --ar 6:4

A phrase I’ve seen online a lot recently goes along the lines of “AI isn’t coming for your job, but the people who know how to use it are.” Silly weekend projects like these are a great way to play around with the new tools and get to grips with what the new world of design and tech is starting to look like.
