Why use AI for our recent Gucci retail project?

Random Studio Editor
6 min read · Mar 20, 2023

For a new Gucci store in Chengdu we created a three-storey digital installation that transports visitors to an ever-evolving landscape. To create the digital content, we involved AI in our design process, for reasons both practical and aesthetic. The brief we received from Gucci centered on building a location: a permanent but ever-changing environment. The scale of the architecture we were working with, and therefore of the screens we would use, meant that practically we had to take an automated approach to produce the large amount of footage we would need.

Conceptually, AI was also a good fit for the two core sources of inspiration for this project — the natural environment of Chengdu and the Renaissance architecture of Florence — because of its painterly aesthetics. Since the environment we were creating references these real, existing locations and combines them into a new landscape, the ‘remixing’ that AI does was a perfect fit. AI is statistics-based; it produces new imagery based on a collection of existing data, creating variations on what you feed it. This worked well with the huge variety of content we were researching.

The rise of AI-generated imagery has raised many questions surrounding authorship and originality. How do you see the issue of appropriating existing imagery in this project?

For us, it was very useful to explore the time period that was embedded in our reference pictures. We weren’t working towards imitating a particular style, which touches on one of the darker sides of AI: the concern around copyright infringement. Is what we are doing collaging? Or creating new, original content? We used AI to reach a poetic quality, and we put a lot of creative energy into making something special that bridges the tradition of these old references and the newness of this technology. We were also mainly working with material in the public domain, which is a more ethical zone.

What was your approach as a studio to using AI?

AI works best when it’s in collaboration with humans. We really only used it as one part of our pipeline. There was a lot of manual labour in combining all these assets and creating scenes with depth and cohesion, to make a single piece rather than a collage. A lot of work went into connecting the scenes together and making a story, as well as post-processing the footage: motion, colour, positioning, deciding how things sit within the scene. Working in this way creates an interesting interplay between AI and human creativity.

Did you encounter any challenges with this human/AI collaboration?

On the Chengdu side, we encountered AI’s bias around one of the core visual elements: the hibiscus flower. The variety native to Chengdu is quite different from the American hibiscus, which is more familiar on the Internet and therefore more present in the data these models are trained on. In Chengdu, the difference between the two would have been a very noticeable flaw, so we had to make sure we produced the right flowers rather than letting a Western species dominate. It’s actually an interesting metaphor for AI and bias: a pretty innocuous example, but it shows how dangerous it is to think of AI as objective truth. The output is completely determined by what the AI is fed. Through this process, we also learnt how to work with the unpredictability of AI. If this project were a painted mural, for example, we would have had a very set and clear brief.

“AI can be random — it’s more like training a wild animal”

How did you develop this approach?

It felt a bit like exploring a landscape. We were discovering new techniques and experimenting, combining competing tools in the AI sphere. It was quite a challenge to create so many different elements and so much footage, so we approached it from all the different angles. The human teamwork behind this discovery process was crucial, led by strong art direction.

We knew the capabilities that AI has, but what we discovered along the way was how it really ‘behaves’ and how to steer the environment we were creating to reflect and host that natural behaviour. This particularly affected the motion of the landscape, which embraced the morphing effect of ‘latent stepping’ (moving through a range of the model’s possible outputs). At Random, it’s part of the studio’s DNA not to use tech for the hype; it has to make sense for the subject matter we are working with — and we don’t want to shy away from the effects and aesthetics it produces. Here, the fluidity that is inherent to AI worked really well for this long-playing piece of content, where there aren’t any hard cuts or jarring transitions. While this morphing effect can often be read as a ‘shortcoming’ of the mechanics of the AI process, it became a key aspect of the visual language of the world we created for Gucci Chengdu.
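The article doesn’t specify which model or tooling produced these morphs, but as an illustration, here is a minimal Python sketch of what ‘latent stepping’ typically means in practice: interpolating between two points in a generative model’s latent space, so that successive decoded frames differ only slightly and the imagery morphs continuously. The latent size (512), the frame count, and the `decode` step are illustrative assumptions, not details from the project.

```python
# A generic sketch of 'latent stepping': walking between two points in a
# generative model's latent space to produce a smooth morph. The model,
# latent size, and `decode` step are hypothetical.
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Spherical linear interpolation between latent vectors a and b.

    Slerp is commonly preferred over straight linear interpolation for
    high-dimensional Gaussian latents, since intermediate points stay at
    a plausible distance from the origin.
    """
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * a + t * b  # vectors are (nearly) parallel
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

rng = np.random.default_rng(0)
z_start = rng.standard_normal(512)  # latent for the first 'scene'
z_end = rng.standard_normal(512)    # latent for the next 'scene'

# Small steps through latent space: each frame differs only slightly from
# the last, which yields continuous morphing rather than hard cuts.
frames = [slerp(t, z_start, z_end) for t in np.linspace(0.0, 1.0, 120)]
# Each latent in `frames` would then be rendered by the generative model,
# e.g. image = decode(z)  # `decode` is a hypothetical model call
```

Chaining many such segments end to end is one way a long-playing piece can drift through scenes without any cut at all, which matches the fluid motion described above.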

What possibilities do you see for AI in the future?

To us, AI image generation is about as big as the invention of photography. Creative humans will take on a new role — it’s no longer about the virtuosity of handiwork and craft. That said, the art and possibilities of AI are still very strong and can be pushed to new boundaries. At the moment, lots of people are playing with the technology. As things mature and the initial novelty wears off, it will have a different impact. Right now, the creative industry is experiencing growing pains as it tries to adapt.

And what specifically is Random Studio interested in developing with AI?

After this project, we want to focus on the real-time application of AI. Gucci Chengdu is an immersive environment, but it didn’t run in real time. At the studio, we’re interested in building experiences that take place in our physical world — coming into the realm where we live and breathe, as opposed to us stepping into a technological realm. With real-time AI, the spaces we could create would be more interactive. We could involve sensors and cameras. These spaces could prompt an on-site interplay between human intuition and AI in a way we haven’t really seen before.

Read more about the full project on our website.
