The missing part for AI to replace designers: annotated prompting

How could we become art directors of the future

Adam Nemeth
5 min readMar 23, 2023

Here is the thing: I believe Midjourney could completely replace a junior (sometimes even a medior) designer if it had just one feature: annotated retouch.

Let me show you what I mean by that:

Commenting in InVision (Source)

How a junior designer works

A junior designer usually has a few years of eye training by master trainers — at least, that has been the case for the students I’ve worked with. This training is done mostly through one-on-one consultations and BIIG presentation events at the end of the semester, where the would-be designer shows their final creation and the thinking behind it, to get feedback from the other master trainers as well.

Student at her thesis defense in Fashion Art Direction at KREA School of Design

Once this phase is finished, the junior designer ends up in a design agency (provided they’re lucky). They’re assigned to a senior designer (mind you, I’m making a lot of oversimplifications here) and an account manager (many account managers at once, in fact, but for us, one will be enough). They’re given a laptop, an Adobe Creative Cloud license and a Slack (or Discord) account, as well as access to the clients’ brand guidelines.

When a new client comes in, they’re tasked by the account manager to provide a few variations based on an idea. The format of the idea is called “the brief”.

A brief is essentially a prompt for human intelligence, in an otherwise context-free environment

And here comes the main part: mostly, apart from the brief and their training, the junior designer gets no external information. All they have to do is provide 2–8 design directions for a given brief, based on whatever is included in it.

In some cases, the brief will refer to a brand guideline or contain a moodboard, but in a lot of cases, it’s essentially “we want what X did, but with Y”. The last time I briefed a designer, they got a full GV Brand Sprint deck together with some TNS NeedScope magic, but I’m a rare case.

The role of the senior designer

Obviously, none of the first 2–8 design variations will be right on the first try. We will need to choose one direction, and perhaps mix in elements from the others.

An internal design critique session will be eerily familiar to most fresh graduate art students — Photo by Headway on Unsplash

And this is where the senior designer comes in: they will tell the junior designer what to change and what to keep. Sometimes it’s as easy as saying “do the same but in orange”, or “I like what you did in the background, but try to incorporate this element from your other idea as well”. And often, what they do is

They annotate a design by circling or pointing at various parts of it, and write a brief comment on how that particular part should be changed.
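An annotation in this sense is nothing more than a region plus a change request. Here is a minimal sketch of that idea in Python — all the names and fields are hypothetical, not any real tool’s format:

```python
from dataclasses import dataclass


@dataclass
class Annotation:
    """One circled/pointed-at region of a design, with a change request."""
    x: int        # top-left corner of the region, in pixels
    y: int
    width: int
    height: int
    comment: str  # e.g. "same, but in orange"


# A senior designer's feedback round is then just a list of annotations:
feedback = [
    Annotation(x=120, y=40, width=300, height=80,
               comment="same, but in orange"),
    Annotation(x=0, y=400, width=1024, height=200,
               comment="keep this background, but pull in the logo from direction #3"),
]
```

The point is how little structure is needed: a rectangle and a sentence already carry everything the “do the same but in orange” style of feedback contains.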

This is key: this feature is standard in most collaborative graphic design platforms, be it Sketch, Figma or Zeplin.

Iteration, iteration, iteration

What happens after is straightforward: once a design direction has been chosen and an initial rebriefing has been done (sometimes using elements of other directions), an iterative, mainly annotation-based process starts, in which we fix flaws one by one. Sometimes variations are requested again, but only in details: what if we tried this, tried that.
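The loop above can be sketched in a few lines. This is a process sketch, not a real API — names like `render` and `review` stand in for whatever tool or person performs that step:

```python
def iterate_design(brief, render, review, max_rounds=10):
    """Run brief -> render -> annotate -> re-render until no annotations remain."""
    design = render(brief, annotations=[])
    for _ in range(max_rounds):
        annotations = review(design)   # the senior designer's circled comments
        if not annotations:            # all annotations answered: we're done
            return design
        design = render(brief, annotations=annotations)
    return design
```

Everything happens against the same `brief` — the single conversation, single context the workflow depends on.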

It all happens in a single conversation, a single context.

At the end of the day, once all annotations are answered, a final design emerges. That final design is then taken by other people (frontend developers), again without much context, and turned into HTML, which can go into the client’s CMS to produce a landing page, an advert, or whatever was needed in the first place.

That’s it. That’s how an imaginary junior designer might spend their oversimplified life at a dreamed up design agency.

Now let’s see how it works with AI

AI to replace the junior in the workflow

The problem with junior designers is that they’re human. They tend to work less than 168 hours a week (even if much more than 40), they are in desperate need of sleep, food, heating, and all the other luxuries a few-million-year-old soft-tissue computer architecture requires — not to mention emotional and mental breakdowns, laptops forgotten at home during vacation, or being so sick they can’t work anymore (even if 90% of diseases don’t reach that level, by some accounts).

Their response time is also subpar: by the time they’ve read the brief, most AI products out there have already started to draw! So it’s clear that we need to replace them (we might not NEED to, but we WILL do it as soon as the technology becomes available).

Enter AI

It’s very easy to give a good-enough brief to Midjourney or DALL-E and ask for a few variations/directions.

It’s also comparatively easy to re-upload the picture they’ve just made and adjust the prompt a little.

What we can’t do, however, is point our finger at things. We can’t say: hey, I don’t like that this dude has 6 fingers, can you fix it?

A typical Midjourney handshake, with one finger amiss

What is needed

All that’s needed is the ability to annotate images and tell the AI to change that tiny detail and not much else (or to weight everything else low).
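Mechanically, an annotation could compile down to something inpainting-style tools already understand: a mask marking the pixels to regenerate, with the comment acting as a localized prompt. A stdlib-only sketch of that conversion — the mask format and the pairing with a comment are my assumptions, not any model’s actual interface:

```python
def annotation_to_mask(img_width, img_height, x, y, w, h):
    """Build a binary mask: 1 inside the annotated rectangle, 0 elsewhere.

    An inpainting-style model would regenerate only the masked pixels,
    guided by the annotation's comment ("fix the sixth finger") as a
    localized prompt, leaving the rest of the image untouched.
    """
    return [
        [1 if (x <= col < x + w and y <= row < y + h) else 0
         for col in range(img_width)]
        for row in range(img_height)
    ]


# e.g. circling the offending hand on a tiny 4x4 thumbnail:
mask = annotation_to_mask(4, 4, x=1, y=1, w=2, h=2)
```

The circled region becomes the only part of the canvas the model is allowed to touch — which is exactly what “change that tiny detail, not much else” means.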

The way annotation could work

Would it replace junior designers?

In the short term, no. But it would be a big leap towards that, whether fortunate or not.


Adam Nemeth

Leading products and services the Human-Centred way / UXer, Researcher, Software Engineer // UXStrategia.net