Could I have replaced my character designer with AI?

Øyvind Knustad
8 min read · Jan 24, 2023


FLUFFY — “Imitation of Life — A machine learning story — Chapter Three”

This is Fluffy, the robot in my student film “Imitation of Life — A Machine Learning Story”, which I made in the third year of my Film and TV Production studies. Creating it took several months and a handful of very talented artists. As its writer/director, I worked closely with the VFX team to bring the robot I wrote to life. The design had to reflect how the fictional company GenTech would have designed it to fulfill its purpose. It was not enough to just make a cool-looking robot; it had to be believable.

To help me with this, I asked my friend Hjalti Gunnar Tryggvason to apply his amazing drawing talents to bringing this robot to life. Maybe not my exact phrasing when I asked, but it’s an accurate description. He was interested in doing it and became a valuable part of the team as character designer.

With the rise of AI-generated art in recent months, there’s a lot of talk about ethics, copyright, and the future of the artist as a profession. I decided to do a little experiment to investigate the potential effects this emerging technology could have on the profession of concept artist. Or, put more grimly:

Could I have replaced Hjalti with AI?

The Movie

Poster of the short film “Imitation of Life — A Machine Learning Story”

There’s another layer of relevance that is a bit, well… meta (the expression, not the company), because the issue of AI taking over creative work comes up quite a lot in the film, which is about AI and storytelling. What’s the difference between human inspiration and a machine’s “inspiration”? At the time, AI-generated stories had only just emerged, and AI-generated art seemed a long way away. Only a couple of years later, these technologies have evolved a great deal.

The movie doesn’t go into the technicalities of AI at all, as my knowledge of it was quite limited, but it explores some of the philosophical questions surrounding AI art and creativity. However, it doesn’t touch on the use of copyrighted material in training data, or ownership of the generated artwork, and I won’t go into those topics in this blog post either.

The short film is composed of three individual parts that are loosely connected:

  • The first one is about a television screenwriter who finds inspiration in a difficult life event.
  • The second is about a data science student who needs to convince his adoptive father that his line of work won’t replace creative jobs.
  • The third is about a girl who is trying to keep her family from replacing their robot with a newer model.

If you are interested in watching the 17-minute short film, here is a shameless plug. You can watch it by clicking on this link:

The character design process

When designing a character, the design choices need to come from the script and the overall world-building. It’s not enough to make something that looks cool. When designing Fluffy (the robot), we were thinking about how the fictional company GenTech would have designed it.

Here are some points the design needed to satisfy:

  • Be a storyteller, with a speaker as a mouth for good sound.
  • Be friendly looking so it wouldn’t scare the kids it’s supposed to babysit.
  • Have a compartment in the torso for shopping bags and other things.
  • Be realistic: look like something a company in the near future actually could have built.
  • Look like an older model that would be replaced by a newer one.

So let’s give the machine a go

Much of the success lies in the prompt construction, obviously, and I can’t claim my prompting is as good as it can be, but it was an honest attempt to give the model all the information it needed. All the prompts were a variation of:

“cinematic humanoid friendly robot with speaker as a mouth and a large torso”

I noticed that changing a word here and there or adding one would yield a different result.
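Exploring those one-word variations can even be done systematically by scripting the prompt combinations instead of typing each one by hand. A minimal sketch in Python (the word lists and template slots here are my own illustration, not taken from any tool):

```python
from itertools import product

# Hypothetical word slots; swapping any one word yields a new prompt variant.
adjectives = ["cinematic", "big cinematic"]
materials = ["metal", "plastic"]
template = ("{adj} humanoid friendly {mat} robot with speaker as a mouth, "
            "and a large torso, digital art")

# Every combination of the slot words becomes its own prompt to try.
prompts = [template.format(adj=a, mat=m) for a, m in product(adjectives, materials)]
for p in prompts:
    print(p)
```

Each generated line could then be fed to an image generator one at a time, to compare how a single swapped word shifts the result.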

DALL-E

My first couple of tries with DALL-E missed the mark completely, but amusingly:

Prompt: “Fuffy_cinematic humanoid friendly metal robot with speaker as a mouth, and a large torso”
Prompt: “Fuffy_cinematic humanoid friendly metal robot with speaker as a mouth, and a large torso, digital art”

But with some tweaking with the prompt, I got a bit closer to what I was looking for:

Prompt: “fluffy big cinematic humanoid friendly metal robot with speaker as a mouth, and a large torso, digital art”

Out of these, the second image landed somewhat in the ballpark.

The word ‘Fluffy’ actually seemed to give it the general shape, but it also gave it some hair. No worries; next I wanted to change just the head. By selecting the head area, I was able to get iterations with different heads. Let’s see if we can find a good one, shall we?

Head edit prompt: “a friendly robot head with speaker as a mouth”

Uhm, well that was weird. Let’s try again:

Head edit prompt: “a friendly robot head with speaker as a mouth”

Not quite. But a strength of AI art is that I can quickly get iterations and, by chance, land on something I like, so let’s get some more:

Head edit prompt: “a friendly robot head with speaker as a mouth”

This is where I started to get a bit frustrated, realizing that in order to get a design that fit my criteria, I would have to spend a lot of time tweaking prompts. But this was only one of the image generators out there, so let’s try Midjourney, which is known for its artistic capabilities:

Midjourney

Prompt: “Bit cinematic humanoid friendly metal robot with a speaker as a mouth”

They are definitely more spectacular than DALL-E’s, but the question is: can I get a design that fits my criteria?

Prompt: “big cinematic humanoid friendly metal robot with speaker as a mouth, and a large torso, digital art”
Prompt: “big cinematic humanoid friendly metal robot with stereo speaker as a mouth, and a large torso, digital art”

They all look interesting, but none of them had anything I felt I could iterate on. The only thing they all succeeded at was the general shape. So after spending time prompting away on two image generators, I wasn’t anywhere near where I wanted to be. For instance, the AI had trouble understanding that I meant a speaker rather than a bullhorn, and what it meant for it to serve as a mouth. Now let’s see how the process went with Hjalti three years ago.

The Human Artist (that feels weird to specify)

Based on the script and our talks about how the robot should look, Hjalti quickly sketched up a wide variety of robots to get started. All images from here on were made by him.

The process starts with settling on the general shape with simple sketches, leaving the details for later. I pointed to a few of them and Hjalti proceeded in that direction.

These are already starting to look like realistic robots. At this point Hjalti started to think about movement, and how the legs would fold when the robot sat down:

Now the robot is getting more detailed and we’re getting some nice variations. We also have three different heads that all look like they could exist in this world. The head we settled on was the one that looked like it had a speaker for a mouth. This is where that idea came from.

References are a very common part of the design process. As we were aiming for realism, we used references along the lines of realistic-looking robots. I’m not able to show the reference images in this blog post for copyright reasons.

And after these iterations and some back and forth, we landed on a design we were all happy with, and the 3D modeller could get to work building it:

In conclusion

Could I have replaced Hjalti with an AI tool as character designer? From my little experiment, I would say only if I didn’t have any criteria to fulfill other than that it should look cool. A designer’s job is to visualize the ideas in the script, use their own experience to add ideas, and enrich the fictional world with believable design choices. I didn’t feel like the AI did any of that.

What AI art is absolutely not built for

One important thing a human can do that the AI can’t is think about movement and how the design would actually be engineered. The AI has no concept of that; all it does is guess what color the pixels should be based on statistical likelihood, kind of like how the autocorrect on your phone guesses the next word based on the words you have written up to that point. For a concept artist, this is very important to think about.
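That autocorrect analogy can be made concrete with a toy sketch (a deliberately crude illustration of statistical prediction, not how any real image generator or phone keyboard actually works): count which word most often follows each word, then guess accordingly.

```python
from collections import Counter, defaultdict

def train(text):
    """Count which word follows which, like a crude autocorrect."""
    words = text.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict(follows, word):
    """Guess the statistically most likely next word, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# "tells" follows "robot" twice, "walks" only once, so "tells" wins.
model = train("the robot tells a story the robot tells a joke the robot walks")
print(predict(model, "robot"))  # prints: tells
```

The model has no idea what a robot is or how it would move; it only knows which word tended to come next. Pixel-guessing image generators are vastly more sophisticated, but the underlying point about statistics rather than understanding is the same.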

AI art. What is it good for?

It does, however, produce some interesting ideas that seem very random. I can totally see how this technology could be used for brainstorming, and as reference material an artist can draw inspiration from. It could also be used by project developers for pitching before any artist is involved.

How I compare the human and the machine

These AI tools are fun to play around with, but I found them very annoying to use in an actual design process, with terrible results.

If I have a specific idea in mind, it’s very difficult to have the AI create it.

The experience cannot compare to geeking out over robots with Hjalti and listening to his perspective, either over video chat or over a coffee at Madam Brix café. As the father in Chapter 2 of the movie realizes, even if the technology is making great progress, the value humans bring to the creative process is, at the very least, really hard to replace with machines.

Hjalti Gunnar Tryggvason is an amazingly talented freelance human artist available for hire. You can reach him on his twitter: https://twitter.com/HjaltiGunnar
