Drawn Together: Attempting to Animate with Generative AI

Ryan Consbruck
6 min read · Apr 26, 2024


At Studio Rodrigo (a strategy and design studio in Brooklyn), we have a few working groups that pursue interests beyond client work. These groups are built around a love of movies (Cinema Rodrigo), climate change research, and artificial intelligence experiments. After we shared our initial AI experiments in an AIGA talk, we decided to explore how generative AI could be used to reduce friction in the creative process.

Animation seemed like a creative field that could benefit from generative AI’s ability to work from a prompt or existing image to fill in the frames between artist-drawn keyframes and create the illusion of movement. Our objective was to devise a workflow that visual artists, illustrators, and animators could use to create original work in less time.

We also wanted to avoid overly technical tools like AnimateDiff, which require a substantial amount of tinkering and GitHub setup.

We decided to start with a simple exercise: use generative AI to animate one isolated movement, like a person opening a drawer or turning around.

Prompt/Image to Video Experiments

Many of the available AI services claim to be able to create video from a prompt or still image. As we attempted to use these tools for our study, they all failed to follow actual scene direction. They could accomplish camera movement, or create a still image with atmospheric movement, but were difficult to control or direct with any consistency (see below).

Leonardo AI

Leonardo AI generated video from image

Stability AI

Stability / Stable Diffusion generated video from image

Kaiber

Kaiber generated video from prompt: one seagull flying across a foggy London neighborhood, seagull transforms into a paper airplane

Pika

Pika generated video from image and prompt: realistic, cinematic, seagull flying across misty London sky

From promotional videos, it seems like Sora may be able to accomplish this better, but it was not publicly available during our experiment.

Runway - Frame Interpolation

One of our guiding principles for this study was that the output should not be immediately recognizable as AI generated. We wanted it to stand on its own. So we tried a different approach using Runway’s frame interpolation tool, inputting two hand-drawn frames of a basic animal walk cycle:

Hand drawn walk cycle frames

and got this result:

Frame interpolation using Runway

which seemed promising, so we tried it again with some more complex movement (a hand flipping the page of a book):

Animated GIF using hand sketches

But the interpolation ends up working too hard and creates mushy transitions. This may be because elements become more “implied” than visually explicit.
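That “implied” mushiness is easy to demonstrate with a toy in-betweener that simply blends pixels. Runway’s interpolation is a learned model, not pixel blending, but naive blending shows the same failure mode we saw: in-between frames become ghostly averages of the keys rather than genuinely new poses. A minimal grayscale sketch (the list-of-lists frame representation is ours, purely for illustration):

```python
def crossfade(frame_a, frame_b, steps=6):
    """Naive in-betweening: linearly blend two keyframes pixel by pixel.

    Frames are 2-D lists of grayscale values (0-255). This is NOT how
    Runway interpolates internally; it is a stand-in that exhibits the
    same "mushy transition" artifact when motion is large.
    """
    h, w = len(frame_a), len(frame_a[0])
    frames = []
    for i in range(steps):
        t = i / (steps - 1)  # blend factor: 0.0 (key A) -> 1.0 (key B)
        frames.append([
            [round((1 - t) * frame_a[y][x] + t * frame_b[y][x]) for x in range(w)]
            for y in range(h)
        ])
    return frames

# A single bright "limb" at x=0 in key A has moved to x=3 in key B:
a = [[255, 0, 0, 0]]
b = [[0, 0, 0, 255]]
mid = crossfade(a, b, steps=3)[1]
# mid is [[128, 0, 0, 128]]: two faint ghosts at both positions,
# not a limb at x=1 or x=2 -- the mush, in miniature.
```

The larger the movement between keys, the less the average resembles any plausible intermediate pose, which matches what we saw with the page-flip frames.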

Runway Frame Interpolation output from manually created frames.

As another study, we used two simple illustrations from Tomie dePaola’s “Strega Nona” as key frames:

Original “Strega Nona” Illustrations

Using these as keyframes in Runway’s Frame Interpolation tool, we got the following result, which blended the roof tiles and struggled to discern between Strega Nona’s hands and face:

Output from Runway with Frame Interpolation

Returning to Leonardo AI, we input a single image and started modifying the “Motion Strength” slider.

Leonardo AI Motion Strength at 1
Leonardo AI Motion Strength at 10

With these unsatisfying results, we tried a different approach loosely based on the method explained in this very detailed guide.

Manual Frame By Frame Generation

We decided to take a more hands-on approach by using Midjourney to generate the individual frames, and Photoshop’s built-in “Generative Fill” and “Remove Background” tools to blend them together.

We generated an image of a character and background in Midjourney:

Midjourney Prompt: tall slender aging retired professor facing away, looking at tall ivory lacquered writing cabinet, in an old english study surrounded by books — niji 6 — ar 3:2

Then we created variations with the character turning towards the viewer. Since this was a rough attempt, we didn’t fuss over consistency in character design:

As we generated more variations, the background became more and more distorted, but we had enough poses to function as frame-by-frame movement:

We took each image of the character and used Photoshop’s “Remove Background” tool to quickly isolate each character frame:

Photoshop Remove Background

Then we used Photoshop’s “Generative Fill” tool to remove the character so we could have a consistent scene background.

Photoshop Generative Fill

That left us with this rough background:

and we re-inserted the isolated character frames onto the background:

to create this rough character movement:
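We did the compositing in Photoshop, but the step itself is just the standard per-pixel “over” operation, so it could be scripted for longer sequences. A minimal grayscale sketch (the list-of-lists frame representation and function names are ours, not any tool’s API):

```python
def composite_over(background, character, alpha):
    """Paste an isolated character frame over a shared background.

    All inputs are 2-D lists: `background` and `character` hold
    grayscale values (0-255), and `alpha` holds 0.0-1.0 coverage
    from the background-removal step. Each output pixel is the
    standard "over" blend: alpha*character + (1-alpha)*background.
    """
    h, w = len(background), len(background[0])
    return [
        [round(alpha[y][x] * character[y][x] + (1 - alpha[y][x]) * background[y][x])
         for x in range(w)]
        for y in range(h)
    ]

def make_animation(background, character_frames, alphas):
    """One composited frame per isolated character pose, ready to
    be written out as a GIF or image sequence."""
    return [composite_over(background, c, a)
            for c, a in zip(character_frames, alphas)]
```

Because every frame reuses the same cleaned background, the scene stays stable even as the character poses drift, which is exactly what the manual Photoshop process bought us.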

This is where we stopped.

A Creative Singularity

Even this hybrid process did not accomplish our initial objective: the transitions were not smooth and seamless. Getting a polished result would require as much effort as, if not more than, doing it manually, and the tools’ level of control would remain lacking.

It was interesting to us that as we took more and more power away from the AI tools, the results were equally unsatisfying. “100% AI” looked just as bad as “40% AI”.

There’s been a recurring dialogue about how the introduction of new tools reshapes creative industries. We believe that AI should not replace human creators but support the creative process. For this to succeed, creative individuals should have enough understanding of the tool to guide it to a satisfying result.

Currently, these tools do not produce a satisfying result on their own or with a human collaborator. It seems to us that such a reality is on the immediate horizon, and we want to be ready for it.

We wanted to share this study to see if others have made similar attempts and get feedback on where this workflow could be adjusted to result in a more satisfying outcome. We’d love to be proven wrong.

Further Reading:

The AI Lie

“It has become standard to describe A.I. as a tool. I argue that this framing is incorrect. It does not aid in the completion of a task. It completes the task for you. A.I. is a service. You cede control and decisions to an A.I. in the way you might to an independent contractor hired to do a job that you do not want to or are unable to do. This is important to how using A.I. in a creative workflow will influence your end result. You are, at best, taking on a collaborator. And this collaborator happens to be a mindless average aggregate of data.”

AI Filmmaking Guide

Consistent Style in Midjourney


Ryan Consbruck

Ryan is a product designer and animator with a love for all things weird and funky. https://www.ryanconsbruck.com/