The Magic of Deforum Stable Diffusion: Revolutionizing AI Animation

Ezagor
11 min read · Mar 27, 2023

Deforum Stable Diffusion is an extraordinary technology that is revolutionizing AI animation and image generation. As a full-stack developer, I have always had a passion for AI, but I lacked the visual aesthetics and drawing skills required to create stunning animations. That all changed when I discovered the power of Deforum Stable Diffusion.

It all started when I came across some breathtaking NFT animations on social media. I was captivated by their beauty and complexity and wondered how they were made. My curiosity led me to explore the magic behind Deforum Stable Diffusion and the process of diffusion.

Deforum Stable Diffusion is not only powerful but also user-friendly. I have witnessed its incredible capabilities at concerts worldwide, where AI art is projected onto augmented screens, perfectly syncing with the music. I was inspired to create my own animations for my smart screens at home and began looking for ways to generate animations with Deforum Stable Diffusion.

Anyma & Chris Avantgarde — Eternity [Live from GENESYS London]

Who am I

First of all, I’m Ezagor :)

I’m a computer engineer and full-stack developer. I have worked in psychology, marketing, and branding, and I have studied the Censydiam framework, need lists, and the Hook Model.
I’m working in the Web3 ecosystem right now. I have a community that runs blockchain nodes, and we also take part in lots of testnets.
I have more than 7 years of development experience. I can build apps, dApps, Web3 sites, websites, and especially chatbots. And I love using AI to create assets and art.

Twitter: https://twitter.com/ezagor_dev

Telegram: https://t.me/ezagor

Discord: Ezagor#9245

Lens: https://www.lensfrens.xyz/ezagor.lens

Lenster: https://lenster.xyz/u/ezagor

Phaver: ezagor

How to Install Stable Diffusion, Run Diffusion, and Use Colab Research for AI Animations

AI animations have become increasingly popular, and there are various tools available to generate them. In this article, we will explore three ways to generate AI animations with Stable Diffusion: installing it locally, using Run Diffusion, and using Colab Research.

How to Install Stable Diffusion Locally

Firstly, search for and download Anaconda for macOS or Windows.

Next, search for Python 3.10 and get the macOS or Windows version.

After that, search for Stable Diffusion 2.1 on Hugging Face and download the v2-1_768-ema-pruned.ckpt file here.

To download the file from Hugging Face, open the Hugging Face link and then click the download link for the file.
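
If you prefer the terminal, you could also fetch the checkpoint with curl. The URL below is the standard Hugging Face download path for this model; verify it on the model page before use:

curl -L -o v2-1_768-ema-pruned.ckpt https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.ckpt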

Then, search for the AUTOMATIC1111 GitHub repository. I forked it; you can visit the repository here.

Click the Code button and copy the repository URL.

Open a terminal and type the following to clone the repository:

git clone https://github.com/Ezagor-dev/stable-diffusion-webui.git

and press Enter. If you don’t have git installed, the command-line tools installer agreement will appear; click Agree.

After Anaconda finishes downloading, install it.

You can keep clicking Continue and Agree until the installation completes.

Now, you can install Python 3.10.

Again, keep clicking Continue and Agree until it finishes.

Windows users should use Git to clone the repository.

When the repository has finished downloading, reopen the terminal so the Anaconda environment is activated. You should see (base) at the start of the prompt.

Move into the stable-diffusion-webui/ directory:
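
That is, in the terminal:

cd stable-diffusion-webui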

Check with the “ls” command that the webui.sh file is in the directory. When you can see the webui.sh file, type:

chmod +x webui.sh

Move the checkpoint file you downloaded from Hugging Face out of the Downloads directory and into the stable-diffusion-webui directory:

mv ../Downloads/v2-1_768-ema-pruned.ckpt .

Don’t forget to type “.” at the end; it means the current directory. Then move the checkpoint into the models folder:

mv v2-1_768-ema-pruned.ckpt models/Stable-diffusion
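
Equivalently, you could move the file into place in one step (assuming the checkpoint is still in your Downloads folder):

mv ../Downloads/v2-1_768-ema-pruned.ckpt models/Stable-diffusion/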

Then, launch webui.sh.

After launching, the beginning of the terminal output should look like the screenshot.

If you see an error message in the terminal, you should download the v2-inference-v.yaml file here.

Move the downloaded file into your Stable Diffusion directory, into the same folder as your checkpoint file. Rename its extension from “.txt” to “.yaml”, and make sure the file has the same base name as your checkpoint: v2-1_768-ema-pruned.
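
For example, a minimal sketch of that move-and-rename on macOS (this assumes the browser saved the config to Downloads with a .txt extension; adjust the paths to your system):

mv ~/Downloads/v2-inference-v.yaml.txt models/Stable-diffusion/v2-1_768-ema-pruned.yaml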

Then, run the webui.sh script again.

./webui.sh
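
If the launch succeeds, the script eventually prints a local address, typically a line like this (the exact port may vary):

Running on local URL: http://127.0.0.1:7860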

Copy the URL into a web browser, and you should now have a working copy of Stable Diffusion installed locally on your computer.

You can now use this tool to generate animations.

After delving into the exciting world of Stable Diffusion, I was initially content with installing and using the software on my local machine.

However, I soon realized that running the intense AI computations was causing my laptop to overheat. Determined to continue my exploration of this fascinating technology, I set out to find alternative methods for generating AI animations.

Run Diffusion

And so began my journey into the cloud-based world of AI animation generation, where I discovered Run Diffusion.

This fully-managed, pay-as-you-go platform was incredibly easy to use and allowed me to generate animations quickly and efficiently.

Then, I started to run some animations. I chose the first option.

Then, scrolling down the page, I chose a 1-hour session. I recommend enabling the “Play sounds to notify me” option.

Scroll down and click the Launch button.

The platform is then set up, which takes about 1–3 minutes.

When it’s finished, you can see this screen.

You can use the text-to-image feature on the txt2img tab and the image-to-image feature on the img2img tab. But I love the Deforum feature, and that’s what I will explain.

You can change the size of your output; the default is 512 x 512.

I tried 512 x 512 and 1024 x 1024, as well as 3:2, 3:1, 2:3, and 2:1 ratios. I prefer to use 1024 x 1024 or 512 x 512.

I recommend using the “Euler a” and “DDIM” samplers, but feel free to experiment with other sampling methods as well.

In the Keyframes tab, you can choose your animation mode, your max frames, and the zoom, angle, transform, and translation settings. Generate something first, then change the parameters one at a time and update the video after each change, so you can see the effect of each parameter.
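
For reference, Deforum’s motion parameters use a frame:(value) schedule format. A minimal sketch with illustrative values (not my original settings):

zoom: 0:(1.02)
angle: 0:(0)
translation_z: 0:(0), 60:(5)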

In the Prompts tab, you can write your prompts. They use JSON format, and it’s super easy: use the “time_value”: “prompt” pattern.

I recommend dividing the max frames into equal time parts, as in the screenshot. (My max frames was 120, and I split the prompts at 0, 30, 60, 90, and 120.)
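
Putting that together, a prompt schedule for 120 max frames would look something like this (the prompt texts are placeholders, not the ones from my screenshot):

{
"0": "a misty forest at dawn, highly detailed",
"30": "a misty forest in autumn colors, golden light",
"60": "an ocean at sunset, cinematic lighting",
"90": "a city skyline at night, neon reflections",
"120": "a starry sky over snowy mountains"
}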

Be careful with the FPS setting in the Output tab. You can set it anywhere from 1 to 240; I mostly used 12, 15, 17, 30, or 60.

Then, just click the Generate button.

After the generation is 100% complete, you can see a small preview image on the page.

Click the “Click here after the generation to show the video” button.

The video time depends on your max frames, steps, and FPS.

The average time to generate an animation is between 6 minutes and 4 hours, depending on the max frames (at least 300), FPS (at least 12), and steps (at least 40).
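
As a quick sanity check on playback length (as opposed to render time): seconds = max frames / FPS, so 300 frames at 12 FPS plays for 25 seconds. The steps value affects render time and image quality, not the clip’s duration.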

Keep an eye on the countdown timer. Be careful: if you’re still generating an animation when your time runs out, the work will be lost.

You can click the “Stop” button to end the session.

After stopping the session, you will see this screen.

You should check your balance. If it’s not enough to generate an animation, you should add funds.

By default, you can add $10 to your balance.

After generating more than 50 animations, my thirst for knowledge and experimentation led me to seek out even more advanced techniques for using Stable Diffusion.

Colab Research

Finally, I found a Colab research file. You can try it for free for a limited time. If you want to keep going, you should upgrade your free account to a Pro account, which costs about $9/month. (You get 100 credits; when your credits run out, you can add more.)

First, connect your Google account and start with the Setup step.

Click the run icon on each cell. When it turns green, you can continue with the next step.

You can change output_path_gdrive to your own Drive directory path.
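
For example, with an illustrative Drive path (the notebook’s default is similar; adjust it to your own folder layout):

output_path_gdrive = "/content/drive/MyDrive/AI/StableDiffusion"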

You can see your files on the left side after clicking the file icon, but first you need to run these cells.

After that, you can click the file icon and see the file structure.

Continue step by step. You don’t have to change anything there.

It’s important that you have a Hugging Face username and access token.

You can get an access token from the Hugging Face website → Settings → Access Tokens tab → New token.

After entering the username and token, you can change the animation mode, max frames, angle, zoom, translation and rotation, color coherence, and more in the animation settings.

I prefer to use 300 or 1000 max frames. I will explain the other settings in my next article.

You can see the prompts section. It’s really important, and you should practice with it a lot: generate, change something, notice the differences, and change it again.

Use the “time”: “prompt” format here as well. I wanted to make an animation about Marvel superheroes, so I wrote something like the following.
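
The exact prompts were in my screenshot, so the texts below are only an illustration of the format (note that the Colab cell defines this as a Python dictionary, but the “time”: “prompt” structure is the same):

{
"0": "Iron Man flying over a city, comic book style, highly detailed",
"100": "Captain America raising his shield, dramatic lighting",
"200": "Spider-Man swinging between skyscrapers at sunset",
"300": "the Hulk leaping across rooftops, dynamic pose"
}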

After running the prompts cell, you can continue with the Run part.

You can choose the width and height sizes, and you can change the seed and steps. I strongly recommend the “klms” sampler. It’s fantastic.

Be careful in the Run part: you should change the batch_name value for every try. Otherwise, your output files can get mixed up.
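
For example (illustrative names), use batch_name = "Marvel_v1" for the first run and "Marvel_v2" for the next, so each run writes its frames to its own folder.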

At the beginning of the run, you can see “rendering animation frame 0 of 300”.

It starts rendering. Wait until the end; the time depends on your max frames and steps choices.

The “Create video from frames” part is really important. Many people fail at this part because of the paths in image_path and mp4_path.

Open your output folder and click any image you generated. Then copy the file’s path and paste it into image_path.

In image_path, replace the actual frame number (for example, 00001) with the %05d placeholder. Then copy the path to mp4_path, remove the _%05d part, and change “.png” to “.mp4”.
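
Put together, the two settings would look something like this (illustrative paths; yours will depend on your Drive folder, batch name, and timestring):

image_path: /content/drive/MyDrive/AI/StableDiffusion/Marvel_v1/20230327123456_%05d.png
mp4_path: /content/drive/MyDrive/AI/StableDiffusion/Marvel_v1/20230327123456.mp4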

Uncheck the skip_video_for_run_all and render_steps options.

Run the cell. When it’s done, you can go to your Drive folder and see your animation video.

You can watch the videos below and compare the differences between FPS values.

60 FPS

15 FPS

30 FPS

This provided me with the opportunity to push the boundaries of my AI animation generation abilities even further, without having to worry about my laptop overheating.

Get ready to unlock the limitless potential of AI-generated animations. In my upcoming article, I will delve into the technical details and explore the features that make this tool a game-changer in the world of animation. So stay tuned and join me as we dive into its inner workings.

If you haven’t read my previous article about using Midjourney to generate AI images, you can check it out here!
