How to Solve Stable Diffusion Errors (Performance Tips Included)

Solve AUTOMATIC1111’s web UI errors, plus tips for better performance

Fanis Spyrou
Generative AI


Photo by Elisa Ventur on Unsplash

You do all the steps to install stable diffusion, and then you run it. And then you get an error. You search the internet and find a quick fix. You try it, and now you get a different error.

Why can’t anything in this world be easy and work on the first try, you might be thinking. (Because nothing is easy and nothing works on the first try, someone would say.)

Don’t worry, because I am here to save you some time. I personally got a lot of errors when I tried to run stable diffusion. I solved them by searching and asking on forums or by trial and error. I hope these solutions help you.

By the way, if you are having trouble installing stable diffusion on your Windows computer, you can check out my step-by-step guide:

Errors

To fix most of these errors, you need to “right-click” on the “webui-user.bat” file and click “Edit”. Then add arguments to the “set COMMANDLINE_ARGS=” line, like so:
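For reference, the default “webui-user.bat” is only a few lines long. Here is a rough sketch of what the edited file could look like, with --medvram used purely as an example argument:

    @echo off

    set PYTHON=
    set GIT=
    set VENV_DIR=
    rem Add the arguments you need after the equals sign, separated by spaces
    set COMMANDLINE_ARGS=--medvram

    call webui.bat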

You can find all existing options by following this link.

Can’t load safetensors

To be able to load models with a “.safetensors” extension, you need to add this line in “webui-user.bat”: set SAFETENSORS_FAST_GPU=1.
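Note that SAFETENSORS_FAST_GPU is an environment variable, not a command-line argument, so it goes on its own “set” line rather than inside COMMANDLINE_ARGS. A minimal sketch:

    set COMMANDLINE_ARGS=
    rem Environment variable, kept separate from the arguments line
    set SAFETENSORS_FAST_GPU=1

    call webui.bat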

Also, I found out that safetensors can’t be used when you also use the --lowram option; if you combine them, you will get an error.

NansException

If you get this error, then, as the last line of the error message suggests, use --disable-nan-check with the other command-line arguments.

You may get this error if you use --opt-sub-quad-attention.
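As a sketch, combined with the arguments I describe in the performance section below, the line would look like this:

    set COMMANDLINE_ARGS=--medvram --opt-sub-quad-attention --disable-nan-check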

Black image

This error may occur after using --disable-nan-check. If you persist in generating, then you might get a normal image.

If you have an NVIDIA GPU, then using --xformers could solve black image generation. To use this option, you need to install “xformers”. Open a terminal (‘shift+right-click’ and ‘Open PowerShell window here’) and type pip install xformers.
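Roughly, the two steps look like this, assuming pip installs into the same Python environment the web UI runs from:

    rem 1. Install xformers (run once, in PowerShell or cmd)
    pip install xformers

    rem 2. Enable it in webui-user.bat
    set COMMANDLINE_ARGS=--xformers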

In general, to solve this error, you need to add --no-half to the command-line arguments. Usually, this argument goes with --precision full or --precision autocast.

--no-half and --precision full in combination force stable diffusion to do all calculations in fp32 (32-bit floating-point numbers) instead of the “cut off” fp16 (16-bit floating-point numbers). The opposite setting would be --precision autocast, which should use fp16 wherever possible. You might get “better” results with full precision, but it also takes longer. The default is to use fp16 where possible to speed up the process, and just live with the fact that there is less possible variation in outcomes.
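For example, forcing full fp32 precision together with the flags from the previous sections could look like this (--medvram is just part of my setup, not a requirement):

    set COMMANDLINE_ARGS=--medvram --disable-nan-check --no-half --precision full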

Not enough memory

This error occurs if you have a low amount of VRAM. If you have 4–6GB of VRAM, add --lowvram to the command-line arguments. Likewise, if you have 8GB of VRAM, add --medvram.

These options will conserve memory at the cost of slower generation, but in the process, they will prevent this error from occurring (or at least make it much rarer).

If this error continues to occur even after you add these options, you may need to remove some of the other options you have added. Or you can add --no-half, if you haven’t already.
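As a sketch, pick the line that matches your card and leave the other one commented out:

    rem 8GB of VRAM
    set COMMANDLINE_ARGS=--medvram

    rem 4-6GB of VRAM
    rem set COMMANDLINE_ARGS=--lowvram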

Bonus: Performance tips

If you have an NVIDIA GPU, then it is recommended that you install and use --xformers, which will give you better performance.

In order to have the fastest image generation, start with no arguments (start with --xformers if you have an NVIDIA GPU) and progressively add more when you get errors.

I started with --medvram only, because I have 8GB of VRAM. I then added --opt-sub-quad-attention, which is supposed to give you better performance.

You can also try --opt-split-attention or --opt-split-attention-v1, either together with --opt-sub-quad-attention or independently. In theory, --opt-sub-quad-attention is better than --opt-split-attention for the DirectML backend (AMD GPUs).

This gave me a NansException, so I added --disable-nan-check.

However, after that, stable diffusion generated black images, so I needed to add --no-half and --precision autocast, too.

After adding these arguments, I have had no errors ever since.
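Put together, the arguments line from that walkthrough looks like this (tuned to my 8GB card; add --xformers too if you have an NVIDIA GPU and installed it):

    set COMMANDLINE_ARGS=--medvram --opt-sub-quad-attention --disable-nan-check --no-half --precision autocast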

You can also add this argument to load any model’s weights (with either a “.ckpt” or “.safetensors” extension) straightaway: --ckpt models/Stable-diffusion/<model>.
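A sketch, with “model.safetensors” standing in as a hypothetical file name for <model>:

    rem "model.safetensors" is a placeholder; use the name of your own model file
    set COMMANDLINE_ARGS=--ckpt models/Stable-diffusion/model.safetensors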

Wrap up

I have spent a lot of time trying to find and test these solutions, either by searching through different sources or by using brute force.

These options helped me solve any errors I got, and I hope they help you too.

Have a nice day, and thanks for reading!

Stay up to date with the latest news and updates in the creative AI space — follow the Generative AI publication.

Follow me to stay in touch. And thanks for the support.
