How to Perform “Hires Fix” in ComfyUI

Prompting Pixels
Mar 22, 2024

One of the first features that users look for when transitioning from Automatic1111 WebUI to ComfyUI is the “Hires Fix” feature.

This simple checkbox in the Automatic1111 WebUI interface allows you to generate high-resolution images that look much better than the default output.

Here’s how you can do it:

Automatic1111 Interface

Since ComfyUI is a node-based system, you effectively need to recreate this feature as a workflow. As a reference, here’s the Automatic1111 WebUI interface:

As you can see, the interface exposes the following settings:

  • Upscaler: The upscaling method, which can operate in latent space or use an upscaling model
  • Upscale By: The factor by which to enlarge the image
  • Hires Steps: The number of sampling steps to take when upscaling (Automatic1111 defaults to the step count used in the original generation)
  • Denoising Strength: How much noise to add to the upscaled image before it is denoised again (a higher value produces more creative output)

Two Different Options Available

You have two different ways you can perform a “Hires Fix” natively in ComfyUI:

  • Latent Upscale
  • Upscaling Model

You can download the workflows over on the Prompting Pixels website.

Note: We’ll also go over the Ultimate SD Upscale node at the end of this article.

Latent Upscale

Latent upscale is essentially an image-to-image process: the image is first generated, its latent-space representation is upscaled to a higher resolution, and that upscaled latent is then re-sampled.

This process is effective at generating high-resolution images. However, the downside is that the generated image at the larger scale is not a 1:1 match to the original image.

Therefore, some details may change.

To use the latent upscale method, I start with a basic ComfyUI workflow:

Then, instead of sending the latent to the VAE Decode node, I pass it to the Upscale Latent node to set my desired resolution, followed by a second KSampler node:

A VAE Decode node (bottom of image) is optional for reviewing outputs along the way

When setting up the second KSampler node, I define my conditioning prompts, sampler settings, and denoise value to generate the newly upscaled image.

Pro Tip: You could load a different model or set different conditioning prompts for the second KSampler to experiment with different results.
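For reference, here’s a minimal sketch of this two-pass workflow in ComfyUI’s API (JSON) format, queued through the local HTTP endpoint. The checkpoint filename, prompts, seed, and sampler settings are placeholder assumptions; the class names are the built-in nodes used above:

```python
import json
from urllib import request

# Minimal two-pass "hires fix" workflow in ComfyUI's API format.
# Each input that references another node is [node_id, output_index].
# The checkpoint name, prompts, and seed are placeholders.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",  # positive conditioning
          "inputs": {"text": "a sailboat near an island", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",  # negative conditioning
          "inputs": {"text": "blurry, lowres", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",  # first pass at 512x512
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "LatentUpscale",  # the Upscale Latent node
          "inputs": {"samples": ["5", 0], "upscale_method": "nearest-exact",
                     "width": 1024, "height": 1024, "crop": "disabled"}},
    "7": {"class_type": "KSampler",  # second pass re-samples the upscaled latent
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["6", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 0.5}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["7", 0], "vae": ["1", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "hires_fix"}},
}

# Queue the workflow on a locally running ComfyUI server.
data = json.dumps({"prompt": workflow}).encode("utf-8")
req = request.Request("http://127.0.0.1:8188/prompt", data=data,
                      headers={"Content-Type": "application/json"})
request.urlopen(req)
```

The denoise of 0.5 on the second KSampler plays the role of Automatic1111’s Denoising Strength; lower it to stay closer to the original composition.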

Comparing the Outputs

Let’s take a look at what we got from this workflow:

Here’s the original image:

Resolution of the original image: 512x512

And here’s the latent upscale image:

Resolution of the upscaled image: 1024x1024

The change in quality is noticeable right away!

While the overall subject is largely the same, small details like the mast on the boat, the island in the background, and clouds all changed slightly.

You can control this only to an extent through the denoise value in the KSampler node.

Common Question: So why not just generate at the higher resolution to begin with? Well, when the target resolution is well above what the model was trained on, diffusion models have a hard time getting the details right (disfigured faces, limbs, etc.). So if we create an image at a lower resolution and then upscale it, we get a better result without distorting the image.

Using an Upscale Model

Upscaling models let you generate a high-resolution image that is a 1:1 representation of the original, maintaining the original image’s integrity.

To use an upscaling model, you’ll need your decoded image from the VAE and then you’ll pass it into the Upscale Image (using Model) node:

Then, you’ll also bring in the Load Upscale Model node to define which upscaler you want to use (there are many available).

That’s it — then you can generate the image at a higher resolution.

However, there’s a catch.

Most upscaling models are 4x models, so using only these two nodes quadruples each dimension of the image, which may be too much in some cases.

So a 512x512 image now becomes 2048x2048.

Therefore, you could pass it through the Upscale Image node to downscale the image to a more suitable resolution:
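Continuing the earlier API-format sketch, the model-based chain might look like the following. Node "8" is the VAE Decode from that sketch, and the upscaler filename is a placeholder for any file in your models/upscale_models folder:

```python
# Model-based upscale: 512x512 -> 2048x2048 (4x model) -> 1024x1024 downscale.
workflow.update({
    "10": {"class_type": "UpscaleModelLoader",  # Load Upscale Model
           "inputs": {"model_name": "RealESRGAN_x4plus.pth"}},
    "11": {"class_type": "ImageUpscaleWithModel",  # Upscale Image (using Model)
           "inputs": {"upscale_model": ["10", 0], "image": ["8", 0]}},
    "12": {"class_type": "ImageScale",  # Upscale Image -- here used to downscale
           "inputs": {"image": ["11", 0], "upscale_method": "bilinear",
                      "width": 1024, "height": 1024, "crop": "disabled"}},
    "13": {"class_type": "SaveImage",
           "inputs": {"images": ["12", 0], "filename_prefix": "model_upscale"}},
})
```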

Comparing the Outputs

Let’s review the outputs.

Here’s the original image:

Resolution of the original image: 512x512

And here’s the upscaled image:

Resolution of the upscaled image: 1024x1024

As you can see, no details were lost in the upscaled image, but noise starts to creep in at the higher resolution, and the result generally doesn’t look as good as the latent upscale method.

However, to mitigate this, you can test different upscaling models, as the outputs can change drastically between them.
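Continuing the sketch above, swapping the upscaler is a one-line change (the filename here is a placeholder for any model you have installed):

```python
# Hypothetical swap: point the Load Upscale Model node at another file.
workflow["10"]["inputs"]["model_name"] = "4x-UltraSharp.pth"
```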

So there you have it: how to perform a “Hires Fix” in ComfyUI.

🎉 Ultimate SD Upscale (BONUS)

Aside from the methods above, which work natively within ComfyUI, you can also use the ComfyUI_UltimateSDUpscale custom node.

This all-in-one node allows you to upscale either with an upscaling model like we did above or with the ControlNet tile-based model for even better results.

Here’s how to set it up using the upscaling model:

As you can see, I set the upscale_by value to 1.5 so the image isn’t too large and left all other options at their default values.
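In API format, the node might be wired up roughly as follows. This is a sketch assuming the ComfyUI_UltimateSDUpscale custom node is installed; it reuses nodes from the earlier sketches, and the parameter names reflect that node at the time of writing, so they may differ in your version:

```python
# Ultimate SD Upscale in API format. Reuses nodes "1"-"3" (model and prompts),
# "8" (decoded image), and "10" (upscale model) from the sketches above.
# Parameter names follow the custom node at the time of writing.
workflow["14"] = {
    "class_type": "UltimateSDUpscale",
    "inputs": {
        "image": ["8", 0], "model": ["1", 0],
        "positive": ["2", 0], "negative": ["3", 0], "vae": ["1", 2],
        "upscale_model": ["10", 0],
        "upscale_by": 1.5,                    # 1.5x so the result isn't too large
        "seed": 42, "steps": 20, "cfg": 7.0,
        "sampler_name": "euler", "scheduler": "normal", "denoise": 0.2,
        "mode_type": "Linear",                # tile processing order
        "tile_width": 512, "tile_height": 512,
        "mask_blur": 8, "tile_padding": 32,
        "seam_fix_mode": "None", "seam_fix_denoise": 1.0,
        "seam_fix_width": 64, "seam_fix_mask_blur": 8, "seam_fix_padding": 16,
        "force_uniform_tiles": True, "tiled_decode": False,
    },
}
```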

The quality of the output is much better than that of the native nodes.

If you want to use ControlNet, you can just add in the Load ControlNet Model node along with the Apply ControlNet node to generate the image at a higher resolution:

Be sure to connect inputs and outputs accordingly.
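Here’s a hedged sketch of that wiring in API format. The tile model filename is a placeholder, and ControlNetApply is the class name of the classic Apply ControlNet node:

```python
# Condition the positive prompt with a ControlNet tile model before it
# reaches the Ultimate SD Upscale node.
workflow.update({
    "15": {"class_type": "ControlNetLoader",  # Load ControlNet Model
           "inputs": {"control_net_name": "control_v11f1e_sd15_tile.pth"}},
    "16": {"class_type": "ControlNetApply",   # Apply ControlNet
           "inputs": {"conditioning": ["2", 0], "control_net": ["15", 0],
                      "image": ["8", 0], "strength": 1.0}},
})
# Route the ControlNet-conditioned prompt into the Ultimate SD Upscale node.
workflow["14"]["inputs"]["positive"] = ["16", 0]
```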

I like the Ultimate SD Upscale method as it produces outputs as good as the methods outlined above but is all contained within a single node.

You can download this workflow over on the site as well.

Want to bump up your skill level of working with diffusion models? Then consider signing up for the Prompting Pixels course.
