Build UI for Stable Diffusion Pipelines with Gradio: Just 2 Lines of Code!

Shubham Raj
2 min read · Mar 11, 2024


I recently made some changes to the Gradio library that allow you to create Stable Diffusion demos with minimal code. This guide assumes familiarity with Gradio; if you’re new to it, the quickstart is a good place to start: https://www.gradio.app/guides/quickstart

One-Line Demo Magic

Imagine creating a Stable Diffusion demo with just a single line of code! This streamlined approach is now possible; just make sure you are using Gradio version ≥ 4.21.0.

Here’s how:

  1. Import Libraries: Begin by importing the necessary libraries: torch, gradio, and the pipeline class from diffusers.
  2. Load the Pipeline: Create an object of your pipeline class. Make sure you use the correct torch dtype for your hardware. I am deploying on CPU (torch.float32); if you are using a GPU, use the default code given in the pipeline docs (torch.float16).
  3. Create the Demo: With just one line of code, call the gr.Interface.from_pipeline method and pass it your pipeline object. This will automatically generate a default UI specific to that pipeline.
  4. Launch the Demo: Finally, call the launch method on your demo object to bring your creation to life!
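The dtype choice in step 2 can be sketched as a one-line pattern: pick half precision when a GPU is available (as the pipeline docs do), and fall back to full precision on CPU.

```python
import torch

# float16 halves memory use and speeds up inference on GPU,
# but CPU inference generally requires float32.
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
```

You can then pass this variable as the torch_dtype argument when loading the pipeline, so the same script works on both kinds of hardware.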

Supported Stable Diffusion Pipelines

Gradio offers effortless demo creation for a wide range of Stable Diffusion pipelines, including:

  • StableDiffusionImg2ImgPipeline
  • StableDiffusionInpaintPipeline
  • StableDiffusionDepth2ImgPipeline
  • StableDiffusionImageVariationPipeline
  • StableDiffusionInstructPix2PixPipeline
  • StableDiffusionUpscalePipeline

You can read more about these pipelines from this link: https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/overview
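As a sketch, each of these pipelines follows the same pattern. For example, an image-to-image demo could look like this (the model id `runwayml/stable-diffusion-v1-5` and the CPU dtype are assumptions; adjust them for your setup):

```python
import torch
import gradio as gr
from diffusers import StableDiffusionImg2ImgPipeline

# Load the image-to-image pipeline (float32 for CPU inference)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float32,
    use_safetensors=True,
)

# Gradio detects the pipeline type and builds a matching UI,
# here with both an input image and a text prompt.
demo = gr.Interface.from_pipeline(pipe)
demo.launch()
```

The generated UI differs per pipeline: img2img gets an image upload plus a prompt box, inpainting gets a maskable image editor, and so on.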

Example: Text-to-Image Demo

Let’s see how to create a demo for text-to-image generation using the StableDiffusionPipeline.

Python

import torch
import gradio as gr
from diffusers import StableDiffusionPipeline

# Load the pipeline (float32 for CPU inference)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float32,
    use_safetensors=True,
)

# Create the demo in one line!
demo = gr.Interface.from_pipeline(pipe)

# Launch the demo
demo.launch()

This code will create a user interface where users can input text prompts and see the corresponding images generated by Stable Diffusion.

You can see the output of the code above at this link: https://huggingface.co/docs/diffusers/en/api/pipelines/stable_diffusion/overview (this Space runs on CPU, so it can take some time to generate the output).

Conclusion

With these recent updates, creating interactive Stable Diffusion demos is now a breeze. This empowers you to showcase your creations, educate others, and foster collaboration around Stable Diffusion. So, dive into the world of Stable Diffusion and unleash its potential with these user-friendly demos!
