Generate AI Images with Stable Diffusion (SDXL 1.0) Locally via AUTOMATIC1111 on Mac

Set-Up Guide (tested on M1 32GB)

Ingrid Stevens
4 min read · Dec 28, 2023
Using SDXL Base + Refiner: 25.2 second image generation time

Setting Up AUTOMATIC1111 on Apple Silicon

These instructions guide you through running a Stable Diffusion checkpoint (in this case, the SDXL 1.0 model) locally and setting up a web UI to access it. It’s a distilled version of the stable-diffusion-art.com tutorial, but here I’ve isolated the instructions specifically for running SDXL 1.0 with AUTOMATIC1111 on Apple Silicon.

Acknowledgment and Background:

  • Full credit to the excellent instructions provided by stable-diffusion-art.com
  • For a detailed understanding of features, visit their website [also linked below].

Before We Begin:

This article serves as a streamlined starting point. For comprehensive details and additional features, refer to the original instructions on their website [additional links provided at the end of the article].

By the end of this article, you should have this running too:

UI: Gradio / Model: SDXL

Instructions

  1. Install: Assuming you have Homebrew installed, in your terminal, run the following to install required packages:
brew install cmake protobuf rust python@3.10 git wget
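
If Homebrew isn’t installed yet, it can be added first with the official install script from brew.sh (this is the standard one-liner from their homepage; run it before the package install above):

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"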

2. Clone stable-diffusion-webui: In a directory of your choosing, clone AUTOMATIC1111

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui

This will create the following directory: stable-diffusion-webui
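
As an optional sanity check, list the new directory; you should see the launch script and the models folder used in the next steps:

cd stable-diffusion-webui
ls
# expect webui.sh and a models/ directory among the repository files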

3. Download the Model

Use SDXL 1.0: download both the base model (sd_xl_base_1.0.safetensors) and the refiner (sd_xl_refiner_1.0.safetensors) from the official stabilityai pages on Hugging Face.

  • The downloads for both total roughly 12GB
  • move both base & refiner models into the folder: stable-diffusion-webui/models/Stable-diffusion
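
If you prefer the command line, both files can be fetched with wget (installed in step 1). The URLs below point to the stabilityai repositories on Hugging Face as of this writing; verify them on the model pages first, and if a download requires a Hugging Face login, grab the files through the browser instead and move them into this folder:

cd stable-diffusion-webui/models/Stable-diffusion
wget https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors
wget https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/resolve/main/sd_xl_refiner_1.0.safetensors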

4. Start the UI

  • In terminal, navigate to the cloned directory:
cd stable-diffusion-webui
  • run the launch script (the first run takes a few minutes while it creates a Python environment and installs dependencies); the --no-half flag keeps the model in full precision, which helps avoid generation errors on Apple Silicon
./webui.sh --no-half

Once loading finishes, the web UI is reachable in your browser (by default at http://127.0.0.1:7860).
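
To avoid typing the flag on every launch, the same option can be set once in webui-user.sh in the repository root, which webui.sh reads on startup. A minimal sketch of that file, with everything else left at its defaults:

# stable-diffusion-webui/webui-user.sh
export COMMANDLINE_ARGS="--no-half"

After that, a plain ./webui.sh starts with the flag applied.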

Play with Stable Diffusion

Now that you have the AUTOMATIC1111 UI running, check the settings and enter a prompt.

  1. Make sure to set the “Stable Diffusion checkpoint” dropdown (top of the UI) to the SDXL base model we downloaded earlier
  • prompt: “photo of young Caucasian woman, highlight hair, sitting outside restaurant, wearing dress, rim lighting, studio lighting, looking at the camera, dslr, ultra quality, sharp focus, tack sharp, dof, film grain, Fujifilm XT3, crystal clear, 8K UHD, highly detailed glossy eyes, high detailed skin, skin pores”
  • negative prompt: “disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w”
  • set the width and height to 1024 (SDXL’s native resolution)
Results (the orange arrow points to the seed)
Close up: SDXL Base
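
If you’d rather script generations than click through the browser, the web UI also exposes a REST API when launched with the extra --api flag. The sketch below assumes that flag, plus jq for JSON parsing (brew install jq); /sdapi/v1/txt2img and the field names shown are the standard txt2img API, but your installed version’s /docs page is the authority if anything differs:

# launch first with: ./webui.sh --no-half --api
curl -s http://127.0.0.1:7860/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d '{
        "prompt": "photo of young Caucasian woman, highlight hair, sitting outside restaurant",
        "negative_prompt": "disfigured, ugly, bad, immature, cartoon, anime, 3d, painting, b&w",
        "width": 1024,
        "height": 1024,
        "seed": 2195684695
      }' | jq -r '.images[0]' | base64 --decode > sdxl_test.png
# prompt shortened here; paste the full prompt from above for comparable results
# if --decode is unrecognized, use -D (older macOS) or -d (Linux)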

2. Now, let’s enable the refiner (in recent versions of the UI, expand the Refiner section under the prompt and select the refiner checkpoint), reuse the seed (2195684695) from the previous image, and see what we get:

set seed
