AI in Design: Parametric Design + Stable Diffusion Rendering

Yilin Wang
Design Intelligence Course
5 min read · Dec 19, 2023

1. Concept Development

AI is a cutting-edge technology, yet its integration into architectural design is moving quite slowly. This is because what most AI offers design today is image-generated visual representation rather than a genuine understanding of the design process.

Stable Diffusion (SD), a text-to-image algorithm, has become quite popular among designers. By adding positive and negative text prompts in the user interface, Stable Diffusion can generate realistic images quickly and convincingly, which can genuinely inspire designers. Many architectural designers use it to generate impressive renderings for clients at the early stage of a project. However, it is very hard to generate accurate images of a specific project's model if we only provide text. We have to combine images of our models with the text prompts on which we want the AI to base its generated images.

concept for my MVP

So here comes my question: what if I integrate the Stable Diffusion algorithm with conventional parametric design in the architecture industry? Through computational design in Grasshopper, designers can generate dynamic design options, and Stable Diffusion can then generate many corresponding renderings based on each model option and text prompts. I think this workflow could be radical, genuinely boosting designers' efficiency and creativity.

This MVP (minimum viable product) is designed for architectural designers, and the market is architectural firms. Users can apply this MVP at the concept design (massing model) stage. The value proposition is bringing AI image generation into the parametric design process in Grasshopper. The MVP consists of two parts. The first part sets up a Stable Diffusion webUI workflow inside Grasshopper. The second part provides two parametric toolkits for an auto facade and a dynamic plan. In this way, designers can work interactively between the parametric model and renderings at the early stage of a project.

2. Set up Stable Diffusion webUI in Grasshopper

Be aware that setting up this UI in Grasshopper is quite tricky, since it requires installing plugins and models; if you miss one part, the ghx file will not work correctly. So I will do my best to show you the important procedures. In case you miss something, I link the two GitHub repos here: stable diffusion webui and Stable Diffusion in grasshopper. I carefully followed these repos' manuals and succeeded in connecting the webUI to Grasshopper.

The software you should install on your computer is Python and Git. The Grasshopper plugins you should download are Ladybug, Human, and Ambrosinus. You can find all of these plugins on food4rhino.

First, download the stable-diffusion-webui repository, for example by running git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git.

After that, go to the folder where you downloaded the repo and run webui-user.bat. The webUI should then open in your browser. Note that the Grasshopper connection talks to the webUI through its local API, which typically requires adding --api to the COMMANDLINE_ARGS line in webui-user.bat.

Stable Diffusion webUI.bat

The next step is to install ControlNet from the Extensions tab of this UI. ControlNet is a very useful image-to-image generative algorithm added on top of Stable Diffusion, which means we add an image-control prompt on top of the text prompts.

download ControlNet
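
To make this concrete, here is a minimal sketch (not the toolkit's actual code) of what a ControlNet-conditioned request to the webUI API can look like. It assumes the AUTOMATIC1111 /sdapi/v1/txt2img endpoint with the ControlNet extension installed; the file name, module, and model strings are placeholders.

import base64

# read a control image (e.g. a viewport capture) and base64-encode it for the API
with open("viewport.png", "rb") as f:
    control_image = base64.b64encode(f.read()).decode("utf-8")

# sketch of a txt2img payload conditioned by ControlNet
payload = {
    "prompt": "glass tower at dusk, photorealistic rendering",
    "negative_prompt": "blurry, lowres",
    "steps": 20,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": control_image,
                "module": "canny",                   # preprocessor name, placeholder
                "model": "control_v11p_sd15_canny",  # installed model name, placeholder
            }]
        }
    },
}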

Now the downloading and setup of the Stable Diffusion UI is complete, and you can open the ghx file.

examples on parametric tower with stable diffusion ui

The following image shows the first part of the labeled input. You should set up three folder directories: the webUI folder path (the folder where you downloaded Stable Diffusion from GitHub), the output images directory, and the Rhino viewport directory. Every time you open Grasshopper, click these three buttons to connect to the API.

the first part of labeled input
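
Before clicking the buttons, a quick sanity check in plain Python (all three paths below are placeholders for your own) can confirm the directories exist:

import os

# placeholder paths: substitute the three directories you set in Grasshopper
PATHS = {
    "webui folder": r"C:\stable-diffusion-webui",
    "output images": r"C:\sd\outputs",
    "rhino viewport": r"C:\sd\viewport",
}

for label, path in PATHS.items():
    status = "ok" if os.path.isdir(path) else "missing"
    print("%s: %s (%s)" % (label, status, path))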

After clicking the first button in the image above, the command prompt will run automatically and launch the Stable Diffusion UI. When the output in the command prompt looks like the image below, the UI is all set and you can go to the next step.

the coding in command prompt showing everything is set
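
You can also check readiness programmatically. A minimal sketch, assuming the webUI runs at its default local address and was started with the --api flag, polls one of its endpoints until it answers:

import time
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/sd-models"  # default local address

# poll until the API answers, i.e. the webUI has finished loading
for _ in range(30):
    try:
        if requests.get(URL, timeout=2).status_code == 200:
            print("webUI API is up")
            break
    except requests.ConnectionError:
        time.sleep(2)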

The second part of the input is the viewport name, which must exactly match the name of the Rhino viewport you want to export as an image for Stable Diffusion. In the following image, I chose "Perspective", and you can see the image Grasshopper gets.

the second part input_viewport name
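
Behind this input, exporting a named viewport boils down to capturing it to an image file. A minimal RhinoCommon sketch of that step (the viewport name and file path are placeholders, and this is not the plugin's actual code) could look like this:

import clr
clr.AddReference("System.Drawing")
import System.Drawing
import Rhino

# find the named Rhino viewport and capture its pixels to a PNG file
view = Rhino.RhinoDoc.ActiveDoc.Views.Find("Perspective", False)
if view is not None:
    bitmap = view.CaptureToBitmap()  # returns a System.Drawing.Bitmap
    bitmap.Save(r"C:\sd\viewport\capture.png",
                System.Drawing.Imaging.ImageFormat.Png)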

The third part of the input is the prompt and negative prompt. This part of the Grasshopper definition works exactly the same way as the Stable Diffusion UI.

the third part input_prompts
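
Because the prompts behave exactly like those in the webUI, the usual conventions apply, such as comma-separated keywords and (keyword:weight) emphasis. A tiny hypothetical helper for composing them:

def build_prompt(keywords, weights=None):
    """Join keywords into an SD prompt, wrapping weighted ones as (word:weight)."""
    weights = weights or {}
    parts = []
    for kw in keywords:
        if kw in weights:
            parts.append("(%s:%.1f)" % (kw, weights[kw]))
        else:
            parts.append(kw)
    return ", ".join(parts)

prompt = build_prompt(
    ["high-rise tower", "curtain wall", "dusk lighting", "photorealistic"],
    weights={"photorealistic": 1.3},
)
negative_prompt = build_prompt(["blurry", "lowres", "distorted geometry"])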

All right, now every input is set, and you can change the prompts and negative prompts to whatever you want. Finally, click the button in the following image; Grasshopper will then generate images based on the viewport image and your text prompts and save them in the directory you just set.

click the button and generate output images
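
For reference, here is a minimal sketch of the round trip this button triggers, written against the webUI's /sdapi/v1/img2img endpoint at its default local address. The Grasshopper plugin handles this for you; the paths and prompt strings below are placeholders, and this just shows the moving parts:

import base64
import requests

VIEWPORT_IMAGE = r"C:\sd\viewport\capture.png"
OUTPUT_IMAGE = r"C:\sd\outputs\render_000.png"

# encode the captured viewport as base64 for the API
with open(VIEWPORT_IMAGE, "rb") as f:
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [init_image],
    "prompt": "high-rise tower, curtain wall, dusk, photorealistic",
    "negative_prompt": "blurry, lowres",
    "denoising_strength": 0.55,  # how far SD may stray from the model image
    "steps": 20,
}

# send the request and decode the first returned image
r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
with open(OUTPUT_IMAGE, "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))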

3. Set up parametric toolkit1_Auto Facade

Next, for the second part of the MVP, I introduce two custom parametric toolkits for architectural design. They are quite straightforward rather than sophisticated; I just want to show you the whole picture of combining Stable Diffusion with parametric design models.

The auto facade toolkit generates vertical panels for an elevation, and its inputs are boxes/breps in Rhino. Three number-type inputs control the width, height, and density of the panels.

The custom tool of auto facade
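
As an illustration of the idea (not the exact Grasshopper definition), a GhPython-style sketch that spaces vertical panel outlines across a surface extracted from the input brep, driven by the width, height, and density numbers, might look like this:

import Rhino.Geometry as rg

def vertical_panels(srf, density, width, height):
    """Space vertical panel outlines across the U domain of a surface."""
    panels = []
    u_dom = srf.Domain(0)
    v_dom = srf.Domain(1)
    for i in range(density):
        # normalized position along the facade, mapped into the U domain
        t = u_dom.ParameterAt(i / float(max(density - 1, 1)))
        origin = srf.PointAt(t, v_dom.Min)
        normal = srf.NormalAt(t, v_dom.Mid)
        # one vertical panel rectangle oriented by the surface normal
        plane = rg.Plane(origin, normal)
        panels.append(rg.Rectangle3d(plane, width, height).ToNurbsCurve())
    return panels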

4. Set up parametric toolkit2_Dynamic pentagon tiling plan

This type of dynamic form is inspired by a landscape architecture project in New York City: Pier55, also known as Little Island.

pier55 island

This tool separates an untrimmed rectangular surface into interconnected quadrilaterals and pentagons. Imitating the form of the precedent, I extrude the landscape parts to different heights and keep the center part planar.

custom toolkit2_dynamic pentagon tiling plan
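
To sketch the extrusion step (assuming the tiling already exists as closed planar curves, which is an assumption rather than the toolkit's actual interface), a hypothetical helper that lifts each landscape tile to a pseudo-random height could read:

import random
import Rhino.Geometry as rg

def extrude_tiles(tiles, h_min, h_max, seed=1):
    """Extrude each closed planar tile curve to a pseudo-random height."""
    random.seed(seed)  # fixed seed so the design option is reproducible
    solids = []
    for crv in tiles:
        height = random.uniform(h_min, h_max)
        solids.append(rg.Extrusion.Create(crv, height, True))  # True = cap ends
    return solids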

5. Combining the two parts into the final MVP

The following video demonstrates how the MVP works and the problem it solves. From this video you can imagine how it can inspire designers and boost their efficiency and creativity.

how MVP works
