Seal The Deal: Generative AI In Interior And Landscape Design

From Sketches to Reality

Teresa Lobo
7 min read · Nov 27, 2023

In interior and landscape design, successfully closing a deal with a client is a familiar struggle. Designers often provide design proposals without charging, risking losses of time, resources and money. A client's decision hinges on a number of factors, including budget, personal circumstances and disagreement with the design. We can all agree that this last reason could have been avoided if the design had been presented in a more appealing way, or if we had offered more options to help the client find their dream space. It always leaves us with a bitter taste.

At Sngular, we have a tailored proposition for designers to enhance their proposals and seal the deal: a method to easily obtain renders from your sketches in multiple styles. Read on to explore this opportunity!

Example of a sketch-to-render transformation: a garden shown as a black-and-white sketch on the left and as a render on the right

What is Generative AI

Generative AI is a branch of AI that enables users to create new content from a variety of input sources. These sources can be anything from text to images, sounds or other forms of data. The generated content can also take any of these forms: text, images, music, videos, 3D models and more. By using these generative models, artists, designers, storytellers and innovators can expand the limits of their imagination and usher in new opportunities for content creation.

Currently, we have various methods for artificial data generation, each with its own distinct approach. These include flow-based models, Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs) and diffusion models. Diffusion models have been particularly relevant in image generation: you might have heard of the popular image generation AIs DALL-E, Stable Diffusion and Midjourney, all three of which are based on diffusion models (or variations of the method). In this article we will focus on Stable Diffusion, which is the only one of the three that is open source and free to use.

What are Diffusion Models

The concept of generating images using diffusion models originates from the world of physics, more specifically non-equilibrium thermodynamics, which deals with the compression and spread of fluids and gases driven by energy. Wow, the conversation has suddenly elevated. Don’t panic: we will explain the physical concept of diffusion with a very simple picture.

If we put a small drop of red paint in a glass of water, initially it will look like a blob of red in the water. Eventually, the drop will start spreading and gradually turn the water pale red. In the intermediate states between red blob and pale red water, it is very hard to predict where the red particles are located. But once the color has spread completely through the water, it is uniformly distributed, and this final state is much easier to describe mathematically. Non-equilibrium thermodynamics can track each step of this spreading and diffusion process and, by understanding it, reverse it in small steps back to the original complex, concentrated state.

Above: the physical concept of diffusion, shown as a sequence of pictures of a red ink drop spreading in a glass of water. Below: the forward diffusion process in AI (noise added iteratively to a picture of a cat) and its reverse (removing the added noise).

Diffusion methods for generating images work in two stages. First, noise is added iteratively to the training data until it becomes an unrecognisable noise image (like the red ink drop spreading through the glass of water and turning it pale red). This process is called forward diffusion. Then, the machine is trained to reverse the process and convert noise back into images. The machine will learn its own non-equilibrium thermodynamics laws, which it then uses to generate images from noise. This second step is called reverse diffusion and is equivalent to turning the pale red water back into the red ink drop.
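To make the forward process a bit more concrete, here is a minimal sketch in Python of how noise might be blended into an image step by step. The linear noise schedule, the step count and the placeholder image are illustrative assumptions, not Stable Diffusion’s exact values.

```python
import torch

# Illustrative noise schedule (assumed values, not Stable Diffusion's exact ones).
num_steps = 1000
betas = torch.linspace(1e-4, 0.02, num_steps)        # how much noise each step adds
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)   # fraction of the original signal left after t steps

def add_noise(image: torch.Tensor, t: int) -> torch.Tensor:
    """Jump directly to step t of the forward diffusion process (closed form)."""
    noise = torch.randn_like(image)
    signal_weight = alphas_cumprod[t].sqrt()
    noise_weight = (1.0 - alphas_cumprod[t]).sqrt()
    return signal_weight * image + noise_weight * noise

image = torch.rand(3, 64, 64)               # placeholder for a real training image
slightly_noisy = add_noise(image, 50)       # still recognisable
almost_pure_noise = add_noise(image, 999)   # essentially random noise
```

Reverse diffusion is the hard part: a neural network is trained to predict the noise that was added at each step, so that it can be subtracted again, step by step, starting from pure noise.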

Stable Diffusion

Unfortunately, I have some bad news for you: what we just described is NOT exactly how Stable Diffusion works. The reason is that the above diffusion process is computationally very slow. You won’t be able to run it on any single GPU, let alone the commercial GPU in your laptop. To solve this problem, the concept of latent space is introduced: a compressed representation that is much smaller than the image. Instead of operating in the high-dimensional image space, Stable Diffusion first compresses the image into this latent space, where the noise is then added (the forward and reverse diffusions we talked about actually happen in the latent space). This makes Stable Diffusion a latent diffusion model.
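As a rough illustration of why the latent space helps, here is a sketch using the open-source diffusers library to encode an image with Stable Diffusion’s variational autoencoder. The random tensor standing in for a real photo is just a placeholder.

```python
import torch
from diffusers import AutoencoderKL

# Load only the VAE component of Stable Diffusion v1.5.
vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

image = torch.randn(1, 3, 512, 512)  # placeholder for a normalised 512x512 RGB image
with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample() * 0.18215  # SD's latent scaling factor

print(image.shape)    # torch.Size([1, 3, 512, 512])
print(latents.shape)  # torch.Size([1, 4, 64, 64]) — roughly 48x fewer values to denoise
```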

In its simplest form, Stable Diffusion is a text-to-image model: give it a text prompt and it will return an AI-generated image matching the text. So, where does the text prompt enter the picture? Stable Diffusion enlists the help of another model called CLIP to condition, or steer, the image generation process in the direction defined by your text.

Stable Diffusion as a text-to-image model — By the author
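If you prefer code to a web interface, a minimal text-to-image call with the diffusers library looks roughly like this; the model ID and the prompt are just examples, and a CUDA-capable GPU is assumed.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# CLIP encodes the prompt behind the scenes and steers the denoising process.
image = pipe("a cozy Scandinavian living room, warm natural light, photorealistic").images[0]
image.save("living_room.png")
```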

ControlNet

But what if you could have more control over your creations? Enter ControlNet. Just as CLIP conditions image generation through text, ControlNet additionally conditions the noise predictor (and therefore the image generation process) with scribbles, edge maps, depth maps, segmentation maps and so on. This gives Stable Diffusion enhanced performance and exceptional control over your creations.

Stable Diffusion renders generated for the sketch of a building

We will use ControlNet’s scribble functionality to turn rough sketches into beautiful renders.
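As a rough sketch of how this might look with the diffusers library, the scribble-conditioned ControlNet can be plugged into the Stable Diffusion pipeline like this. The file names and the prompt are hypothetical, and this particular ControlNet expects the sketch as white lines on a black background.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Hypothetical input: a rough garden sketch (white lines on a black background).
sketch = Image.open("garden_sketch.png")

image = pipe(
    "a mediterranean garden with stone paths and olive trees, photorealistic render",
    image=sketch,
).images[0]
image.save("garden_render.png")
```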

Unveiling the results

I’ve been testing Stable Diffusion to see if it works for this purpose. As I am not an artist, I sourced sketches online, and as I am not a pro in landscape or interior design, I researched some famous design styles. However, I am certain that a professional designer could suggest better prompts and scribbles. Let’s see how I did for two garden sketches:

Example 1:

Example 2:

Although the generated images are clearly conditioned by their input sketches, you can tell that Stable Diffusion takes some artistic liberties. It may inspire fresh concepts. By this point, you might be pondering: could Stable Diffusion help me design the space from scratch? Can I submit a photo of the room or backyard as it is now and obtain design suggestions and examples?

I involved Stable Diffusion in the creative process for renovating a small garden. These were its proposals:

Promising…

How to get started: tips

By now I hope to have caught your attention and got you thinking: how can I do my own experiments? Here is the answer:

  • In this YouTube tutorial you can find a step-by-step guide on how to obtain these renders from your sketches, for free, on a Windows computer with a GPU (no need to worry: if you have 3D-modeling computer-aided design (CAD) programs installed, you have one). If you are an Apple user, then this is your installation guide.
  • On the Stable Diffusion interface, you’ll find an option to add negative prompts along with your prompt. This helps you specify what you don’t want (see the sketch after this list).
  • To learn more about prompting, check out this article. While the current Medium article only discusses landscape design experiments, we have included a separate guide for prompting in interior design.
  • I have noticed that sketch → image generation is more effective when the scribble has fewer details or shadows. Take this into consideration when experimenting.
  • Just be patient — prompt engineering can be a slow and meticulous process.
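For reference, this is roughly how a negative prompt is passed in code with the diffusers library; the prompt, the negative terms and the step count are just examples of things people commonly try.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The negative prompt steers the model away from unwanted traits.
image = pipe(
    "a small mediterranean garden, photorealistic render",
    negative_prompt="blurry, low quality, distorted perspective, watermark, text",
    num_inference_steps=30,
).images[0]
image.save("garden_with_negative_prompt.png")
```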

Conclusions

In a nutshell, the results we’ve got so far look pretty promising. Sure, the generated images don’t match the sketches 100%, but that might not be a bad thing. Rather than a drawback, this deviation opens up the possibility of leveraging artistic liberty to our advantage.

The use of Stable Diffusion for interior and landscape design has many practical applications:

  1. It can enhance client communication. By producing diverse image styles through the basic text-to-image functionality and showing the results to the client, you can extract more information about client preferences with minimal effort.
  2. It enables the creation of approximate renders, providing clients with a visual sense of the proposed outcome, thus enhancing the overall appeal of proposals. With your expertise on the matter you might even find better prompts for creating more precise renders.
  3. Integrating Stable Diffusion into the creative process enables new elements and design considerations to be discovered. Embracing this divergence may lead to unexpected and innovative outcomes, further enriching the creative process.

I hope you enjoyed reading this post and feel inspired to test it on your projects. Can you imagine how this would work on your CAD designs or on SketchUp screenshots? Keep on experimenting!

At Sngular, we specialize not only in artificial intelligence but also in integrating these advancements into practical, productivity-enhancing tools. Envision incorporating this capability into your everyday tools, perhaps transforming it into an application that enhances communication between you and your clients. Maybe some applications outside design came to mind, like marketing and advertising. If you’re considering such a development, we’re here to lend our assistance and help you seal the deal.

Thank you for reading!

🔄 If you enjoyed the article, you can follow me on LinkedIn. 🔄

