Having fun: Inpainting with Stable Diffusion

Micheal Lanham
5 min read · Oct 9, 2022

Stable Diffusion, the new open-source kid in the world of text-to-image generators, is currently seeing a surge of enhancements and apps, from web interfaces to local desktop applications, all intended to extend Stable Diffusion's functionality in a number of ways.

If you are new to text-to-image generation, you may well wonder how you can extend an AI that generates images from text. As it turns out, there are a couple of nifty tricks that make tools like Stable Diffusion even more useful: inpainting and outpainting.

Outpainted image of the Mona Lisa with Infinity Stable Diffusion

Outpainting and Inpainting

Outpainting and inpainting are two tricks we can apply to text-to-image generators by reusing an input image. Outpainting is the technique of filling out or extending the area around an image, while inpainting fills in missing areas within an image. A great example of outpainting is the extended image of the Mona Lisa shown above.

Both techniques can further enhance the possibilities text-to-image generators provide. In this post, we will focus on inpainting. Inpainting has a number of interesting uses aside from just filling in missing content.
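To make the idea concrete, here is a minimal sketch of the first step of inpainting: building the mask that tells the model which pixels to regenerate. This assumes the Pillow imaging library and, in the comments, the Hugging Face `diffusers` library's `StableDiffusionInpaintPipeline`, which is one common way to run Stable Diffusion inpainting; the model id and prompt are illustrative assumptions, not part of this article's tooling.

```python
from PIL import Image, ImageDraw

# Inpainting takes two inputs: the original image and a mask.
# By convention, white pixels in the mask mark the region the model
# should repaint, and black pixels are kept untouched.
def make_rect_mask(size, box):
    """Build a black mask with a white rectangle over `box` (l, t, r, b)."""
    mask = Image.new("L", size, 0)                 # start fully black: keep everything
    ImageDraw.Draw(mask).rectangle(box, fill=255)  # white rectangle: repaint this area
    return mask

mask = make_rect_mask((512, 512), (128, 128, 384, 384))

# With a mask in hand, a typical diffusers call (an assumption based on
# that library's documented API, not this article's code) looks like:
#   from diffusers import StableDiffusionInpaintPipeline
#   pipe = StableDiffusionInpaintPipeline.from_pretrained(
#       "runwayml/stable-diffusion-inpainting")
#   result = pipe(prompt="a bouquet of flowers",
#                 image=original_image, mask_image=mask).images[0]
```

The rectangle here is just the simplest possible mask; in practice you would paint the mask by hand in an image editor, or derive it from a segmentation of the object you want replaced.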

Follow Along

If you want to try these inpainting tricks on your own visit the Infinity Stable Diffusion GitHub…


Micheal Lanham is a proven software and tech innovator with 20 years of experience developing games, graphics and machine learning AI apps.