A Friendly Introduction to Denoising Diffusion Probabilistic Models

Antony M. Gitau
Jul 9, 2023

I recently attended a Nordic probabilistic AI school, ProbAI 2023, which inspired my interest in generative models. I'm building an understanding as I document and share my learning in this exciting space of computers that "write".

In this first of a series of short write-ups on denoising diffusion probabilistic models (DDPMs), I want to demystify DDPMs by giving general context about this class of models, followed by a vivid example with easy-to-grasp math formulas.

What are DDPMs?

They are a class of generative models that work by iteratively adding noise to an input signal (like an image, text, or audio) and then learning to denoise the noisy signal in order to generate new samples. Huh, let's break that statement down and then give a step-by-step example of the process.
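As a concrete preview before that step-by-step example, here is a minimal sketch of the forward, noise-adding half of the process in Python (PyTorch). The linear schedule values, the number of steps T, and the function name q_sample are common DDPM conventions used here as illustrative assumptions, not code from this article or a particular library.

```
import torch

# Minimal sketch of the DDPM forward (noising) process.
# Assumptions: a linear beta schedule, T = 1000 steps, and an input
# image tensor x0 scaled to [-1, 1].
T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # noise added at each step
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)    # fraction of the signal kept up to step t

def q_sample(x0, t, noise=None):
    """Jump straight to step t: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * noise."""
    if noise is None:
        noise = torch.randn_like(x0)
    return alpha_bars[t].sqrt() * x0 + (1.0 - alpha_bars[t]).sqrt() * noise

# A fake 28x28 "image": the more steps we take, the closer x_t gets to pure noise.
x0 = torch.rand(1, 1, 28, 28) * 2 - 1
for t in [0, 250, 500, 999]:
    print(t, q_sample(x0, t).std().item())
```

The second half of the model, which later posts in this series build up to, is a neural network trained to undo this corruption step by step, which is the "learning to denoise" part of the statement above.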

Generative models

Generative models are a type of model that can generate new data instances. Previously, machine learning models have done a good job of learning differences in data and then performing prediction or classification tasks. For example, a model trained on a digits dataset like MNIST can recognize a 0 from a 1. Generative models, on the other hand, learn the distribution of digits and can create a "fake digit" which closely resembles a real digit.
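To make that contrast concrete, here is a toy sketch in Python (NumPy): a discriminative model would map an image to a label, while even a very crude generative model estimates the distribution of the images themselves and can sample a new "fake digit". The per-pixel Bernoulli model and the variable names are illustrative assumptions, far simpler than a real DDPM.

```
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a batch of binarized 28x28 MNIST digits (illustrative only).
images = rng.integers(0, 2, size=(100, 28, 28))

# A discriminative model would learn p(label | image) and output a digit class.
# A crude generative model instead estimates p(image): here, an independent
# Bernoulli per pixel, from which we can sample brand-new images.
pixel_probs = images.mean(axis=0)                  # how often each pixel is "on"
fake_digit = (rng.random((28, 28)) < pixel_probs).astype(int)
print(fake_digit.shape)                            # a new 28x28 "fake digit"
```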

Fig 1. From a Machine Learning course by Google


Written by Antony M. Gitau

Google DeepMind Scholar, studying MSc. Artificial Intelligence for Science. Website: https://antony-gitau.github.io/