Deep Image Prior in PyTorch
Image Denoising with No Data and a Random Network
Deep learning and neural networks have been tightly associated with big data. Whether it is image classification or language translation, you almost always need a vast quantity of data for the model to reach an accuracy that holds up on real-world datasets. Even in few-shot or one-shot scenarios, the prerequisite is still a large and varied dataset to train the network. But what if I told you that you don’t need any data or any pre-trained network, and yet you can perform image restoration or even super-resolution?
In this article, we will dive into a completely different realm of deep networks, namely the deep image prior (DIP), which requires no dataset for training and yet learns to separate noise from the image to perform image restoration. A PyTorch tutorial is then discussed in detail to showcase the power of DIP.
What are Deep Image Priors?
Figure 1 is a simple illustration of how DIP works, and it is unexpectedly simple. You start with a randomly initialised network that aims to reconstruct the target image from a pure-noise input. The network’s output reconstruction is then compared with the original image to compute a loss, which is used to update the network. After some…
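To make the idea concrete, here is a minimal PyTorch sketch of that loop. It is not the exact architecture used later in the tutorial: the small convolutional stack, the tensor names (`z`, `noisy_img`), and the iteration count are all illustrative assumptions standing in for the full setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A tiny stand-in for the encoder-decoder used in DIP; it is randomly
# initialised and never pre-trained on any dataset.
net = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
)

z = torch.randn(1, 32, 256, 256)        # fixed noise input, kept constant
noisy_img = torch.rand(1, 3, 256, 256)  # placeholder for the corrupted target image

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
mse = nn.MSELoss()

for step in range(3000):                # iteration count is a rough assumption
    optimizer.zero_grad()
    out = net(z)                        # reconstruct the image from pure noise
    loss = mse(out, noisy_img)          # compare reconstruction with the target
    loss.backward()
    optimizer.step()                    # update only the network weights
```

The key design choice is that the noise input `z` never changes; only the network weights are optimised, and stopping the loop early, before the network has had time to fit the noise itself, is what produces the restored image.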

