There’s LIGHT even in the DARKEST places

Deep learning helps you show the light

Sai Akhil
Jul 20, 2018

Have you ever thought that low-light photography could look as good as daylight photography? It seems impossible, right? But deep learning makes it possible. Using a deep learning pipeline, you can take a dark image with a short exposure time and convert it into an image that is as good as, and sometimes even better than, one taken with a long exposure time.

How does it work?

It’s simple. Just follow the steps below:

  1. Take an image in the dark. Consider the image below.

  2. Now, pass this image through a deep learning model trained for this job and wait for your mind to be blown.

  3. Boom! Magic. The results are out. The image below is the output for the input image above.

Output image after passing it through the deep learning model

Why Deep Learning?

This can also be done with traditional image processing techniques, such as increasing the brightness and contrast, but these methods often fail to produce the desired results.
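For instance, here is a minimal sketch of the naive approach, assuming an ordinary 8-bit image (the file names and gain value are placeholders). Scaling the pixel values amplifies the sensor noise along with the signal, which is why dark photos brightened this way tend to look grainy:

import imageio
import numpy as np

# Naive brightening: multiply every pixel by a fixed gain and clip.
# This amplifies the sensor noise along with the signal.
img = imageio.imread('dark_image.png').astype(np.float32)
gain = 8.0                                        # brightness multiplier
bright = np.clip(img * gain, 0, 255).astype(np.uint8)
imageio.imsave('brightened.png', bright)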

RAW images, on the other hand, can be processed using rawpy.

1. Using rawpy:

rawpy is a RAW image processing library for Python. It loads and processes RAW images, but with dark images it tends to add unwanted noise.

import rawpy
import imageio

# Load the Sony RAW file, run rawpy's default postprocessing
# (demosaicing, white balance, etc.), and save the result.
raw = rawpy.imread('image.ARW')
rgb = raw.postprocess()
imageio.imsave('image.tiff', rgb)

The above code takes a RAW image (image.ARW) as input, processes it, and saves the result (as image.tiff).

The images below demonstrate a dark input image processed using rawpy. The output image is brighter, but rawpy has added noise to it.

RAW image in dark and its corresponding output using rawpy

2. Using Deep Learning:

Dataset:

The dataset consists of Sony RAW images taken with various short exposure times and their corresponding long-exposure PNG images as labels. In total there are 231 distinct long-exposure images and 2,697 short-exposure images taken at various exposure times.

Going into the dark:

The input to the neural network is a Sony RAW image. These are Bayer RAW images: a Bayer filter is a colour filter array that arranges RGB colour filters in a mosaic on a square grid of photosensors. The filter is one-quarter red, one-quarter blue, and half green, because the human eye is most sensitive to green.

Image source: https://arxiv.org/pdf/1805.01934.pdf

The Bayer array is split into its corresponding red, green, blue, and green image planes. The black level is subtracted from the image and the pixel values are normalised to the range 0 to 1.

import numpy as np

def split_raw(img):
    # img is a rawpy object; raw_colors holds the Bayer mosaic values.
    img = img.raw_colors.astype(np.float32)
    # Subtract the black level (512) and normalise to [0, 1];
    # 16383 is the saturation level of the 14-bit Sony sensor.
    img = np.maximum(img - 512, 0) / (16383 - 512)
    img = np.expand_dims(img, axis=2)
    h = img.shape[0]
    w = img.shape[1]
    # Unpack the 2x2 Bayer pattern (RGGB) into four half-resolution planes.
    red  = img[0:h:2, 0:w:2, :]
    grn1 = img[0:h:2, 1:w:2, :]
    blue = img[1:h:2, 1:w:2, :]
    grn2 = img[1:h:2, 0:w:2, :]
    # Stack into a single (H/2, W/2, 4) array.
    return np.concatenate((red, grn1, blue, grn2), axis=2)

Digging Deeper!!!

After this, the image is amplified by multiplying it by an amplification factor: the ratio of the label image’s exposure time to the input image’s exposure time. The label images are also normalised between 0 and 1.
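As a rough sketch of this step (the exposure times and file names below are illustrative, not taken from the dataset; split_raw is the function defined above):

# raw is the rawpy object loaded earlier with rawpy.imread().
input_exposure = 0.1                       # seconds, short-exposure input
label_exposure = 10.0                      # seconds, long-exposure label
ratio = label_exposure / input_exposure    # amplification factor (100x here)

amplified = np.minimum(split_raw(raw) * ratio, 1.0)   # keep values in [0, 1]

# The long-exposure PNG label is normalised to [0, 1] as well.
label = imageio.imread('label.png').astype(np.float32) / 255.0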

The dimensions of the images are very large, so instead of feeding an entire image to the neural network at once, the input is divided into patches of size 512×512 or 1024×1024. These patches are sampled randomly and then augmented, and the resulting images are sent as input to the neural network.

Data augmentation is the process of creating new training data by rotating, translating, or flipping the existing data; it is done to reduce overfitting of the model. In this project, augmentation is done by flipping and transposing the input patches, as in the sketch below.
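A minimal sketch of the patching and augmentation step, assuming amplified and label arrays like the ones above. Note that the packed Bayer input has half the resolution of the RGB label, so a 512×512 input patch corresponds to a 1024×1024 label patch:

import numpy as np

def random_patch(inp, lab, ps=512):
    # inp: packed 4-channel Bayer array (H/2, W/2, 4) from split_raw
    # lab: full-resolution RGB label (H, W, 3); note the 2x scale factor
    h, w = inp.shape[0], inp.shape[1]
    y = np.random.randint(0, h - ps)
    x = np.random.randint(0, w - ps)
    in_patch = inp[y:y + ps, x:x + ps, :]
    lab_patch = lab[2 * y:2 * (y + ps), 2 * x:2 * (x + ps), :]

    # Random flips and transposition, applied identically to both patches.
    if np.random.rand() < 0.5:
        in_patch, lab_patch = in_patch[:, ::-1, :], lab_patch[:, ::-1, :]
    if np.random.rand() < 0.5:
        in_patch, lab_patch = in_patch[::-1, :, :], lab_patch[::-1, :, :]
    if np.random.rand() < 0.5:
        in_patch = np.transpose(in_patch, (1, 0, 2))
        lab_patch = np.transpose(lab_patch, (1, 0, 2))
    return in_patch, lab_patch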

Training:

The dataset is split into training data and test data. The training data consists of 1,865 Sony RAW images. These RAW images are unpacked with the split_raw function, random 512×512 input patches are selected, and augmentation is applied. The augmented patches are fed into the neural network, which is trained for 400 epochs.

The network architecture used for training is U-Net. The architecture of the U-Net is shown below:

Image source: https://arxiv.org/pdf/1505.04597.pdf

The U-Net architecture is used in this project because it can process high-resolution images efficiently. It consists of a contracting path to capture context and a symmetric expanding path that enables precise localization.
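To give a rough idea of what this looks like in code, here is a minimal TF1-style U-Net sketch. The input is assumed to be the packed 4-channel Bayer array from split_raw, the filter counts are illustrative, and the 12-channel output being rearranged into a full-resolution RGB image with tf.depth_to_space follows the original paper:

import tensorflow as tf

def conv_block(x, filters, name):
    # Two 3x3 same-padded convolutions with ReLU, as in U-Net.
    x = tf.layers.conv2d(x, filters, 3, padding='same',
                         activation=tf.nn.relu, name=name + '_1')
    x = tf.layers.conv2d(x, filters, 3, padding='same',
                         activation=tf.nn.relu, name=name + '_2')
    return x

def unet(x):
    # Contracting path: conv blocks followed by 2x2 max pooling.
    c1 = conv_block(x, 32, 'c1')
    c2 = conv_block(tf.layers.max_pooling2d(c1, 2, 2), 64, 'c2')
    c3 = conv_block(tf.layers.max_pooling2d(c2, 2, 2), 128, 'c3')
    c4 = conv_block(tf.layers.max_pooling2d(c3, 2, 2), 256, 'c4')
    c5 = conv_block(tf.layers.max_pooling2d(c4, 2, 2), 512, 'c5')

    # Expanding path: 2x upsampling with transposed convolutions plus
    # skip connections (concatenation) from the contracting path.
    u4 = tf.layers.conv2d_transpose(c5, 256, 2, strides=2)
    c6 = conv_block(tf.concat([u4, c4], axis=3), 256, 'c6')
    u3 = tf.layers.conv2d_transpose(c6, 128, 2, strides=2)
    c7 = conv_block(tf.concat([u3, c3], axis=3), 128, 'c7')
    u2 = tf.layers.conv2d_transpose(c7, 64, 2, strides=2)
    c8 = conv_block(tf.concat([u2, c2], axis=3), 64, 'c8')
    u1 = tf.layers.conv2d_transpose(c8, 32, 2, strides=2)
    c9 = conv_block(tf.concat([u1, c1], axis=3), 32, 'c9')

    # 12-channel output rearranged to full resolution: depth_to_space
    # turns a (H, W, 12) tensor into a (2H, 2W, 3) RGB image.
    out = tf.layers.conv2d(c9, 12, 1, name='out')
    return tf.depth_to_space(out, 2)

The skip connections carry fine spatial detail from the contracting path directly into the expanding path, which is what gives the precise localization mentioned above.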

Loss Function:

The loss function used is mean squared error (MSE):

loss = tf.reduce_mean((output_image - label_image)**2)

Optimisation Function:

The optimisation function used is the Adam optimizer with a learning rate of 0.0001:

opt = tf.train.AdamOptimizer(learning_rate=0.0001)

The training is done for 400 epochs with a batch size of 1 (i.e. 1,865 iterations per epoch). In every epoch, random patches of the input images are taken and fed into the neural network.
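Putting the pieces together, a minimal sketch of this loop might look as follows (train_paths and load_example are hypothetical helpers standing in for the data pipeline; unet is the sketch from above):

# Placeholders: packed 4-channel Bayer input, full-resolution RGB label.
input_image = tf.placeholder(tf.float32, [None, None, None, 4])
label_image = tf.placeholder(tf.float32, [None, None, None, 3])

output_image = unet(input_image)
loss = tf.reduce_mean((output_image - label_image) ** 2)   # MSE, as above
train_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(400):
        for path in train_paths:            # 1,865 training images
            inp, lab = load_example(path)   # split, amplify, patch, augment
            _, l = sess.run([train_op, loss],
                            feed_dict={input_image: inp[None],
                                       label_image: lab[None]})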

Testing:

The results obtained from the trained model are impressive. The model takes about 4 to 5 seconds to produce an output image on the system I trained it on (32 GB RAM, 4 GB graphics card), and it will be even faster on a system with a better configuration.
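For reference, here is a hedged sketch of how such a timing measurement might look, assuming the graph above and trained weights saved in a hypothetical checkpoint/ directory:

import time

saver = tf.train.Saver()
with tf.Session() as sess:
    # 'checkpoint/' is a hypothetical directory holding the trained weights.
    saver.restore(sess, tf.train.latest_checkpoint('checkpoint/'))

    raw = rawpy.imread('test_image.ARW')
    inp = np.minimum(split_raw(raw) * ratio, 1.0)   # amplify as in training

    start = time.time()
    # Assumes the packed image's dimensions are divisible by 16 so the
    # U-Net's pooling and upsampling shapes line up.
    out = sess.run(output_image, feed_dict={input_image: inp[None]})
    print('Inference took %.2f seconds' % (time.time() - start))

    result = np.clip(out[0] * 255.0, 0, 255).astype(np.uint8)
    imageio.imsave('result.png', result)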

Some of the test results are:

Image in dark and its result from the deep learning model
Image in dark and its result from the deep learning model
Image in dark and its result from the deep learning model

Limitations:

  1. The model works only on RAW images, not on already-processed formats like JPG or PNG. Those images have been compressed and processed, so information has been lost and the actual sensor data cannot be recovered from them.
  2. The model works only on Sony camera RAW images, since different cameras have different sensors and their RAW files have different formats. The model therefore cannot be generalised to every RAW image; the split_raw function has to be changed accordingly for each RAW format.

Failure Cases:

As you might know, the iPhone uses a Sony sensor for its camera, so I tried testing iPhone RAW images with this trained model, but the results were not good: the output images were blurred. These abnormal results may be due to the difference in bit depth between Sony and iPhone camera images.

RAW image from iPhone and its result from the trained model

Future Work:

  1. We can extend the model to support iPhone RAW images.
  2. We can then create a mobile application which captures an image in the dark and runs the deep learning model on it to make it look like it was shot in bright daylight.

References:

  1. Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun, “Learning to See in the Dark”, in CVPR, 2018.
  2. Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation”, in MICCAI, 2015.

Thanks for reading!!!
