Easy Neural Style Transfer With Google Colab

One use of neural networks that interests me a lot is the generation of styled images, popularly known as ‘Neural Style Transfer’. This concept was introduced in the 2015 paper A Neural Algorithm of Artistic Style, and an open-source TensorFlow implementation of it was made available in this article: http://www.chioka.in/tensorflow-implementation-neural-algorithm-of-artistic-style

A generated image using John Cena’s photo as content and starry night as style.

I recently played around with the implementation, generated some cool images, and then decided to build a package on top of it so that anyone can generate cool images too without having to worry about the implementation details. For this post, I chose the image I generated using John Cena’s picture.

The brief

Here is a summary of how to easily generate cool images. You need two images: a content image and a style image. The content image is the one we want to transfer style onto, while the style image provides the style we want our generated image to have. The aim is to generate an image that is similar to both the content and style images (i.e. the generated image will look like the content image but have the style of the style image). Here is a sample content image, style image, and generated/mixed image triple.

content image: John Cena; style image: starry night
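Conceptually, the method from the paper optimizes the generated image so as to minimize a weighted sum of a content loss and a style loss. Here is a minimal sketch of that objective (the weights `alpha` and `beta` are illustrative defaults, not the package’s actual values):

```python
def total_loss(content_loss, style_loss, alpha=10.0, beta=40.0):
    # Weighted objective from "A Neural Algorithm of Artistic Style":
    # a low content_loss keeps the result close to the content image,
    # a low style_loss matches the style image's texture statistics.
    return alpha * content_loss + beta * style_loss
```

During training, gradient descent updates the pixels of the generated image (not the network’s weights) to reduce this loss.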

I used Google Colaboratory (Colab) as the platform here. It is as simple as uploading your images to Colab, changing some settings, running the cells, and downloading the generated image.

Google Colaboratory

This is a Jupyter notebook environment that runs in the cloud and comes with almost all the packages needed for machine learning pre-installed. It is free to use, and the best part is the free GPU runtime on offer.

Open Colab by following this link: https://colab.research.google.com

Open a new notebook and change the runtime type to use the GPU hardware accelerator.

Below are steps to generate images using Neural Style Transfer on colab:

Clone the repository

Paste the code below in a code cell and run it. It removes any existing folders named ‘ComputerVision’ and ‘NeuralStyleTransfer’ in the current working directory, clones the repo from https://github.com/ldfrancis/ComputerVision.git, and copies the ‘NeuralStyleTransfer’ folder into the current working directory.

!rm -rf ComputerVision NeuralStyleTransfer
!git clone https://github.com/ldfrancis/ComputerVision.git
!cp -r ComputerVision/NeuralStyleTransfer .


Having cloned the repo, upload your images (the content and style images). Ensure that both images have the same dimensions, that the content image is named ‘content.jpg’, and that the style image is named ‘style.jpg’.

from google.colab import files
files.upload()


Both images should have the same dimensions; if the content image is 400 x 300, the style image should also be 400 x 300. Set the image height and width in the variables IMAGE_HEIGHT and IMAGE_WIDTH. Also specify the number of iterations to use for training; the default is 200, which is quite small. The maximum number of iterations I have used so far is 5000.
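A settings cell matching that description might look like the following (the variable names follow the article; the values are just examples and the height/width must match your own images):

```python
# Example settings cell. The width/height here must match the dimensions
# of both content.jpg and style.jpg that you uploaded.
IMAGE_HEIGHT = 300
IMAGE_WIDTH = 400
ITERATION = 200  # the default; values up to ~5000 give better results
```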

CONTENT_IMAGE = "content.jpg" 
STYLE_IMAGE = "style.jpg"

The paths to the content image and style image are specified below. These should not be changed when running on Colab.

path_to_content_image = "/content/"+CONTENT_IMAGE
path_to_style_image = "/content/"+STYLE_IMAGE

View the images that were uploaded

It is good practice to view the images you are working with, so we use matplotlib to view the content and style images. The code below reads in the images.

import matplotlib.pyplot as plt
c_image = plt.imread(path_to_content_image)
s_image = plt.imread(path_to_style_image)

Print the size of the content image. It should have the same dimensions as the style image.

print("Content Image of size (height, width) => {0}".format(c_image.shape[:-1]))

Print the size of the style image with the code below.

print("Style Image of size (height, width) => {0}".format(s_image.shape[:-1]))

Perform training iterations to generate image

With all settings in place, import the ‘implementNTS’ module, set the image dimensions, and run the style transfer to generate the mixed image. It will iterate for the number of iterations set; to obtain better results, try increasing that number.

from NeuralStyleTransfer import implementNTS as NST
NST.run(ITERATION, style_image=path_to_style_image, content_image=path_to_content_image)

View generated image

View the image that is generated from the style transfer.

generated_image_path = "/content/NeuralStyleTransfer/output/generated_image.jpg"
image = plt.imread(generated_image_path)
plt.imshow(image)
plt.show()

The final generated image can be downloaded by running the code below.
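On Colab, one way to do this is with `files.download` (a sketch, assuming the output path used above; this API only works inside a Colab notebook):

```python
from google.colab import files

# Download the generated image to your local machine (Colab-only API).
files.download("/content/NeuralStyleTransfer/output/generated_image.jpg")
```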


Now, go ahead and generate as many images as you can. I’d love to see the images you generate.
