Neural Style Transfer Using TensorFlow in Python

Luka Chkhetiani
Coinmonks
6 min read · Jul 3, 2018


Credits to Magdiel Lopez

In the contemporary high-tech world, deep learning is used in many different ways to achieve specific goals in specific fields. Engineers and developers across the world use AI algorithms for maintenance, cybersecurity, mathematical solutions, customer service, and more.

Image recognition and classification are no longer hard jobs, thanks to world-class architectures such as Inception or ResNet. But today I'm going to discuss an algorithm that needs much less processing power, yet can give you amazing results given good data preparation and pre-processing.
A TensorFlow implementation combining several techniques gives us the ability to synthesize ordinary pictures with specific styles.

Let's start with the dependencies.
Some of you may not have installed all of the required dependencies, so to make sure everything goes smoothly, I'm gonna start from zero.

P.S. I'm a Mac user, so I'm gonna re-train the neural net on the CPU.

I've used this project a couple of times, so to make sure I'm showing you everything from the beginning, I'm gonna do all the work in a virtual environment (since I've already installed all the dependencies globally).
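If you haven't set up a virtual environment before, here's a minimal way to do it (the environment name deepstyle-env is just an example):

python3 -m venv deepstyle-env
source deepstyle-env/bin/activate

Every pip install below will then stay inside that environment instead of touching your system Python.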

I'm gonna start by installing pip, Homebrew, TensorFlow, and OpenCV. I've already installed them, but I'll show you the commands so you can do the same. It's pretty easy, even if you're a beginner.

Requirements:

Pip: sudo easy_install pip
Brew: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" (Homebrew isn't a pip package; this one-liner comes from https://brew.sh)
TensorFlow: pip install tensorflow
OpenCV: pip install opencv-python

(If you get a permission-denied error, just re-run the command with administrator privileges using sudo. Example: sudo pip install tensorflow)

After installing the dependencies, let's check the environment with the command:
pip freeze
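If everything installed correctly, the list should contain lines like these, in pip's package==version format (the exact version numbers will vary on your machine; these are just illustrative):

opencv-python==3.4.1.15
tensorflow==1.8.0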

Go somewhere convenient, for instance the Desktop, and make a new folder; let's call it deepstyle. Create and enter the directory using the commands:
mkdir ~/Desktop/deepstyle
cd ~/Desktop/deepstyle

Clone into the repository

After all the dependencies are installed and you've created the new directory, let's go to GitHub and clone the neural-style-tf repository from https://github.com/cysmith/neural-style-tf.git using the command:

git clone https://github.com/cysmith/neural-style-tf.git

If no errors come up, everything's fine. After cloning the repository, let's enter its directory:
cd neural-style-tf

Pre-trained models

The next step is to download a pre-trained model. We are going to synthesize one image into another, and as you know, two images are nowhere near enough data to train a network from scratch, so we reuse the features of a network that has already been trained on ImageNet. Let's go to this link: http://www.vlfeat.org/matconvnet/pretrained/

Search for the ImageNet VGG-19 pre-trained model (imagenet-vgg-verydeep-19) weights and download the file. It's 534.9 MB, so it may take a while. After downloading, just move it to the project directory (the neural-style-tf folder) as shown below.

Name of the file is imagenet-vgg-verydeep-19.mat
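If you prefer staying in the terminal, you can fetch the weights directly with wget (assuming the file is still hosted at this vlfeat.org address, which is where the pretrained-models page pointed at the time of writing):

wget http://www.vlfeat.org/matconvnet/models/imagenet-vgg-verydeep-19.mat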

Input & style images

After all of this, we're almost done. Now we need two images: we'll extract the style from one of them and apply it to the other. Let's keep it romantic and imagine: what if Magda Eisenhardt (mother of Quicksilver) and Henry Allen (father of The Flash) had met each other?
You guessed it: let's synthesize the images of The Flash and Quicksilver.

I'm gonna use The Flash as the style and apply it to the image of Quicksilver. One piece of advice: use images of the faces that are as clear as possible. If you're using a CPU, and especially a less powerful one, the neural net will find it pretty hard to synthesize them, so there might be some inaccuracies.

I've searched for and downloaded their images, shown below:

Sources: SharkAttack.com & Bustle.com

Copy the downloaded images to the project directory.
The image you're using as the input (content) image should be copied into the folder named "image_input".

And the style image should be copied into the folder named "styles".
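Assuming the downloaded files are named quicksilver.png and theflash.png (the names I'll use in the commands below) and live in ~/Downloads, that's just:

cp ~/Downloads/quicksilver.png ./image_input/
cp ~/Downloads/theflash.png ./styles/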

Training (re-training)

We're all set. Make sure you're in the neural-style-tf directory, then go to the terminal and run the following command:

bash stylize_image.sh ./image_input/quicksilver.png ./styles/theflash.png

The general form of the command is: bash stylize_image.sh ./<input image directory>/<input image name> ./<style image directory>/<style image name>
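By the way, stylize_image.sh is just a convenience wrapper around the repo's neural_style.py script. If you want finer control later, you can call it directly, roughly like this (double-check the flag names and defaults against the repo's README before relying on them):

python neural_style.py --content_img quicksilver.png --style_imgs theflash.png --max_iterations 1000 --device /cpu:0 --verbose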

Press Enter. It'll ask whether you've installed the required dependencies; just type "y" and press Enter.

After that, it'll ask whether you have a CUDA-enabled GPU. As I mentioned above, I'm a Mac user, so I'll type "n".

If something goes wrong, don't be scared. It usually just means one of the required dependencies isn't installed. In my case, it was the scipy library.
Just go ahead and use the command: pip install <name of library>
In my case: pip install scipy

After installing the missing library, just re-run the same command I showed you above:

bash stylize_image.sh ./image_input/quicksilver.png ./styles/theflash.png

Here the magic happens:

The model is being re-trained on your input images

The process doesn't take too long. Detailed info about how many iterations have run, and how the image is converging, will come up in the terminal window like this:

The neural network runs 1000 iterations to synthesize the images, so just wait for some time. For my MacBook Pro with a 2.2 GHz i7 processor and 16 GB of RAM, that "some time" was about 47.6 minutes.
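One thing worth knowing while you wait: despite the "re-training" wording, the VGG-19 weights are never actually updated. The optimizer treats the pixels of the output image as the trainable variables and minimizes a weighted sum of a content loss and a style loss, where style is compared through Gram matrices of feature maps. Here's a minimal sketch of those two losses in TensorFlow (illustrative function names; the actual layer choices and loss weights in neural_style.py differ):

import tensorflow as tf

def gram_matrix(feats):
    # feats: a (height*width, channels) matrix of activations from one conv layer.
    # The Gram matrix captures which channels fire together, i.e. the "style".
    return tf.matmul(feats, feats, transpose_a=True)

def style_layer_loss(style_feats, output_feats):
    # Mean squared difference between the Gram matrices of the style image and
    # the image being generated, normalized by the size of the feature map.
    size = tf.cast(tf.size(style_feats), tf.float32)
    diff = gram_matrix(output_feats) - gram_matrix(style_feats)
    return tf.reduce_sum(tf.square(diff)) / (4.0 * size ** 2)

def content_loss(content_feats, output_feats):
    # Plain squared error on raw activations keeps the output close to the content image.
    return 0.5 * tf.reduce_sum(tf.square(output_feats - content_feats))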

The Result

After the process is done, you'll see the final loss values and the elapsed time:

The process is done.

Here we go. Let's go to the image_output directory and check the result folder. There we'll see several files, including the processed input images and metadata.

All we need here is result.png.
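Since we installed opencv-python at the very beginning, here's a quick way to eyeball it from Python (the image_output/result/result.png path assumes the default output location; adjust it if your run wrote the files somewhere else):

import cv2

# Load the stylized output and show it in a window; press any key to close.
img = cv2.imread('image_output/result/result.png')
cv2.imshow('Neural style transfer result', img)
cv2.waitKey(0)
cv2.destroyAllWindows()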

As we can see, the style of The Flash's image has been transferred onto our input image.

It doesn't have astonishing accuracy, but the algorithm has done its job: it transferred the style from one image to the other.

You can use this neural net to transfer the styles of world-class painters onto photos of people, maybe even yourself, and it works pretty well.

And, finally:

If you found this post interesting, just press the clap button to show your appreciation.

I’d like to hear your feedback and questions below!
