Style Transfer using Pytorch

Alex Diaz · Published in Analytics Vidhya · 7 min read · Jan 9, 2020


I have recreated the style transfer method from the paper “Image Style Transfer Using Convolutional Neural Networks” by Gatys et al. In the paper, style transfer uses features extracted from the VGG19 network. I have implemented it in both PyTorch and Keras; this article covers the PyTorch version, and I will write another article with the same implementation in Keras.

I have used my dog, called Roscón, as the model for this experiment! He was rewarded with a biscuit :). The code can be found here on my GitHub.

Style Transfer using Convolutional Neural Networks

In this paper, style transfer uses the features found in the 19-layer VGG network, which is composed of a series of convolutional and pooling layers, followed by a few fully-connected layers. In the image below, the convolutional layers are named by their stack and their order within that stack. Conv_1_1 is the first convolutional layer that an image is passed through, in the first stack; Conv_2_1 is the first convolutional layer in the second stack. The deepest convolutional layer in the network is Conv_5_4.

Architecture of VGG19 Network

Separating Style and Content
