Deep Learning: Zero to One — Art Generation

Sam Putnam
May 9, 2017

Justin Johnson, now at Facebook, wrote the original Torch implementation of the Gatys et al. 2015 paper, which combines the content of one image with the style of another using convolutional neural networks. Manuel Ruder's 2016 paper extends style transfer to whole video sequences, using a computer vision technique called optical flow to keep the stylized frames consistent and stable across time. I used Ruder's implementation to generate this stylized butterfly video.
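To make the two ideas concrete, here is a minimal NumPy sketch (not the authors' Torch code) of the losses involved. In Gatys et al., style is captured by the Gram matrix of a CNN layer's feature maps, which records correlations between channels while discarding spatial layout; Ruder et al. add a temporal term that penalizes a frame for differing from the previous stylized frame warped forward by optical flow, outside of occluded regions. The feature maps, flow warp, and occlusion mask here are stand-ins for what a real network and flow estimator would produce:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map.

    Captures correlations between feature channels (the "style")
    while discarding spatial arrangement, as in Gatys et al. 2015.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten spatial dimensions
    return f @ f.T / (c * h * w)     # normalized channel correlations

def style_loss(style_feats, generated_feats):
    """Mean squared difference between Gram matrices at one layer."""
    return np.mean((gram_matrix(style_feats) - gram_matrix(generated_feats)) ** 2)

def temporal_loss(warped_prev_frame, current_frame, mask):
    """Ruder-style temporal consistency term (sketch).

    warped_prev_frame: previous stylized frame warped by optical flow.
    mask: 1 where the flow is reliable, 0 at occlusions/disocclusions,
    so the penalty only applies where motion could be tracked.
    """
    return np.mean(mask * (current_frame - warped_prev_frame) ** 2)

# Toy example with random arrays in place of real CNN activations / frames.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16, 16))
print(style_loss(feats, feats))  # identical features -> 0.0

frame = rng.standard_normal((32, 32, 3))
mask = np.ones((32, 32, 1))
print(temporal_loss(frame, frame, mask))  # identical frames -> 0.0
```

In the full method these terms are combined with a content loss and minimized over the generated image (or frame) by gradient descent; the temporal term is what keeps the video from flickering frame to frame.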

I have included the exact steps to replicate the video on an AWS EC2 p2.xlarge (Tesla K80) GPU instance, including installation of the Torch dependencies, starting from slide 17 of the deck below. Following them, you can create a stylized video just like the butterfly/flower video:

https://www.slideshare.net/SamuelPutnam/deep-learning-and-artistic-style-transfer-for-videos-enterprise-deep-learning

Go to the iTunes Podcast.
