Copista: Developing a Neural Style Transfer application with TensorFlow Mobile

Tinkering with Deep Learning

1. Introduction

I should confess that I am not a Data Scientist :). This is my take on Machine Learning as a Software Engineer.

It all started when I came across Pete Warden’s blog post TensorFlow for Mobile Poets at the beginning of 2017. At about the same time I discovered the TensorFlow Stylize example for Android, and I thought it was an example I would like to play with.

Lucky me, I was ignorant and not aware of the Prisma app’s existence :)

I got obsessed with Machine Learning and took two courses on Coursera: Machine Learning (Stanford University) and Neural Networks for Machine Learning (University of Toronto). It was not easy to grasp the concepts, but somehow I struggled through them.

After finishing the courses I felt very optimistic and powerful, as if I needed only 2 to 4 weeks to implement a Neural Style Transfer application based on the demo application that I found in TensorFlow. It had to meet two requirements:

  1. Be fast
  2. Do all of the work locally on the mobile device.

Well, it took me a lot longer than I thought it would. But you can check the results yourself: Copista — Cubism, expressionism AI photo filters at Google Play.

I played with the TensorFlow demo application for 3–4 days and ran into the following problems:

1. The TensorFlow models were trained for a fixed input image size (256 × 256), and I wanted to be able to process images of any size.

2. Style transfer was not fast enough.

2. Neural Style Transfer
Neural Style Transfer is based on Convolutional Neural Networks (CNNs). It takes two input images, a content image and a style image, and synthesizes a new, stylized image that preserves the content of the content image but is painted in a texture similar to that of the style image.
This effect relies on the layered representations of an image inside a CNN: the high-level CNN features encode the semantic content (the objects and structure of the input image), whereas the low-level features encode colors, basic shapes, and texture.
Copista’s implementation is based on a combination of Gatys’ A Neural Algorithm of Artistic Style, Johnson’s Perceptual Losses for Real-Time Style Transfer and Super-Resolution, and Ulyanov’s Instance Normalization.
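To make the style representation from Gatys’ paper concrete: style is captured by Gram matrices of CNN feature maps, which record correlations between channels while throwing away spatial layout. The sketch below is purely illustrative (it is not Copista’s code), uses NumPy instead of a CNN framework, and the `gram_matrix` name and the toy feature map are my own assumptions:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a CNN feature map: per-channel correlations.

    features: array of shape (height, width, channels), e.g. the
    activations of one convolutional layer for the style image.
    In Gatys' style loss, Gram matrices of the style image's feature
    maps capture its texture while discarding spatial arrangement.
    """
    h, w, c = features.shape
    f = features.reshape(h * w, c)   # flatten the spatial dimensions
    return f.T @ f / (h * w)         # (c, c) matrix, normalized by area

# Toy 4x4 "feature map" with 2 channels, standing in for real activations
feats = np.arange(32, dtype=np.float64).reshape(4, 4, 2)
g = gram_matrix(feats)
print(g.shape)  # (2, 2): one correlation value per channel pair
```

The style loss then compares these Gram matrices between the style image and the synthesized image across several layers, while the content loss compares raw high-level feature maps directly.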

In the next part I will talk about training TensorFlow models for TensorFlow Mobile: Copista: Training models for TensorFlow Mobile