
Transforming Pictures with Neural Style Transfer in iOS

Since deep neural networks took off in 2012 with AlexNet's win in the ImageNet challenge, AI researchers have been applying deep learning technology, including pre-trained deep CNN models, to more and more problem domains. What can be more creative than creating art?

One idea that has been proposed and implemented, called neural style transfer, lets you take advantage of a pre-trained deep neural network model to transfer the style of one image, a Van Gogh or Monet masterpiece, for example, to another image, such as your profile picture or a picture of your favorite dog, creating a new image that mixes the content of your picture with the style of the masterpiece.
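Under the hood, the "style" of an image is usually captured by the Gram matrices of CNN feature maps, and the style loss compares these matrices between the generated image and the style image. Here is a minimal NumPy sketch of that idea, using random arrays as stand-ins for real CNN activations (the function names and shapes are illustrative, not from the repo used below):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (height, width, channels) feature map:
    channel-to-channel correlations, which capture 'style'."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)   # one row per spatial position
    return flat.T @ flat / (h * w * c)  # (c, c) correlation matrix

def style_loss(gram_generated, gram_style):
    """Mean squared difference between the two Gram matrices."""
    return np.mean((gram_generated - gram_style) ** 2)

# Toy feature maps standing in for CNN activations (purely illustrative)
rng = np.random.default_rng(0)
generated_feats = rng.random((4, 4, 3)).astype(np.float32)
style_feats = rng.random((4, 4, 3)).astype(np.float32)

loss = style_loss(gram_matrix(generated_feats), gram_matrix(style_feats))
```

Training minimizes this style loss together with a content loss (and a total-variation term) over many content images, which is what the commands below do.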

There’s actually an iOS app called Prisma, winner of Best App of the Year in 2016, that does just that: in just a few seconds, it transforms your pictures with any of the styles you choose.

In this article, you will look at how to train a fast neural-style transfer model that can be used in your iOS app to achieve what Prisma does.

Training fast neural-style transfer models

In this section, follow these steps to learn how to train a model using the fast neural-style transfer algorithm with TensorFlow:

  1. In a Terminal on your Mac, or preferably on a GPU-powered Ubuntu machine, run git clone on this GitHub repo, which is a fork of a nice TensorFlow implementation of Johnson's fast style transfer, modified to allow the trained model to be used in iOS or Android apps.
  2. cd to the fast-style-transfer directory, then run the setup.sh script to download the pre-trained VGG-19 model file as well as the MS COCO training dataset.
  3. Run the following commands to create checkpoint files by training with a style image named starry_night.jpg and a content image named ww1.jpg:
mkdir checkpoints
mkdir test_dir
python style.py --style images/starry_night.jpg --test images/ww1.jpg --test-dir test_dir --content-weight 1.5e1 --checkpoint-dir checkpoints --checkpoint-iterations 1000 --batch-size 10

There are a few other style images in the images directory that you can use to create different checkpoint files. The starry_night.jpg style image used here is a famous painting by Vincent van Gogh:

Using Van Gogh’s painting as the style image

The whole training takes about five hours on an NVIDIA GTX 1070 GPU-powered Ubuntu machine and would certainly take much longer on a CPU.

4. Open evaluate.py in a text editor and uncomment the following two lines of code (on lines 158 and 159):

# saver = tf.train.Saver()
# saver.save(sess, "checkpoints_ios/fns.ckpt")

5. Run the following command to create a new checkpoint with the input image named img_placeholder and the transferred image named preds:

python evaluate.py --checkpoint checkpoints \
--in-path examples/content/dog.jpg \
--out-path examples/content/dog-output.jpg

6. Run the following command to build a TensorFlow graph file that combines the graph definition and the weights in the checkpoint. This will create a .pb file of about 6.7 MB:

python freeze.py --model_folder=checkpoints_ios --output_graph fst_frozen.pb

7. Assuming that you have a /tf_files directory, copy the generated fst_frozen.pb file to /tf_files, cd to your TensorFlow source root directory, likely ~/tensorflow-1.4.0, then run the following command to generate a quantized model from the .pb file:

bazel-bin/tensorflow/tools/quantization/quantize_graph \
--input=/tf_files/fst_frozen.pb \
--output_node_names=preds \
--output=/tf_files/fst_frozen_quantized.pb \
--mode=weights
This will reduce the frozen graph file size from 6.7MB to about 1.7MB, meaning if you put 50 models for 50 different styles in your app, the added size will be about 85MB.
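The size reduction comes from storing each 32-bit float weight as an 8-bit value plus a little bookkeeping. A rough NumPy sketch of linear weight quantization (a simplified illustration, not TensorFlow's exact implementation):

```python
import numpy as np

def quantize_weights(weights):
    """Map float32 weights linearly onto 0..255 uint8 values,
    keeping the min and scale so they can be dequantized later."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0
    q = np.round((weights - w_min) / scale).astype(np.uint8)
    return q, w_min, scale

def dequantize_weights(q, w_min, scale):
    """Recover approximate float weights from the uint8 values."""
    return q.astype(np.float32) * scale + w_min

weights = np.random.default_rng(1).normal(size=1000).astype(np.float32)
q, w_min, scale = quantize_weights(weights)
restored = dequantize_weights(q, w_min, scale)
ratio = weights.nbytes / q.nbytes  # uint8 storage is 4x smaller than float32
```

Four bytes per weight become one, which matches the roughly 4x shrink from 6.7 MB to 1.7 MB.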

That’s all it takes to train and quantize a fast neural-style transfer model using a style image and an input image. You can check out the generated images in the test_dir directory, created in step 3, to see the effects of the style transfer. If needed, you can play with the hyper-parameters documented in the repo and see different, and hopefully better, style transfer effects.

One important note before you see how to use these models in your iOS and Android apps: you need to write down the exact width and height of the image specified as the value of the --in-path parameter in step 5, and use those width and height values in your iOS or Android code (you'll see how soon). Otherwise, you'll get a Conv2DCustomBackpropInput: Size of out_backprop doesn't match computed error when running the model in your app.
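One way to avoid the mismatch is to record the training-time dimensions and fail fast at run time. A small, illustrative Python sketch (the 300x400 values come from the dog.jpg example used later; the helper itself is hypothetical):

```python
# Dimensions of the --in-path image used in step 5
# (examples/content/dog.jpg is 300x400 in this walkthrough)
WANTED_WIDTH, WANTED_HEIGHT = 300, 400

def check_input_size(width, height):
    """Raise early instead of hitting the Conv2DCustomBackpropInput
    error deep inside the model at run time."""
    if (width, height) != (WANTED_WIDTH, WANTED_HEIGHT):
        raise ValueError(
            f"Model expects a {WANTED_WIDTH}x{WANTED_HEIGHT} input, "
            f"got {width}x{height}")
```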

Adding and testing fast neural-style transfer models in iOS

If you haven’t manually built the TensorFlow libraries, you need to do that first. Then, perform the following steps to add TensorFlow support and the fast neural-style transfer model files to your iOS app and test-run the app:

  1. If you already have an iOS app with the manually built TensorFlow libraries added, you can skip this step. Otherwise, create a new Objective-C-based iOS app named, for example, NeuralStyleTransfer, or open your existing app. Create a new user-defined setting under your PROJECT's Build Settings named TENSORFLOW_ROOT with the value $HOME/tensorflow-1.4.0, assuming that's where you have TensorFlow 1.4.0 installed. Then, in your TARGET's Build Settings, set Other Linker Flags to be:
-force_load $(TENSORFLOW_ROOT)/tensorflow/contrib/makefile/gen/lib/libtensorflow-core.a $(TENSORFLOW_ROOT)/tensorflow/contrib/makefile/gen/protobuf_ios/lib/libprotobuf.a $(TENSORFLOW_ROOT)/tensorflow/contrib/makefile/gen/protobuf_ios/lib/libprotobuf-lite.a $(TENSORFLOW_ROOT)/tensorflow/contrib/makefile/downloads/nsync/builds/lipo.ios.c++11/nsync.a

Then set Header Search Paths to be:

$(TENSORFLOW_ROOT) $(TENSORFLOW_ROOT)/tensorflow/contrib/makefile/downloads/protobuf/src $(TENSORFLOW_ROOT)/tensorflow/contrib/makefile/downloads $(TENSORFLOW_ROOT)/tensorflow/contrib/makefile/downloads/eigen $(TENSORFLOW_ROOT)/tensorflow/contrib/makefile/gen/proto

2. Drag and drop the fst_frozen_quantized.pb file and a few test images to your project's folder. Copy the same .mm and .h files from the NeuralStyleTransfer app folder of the repo to the project.

3. Rename ViewController.m to ViewController.mm and replace it and ViewController.h with the ViewController.h and .mm files from the above GitHub link.

4. Run the app on the iOS simulator or your iOS device, and you’ll see a dog picture, like in the figure:

The original dog picture before style applied

5. Tap to select Fast Style Transfer, and after a few seconds, you'll see a new picture with the starry night style transferred:

Like having Van Gogh draw your favorite dog

You can easily build other models with different styles simply by choosing your favorite pictures as the style images and following the steps in the previous section; then you can follow the steps in this section to use the models in your iOS app. Here is a detailed look at the iOS code that uses the model to do the magic.

A closer look at the iOS code using fast neural-style transfer models

There are several key code snippets in ViewController.mm that are unique to the pre-processing of the input image and the post-processing of the transferred image:

  1. Two constants, wanted_width and wanted_height, are defined with the same values as the width and height of the examples/content/dog.jpg image used in step 5 of the previous section:
const int wanted_width = 300;
const int wanted_height = 400;

2. iOS's dispatch queue is used to load and run the fast neural-style transfer model in a non-UI thread; after the style-transferred image is generated, it is sent to the UI thread for display:

dispatch_async(dispatch_get_global_queue(0, 0), ^{
    UIImage *img = imageStyleTransfer(@"fst_frozen_quantized");
    dispatch_async(dispatch_get_main_queue(), ^{
        _lbl.text = @"Tap Anywhere";
        _iv.image = img;
    });
});
3. A 3-dimensional tensor of floating-point numbers is defined, and the input image data is converted and copied into it:

tensorflow::Tensor image_tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({wanted_height, wanted_width, wanted_channels}));
auto image_tensor_mapped = image_tensor.tensor<float, 3>();
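The same pre-processing step can be sketched in NumPy: the decoded RGB bytes are reshaped into a (height, width, channels) float array, the shape the img_placeholder input expects (a simplified stand-in for the iOS code, not a literal translation):

```python
import numpy as np

wanted_width, wanted_height, wanted_channels = 300, 400, 3

# Stand-in for the decoded, interleaved RGB bytes of the input image
rgb_bytes = np.zeros(wanted_height * wanted_width * wanted_channels,
                     dtype=np.uint8)

# Reshape into the (height, width, channels) float tensor the model expects;
# no 0-1 normalization is applied, matching the model's 0-255 value range
image_tensor = rgb_bytes.astype(np.float32).reshape(
    (wanted_height, wanted_width, wanted_channels))
```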

4. The input node name and output node name sent to the TensorFlow Session->Run method are defined to be the same as when the model is trained:

std::string input_layer = "img_placeholder";
std::string output_layer = "preds";
std::vector<tensorflow::Tensor> outputs;
tensorflow::Status run_status = session->Run({{input_layer, image_tensor}}, {output_layer}, {}, &outputs);

5. After the model finishes running and sends back the output tensor, which contains RGB values in the range of 0 to 255, you call a utility function, tensorToUIImage, which first converts the tensor data to an RGB buffer:

UIImage *imgScaled = tensorToUIImage(model, output->flat<float>(), image_width, image_height);

static UIImage* tensorToUIImage(NSString *model, const Eigen::TensorMap<Eigen::Tensor<float, 1, Eigen::RowMajor>, Eigen::Aligned>& outputTensor, int image_width, int image_height) {
    const int count = outputTensor.size();
    unsigned char* buffer = (unsigned char*)malloc(count);
    // Clamp each float output value to the 0-255 range and store it as a byte
    for (int i = 0; i < count; ++i) {
        const float value = outputTensor(i);
        int n;
        if (value < 0) n = 0;
        else if (value > 255) n = 255;
        else n = (int)value;
        buffer[i] = n;
    }
6. Now, you can convert the buffer to a UIImage instance, resize it, and return it for display:

    UIImage *img = [ViewController convertRGBBufferToUIImage:buffer withWidth:wanted_width withHeight:wanted_height];
    UIImage *imgScaled = [img scaleToSize:CGSizeMake(image_width, image_height)];
    return imgScaled;
}
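For comparison, the clamping done inside tensorToUIImage is the same operation as NumPy's clip followed by a cast to bytes (an illustrative Python equivalent, not code from the app):

```python
import numpy as np

def tensor_to_rgb_buffer(output_tensor):
    """Clamp the model's float outputs to 0-255 and pack them as bytes,
    mirroring the clamping loop in tensorToUIImage."""
    return np.clip(output_tensor, 0, 255).astype(np.uint8)

preds = np.array([-12.5, 0.0, 128.9, 255.0, 300.2], dtype=np.float32)
buffer = tensor_to_rgb_buffer(preds)  # -> [0, 0, 128, 255, 255]
```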

If you found this article an interesting read, you can explore more such deep learning and reinforcement learning apps with Jeff Tang’s Intelligent Mobile Projects with TensorFlow. This book covers more than 10 complete iOS, Android, and Raspberry Pi apps powered by TensorFlow and built from scratch, running all kinds of cool TensorFlow models offline on-device: from computer vision, speech and language processing to generative adversarial networks and AlphaZero-like deep reinforcement learning.

For more updates, you can follow me on Twitter at @NavRudraSambyal.

Thanks for reading, please share it if you found it useful 🙂