A16Z AI Playbook: TensorFlow iOS example quickstart
There’s another dimension to the iOS TensorFlow example in the A16Z AI Playbook: it’s a working example that runs directly with Xcode 8 and Swift 3, a combination that isn’t yet common. What follows is a set of pointers that should make it easier for iOS and Swift developers to jump right in.
1: Clone & Run The Sample App Unmodified
First, clone or download the playbook repository from GitHub, then open the Xcode project found under ai/ios/CueCard. Keep in mind that the project requires Xcode 8 and uses Swift 3.
Once the project is open, you’ll notice that a file is missing:
Don’t panic! The contents of the file are there, just compressed to get around GitHub’s file size limits without resorting to Git LFS. The TensorFlow static library is compressed in the same way.
To get these files in place, just build the project. A build phase checks whether the files are in the correct location and extracts them from the archive if they’re not:
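Xcode build phases are just shell scripts, so the check amounts to something like the sketch below. The function name, file paths, and use of gzip are illustrative assumptions, not the project’s actual script (which you can inspect under the target’s Build Phases tab):

```shell
# Sketch of an "extract if missing" check, the way an Xcode Run Script
# build phase might implement it. File names and the compression format
# here are illustrative assumptions.
extract_if_missing() {
  target="$1"
  if [ ! -f "$target" ] && [ -f "$target.gz" ]; then
    # Decompress without deleting the archive, so the check
    # stays idempotent across clean checkouts and rebuilds.
    gunzip -c "$target.gz" > "$target"
  fi
}

# In the real build phase this would point at the compressed model
# file and the TensorFlow static library inside the project tree.
extract_if_missing "CueCard/data/model.pb"  # hypothetical path
```

Because the extraction is conditional, rebuilding after the first time is a no-op.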
Now that you’ve built the project, the file will still appear in red because Xcode doesn’t refresh group contents dynamically. To have Xcode “see” the file, close the project and reopen it.
Run The Project on a Device
With the project built, you’re ready to run it. Because the example uses the device’s camera, you’ll need to run it on a physical iOS device rather than the simulator.
After launch you should see a screen like this:
After pressing the button to switch to ‘scan mode’ you should see the bars change according to the prediction as you scan different items:
2: Train & Use Your Own Model
Now that you’ve got the app running, you can train your own model to detect different things.
The TensorFlow setup we use and how to get it up and running are covered in the section “Code Part 1: (Re-)Training a Model”. Even if you know TensorFlow and already have it running on your machine, I suggest skimming this section so you can adjust for any idiosyncrasies of the tutorial.
If you haven’t yet installed TensorFlow, I’ll repeat what that section of the playbook says: as counterintuitive as it may be, you might have an easier time compiling TensorFlow from source than installing a prebuilt package.
Finally, you might be tempted to install and use the GPU extensions (e.g. CUDA) right away. The playbook includes instructions for this, but if you aren’t familiar with TensorFlow and/or GPU extensions, I don’t recommend spending time on that until you’ve got the whole setup working.
TensorFlow Model Retraining
Once you’ve got a working setup, the section “Code Part 2: Adding AI to Your Mobile App” walks you through retraining the model. The result of that process is label files and a network graph that you can drop into the iOS app in place of the ones it ships with.
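As a rough sketch of that handoff: the script path, flag names, and destination file names below are assumptions based on TensorFlow’s stock image-retraining example, not the playbook’s exact commands, so follow the playbook section for the authoritative steps.

```shell
# Hedged sketch of the retrain-then-replace workflow. The retraining
# invocation (commented out, since it needs a TensorFlow checkout) uses
# flag names from TensorFlow's image-retraining example:
#
#   python tensorflow/examples/image_retraining/retrain.py \
#       --image_dir ~/my_training_images \
#       --output_graph /tmp/output_graph.pb \
#       --output_labels /tmp/output_labels.txt
#
# Then copy the retrained graph and labels over the files the Xcode
# project references. Destination names are illustrative; match
# whatever the project actually bundles.
copy_model() {
  graph="$1"; labels="$2"; dest="$3"
  cp "$graph"  "$dest/retrained_graph.pb"
  cp "$labels" "$dest/retrained_labels.txt"
}

# Example (hypothetical paths):
# copy_model /tmp/output_graph.pb /tmp/output_labels.txt ios/CueCard/data
```

After copying, make sure the new files are part of the Xcode target so they get bundled with the app.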
Modify the iOS App Code
Once you have a retrained model and have placed it in the Xcode project, you’re close to done. In general, you shouldn’t need to modify code beyond CueCardViewController.swift. The code in that file is written primarily for simplicity and readability, and it’s where changes to the interface, labels, and file names are handled.
Additionally, there’s the TensorFlowProcessor class (written in Objective-C), which wraps the TensorFlow static library. Modifications that have to do with lower-level parameters, such as image dimensions, belong there.
By now you should be up and running with both a TensorFlow setup you can retrain and an iOS app that processes images using your own trained network. I hope this is useful! And as always, please feel free to send questions, comments, or fixes my way.
Happy hacking! :)