WWDC 2018: Apple announces Create ML

Ray Yamamoto Hilton
Published in Eliiza-AI
Jun 7, 2018 · 4 min read


At this year’s WWDC, Apple updated their Core ML framework with performance improvements (via a new batch prediction API) and model size optimisation (through quantisation). Also announced was Create ML: a simple, easy-to-use API that makes machine learning much more accessible, without requiring specialist knowledge.
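For the curious, the batch API boils down to a single call that processes many inputs at once. Here’s a minimal sketch; the model and its inputs are placeholders, not a specific Apple example:

```swift
import CoreML

// A minimal sketch of Core ML 2's batch prediction API.
// `model` is any compiled MLModel; `inputs` are feature providers
// matching its input description (both placeholders here).
func predictAll(with model: MLModel, inputs: [MLFeatureProvider]) throws -> MLBatchProvider {
    let batch = MLArrayBatchProvider(array: inputs)
    // One call dispatches the whole batch, amortising per-prediction overhead.
    return try model.predictions(from: batch, options: MLPredictionOptions())
}
```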

Projects that support Core ML

Machine Learning Models

To give a little context, an example of using a machine learning model on a device is detecting what a photo depicts. In the example below, passing an image through an image classifier (an ML model) produces a prediction of the photo’s subject; in this case, a giraffe.
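In code, that prediction might look something like the following sketch, which pairs Core ML with the Vision framework. AnimalClassifier is a hypothetical model class; Xcode generates one like it for any .mlmodel file you add to a project:

```swift
import CoreML
import Vision

// A minimal sketch of on-device classification with Vision + Core ML.
// AnimalClassifier is a hypothetical, Xcode-generated model class.
func classify(_ image: CGImage) throws {
    let model = try VNCoreMLModel(for: AnimalClassifier().model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("Prediction: \(top.identifier) (confidence: \(top.confidence))")
    }
    try VNImageRequestHandler(cgImage: image).perform([request])
}
```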

Where do these models come from? There are a few pre-trained, generalised models, such as InceptionV3, YOLOv3 and ResNet, which can detect common objects. However, if you want a model that detects more specific things, you have to train your own using a machine learning framework such as TensorFlow. This involves a steep, complex learning curve and can be prohibitively expensive in time and compute.

Create ML

Different models supported by Create ML

Enter Create ML. At its simplest, it provides a drag-and-drop environment for training an image classifier. Given examples of elephants and giraffes, Create ML can build a new model to detect those animals.

However, a model built on just a small set of examples won’t perform very well. To address this, Create ML employs a technique called transfer learning: it builds on Apple’s existing image classifier, trained on a massive dataset (hundreds of millions of images), and retrains only the final layer on your own images.

Transfer learning builds upon Apple’s existing image classifier

This not only speeds up training and increases accuracy on relatively small datasets, but it can also reduce file size. Apple provided an example of reducing model size from 100MB (for InceptionV3) to 3MB.

Training in Swift

Invoking the Image Classification Training UI

Xcode Playgrounds provide a very simple environment for working with Create ML in Swift. With just a couple of lines of code, you can bring up an interactive widget for training your classifier.
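Going by the session demo, those couple of lines look like this, in a macOS playground with the new CreateMLUI framework:

```swift
import CreateMLUI

// Shows Create ML's drag-and-drop image classifier trainer
// in the playground's live view (macOS Mojave playgrounds).
let builder = MLImageClassifierBuilder()
builder.showInLiveView()
```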

Automatically select the best algorithm for your use case

As well as a simple drag-and-drop UI for image classification, the API also provides high-level support for creating text classification, word tagging, categorisation and quantity estimation (regression) models. Writing these scripts in Swift gives you control over which algorithm to use (e.g. boosted tree, random forest), or you can simply use the MLRegressor class to automatically choose the best algorithm for your data, as sketched below.
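Here’s a rough sketch of both options; the CSV file and the “price” column are hypothetical:

```swift
import CreateML
import Foundation

// Hypothetical tabular dataset with a numeric "price" target column.
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "houses.csv"))
let (trainingData, testData) = data.randomSplit(by: 0.8, seed: 5)

// Let Create ML pick the best algorithm for the data...
let regressor = try MLRegressor(trainingData: trainingData, targetColumn: "price")

// ...or choose one explicitly, e.g. a boosted tree.
let boosted = try MLBoostedTreeRegressor(trainingData: trainingData, targetColumn: "price")

// Check how well each generalises to the held-out rows.
print(regressor.evaluation(on: testData))
print(boosted.evaluation(on: testData))
```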

MLDataTable

For tabular datasets, Apple provide MLDataTable, which is backed by Turi Create’s SFrame. This class provides functionality similar to Pandas in Python, allowing you to manipulate tabular data using simple expressions.
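A minimal sketch of the kind of manipulation this enables; the file and column names here are hypothetical:

```swift
import CreateML
import Foundation

// Load a CSV file into a dataframe-like table (JSON is also supported).
let table = try MLDataTable(contentsOf: URL(fileURLWithPath: "ratings.csv"))

// Inspect it much as you would a pandas DataFrame.
print(table.columnNames)
print(table)  // pretty-prints the leading rows

// Select a subset of columns by name, then split the rows.
let subset = table[["user", "score"]]
let (train, test) = subset.randomSplit(by: 0.8, seed: 42)
print(train.rows.count, test.rows.count)
```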

Examples of manipulating tabular data with MLDataTable

This is particularly exciting as a pandas-esque dataframe type has been one of the missing components for doing scientific computing with Swift.

Metal

Machine learning on Apple platforms is accelerated by Metal, their low-level GPU programming framework. They make an interesting point that this applies to iOS GPUs as well as macOS GPUs, perhaps implying that we will be able to perform on-device training for customised, private models.

Finally, Apple have been working with Google to bring Metal acceleration to TensorFlow. They cite an example of a 20x improvement when training the InceptionV3 model. This should allow users not only to take advantage of the built-in GPU, but also to scale out across external GPUs.

I’m personally quite excited to see collaboration between Google and Apple in this area: not only Metal support in TensorFlow, but also Swift for TensorFlow and Google’s recent cross-platform ML Kit.
