Machine learning is going mobile

Machine learning is one of the fastest-growing and most exciting fields out there, and deep learning represents its true bleeding edge. As mobile devices gain more computational power, there is more and more buzz around bringing machine learning onto them and making them truly smart. Developers no longer have to limit themselves to making network calls to cloud services like Amazon AWS or IBM Watson.

Apple Core ML

With the brand new Core ML framework introduced at the latest WWDC conference, it seems that machine learning has become a first-class citizen on the iOS platform. The truth is, developers could enhance their apps with deep learning magic way before iOS 11, but now it’s easier than ever. At least in some cases.

So, what is Core ML, and what is it not? Let’s demystify all the hype around it.

Core ML does not match the common understanding of a machine learning framework: you can’t use it to train models on data. That’s fine, though, since training on a device is not something you would want to do anyway.

Its sole purpose is to perform inference on the device, mainly for machine learning tasks like classification and regression.

Core ML can handle several different types of models, such as:

  • Support vector machines (SVM)
  • Tree ensembles such as random forests and boosted trees
  • Linear regression and logistic regression
  • Neural networks: feed-forward, convolutional, recurrent

While these are unarguably the most common types of ML models, more and more applications fall beyond the scope of Core ML.

Having said that, let’s dive into the details.

To get Core ML working, you need a trained model in Apple’s open, platform-specific .mlmodel file format. It contains all the relevant information: your model’s architecture, weights and biases, and class labels. Coremltools is a Python package that comes in handy here. In particular, it can be used to:

  • Convert existing models to .mlmodel format from popular machine learning tools including Keras, Caffe, scikit-learn, libsvm, and XGBoost
  • Express models in .mlmodel format through a simple API
  • Make predictions with a .mlmodel (on select platforms for testing purposes)

It’s worth noting that Coremltools does not support TensorFlow (Core ML works at a much higher level of abstraction). However, it is open source, so it will probably support more machine learning frameworks soon.

A model in this format can be added directly to an Xcode project. Xcode utilises it to generate a class named after your model, wrapping the MLModel object in a friendly interface that allows you to make predictions.

That’s pretty much it; Core ML takes care of the rest!
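
For an image classifier, that generated interface boils down to a single prediction call. Here is a minimal sketch, assuming a hypothetical FlowerClassifier.mlmodel has been added to the project (the generated input and output names depend on the particular model):

    import CoreML
    import CoreVideo

    // Hypothetical: Xcode generated a `FlowerClassifier` class from
    // FlowerClassifier.mlmodel; `image`, `classLabel` and `classLabelProbs`
    // are the names produced for this particular model.
    func classify(pixelBuffer: CVPixelBuffer) {
        let model = FlowerClassifier()
        guard let output = try? model.prediction(image: pixelBuffer) else { return }
        print("Predicted class: \(output.classLabel)")
        print("Probabilities: \(output.classLabelProbs)")
    }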

What’s more, Core ML comes with other new tools: the Vision framework and new capabilities of Foundation’s NSLinguisticTagger class. Both play really nicely with Core ML, allowing you to preprocess your data before it is fed into the model.

The kinds of jobs Vision can perform are listed below (a short sketch of Vision driving a Core ML model follows the list):

  • Finding faces within an image. This gives you a rectangle for each face
  • Finding detailed facial features, such as the location of the eyes and the mouth, the shape of the head
  • Finding rectangular elements of the image, e.g. street signs
  • Transforming two images so that their content is aligned (this is useful for stitching together photos)
  • Tracking the movement of objects in a video
  • Detecting the regions in the image that contain text
  • Detecting and recognising bar codes
  • Determining the angle of the horizon
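
Since Vision can also drive Core ML models directly, image preprocessing (scaling, pixel-format conversion) comes for free. A minimal sketch, again assuming the hypothetical FlowerClassifier model from above:

    import Vision
    import CoreML
    import CoreGraphics

    // Vision wraps the Core ML model and prepares the image before inference.
    func classifyWithVision(cgImage: CGImage) throws {
        let visionModel = try VNCoreMLModel(for: FlowerClassifier().model)
        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            guard let top = (request.results as? [VNClassificationObservation])?.first
                else { return }
            print("\(top.identifier): \(top.confidence)")
        }
        try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    }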

If you’re into NLP, you can perform tasks like the following (a short NSLinguisticTagger sketch comes after the list):

  • Language identification
  • Tokenising
  • Part-of-speech tagging (detect whether something is a noun, an adjective, or an adverb)
  • Lemmatisation (converting the word “running” to “run”; this is different from the concept of stemming, since it can also turn the word “was” into “be”)
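
As a quick taste of the new capabilities, here is a minimal sketch that identifies the dominant language and tags the part of speech of each word, using the iOS 11 unit-based NSLinguisticTagger API:

    import Foundation

    let text = "She was running to the store"
    let tagger = NSLinguisticTagger(tagSchemes: [.language, .lexicalClass, .lemma],
                                    options: 0)
    tagger.string = text
    print(tagger.dominantLanguage ?? "unknown")   // "en"

    let range = NSRange(location: 0, length: text.utf16.count)
    let options: NSLinguisticTagger.Options = [.omitPunctuation, .omitWhitespace]
    tagger.enumerateTags(in: range, unit: .word, scheme: .lexicalClass,
                         options: options) { tag, tokenRange, _ in
        let word = (text as NSString).substring(with: tokenRange)
        print("\(word): \(tag?.rawValue ?? "?")")  // e.g. "running: Verb"
    }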

All of this works directly on the device, without making any network calls.

Nonetheless, Core ML has several limitations and drawbacks:

  • It is optimised to run fast and efficiently on your phone. This means you don’t have to worry about whether to run on the GPU or the CPU: Apple actively switches between them based on how computation- and memory-heavy the task is. That’s great, but you can’t force Core ML to run on the GPU, even if you really want to.
  • Security: the compiled model is not encrypted in any way. With access to the .ipa, your model’s structure can be inspected quite easily.
  • The recommended way of updating a model is a new app release. There is a workaround, though: keep the model’s generated Swift file in the target, compile a new model from .mlmodel to .mlmodelc without changing its interface, put it on a server, download it from inside your app, and initialise the new model using the YourModelClass.init(contentsOf: URL) method (a sketch of a variant appears after this list). This is definitely not the way Core ML is designed to be used, though.
  • iOS 11 and later only.
  • It’s not open source, so it will get updates only when a new system version is released. That’s fairly uncommon in the machine learning world.
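
For completeness, here is a sketch of one variant of that update workaround: instead of compiling the model offline, you can download a raw .mlmodel file and compile it on the device with MLModel.compileModel(at:), which is available from iOS 11 (download and error handling elided):

    import CoreML

    // Compile a downloaded .mlmodel into a .mlmodelc directory, then load it.
    func loadDownloadedModel(at downloadedModelURL: URL) throws -> MLModel {
        let compiledURL = try MLModel.compileModel(at: downloadedModelURL)
        return try MLModel(contentsOf: compiledURL)
    }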

Metal Performance Shaders

Under the hood, Core ML is built on the Accelerate framework and Metal Performance Shaders. The former performs large-scale mathematical and image computations optimised for high performance on the CPU, while the latter provides a collection of classes that let you unleash the power of Metal and the GPU.
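
To give a feel for the Accelerate side, here is a tiny vDSP example: a vectorised dot product on the CPU, the kind of primitive the higher layers are built on:

    import Accelerate

    let a: [Float] = [1, 2, 3, 4]
    let b: [Float] = [10, 20, 30, 40]
    var dot: Float = 0
    vDSP_dotpr(a, 1, b, 1, &dot, vDSP_Length(a.count))  // 1*10 + 2*20 + 3*30 + 4*40
    print(dot)  // 300.0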

Why would you use Metal instead of the super simple Core ML that works on top of it anyway? It gives you a lot more possibilities, and not necessarily at the price of complexity and difficulty. Since iOS 11, Apple has provided a new, higher-level graph API intended to simplify the creation of neural networks. However, there is no fancy converter tool like Coremltools, and you have to know some Metal framework basics. With the MPS graph API, you can describe a graph in a similar manner to how you would in Keras.

You are still limited to the layer types Apple decided to implement, but if you want full control, it is possible to go deeper and use the low-level MPS classes directly. However, using Metal is a far cry from simple, so I highly recommend checking out the Forge framework.
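
To illustrate the Keras-like flavour of the graph API, here is a rough sketch that chains a convolution and a max-pooling layer into an MPSNNGraph. A real network needs a weights provider conforming to MPSCNNConvolutionDataSource, which is left out here:

    import MetalPerformanceShaders

    // Layers are described as nodes, then compiled into a graph that is
    // executed by encoding it to a Metal command buffer.
    func buildGraph(device: MTLDevice,
                    convWeights: MPSCNNConvolutionDataSource) -> MPSNNGraph? {
        let input = MPSNNImageNode(handle: nil)  // placeholder for the input image
        let conv = MPSCNNConvolutionNode(source: input, weights: convWeights)
        let pool = MPSCNNPoolingMaxNode(source: conv.resultImage, filterSize: 2)
        return MPSNNGraph(device: device, resultImage: pool.resultImage)
    }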

Wrap up

To sum it up, Apple has managed to pull off a quite neat strategy, making the integration of the most prevalent machine learning models possible and easy. If you’re OK with its limitations, you should check Core ML out. If you need something more, you can always reach for Metal. Apple has not yet dived into creating tools for training ML models, although that might change, given that it has launched its own blog focused on its machine learning research.

None of the solutions above tackles the case of on-device learning based on gathered data, mostly due to hardware limitations. Learning on the device basically makes no sense right now, although in the future it might become one of the solutions to privacy concerns. Recent Google Research Blog publications indicate that we might be heading in this direction.

In conclusion, it keeps getting easier and easier to enrich your products with AI. It is worth getting into the field of machine learning, especially if training models on your own data is something you’re willing to do.
