Integrate Your Machine Learning Models on Apple Devices
This year at Think 2018 we announced a partnership with Apple to seamlessly integrate AI into consumer applications. We started with Watson Visual Recognition and tooling to easily train and deploy custom image classifiers. With just a handful of images per category, you can train your own models in minutes and export them in the Core ML format. Core ML allows you to embed your classifier directly in your iOS application and run it locally on the device.
Apple announced Core ML 2, a new version of its machine learning framework for iOS devices, at the Worldwide Developers Conference (WWDC) 2018 in San Jose, California today.
Core ML 2 is 30 percent faster, Apple says, thanks to a technique called batch prediction. The toolkit will also let developers shrink the size of trained machine learning models by up to 75 percent through quantization.
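That 75 percent figure follows directly from the arithmetic of quantization: storing 32-bit float weights as 8-bit values keeps the parameter count the same while using a quarter of the bytes. A minimal back-of-the-envelope sketch (the parameter count and helper function below are illustrative, not part of Apple's tooling):

```python
# Illustrative arithmetic behind quantization's "up to 75 percent" size
# reduction: same number of weights, a quarter of the bits each.
# (Hypothetical numbers; real quantization is done by the Core ML tooling.)

def model_size_bytes(num_weights: int, bits_per_weight: int) -> int:
    """Approximate storage needed for a model's weights."""
    return num_weights * bits_per_weight // 8

num_weights = 10_000_000                       # e.g. a 10M-parameter network
full = model_size_bytes(num_weights, 32)       # float32 weights: 40 MB
quantized = model_size_bytes(num_weights, 8)   # 8-bit weights: 10 MB

reduction = 1 - quantized / full
print(f"{full} -> {quantized} bytes ({reduction:.0%} smaller)")
```

In practice the savings depend on how much of the model file is weights versus metadata, which is why Apple frames it as "up to" 75 percent.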
Today we are excited to announce the next step in our integration of Watson Studio with Core ML: users can now convert and export Keras (with TensorFlow backend), scikit-learn, and XGBoost models to Core ML. All of this is available in Watson Studio, our AI platform that provides an end-to-end collaborative environment enabling developers to quickly and easily catalog, classify, provision, train, and deploy models.
Fostering collaboration between iOS developers and data scientists
Watson Studio enables multidisciplinary teams across organizations to collaborate. We are convinced, after working with clients around the world, that rich collaboration is key to unlocking the full potential of AI.
With this integration, data scientists can focus on creating innovative and accurate models using the open source framework of their choice, and then hand them over to iOS developers to integrate into applications. For the enterprise, this is a natural step in bringing mobile and AI together, revolutionizing how we work.
From straightforward machine learning models to complex neural networks, you can build and train your models using IBM Watson Studio. With Core ML, you can then build iOS apps that use those trained models on your device.
Get started with a tutorial
We’ve prepared a set of tutorials to help you get started quickly with this new set of capabilities. You will learn how to design and train a convolutional neural network with Neural Network Modeler (part of Watson Studio), train it using the latest GPUs, deploy it, and finally embed the model in an iOS sample application.
In this tutorial you will learn how to create an iOS sample application that teaches little kids to write digits. The app speaks each instruction aloud and comments on the result, and a final score is calculated based on 10 tasks.
Core ML lets you integrate a broad variety of machine learning model types into your app. In addition to supporting extensive deep learning with over 30 layer types, it also supports standard models such as tree ensembles, SVMs, and generalized linear models. Because it is built on top of low-level technologies like Metal and Accelerate, Core ML seamlessly takes advantage of the CPU and GPU to provide maximum performance and efficiency. Machine learning models run on the device itself, so data never needs to leave the device to be analyzed.
Read more about the Apple-IBM partnership here.
1. Watson Studio provides a low-code, end-to-end collaborative environment that enables developers to quickly and easily catalog, classify, and provision their data, and train their models.
2. Get started with the GoDigits tutorial here: https://github.com/pmservice/go-digits/blob/master/Tutorial.md
3. The Watson SDK provides low-latency, offline processing for custom Visual Recognition models using Core ML, combined with the rich insights from the Watson services on the cloud.
We cannot wait to see what you build with Watson Studio and our new Core ML integration!