IBM Watson Loves Apple to the CoreML

Part 1 — Setting up Image Recognition Models in Watson Studio

This Article Has Three Parts

Part 1 — Setting up Image Recognition Models in Watson Studio

Part 2 — Swift Coding with Watson SDK and CoreML

Part 3 — CoreML Image Effects (under construction)

The engineers at IBM and Apple have formed an exciting AI joint venture. This article will explain how these two giants have made your life, as a developer, easier and more productive.

The combo of Watson and CoreML has everything from friendly online model training tools to fast onboard image classification. It’s got the stuff we want.

Aliens vs AI

They came, they saw, we conquered

I’m working on an App for detecting those strange visitors from other worlds with just a mobile device. For example, you might want to know a person’s origin before you marry them. Or perhaps you prefer an alien.

To win this battle, we need to expose those visitors with tools we mere Terrans do not have. With Artificial Intelligence and Machine Learning, we have a fighting chance. Our game plan is to use Watson and CoreML as our Alien Hunters. We shall defend our island.

Let’s dive into how Watson and CoreML work

Think of Watson as your classification model manager. Upload photos and train your models. Then use CoreML to classify photos right on the device. There is no need to upload photos to Watson in the cloud. You classify the images at their source, on the mobile device. Once the model is on the device, CoreML does not even need an internet connection.

You ask: how does CoreML use the Watson models if the trained models are in the IBM cloud? The trick is to grab the latest models and save them on the device for use with CoreML. Don’t worry, IBM has made that trick easy. There’s a function to download the latest model once, so you avoid uploading images to the cloud for every classification.

It works something like this:

Here is a glance at some Swift code that uses IBM’s VisualRecognition class to update your model when connected to the network, and then classify an image with CoreML.
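Below is a minimal sketch of that flow, assuming the Watson Swift SDK’s VisualRecognition class and a custom classifier ID from Watson Studio. The method names match the SDK around mid-2018 (updateLocalModel and classifyWithLocalModel), but exact parameter labels can differ between SDK releases, so treat this as an outline rather than copy-and-paste code.

import UIKit
import VisualRecognitionV3

class AlienDetector {
    // Placeholder values: substitute your own credentials and custom model ID.
    let apiKey = "YOUR_API_KEY"
    let classifierID = "YOUR_CLASSIFIER_ID"
    let version = "2018-05-22" // the version parameter is a date string

    lazy var visualRecognition = VisualRecognition(version: version, apiKey: apiKey)

    // While online, pull the latest trained model from Watson and cache it on the device.
    func updateModel() {
        visualRecognition.updateLocalModel(classifierID: classifierID, failure: { error in
            print("Model update failed: \(error)")
        }, success: {
            print("Local Core ML model is up to date")
        })
    }

    // Classification then runs entirely on the device with CoreML; no network needed.
    func classify(_ image: UIImage) {
        visualRecognition.classifyWithLocalModel(image: image, classifierIDs: [classifierID], threshold: 0.5, failure: { error in
            print("Classification failed: \(error)")
        }) { classifiedImages in
            let best = classifiedImages.images.first?.classifiers.first?.classes.first
            print("Best match: \(best?.className ?? "unknown")")
        }
    }
}

Call updateModel() whenever the device has a connection, and classify(_:) as often as you like afterward; the cached Core ML model keeps working offline.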

There’s an intro on coding towards the end of this article.

Then check out Part 2 — Swift Coding with Watson SDK and CoreML for more in-depth examples.

Now let’s look at using IBM Watson to build and train your models. Then we can dig deeper into some code for updating the models and finally classifying the images right on the device with CoreML.

Build a Visual Recognition Model

Step 1 For Watson Studio, first sign up for the IBM Cloud.

Important: You must be signed up and logged into the IBM Cloud before you can sign up for IBM Watson.

Step 2 After you have logged in to the IBM Cloud, sign up for IBM Watson Studio.

In the lower right corner of the page below, there is a button for starting a conversation with IBM support people. Use this button on any pages where you get stuck. They are great.

The only issue I found with this help is that the conversation window does not always follow you around as you navigate to different pages. I found myself wanting to yell “are you still there?” Perhaps in some future update they can float the help.

Also, if you get lost, click the IBM Watson logo on the top left of any page to bring you back to the Watson Studio home page.

Once you are in Watson Studio, the top of your screen should look like this:

For your Swift code to access the recognition models, you will eventually create a new project. We will do that, but from my experience, the best place to go first is Services, to create a Visual Recognition service.

Step 3 Create a Visual Recognition Service

From the Studio menu, select Services, then Watson Services, and look for Visual Recognition as shown here and “add” it.

When adding your Visual Recognition Service, you will have the choice of a Free or a Standard pricing plan. I urge you to select the Standard plan. Otherwise, it is easy to get stuck. The Standard is very inexpensive to begin with and you will be able to train more than one custom model. And honestly, for now, things just do not work correctly with the new CoreML tech with the free plan. It gets confused if you delete the only custom model and try again.

Step 4 Create a new Project

Select + New Project

From the next screen you are able to include services in your project. You might even see the Visual Recognition Service. But to keep things clean for now, select Basic. That avoids adding any residual services that you do not need right now.

Step 5 Associate your Visual Recognition Service with your new Project

From your new Project menu select Settings.

Then scroll down to Associated Services and add your Visual Recognition service to your project. This is how you add any services to your project in the future.

Step 6 Create a Custom Model

From Watson Studio, go to Services -> Watson Services, then select Launch tool for your service. You will see a screen with an option to create a Custom Model. Go ahead and create the model.

Step 7 Create classes, upload images in Zip files, and train your custom model and its classes.

It is important that you name your custom model before you begin training. 

So here I have a class for Aliens and another Class for Humans. This project will get more sophisticated eventually as we track down those off world invaders. But you can see how easy it is to start the machine learning process with Watson.

Now you might be wondering where you can get images for training if you don’t have them. I would recommend sites with open source images free for any use, like www.pixabay.com and www.unsplash.com.

Major Update

If your service was created before May 23rd, 2018, you must recreate it to be GDPR compliant, or it will be deleted. That seems simple enough: just delete the old Visual Recognition service and build a new one. But then you will have a new API Key that is used differently, because IBM changed the way you specify the API Key. See how to use the new API Key in the next section on coding.

How to Update Your Models

The best way to update an existing model is to zip any new images you want to add to an existing model class. If you try to delete an existing Class, you will get an error because it was already trained.

So upload your additional files in a new zip. Don’t include the prior images for that same class. Once uploaded, drag the zip with the new images to the related class. Watson Studio will then add those images to the class. Then train your model again.

Don’t Dream It. Code It!

The best documentation I’ve found for setting up the Watson frameworks in a Swift Xcode project is the Quick Start Guide for the IBM Swift-SDK. Read this first.

Now look at an IBM sample project on GitHub. This is a good demo project for using Watson with CoreML. Just click this link to explore the Visual Recognition with Core ML sample.

There are two Swift examples in the project. The one you will be most interested in is Core ML Vision Custom. This combines Watson with CoreML. You will need Xcode 9 or later and iOS 11.0 or later.

Step 1 Getting the files

As described on GitHub, clone the sample repository locally, or download the .zip file of the repository and extract the files.

Step 2 Add the Watson frameworks into this sample project

The sample GitHub page kind of tells you how to install the IBM libraries, but not well. Unlike many frameworks on GitHub that use CocoaPods, IBM uses Carthage. That’s not a problem if you use the right docs. Otherwise you might get lost.
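For reference, a typical Carthage setup looks something like this. I’m assuming the SDK still lives at watson-developer-cloud/swift-sdk and that you only need the Visual Recognition framework; check the Quick Start Guide if the repository or framework names have changed.

# Cartfile in your project root
github "watson-developer-cloud/swift-sdk"

# then, from a terminal in the project folder
carthage update --platform iOS

# finally, drag the built frameworks you need (for example VisualRecognitionV3.framework)
# from Carthage/Build/iOS into your Xcode target's embedded frameworks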

Step 3 Provide your model key and classifier

Once you have the IBM Swift-SDK and the Visual Recognition framework set up in your project, you are ready to move on. The GitHub sample documentation talks about providing the Watson Model ID and the classifierID to the App. You will need to provide these for any models you create on Watson that you want to use in an App. Here’s what they tell you to do on GitHub.

Adding the classifierId and apiKey to the project

These docs are a little confusing because Model ID and classifierID are the same thing. There is a short sketch of what this looks like right after the list.
  1. Open the project in Xcode.
  2. Copy the Model ID and paste it into the classifierID property in the ImageClassificationViewController file.
  3. Copy your api_key and paste it into the apiKey property in the ImageClassificationViewController file.
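In practice that boils down to filling in two string properties, roughly like this (the property names below follow the sample’s ImageClassificationViewController; double-check them against the version of the project you cloned):

// In ImageClassificationViewController.swift
let apiKey = "YOUR_API_KEY"          // from your Visual Recognition service credentials
let classifierID = "YOUR_MODEL_ID"   // the Model ID of your trained custom model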

OK, so where do you find your api_key and Model ID (classifierID)?

Back to Watson Studio. From Watson Studio, you should see your Watson Services. Click on your Visual Recognition service, select Credentials, and then on that page select View credentials. You should have a screen like the one below. The api_key is marked out in red. You want to copy the key between the quotes.

If your service was created after May 23rd, 2018, then find your apikey here:

You will still use the API key, but in your code there are now two ways to initialize the VisualRecognition class. They just flipped the position of the key and version to distinguish the two initializers.

// Old Way before May 23rd 2018
self.visualRecognition = VisualRecognition(apiKey: apiKey, version: version)
// New Way after May 23rd 2018
self.visualRecognition = VisualRecognition(version: version, apiKey: apiKey)

For the Model ID, click the Overview tab. Then scroll down to your trained custom model and copy the model ID.

Step 4 Test the Custom Model with the App

From the GitHub docs, follow these steps and have a go at it.

  1. Open QuickstartWorkspace.xcworkspace in Xcode.
  2. Select the Core ML Vision Custom scheme.
  3. Run the application in the simulator or on a device.
  4. Classify an image by clicking the camera icon and selecting a photo from your photo library. To add a custom image in the simulator, drag the image from the Finder to the simulator window.
  5. Pull new versions of the visual recognition model with the refresh button in the bottom right.
  6. Tip: The classifier status must be Ready to use it. Check the classifier status in Watson Studio on the Visual Recognition instance overview page.

Note: Only custom Watson models are available for download and use with CoreML. The Watson built-in models will not work. After trying and failing, I got the official scoop from IBM tech support.

Be sure to check out Part 2 — Swift Coding with Watson SDK and CoreML

Part 2 dives into coding our project and gets you started with the Watson SDK from IBM. I will post the Xcode project on Github when complete.

In Part 3 (under construction) I’ll show you how to wow your users with CoreML style transfer effects. We need to see what that off world visitor looks like when exposed.

If you have something else you would like included in a lesson on Watson and CoreML, please let me know.

Thanks for reading!
