IBM Watson Loves Apple to the CoreML

Rob Adamson

Part 2 — Coding with Watson SDK and CoreML

This Article Has Three Parts

Part 1 — Setting up Image Recognition Models in Watson Studio

Part 2 — Swift Coding with Watson SDK and CoreML

Part 3 — CoreML Image Effects (under construction)

Coding the App.

Let’s have some fun speculating about this App. Otherwise, skip ahead to “Let’s get coding.”

“I don’t even know who you are.” Small drops of water rolled down her cheeks and over her pouty lips.

“It’s me, Rob. What’s wrong, Ellen?”

She snapped another picture. Within seconds, the Watson/Core ML enabled App spun through its image recognition models and returned the same result. “Not From Earth!”

She burst into tears.

“Did you press the Refresh Models button?” Rob asked.

“What?”

She wiped away the tears and glanced at her screen. There it was. The Refresh button. With a shaking finger, she managed to tap it. A strange-looking progress spinner appeared on screen. Then it completed: “ET Models Updated.”

She browsed her albums and selected her previous photo. The App classified the image again.

“Likely Human,” replied the App.

She flew into his arms, closed her eyes and shared her tears on his Alien face.

The visitor from Alpha Centauri knew the App would soon correct itself. AI was unlike anything they had ever faced in their travels. But for now, his secret was safe.

“Please delete that silly App,” he said.

She froze in his arms. Her eyelids flew open.

Let’s get coding! The survival of the Earth is at stake!

It took me a while to get my head around the IBM Watson Cloud technology. But once I did, I thought, wow! The IBM developer cloud has all kinds of services such as Text to Speech, Watson Assistant, Visual Recognition and more. These are available from the IBM Cloud website. This is where Watson Studio resides. Remember, we used the Studio in Part 1.

Now you might be asking: where’s the beef? How does a Swift App developer use this amazing technology? Enter the Watson Developer Cloud Swift SDK. The Watson Swift SDK is an IBM open source project hosted on GitHub. Feel free to participate in support of this project.

The Watson SDK is the beef that will get you up and running with Watson cloud services. It’s a time saver! In Part 1, I referenced a good article for using Carthage to install the Watson SDK into your Xcode projects. Here’s the link again: Watson Quick Start Guide. Read this carefully before installing.

The SDK is composed of many libraries related to each of the IBM Cloud services. Activate only the ones you want in your own Xcode projects.

IBM precompiles the Swift SDK libraries with the latest version of Swift. When you call "carthage update --platform iOS" in your terminal, you will pull down the source along with those latest compiled modules. So your versions of Xcode and Swift need to be up to date with IBM’s, or you will find yourself in a confusing pickle with messages like "Incompatible Swift version..."
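For reference, the Carthage setup itself is tiny: one line in your Cartfile pointing at the SDK repository, then the update command. A minimal sketch (the repo path below is the public watson-developer-cloud project on GitHub):

    # Cartfile
    github "watson-developer-cloud/swift-sdk"

    # then, from your project directory:
    $ carthage update --platform iOS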

Update! Version 33 of the Swift SDK supports CocoaPods. So much easier. Here are the libraries you can include in your Podfile.
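A minimal Podfile sketch is below. The target name “ETDetector” is just a placeholder for your own app target, and the pod names follow the SDK’s IBMWatson…V# naming; include only the services you actually use:

    # Podfile (the target name is a placeholder)
    use_frameworks!

    target 'ETDetector' do
      pod 'IBMWatsonVisualRecognitionV3'   # Watson Studio models + Core ML classification
      pod 'IBMWatsonTextToSpeechV1'        # optional: let Watson speak the results
    end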

Run the pod install command, and open the generated .xcworkspace file. To update to newer releases, use pod update.

When importing the frameworks in source files, exclude the IBMWatson prefix and the version suffix. For example, after installing IBMWatsonAssistantV1, import it in your source files as import Assistant.
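So, assuming the pods above, the imports in your Swift files look like this:

    import Assistant          // installed as IBMWatsonAssistantV1
    import VisualRecognition  // installed as IBMWatsonVisualRecognitionV3
    import TextToSpeech       // installed as IBMWatsonTextToSpeechV1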

— — — — —

If you want to use Carthage instead of CocoaPods, read on, but there’s a gotcha with pre-compiled libs.

What if Apple updates iOS and your Xcode version needs to match? Now your Xcode is newer than the version IBM used to compile the frameworks.

When you run "carthage update --platform iOS", you will get a compatibility error and the compiled frameworks will not download. So now what? Just wait for an update from IBM?

Here’s a trick that often works. On GitHub, select Releases and look at the latest one.

Next, notice how the framework is available as a .zip file. So if necessary, download the framework yourself and replace the libs you are using with the new ones.

Once installed, it’s time to code. Code examples are in Swift.

Swift is music to programmers coding both mobile Apps and server APIs

Music is the universal language of mankind

Could Swift become the universal language of programmers? Not sure, but the Watson SDK and these code samples are in Swift.

Visual Recognition of ET — the coding. The VisualRecognition library in the Watson SDK gives you access to your online Watson Studio trained models and calls Core ML for image classification. That’s right! The Watson SDK does both: it downloads Watson models and performs local Core ML classifications.
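To get that access, you first create a VisualRecognition service instance with your service credentials. A minimal sketch (the initializer parameters have shifted between SDK releases, and the API key and version date below are placeholders from your own IBM Cloud service):

    import VisualRecognition

    // Placeholders: use the API key from your IBM Cloud service credentials
    // and the version date recommended for the SDK release you installed.
    let visualRecognition = VisualRecognition(version: "2018-03-19", apiKey: "your-api-key")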

Let’s explore some of the VisualRecognition methods:

listLocalModels — Returns all the local models on the device

updateLocalModel — Downloads the requested model and stores it in the device file storage as a Core ML model. It replaces an existing model if present.

classifyWithLocalModel — This function uses one or more downloaded local models and calls the necessary Core ML functions to classify. Notice you are able to specify an array of models via an array of classifierIDs.

There is also a deleteLocalModel function to remove a model from the device, and a getLocalModel function to retrieve one for use directly with Core ML.
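Put together, the model management side might look like the sketch below. The classifier ID is the one from your Watson Studio model in Part 1, and the exact closure signatures vary a little between SDK versions, so treat this as the shape of the calls rather than copy-paste code:

    let classifierID = "YourWatsonStudioClassifierID"   // placeholder from Part 1

    // What models are already on the device?
    let localModels = (try? visualRecognition.listLocalModels()) ?? []
    print("Local models: \(localModels)")

    // Download the Core ML model from Watson Studio, replacing any older copy.
    visualRecognition.updateLocalModel(classifierID: classifierID) { _, error in
        if let error = error {
            print("Model update failed: \(error)")
        } else {
            print("ET Models Updated")
        }
    }

    // Later, if the model is no longer needed on the device:
    try? visualRecognition.deleteLocalModel(classifierID: classifierID)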

I’ve been looking for a way to send images up to a Watson Model and retrain the model, but I’ve not found that yet. I’m nagging IBM about that cool feature.

Let’s review some basic items when classifying images in an App:

  1. Check for local models on the device and update (install or replace) if necessary.
  2. Add a Refresh Models button to the UI. Why would you want this? You could automate the refresh periodically instead. Either way, provide a progress indicator when updating, because updateLocalModel performs a remote download.
  3. Add a nice camera and photos browser.
  4. Write the code for a classification function (a sketch follows this list).
  5. Analyze the results of the classification. Your classification results are based on percentages, like 70% ET. If 100% ET is returned, tell the user to run!
  6. Display the classification result. Human or Not From Earth.
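Here is a sketch of steps 1, 4 and 5 together: classify a photo with the local model and turn the top score into a verdict for the UI. The helper name, the 75% cut-off and the completion signature are my own assumptions (signatures differ slightly across SDK releases):

    import UIKit
    import VisualRecognition

    // Sketch: classify a UIImage with the local Core ML model and report a verdict string.
    // `visualRecognition` and `classifierID` are the instances defined earlier.
    func classify(_ image: UIImage, completion: @escaping (String) -> Void) {
        guard let imageData = image.jpegData(compressionQuality: 0.9) else { return }

        visualRecognition.classifyWithLocalModel(imageData: imageData,
                                                 classifierIDs: [classifierID],
                                                 threshold: 0.0) { classifiedImages, error in
            if let error = error {
                completion("Classification failed: \(error)")
                return
            }
            // Walk the result tree: images -> classifiers -> classes (top match first).
            guard let top = classifiedImages?.images.first?.classifiers.first?.classes.first else {
                completion("No result")
                return
            }
            let percent = Int((top.score ?? 0) * 100)   // scores come back in the 0.0 - 1.0 range
            completion(percent >= 75 ? "Likely Human (\(percent)%)" : "Not From Earth! (\(percent)%)")
        }
    }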

Wait, there’s more! With your newfound knowledge of the Watson SDK, why not include more than just image recognition? IBM has given us a treasure chest. Look back at the libraries: there’s VisualRecognitionV3, but notice TextToSpeechV1 as well.

Include this TextToSpeech lib with your install for use in your App. Then instead of just displaying classification messages, let Watson explain them in his own voice. Here’s some example code. Check the IBM Cloud for more on TextToSpeech.
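As a rough sketch, speaking the verdict could look something like this. The credentials, the voice name and the synthesize call style are assumptions on my part (the SDK has changed its signatures over time), so check the TextToSpeech docs for the release you installed:

    import AVFoundation
    import TextToSpeech

    // Placeholder credentials from your IBM Cloud Text to Speech service.
    let textToSpeech = TextToSpeech(username: "your-username", password: "your-password")
    var audioPlayer: AVAudioPlayer?

    func speak(_ verdict: String) {
        // Ask Watson to synthesize the classification verdict as audio data.
        textToSpeech.synthesize(text: verdict, voice: "en-US_MichaelVoice") { response, error in
            if let error = error {
                print("Synthesis failed: \(error)")
                return
            }
            guard let audio = response?.result else { return }
            // Play the returned audio on the main thread.
            DispatchQueue.main.async {
                audioPlayer = try? AVAudioPlayer(data: audio)
                audioPlayer?.play()
            }
        }
    }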

I will Open Source this fun project on Github soon. Just needs polish & testing first.

In summary, with the Watson VisualRecognition class, you have the best of both worlds: online training of models and fast local classification. Now go classify those funny animals at the zoo. That would be a great App.

Good luck with this exciting new technology from IBM and Apple. See you in Part 3.

Written by Rob Adamson

Programmer, Mtn Biker, Writer & Blogger. Wrote: BASE SciFi Novel, Mediaforge, Instant Replay, Gener/OL, Patents. robadamson.net
