A Robot Befriends Classic Monsters Using Watson APIs — Part 1

Introduction To Watson Visual Recognition Custom Classifiers Using Python

Josh Zheng
IBM watsonx Assistant
7 min read · May 22, 2017


You can find the code for this entire project here.

And the rest of the series here:

Part 2: Introduction To Watson Visual Recognition Custom Classifiers Using NodeJS

Part 3: Integrating Watson Visual Recognition With RaspberryPi Camera

Part 4: Putting It All Together: TJBot Befriends Some Classic Monsters

Introduction

Dracula, Frankenstein, the Wolfman, and the Mummy — these monsters aren’t evil, just misunderstood. All they really need is some extra empathy from the rest of us. Fortunately, we’ve built a robot that cares, and it sounds like the perfect candidate to befriend these supposedly scary monsters.

In this episode of Teaching Robots How To Love, we’ll build upon our previous Build a Chatbot That Cares tutorial and teach TJBot how to show some love to a few classic monsters. In our previous series, we taught TJBot how to listen, speak, and empathize. But we’re still missing a key step of any interaction — recognizing who you’re interacting with.

Part 1 (this article) and Part 2 of this tutorial series will show you how to use Watson Visual Recognition to recognize four of the most popular classic monsters using the API’s custom classifiers feature.

The later parts will demonstrate how to integrate Visual Recognition with the code we wrote for the Build a Chatbot That Cares tutorial. I recommend completing that tutorial first if you plan to finish the entire series. If you’re only interested in learning about Visual Recognition, this tutorial is a good place to start.

In the end, you’ll have something like this:

Amazing, I know.

What’s Ahead

If you’re a regular, you’ll notice that this builds upon my previous Build a Chatbot That Cares tutorial. If you followed that tutorial to completion, you’ll have already learned how to use Watson Speech To Text, Tone Analyzer, Conversation, and Text To Speech. We’re now adding Visual Recognition to the mix.

Here’s the complete list of tutorials required to finish this project.

The code for Part 1 — Part 4 can be found here.

Understanding Watson Visual Recognition Custom Classifiers

Here’s the official demo and documentation for custom classifiers. Essentially, the API lets you create your own visual classifier using visual classes you’ve defined with your own training set. The process usually involves these four steps:

  1. Collect training images
  2. Create a custom classifier using the training set
  3. Test the classifier against the test set
  4. If necessary, update the classifier’s training data until it reaches the desired accuracy on the test set

For the rest of this tutorial, we’ll go through these steps, using Python, to train an endpoint that’ll recognize four of the most popular classic monsters: Dracula, Frankenstein, the Wolfman, and the Mummy.

Step-By-Step Tutorial

Step 0. What You Need

  1. A Python (2.7 or 3.x) development environment. I’m using 3.5.2
  2. Credentials for Watson Visual Recognition

Step 1. Bluemix Account and Service Credentials

In order to use the Watson services, you need to create the services and their credentials on IBM Bluemix. Bluemix is IBM’s PaaS offering that lets you deploy and manage your cloud applications.

If you prefer deploying your applications somewhere else, that’s not a problem. You can use all the Watson services via our RESTful API. However, the Bluemix platform does give you an easy way to integrate your deployed apps with your Watson services. Either way, go ahead and sign up for a Bluemix account to at least get your credentials.

After getting your Bluemix account, login to your Bluemix Dashboard. Go to the Catalog and you should find the Visual Recognition service under Watson.

After creating the service, you should be able to find your API key by clicking View Credentials in the Service Credentials section.

Step 2. Set Up Python Environment

If you’re new to Python, I highly recommend setting up virtualenv and virtualenvwrapper. These tools let you set up sandboxed Python environments with everything you need, including pip. If you’d rather use something else, that’s fine, but make sure you have pip installed properly.

We’ll interact with the Visual Recognition service using the Watson Developer Cloud Python SDK. You’ll install it using pip (which should be available to you once you’re in a virtual environment).

pip install --upgrade watson-developer-cloud should do the trick.

Note: I’m also using python-dotenv to manage the API key. Simply update the value in .env_example with your API key and rename the file to .env.
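
For example, loading the key at the top of a script might look like this (the VISUAL_RECOGNITION_API_KEY variable name is my own choice, not something the project dictates):

    import os
    from dotenv import load_dotenv

    load_dotenv()  # reads key=value pairs from .env into the environment
    api_key = os.environ['VISUAL_RECOGNITION_API_KEY']  # assumed variable name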

Step 3. Collect Images For Training

This is the most tedious step in the process. You’ll have to gather images to be part of your training set. I used a Chrome extension to download 50–60 images for each character from Google Images.

I’ve also made the images available here. You’re welcome. Notice that the training set needs to be compressed into zip files. One zip file per visual class.
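
If you keep one folder of images per class, a few lines of Python will produce the zips (the images/<class> folder layout here is just an assumption):

    import shutil

    # Writes dracula.zip, frankenstein.zip, wolfman.zip, and mummy.zip,
    # one archive per visual class, from the matching image folders
    for monster in ['dracula', 'frankenstein', 'wolfman', 'mummy']:
        shutil.make_archive(monster, 'zip', 'images/' + monster)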

Note: It’s a good idea to train your classifier with images similar to what you’ll encounter post deployment. In this case, the training set would ideally be photos taken by the Raspberry Pi camera. But since these characters are fairly distinct, stock images from the Internet should suffice.

Make sure to read this blog post on best practices for training a custom classifier.

Step 4. Create Custom Classifiers

Now that you have the necessary training data, it’s time to create the custom classifier. I’ll show you how to do this using the Watson Python SDK.

Reminder: Part 2 of this tutorial will be an introduction to custom classifiers using NodeJS. If that’s your language of choice, you’re in luck.

Using the Python SDK to create a custom classifier is quite simple. The entire code can be found here.
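
Here’s a minimal sketch of what that code does, assuming the 2017-era watson-developer-cloud Python SDK; the zip-file paths and the environment-variable name are placeholders:

    import json
    import os

    from dotenv import load_dotenv
    from watson_developer_cloud import VisualRecognitionV3

    load_dotenv()  # loads the API key from .env (see Step 2)
    visual_recognition = VisualRecognitionV3(
        '2016-05-20',  # API version date
        api_key=os.environ['VISUAL_RECOGNITION_API_KEY'])

    # One zip file of positive examples per visual class
    with open('dracula.zip', 'rb') as dracula, \
         open('frankenstein.zip', 'rb') as frankenstein, \
         open('wolfman.zip', 'rb') as wolfman, \
         open('mummy.zip', 'rb') as mummy:
        classifier = visual_recognition.create_classifier(
            'monsters',  # the classifier's name
            dracula_positive_examples=dracula,
            frankenstein_positive_examples=frankenstein,
            wolfman_positive_examples=wolfman,
            mummy_positive_examples=mummy)

    print(json.dumps(classifier, indent=2))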

  1. Notice the VisualRecognitionV3 instantiation at the top.
  2. You’ll have to modify the file paths to point at your own image zip files.
  3. The create_classifier call creates the classifier; its first argument names the classifier monsters.
  4. Each keyword argument, e.g. <class_name>_positive_examples, translates to the name of a class. In this case, there are four classes inside the monsters classifier: dracula, frankenstein, wolfman, and mummy.

You should see something like this returned in your terminal:
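
(The values below are illustrative; your classifier_id suffix and timestamps will differ.)

    {
      "classifier_id": "monsters_2137979479",
      "name": "monsters",
      "status": "training",
      "created": "2017-05-22T16:04:21.427Z",
      "classes": [
        {"class": "dracula"},
        {"class": "frankenstein"},
        {"class": "wolfman"},
        {"class": "mummy"}
      ]
    }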

Notice that the classifier name you specified is NOT the classifier_id. The classifier_id always takes the form <classifier_name>_<number>. Later on, this classifier_id (not the name) is what you’ll use to specify which classifier should classify incoming images.

Note: Inside the python folder, I’ve included a piece of sample code showing how you’d query the API directly. I recommend using the SDK, but by seeing the HTTP calls underneath (you can also reference the SDK’s source code), you’ll have more flexibility in adapting this tutorial to your own project.
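
A direct call looks roughly like this (a sketch based on the v3 REST API as documented at the time; the endpoint, the JSON parameters form part, and the IDs should be checked against the sample code in the repo):

    import json
    import requests

    url = 'https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify'
    params = {'api_key': 'YOUR_API_KEY', 'version': '2016-05-20'}

    # classifier_ids travels in a JSON 'parameters' form part in the v3 API
    with open('dracula_test.jpg', 'rb') as image_file:
        response = requests.post(
            url,
            params=params,
            files={
                'images_file': image_file,
                'parameters': ('parameters.json',
                               json.dumps({'classifier_ids': ['monsters_2137979479']}),
                               'application/json'),
            })

    print(json.dumps(response.json(), indent=2))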

Step 5. Classify Image

Finally it’s time to test how well our training went.

Here’s the code to classify a new image:
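
Again, a minimal sketch under the same assumptions as the training script (the classifier ID and image path below are placeholders):

    import json
    import os

    from dotenv import load_dotenv
    from watson_developer_cloud import VisualRecognitionV3

    load_dotenv()  # loads the API key from .env (see Step 2)
    visual_recognition = VisualRecognitionV3(
        '2016-05-20', api_key=os.environ['VISUAL_RECOGNITION_API_KEY'])

    with open('dracula_test.jpg', 'rb') as image_file:
        result = visual_recognition.classify(
            images_file=image_file,
            classifier_ids=['monsters_2137979479'],  # your own classifier_id here
            threshold=0.0)  # return classes at any confidence

    print(json.dumps(result, indent=2))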

  1. The classification happens in the classify call.
  2. classifier_ids is a list, which allows you to include several classifiers to classify a given image against. If classifier_ids is not given, all of your classifiers (including the default classifier) will be used.
  3. The classifier_id is the classifier name we specified earlier followed by _<number>. That’s your unique classifier ID. It should have been returned to you after your classifier was created. You can also retrieve your list of classifier IDs using the GET /v3/classifiers endpoint or the list_classifiers function from the SDK, as shown below.
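
For example, with the SDK client from the sketch above:

    # Lists every classifier (and its classifier_id) under your API key
    print(json.dumps(visual_recognition.list_classifiers(), indent=2))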

After running the script, you should get back a JSON that looks something like this:
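
(Again, illustrative values; the class and score will reflect your own test image.)

    {
      "custom_classes": 4,
      "images": [
        {
          "classifiers": [
            {
              "classifier_id": "monsters_2137979479",
              "name": "monsters",
              "classes": [
                {"class": "dracula", "score": 0.94}
              ]
            }
          ],
          "image": "dracula_test.jpg"
        }
      ],
      "images_processed": 1
    }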

As you can see, we’re able to accurately classify the test image with high confidence (see the class and score fields in the response).

If you’re training custom classifiers on your own data, I recommend reading this blog post by the Watson Visual Recognition team on best practices for creating custom classifiers.

Next Steps

That’s it! You just trained a Watson custom visual classifier and classified an image against it. Hopefully you found this helpful in your own getting-started process.

Move on to Part 2 of this tutorial if you want to learn how to create custom classifiers in NodeJS. It’s also helpful to look over if you’re interested in building the final TJBot, because the final code will also be written in NodeJS.

As always, if you have any questions, feel free to reach out at joshzheng@us.ibm.com, connect with me on LinkedIn, or follow me here on Medium.
