Train TJBot to See in Node-RED

JeanCarl Bisson
2 min read · Aug 7, 2017


Today we’ll train TJBot to see using the Raspberry Pi camera and the Watson Visual Recognition service.

Before we begin, let’s talk a little bit more about the hardware that makes this happen. The Raspberry Pi camera connects to the Raspberry Pi via a ribbon cable that slots into the connector between the Ethernet and HDMI ports, with the silver contacts facing the HDMI port. Remember to enable the camera in the Raspberry Pi configuration, as covered in an earlier video.

TJBot can recognize objects and colors with the help of the models available from IBM Watson’s Visual Recognition service. Create a Watson Visual Recognition service and copy its API key into the Visual Recognition section of the TJBot configuration node.

1. Create the Watson Visual Recognition service in IBM Bluemix.
2. Locate the API key on the Service Credentials page.
3. Copy the API key into the API Key field of the TJBot configuration node in Node-RED.
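Once the service is wired up, the see node hands back Watson’s classification results. As a minimal sketch, a Node-RED function node placed downstream could pull out the label Watson is most confident about. The response shape below follows the Visual Recognition v3 classify API; the exact field the TJBot see node puts the response on (here assumed to be `msg.payload`) may differ.

```javascript
// Sketch of logic for a Node-RED function node that extracts the top label
// from a Watson Visual Recognition classify response. The response shape
// follows the Visual Recognition v3 API; the msg.payload wiring is an
// assumption and may differ from what the TJBot see node actually emits.
function topClass(response) {
  // classify responses nest labels under images -> classifiers -> classes
  const classes = response.images[0].classifiers[0].classes;
  // keep the entry with the highest confidence score
  return classes.reduce((best, c) => (c.score > best.score ? c : best));
}

// Example response, shaped like a Visual Recognition v3 classify result
const response = {
  images: [{
    classifiers: [{
      classifier_id: "default",
      classes: [
        { class: "cat", score: 0.92 },
        { class: "animal", score: 0.87 }
      ]
    }]
  }]
};

// Inside a function node this would be:
//   msg.payload = topClass(msg.payload); return msg;
console.log(topClass(response).class); // → "cat"
```

From there, the extracted label can be routed to the speak node so TJBot says what it sees.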

Here’s a video of how to train TJBot to see with the see node.

That’s it for today’s skill. What can you train TJBot to do now that it can see and recognize objects and colors?

We’ve covered all of the nodes that interface with hardware. Come back tomorrow and we’ll talk about using the Watson Tone Analyzer service to analyze emotions.

This post is part of a series of skills you can train TJBot to perform.


JeanCarl Bisson

I’m an IBM Technical Innovation Lead. I love to build prototypes and then share how I designed and built what I made so others can try it too.