Train TJBot to See in Node-RED
Today we’ll train TJBot to see using the Raspberry Pi camera and the Watson Visual Recognition service.
Before we begin, let’s talk a little bit more about the hardware that makes this happen. The Raspberry Pi camera connects to the Raspberry Pi with a ribbon cable that slots into the connector between the Ethernet and HDMI ports, with the silver contacts facing the HDMI port. Remember to enable the camera in the Raspberry Pi configuration; this was covered in an earlier video.
TJBot can recognize objects and colors with the help of the models available from IBM Watson’s Visual Recognition service. Create a Watson Visual Recognition service instance and copy its API key into the Visual Recognition section of the node’s configuration.
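Once the service returns a classification, you will usually want to turn it into something TJBot can say or display. Here is a minimal sketch of the kind of logic you might put in a Node-RED function node, assuming the Visual Recognition node places a v3-style response on `msg.result` (an `images[0].classifiers[0].classes` array of `{class, score}` entries); the exact property name and the sample values below are assumptions, so check your node's documentation.

```javascript
// Hypothetical message shaped like a Watson Visual Recognition v3 response.
// In a real flow this arrives from the Visual Recognition node, not hard-coded.
const msg = {
    result: {
        images: [{
            classifiers: [{
                classes: [
                    { class: "cat", score: 0.93 },
                    { class: "animal", score: 0.85 },
                    { class: "sofa", score: 0.31 },
                ],
            }],
        }],
    },
};

// Turn the classifier output into a sentence TJBot could speak.
function describeClasses(result) {
    const classes = result.images[0].classifiers[0].classes;
    // Keep only confident guesses (score >= 0.5) and sort best-first.
    const confident = classes
        .filter((c) => c.score >= 0.5)
        .sort((a, b) => b.score - a.score)
        .map((c) => c.class);
    if (confident.length === 0) {
        return "I am not sure what I am looking at.";
    }
    return "I think I see " + confident.join(", ") + ".";
}

msg.payload = describeClasses(msg.result);
console.log(msg.payload); // → "I think I see cat, animal."
```

In a flow, you would end the function node with `return msg;` so the sentence on `msg.payload` can feed straight into the speak node.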
Here’s a video showing how to train TJBot to speak using the speak node.
That’s it for today’s skill. What can you train TJBot to do now that it can see and recognize objects and colors?
We’ve covered all of the nodes that interface with hardware. Come back tomorrow and we’ll talk about using the Watson Tone Analyzer service to analyze emotions.
This post is part of a series of skills you can train TJBot to perform.