Creating Custom Node-RED Nodes for your API: The Easy Way

A ‘Node Generator’ Crash Course

--

Hi there!

Checking in here with my first post of 2019. Since this is a little different from the types of projects I typically write about, I wanted to start off with a bit of context and a brief introduction to Node-RED. However, if you're already familiar and want to jump straight to the instructional part of this blog post, you can do that right here.

Node-RED provides an easy-to-use visual interface for working with data across all kinds of devices.

New Year, New Projects

One of the first areas of programming that really got me excited was playing around with IoT devices like the Raspberry Pi and Arduino. Being able to control the speed and color of a blinking LED brought the effects of my code into the physical world in a way I hadn't experienced until that point, and that was enough to get me hooked.

This year, one of my New Year's goals has been to revisit some of these old favorites, and that effort has already led me to something really cool: Node-RED. It's a visual programming tool that lets you graphically connect devices and services together with functions and modules that process the data they collect. It's based on Node.js, but that gets abstracted away (for the most part), and you don't need any programming experience to be effective. For a deeper dive into getting started with Node-RED, I recommend taking a look at the official documentation here or joining the official Slack channel at this link.

Some nodes in a Node-RED flow, connected by wires.

Putting the ‘Node’ in Node-RED

When you look at a Node-RED flow you’ll see a series of nodes connected by wires as in the example above. Nodes can accept input, produce output, or both, and are the basic building block for applications in Node-RED. There’s a wide variety to choose from, each with their own specific purpose, alongside plenty of community creations available through npm. For users familiar with JavaScript, creating new nodes and sharing them is a pretty straightforward process. With the Node Generator tool though, that process gets even easier, and I was able to use this to create some of my own nodes to leverage the power of deep learning models from the Model Asset eXchange (MAX). That process is what I want to illustrate in the rest of this blog post.

Node Generator

At a high level, the purpose of Node Generator is to make it even easier to create custom nodes that can be re-used or shared in other projects. In practice, this is typically done either by bundling code you already have in a JavaScript ‘function’ node as a standalone package, or by generating JavaScript code for you based on some existing API documentation. I’ll be focusing on the first method since this is what I used to create the MAX Audio Classifier node I needed.

Thankfully, most APIs these days are documented according to the OpenAPI specification, which grew out of (and is still used by) tools like Swagger to auto-generate human-readable API docs. Node Generator parses this same specification document, but instead produces the source code for a new node!

This excerpt from the OpenAPI spec for the MAX Audio Classifier shows the format in which it expects input.
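In case the excerpt above doesn't render, here's a rough sketch of what the relevant part of the spec looks like for this model. The path and field names below are my reading of the generated Swagger document, so treat the exact details as assumptions:

```json
{
  "paths": {
    "/model/predict": {
      "post": {
        "consumes": ["multipart/form-data"],
        "parameters": [
          {
            "name": "audio",
            "in": "formData",
            "type": "file",
            "required": true,
            "description": "signed 16-bit PCM WAV audio file"
          }
        ]
      }
    }
  }
}
```

The important detail is the `formData` file parameter named `audio`, which is the structure our node will need to reproduce in its POST request.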

Sometimes the output from Node Generator is ready to use with little or no modification, and there are plenty of community-created examples of this already. For example, the package node-red-contrib-model-asset-exchange, created by the user Zuhito, contains five ready-to-use nodes corresponding to five MAX models of the same name, all of which take images as input. I wanted to work with something a little different: the MAX Audio Classifier, which identifies the source of sounds from short audio clips.

After capturing the audio, the WAV file data needs to be submitted to the model’s API through a POST request, in a format called multipart/form-data that’s common with web forms but isn’t quite as compatible with Node-RED or other non-browser environments. Thankfully, it’s nothing that can’t be solved with node-red-nodegen and a little JavaScript, so I thought I’d create the node myself and document the process along the way.

The 3-Step Journey from API to Node

The first thing we need for this process is an API to work with, one that has some type of OpenAPI specification to go along with it. Thankfully, in the case of the models on the Model Asset eXchange, all models are packaged in Docker containers that run dependency-free and contain all the documentation we'll need. To start our model's API server, run the command:

$ docker run -it -p 5000:5000 codait/max-audio-classifier

After running this command, you should see some output indicating that the server is up and running. For more information on getting started with MAX models and how to run them, take a look at the official tutorial here.

Step 1: Obtaining The API specification

To build our node, we first need to locate and obtain a copy of the OpenAPI spec, which is in JSON format. You'll often see it as a file called swagger.json in a project, as you will in each of the MAX models. To download the file, I use curl. Assuming we already have the Audio Classifier model running on port 5000, the command to download this file is:

$ curl localhost:5000/swagger.json > audio-classifier.json

This command will save the file locally on your machine.

Step 2: Generating the Boilerplate Code

This is the part of the process where we need to use our new tool. Before we go any further, we’ll need to install it, so if you haven’t already globally installed node-red-nodegen do that now with the command:

$ npm install -g node-red-nodegen

You'll need to wait for the installation to finish. Once that process is done, make sure you're in the same directory as the JSON file we just downloaded, and run:

$ node-red-nodegen audio-classifier.json --name 'audio-classifier'

This will generate our node’s code with an appropriate title. You’ll see a ‘Success’ message on the next line once this part is done.

Step 3: Making the Necessary Edits

A new directory is created for us when we generate our node, following the standard naming convention of adding the prefix node-red-contrib before the node name. This directory contains all the code we'll need to use, package, and distribute our node, and it's also where we need to make a couple of changes before the Audio Classifier node in this example is ready for use. My favorite editor is VS Code, so this is the point where I would enter the command code node-red-contrib-max-audio-classifier to open the directory in a new window. I'll then see the file structure for this project, which should look a lot (if not exactly) like this:

The standard file structure for nodes created from an OpenAPI Specification.

There are 3 files that we need to be concerned with here: lib.js, node.js, and node.html. I’ll talk about each one individually in the order I made the changes, with a summary of the code at the end of each. For a closer look, you can view the code in full on GitHub (with some added comments to help you follow along) or try out the node yourself by installing node-red-contrib-max-audio-classifier in your Node-RED flow.

1. lib.js

This file defines the methods available to our node. Think of them as the functions our node can use, usually represented as separate routes in the API. You can define special behavior for different method types here, which I need to do to properly build the request object for our POST request. As I mentioned above, the standard JSON-type body that gets created by default isn't going to work for our Audio Classifier model. First, we need to remove the default POST body (near line 53) by deleting the key/value pair named body from the req object, which we can then replace with our own formData object. Next, we need to replace the line reading req.form = form by assigning the multipart/form-data structure we need. Your results may vary for different APIs, but for this example I referred back to the spec document and replaced the line with:

req.formData = { audio: { value: form.audio, options: { filename: 'audio.wav' } } };

You should see the similarities between this new structure we’ve defined and the structure laid out in the OpenAPI spec example from above. What I’ve learned is that when attempting to emulate a file upload, you need to structure the object like this, as an object containing the data itself in value and a placeholder filename in the options object.
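To make the shape of that change concrete, here's a minimal, simplified sketch of what the edited request-building code in lib.js ends up doing. The names form and req follow the generated code, but the surrounding lines are paraphrased, so treat anything beyond the formData assignment as an assumption:

```javascript
// Hypothetical, simplified sketch of the edited code in lib.js. The
// generated JSON body is dropped and replaced with a multipart/form-data
// structure that emulates a file upload.
function buildPredictRequest(form) {
    var req = {
        method: 'POST',
        uri: 'http://localhost:5000/model/predict' // endpoint from the spec
    };
    // The generated code would normally set req.body (a JSON payload) or
    // req.form here; we attach the raw WAV data as form data instead.
    req.formData = {
        audio: {
            value: form.audio,                  // the audio data itself
            options: { filename: 'audio.wav' }  // placeholder filename
        }
    };
    return req;
}
```

When a request object shaped like this is handed to an HTTP client such as the request library, the formData property is what triggers a multipart/form-data upload instead of a JSON body.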

The changes needed to create our multipart/form-data request in lib.js.

After these two changes, we're done with this file, so let's save our progress and move on.

2. node.js

This file defines the actions our node takes when it receives data. We only need to change one thing here: the line assigning a value to parameters.audio, located on or near line 34. Simply replace it with parameters.audio = msg.payload to feed our input data into the formData structure we defined previously.
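In sketch form, the change amounts to something like this. The surrounding handler code is paraphrased from the generated template, so treat everything other than the parameters.audio = msg.payload line as an assumption:

```javascript
// Hypothetical, simplified sketch of the input handling in node.js.
// The incoming message payload (the WAV data arriving from the flow) is
// copied straight into the parameter that lib.js packs into formData.
function buildParameters(msg) {
    var parameters = {};
    parameters.audio = msg.payload; // replaces the generated field lookup
    return parameters;
}
```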

Shown here are the instructions for handling input to our node in node.js.

3. node.html

Last but not least, this file defines the UI for the node's side panel, but it's more than just looks. It also controls the default values and settings for our request, so we need to make a couple of adjustments. I'll start from the top of the file and work my way down. First, on line 7, I'd like to set the method so this node will always use the predict function. This can be done by replacing the line with:

method: { value: 'predict', required: true },

Next, we want to remove the lines (near 53 and 62) that contain $('#predict_audio').show();. There's no need to replace them; just take them out. Our input data will be read from the flow, into our node's input, and won't be modified here. For my final step, I removed the option to select routes in the sidebar. This isn't strictly required, but it can be done easily by adding an id like 'method-select' to the div near line 86, and then adding a matching call to $('#method-select').hide(); near the beginning of the showParameters function.
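Putting the node.html edits together, the defaults section of the node definition ends up looking roughly like this. This is a sketch only: the other generated defaults are elided, and the exact field names are assumptions based on the generated template:

```javascript
// Hypothetical sketch of the defaults inside node.html's script block
// after the edit: the method is pinned to 'predict' so the node always
// calls the model's predict route.
function audioClassifierDefaults() {
    return {
        method: { value: 'predict', required: true }
        // ...the remaining generated defaults are left unchanged
    };
}
```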

node.html contains several locations that need adjustments, which are all contained above.

This covers the basics to get our node up and running for this particular API. There’s lots more we could do, like providing support for different methods or customizing the UI, the icon, etc, but those details can all be dealt with later, if you deem them necessary for your project.

You should now see your new node in the sidebar of your Node-RED flow.

Using Your New Node

Congratulations, you've created a new node for use in Node-RED! We even custom-fitted it for a use case that requires a specific request format. The only thing left to do is import it into our flows and put it to use.

Let’s start by installing the node locally into our instance of Node-RED. First, navigate back to the directory for our node if you’re no longer still there. Then, run the command:

$ sudo npm link

In effect, this makes a local package available globally throughout your system, as if we’d installed with npm install -g. The last step to use it is to go to your Node-RED directory (which by default is ~/.node-red) and install the new node with the command:

$ npm link node-red-contrib-max-audio-classifier

Make sure to replace the package name in this command with whatever you used for your node. Then, start Node-RED with the command node-red and open your browser. You should now see the node we’ve created available in the sidebar! If everything worked as intended, and you think others might also benefit from this new node, publishing to the npm registry is a trivial step from this point. Should you ever want to do this, simply navigate to the directory for your node and run npm publish.

Go Forth and Create!

I hope this article has been informative and has helped you create the node you needed for your flows. There's so much that can be done with this platform, and for me the available customization is a big part of that. The only real limitation is your creativity, and the Node-RED community is full of creative folks who have been helpful to me throughout my learning process. I'd recommend taking a look at their official documentation, but if you're feeling social, I see lots of community members helping each other in the official Slack channel as well. In case, like me, you also happen to be working with models from the Model Asset eXchange, you can join that Slack community here. Happy Hacking!
