Node-RED solution deployment on GCP

Neil Kolban
Google Cloud - Community
6 min read · Mar 21, 2021

In this story we will assume that you have designed and implemented a Node-RED solution that you wish to put into production hosted on Google Cloud Platform (GCP). Here we will look at some practices that we can employ to make that possible in a secure manner.

When we deploy a Node-RED solution, we are actually deploying two parts: the Node-RED runtime itself and the solution we want to host on that runtime. The technique we will illustrate here is the creation of a custom Docker image that packages together:

  • Node-RED.
  • Any custom nodes that are dependencies of our solution.
  • The solution as a flows.json file. This is the description file for the flow you developed.
  • The configuration settings for our Node-RED instance.

Let us start with the fundamental Node-RED environment. A base Docker image is distributed by the Node-RED project team and can be found on Docker Hub as nodered/node-red. This will be the foundation on which we will build.
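Before building anything custom, it can be reassuring to run the stock image locally and confirm it starts. A minimal sketch, assuming Docker is installed on your workstation and the Node-RED default port 1880 is free:

docker run -it -p 1880:1880 --name test-node-red nodered/node-red

Browsing to http://localhost:1880 should then show the empty Node-RED editor.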

Next, we will look at how to add any required dependencies. When we work with Node-RED, we can add packages that are found in the Node-RED registry. As an example, we will include the GCP nodes found in the node-red-contrib-google-cloud package. To include this in our new Docker image, we will add:

RUN npm install node-red-contrib-google-cloud

This will be inserted into our Dockerfile and will perform an installation of the package using npm. We would repeat this command for each package on which we depend.
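If the solution depends on several packages, they can also be installed in a single RUN instruction to avoid creating one image layer per package. A sketch, where node-red-dashboard stands in for whatever additional dependency your flow happens to use:

RUN npm install node-red-contrib-google-cloud node-red-dashboard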

Our Node-RED flows are described in JSON in a file called flows.json. When the image starts executing, it will look for this file in the /data directory. This means that we should insert the flows.json that we wish to have executed into the image. We can do this using the Dockerfile command:

COPY flows.json /data

The basic Dockerfile becomes:

FROM nodered/node-red
RUN npm install node-red-contrib-google-cloud
COPY flows.json /data

Having a Dockerfile is a great start but we need a way to create the Docker image. We can of course use Docker on our local workstation but since this is a GCP story, we have a better way. First, we will create a repository in GCP Artifact Registry. This is where the image will be stored.
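The repository can be created from the Cloud Console or from the command line. Here is a sketch of the latter, assuming a Docker-format repository named repo1 in the us-central1 region (the same example values used in the rest of this article):

gcloud artifacts repositories create repo1 \
  --repository-format=docker \
  --location=us-central1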

Once the repository has been created, we still need to build the image and add it there. This is where GCP Cloud Build comes into play. Cloud Build is a GCP solution for building applications in the Cloud where we supply the recipe for building and GCP does the rest. The configuration of Cloud Build is done through a file called cloudbuild.yaml. Here is a suitable version:

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY}/${_IMAGE}', '.' ]
images:
- 'us-central1-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY}/${_IMAGE}'

The way to interpret these instructions is that we are going to execute a Cloud Build step that runs Docker. The step is given the content of our current directory, which includes our Dockerfile. It builds an image and tags it with the repository path in Artifact Registry. Cloud Build then pushes that image to the repository and we are done.

The command to submit our request to Cloud Build is:

gcloud builds submit \
  --config=cloudbuild.yaml \
  --substitutions=_REPOSITORY="repo1",_IMAGE="my-node-red" .

The _REPOSITORY value is the name of the Artifact Registry repository we created earlier. The _IMAGE value is the name of the image we will create as output.
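Once the build completes, it is worth confirming that the image actually landed in the repository. A quick check from the command line (a sketch using the same example values, with your project ID in place of PROJECT_ID):

gcloud artifacts docker images list us-central1-docker.pkg.dev/PROJECT_ID/repo1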

The end result of this is the image stored in the repository. What remains is for us to create a Compute Engine instance that runs the image.
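One straightforward way to do this is Compute Engine's ability to run a container image directly on a VM. The following is a sketch rather than the definitive procedure; the instance name node-red-vm and the zone are assumptions, and the image path (with your project ID in place of PROJECT_ID) should match the repository and image names used above:

gcloud compute instances create-with-container node-red-vm \
  --zone=us-central1-a \
  --container-image=us-central1-docker.pkg.dev/PROJECT_ID/repo1/my-node-red

The VM's service account also needs permission to read from Artifact Registry so that the image can be pulled.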

Since the Compute Engine instance knows the image only by its name in the repository, each time the instance starts or is restarted it will pull the most recent image. To deploy a new version of our Node-RED flow, we update flows.json, re-run the Cloud Build step to produce a new image, and restart the instance.

There are a few other items we need to cover. The core operation of Node-RED is described in a file called settings.js. When we build our own custom Node-RED image, we will likely want to supply a modified version: the out-of-the-box values are great for getting up and running, but they are wide open from a security perspective.

To supply our own version, we assume you have a local copy of the file that has been customized. We would then add the following into our Dockerfile:

COPY settings.js /data

The settings.js file is a JavaScript file rich in comments. Here we will look at some of the key settings that we should change immediately. The first is the ability to attach to Node-RED and perform development tasks. In our current story, there should be no need to do that at all: we have built and tested our flows in a separate Node-RED environment and are now concentrating exclusively on running Node-RED in production. The ability to log in directly and perform development should therefore be disabled. Consider the negative ramifications if we left it enabled and a bad actor managed to log in and inject their own code into the environment.

To disable the editor, set the following in settings.js:

httpAdminRoot: false,

In a deployment, if we then try to access the editor after we have disabled it, we will get a message of the form:

Cannot GET /

If we look at the Docker logs, we will also see a message from Node-RED:

21 Mar 14:50:50 - [info] Admin UI disabled

If for some reason we do wish to allow editing of the flows, we should define a username/password pair to provide authentication and authorization. The password is added to our settings.js file and is stored in a hashed format. This means that if someone were to get their hands on the file, they would still not know the password. The hashed password is created using:

node-red-admin hash-pw
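The node-red-admin tool is distributed as its own npm package; if it is not already on your workstation, it can be installed globally first (assuming Node.js and npm are available):

npm install -g node-red-admin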

Running node-red-admin hash-pw prompts us for a password and returns the hash value for that password. Once we have a hashed password, we can add the following to our settings.js:

adminAuth: {
    type: "credentials",
    users: [{
        username: "admin",
        password: "$2b$08$awfc2.vPIIPniD/DfmgcFePfFGR9goVfOMXXcuyQnNG2xGXkBoX3O",
        permissions: "*"
    }]
},

In the example, the password shown is the hashed value. Once we have made these changes, any attempt to log in to the editor will result in a prompt for the username and password.

One last area related to security is traffic encryption. By default, Node-RED is reached over HTTP. This is distinct from the more secure HTTPS, which uses TLS for transport encryption. We can set up Node-RED to use HTTPS, and to do so we need TLS certificates. During development, you can use self-signed certificates.

We can set up SSL support by going to the /data directory and running:

openssl req -x509 -newkey rsa:4096 -keyout privkey.pem -out cert.pem -days 365 -nodes -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=www.example.com"

This will generate a privkey.pem and a cert.pem. These should be copied into the image. Make sure that the files are set to have an owner and group of “node-red”:

COPY --chown=node-red:node-red privkey.pem /data
COPY --chown=node-red:node-red cert.pem /data

We can then edit settings.js to include:

https: {
    key: require("fs").readFileSync('/data/privkey.pem'),
    cert: require("fs").readFileSync('/data/cert.pem')
},
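Beyond serving over HTTPS, settings.js also offers a requireHttps option that redirects any plain HTTP requests to HTTPS. A one-line sketch, to be added alongside the https block above:

requireHttps: true,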

The final Dockerfile for our configuration becomes:

FROM nodered/node-red
RUN npm install node-red-contrib-google-cloud
COPY --chown=node-red:node-red flows.json /data
COPY --chown=node-red:node-red settings.js /data
COPY --chown=node-red:node-red privkey.pem /data
COPY --chown=node-red:node-red cert.pem /data

If a deployment isn't working correctly, we may have to look at the logs of the container. The easiest way to do this is to SSH into the Compute Engine instance and run:

docker ps

From here we will be able to find the container ID and then we can run:

docker logs <ContainerID>
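If the problem is still happening, the -f flag lets us follow the log output live:

docker logs -f <ContainerID>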

And finally … here is a video walking through each of the steps described in this article. The video is designed to accompany this article so please review the article first before watching the video.

