Developing a NodeJS Microservice for an OpenShift Cluster — Part I
Abstract
This is the first article in a series that describes the development and deployment of a NodeJS microservice for an OpenShift cluster. For the sake of simplicity, this series uses minishift as the OpenShift (v3.11) cluster, running on a Mac laptop with 8 GB RAM. Further, this article focuses more on OpenShift concepts than on production-quality NodeJS code. After finishing this series of articles, the reader should be able to invoke the APIs hosted in containers.
Here is an index of the articles:
- Part I — Developing NodeJS microservice for an OpenShift Cluster — Part I (this article) — Project description
- Part II — Developing NodeJS microservice for an OpenShift Cluster — Part II — OpenShift artifacts creation with CLI and NodeJS considerations
- Part III — Developing NodeJS microservice for an OpenShift Cluster — Part III — OpenShift artifacts creation with web console and NodeJS considerations
Introduction
This series considers a set of APIs to maintain a list of customers. Given a customer, the APIs will add the customer, read and return the customer's details, edit the customer's details, and remove the customer. These APIs are hosted on an OpenShift cluster, with the data held in an external Redis instance. This article describes the OpenShift artifacts needed to implement the APIs.
Here is the description of the APIs: OpenShift Demo — APIs
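To make the four operations concrete, here is a minimal sketch of the add/read/edit/remove logic, using an in-memory Map as a stand-in for Redis (customer ID as key, details as value). The function names and the details object are illustrative assumptions, not taken from the project; the HTTP routing layer is omitted.

```javascript
// In-memory stand-in for Redis: customer ID -> customer details.
const store = new Map();

// Add a customer; reject duplicates.
function addCustomer(id, details) {
  if (store.has(id)) return false;
  store.set(id, details);
  return true;
}

// Read and return the customer's details, or null if unknown.
function getCustomer(id) {
  return store.get(id) || null;
}

// Edit an existing customer's details.
function editCustomer(id, details) {
  if (!store.has(id)) return false;
  store.set(id, details);
  return true;
}

// Remove the customer; returns true if the customer existed.
function removeCustomer(id) {
  return store.delete(id);
}
```

With Redis in place, each function would map onto a single Redis command (SET with NX, GET, SET, DEL), which is why the data model in this series stays deliberately simple.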
Long story short
If you have a working minishift installation and a running Redis instance outside of the minishift cluster, then use the following steps to launch the application.
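As a sketch of those steps, the sequence below uses standard oc commands; the project, application and repository names are assumptions to be replaced with your own, and the commands require a running minishift cluster.

```shell
# Hypothetical names throughout -- substitute your own project, app and repo.
oc new-project customer-api

# Push the configuration file as a ConfigMap
oc create configmap api-server-config --from-file=api-server-config.json

# Build and deploy from a Git repository using the NodeJS S2I builder
oc new-app nodejs~https://github.com/your-org/your-api-repo.git --name=api-server

# Expose the Service outside the cluster via a Route
oc expose service api-server
```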
To test, use the Postman scripts from here: https://github.com/nsubrahm/openshift-demo-postman
Design
The design considerations belong to two broad categories:
- OpenShift
- NodeJS application
Let’s start with the easier one first — NodeJS application.
NodeJS Application Design
- For the sake of simplicity, this article considers Redis as our data store, such that the customer ID will be the key and the details of the customer will be the value of that key.
- The APIs will be exposed over port number 8080.
Therefore, the application will need to know the IP address and port of the Redis data store and the port number of the API end-point itself. This information will be passed via a configuration file (JSON) when starting up the APIs end-point. This configuration file will be eventually used for setting up environment variables that the pod will use when the end-point is started up.
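A sketch of how the API server might read those settings at start-up is shown below. The environment variable names are assumptions; in the pod they would be injected by the DeploymentConfig from the ConfigMap, with the defaults applying only when running locally.

```javascript
// Read settings from environment variables, with local-development defaults.
// Variable names are illustrative, not mandated by OpenShift.
const config = {
  apiPort: parseInt(process.env.API_PORT || '8080', 10),
  redisHost: process.env.REDIS_HOST || 'localhost', // in-cluster: the Redis Service name
  redisPort: parseInt(process.env.REDIS_PORT || '6379', 10)
};
```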
Here is the configuration file, api-server-config.json. The authorization with Redis is left out.
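A minimal shape for such a file might be the following; the key name is an assumption, and only the API port appears, since the Redis connection details are deliberately absent (see the next question):

```json
{
  "apiPort": 8080
}
```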
Why is the host and port number for Redis missing in the configuration file above? Read the section on Redis — Container or Server below.
OpenShift cluster
The design of the cluster with respect to the number of physical machines, high availability, separate machines for etcd, separate machines for the internal registry, etc. is out of the scope of this article. As mentioned earlier, this article uses minishift and assumes that the reader has a working installation of minishift with the oc client installed. Nevertheless, some options for setting up a cluster are listed below:
Mini installations:
- v3.11 — minishift — RedHat Container Development Kit
- v4.2 — Code Ready Containers — RedHat OpenShift 4 on your laptop
Full installations:
- Installing a Highly-Available OpenShift Cluster
- OpenShift v3.11 installation
- OpenShift v4.2 installation
On cloud:
- Managed OpenShift — OpenShift Online
Redis — Container or Server
The world of DevOps identifies servers as cattle or pets. Here is an elevator pitch on cattle versus pets, copied from this blog:
In the old way of doing things, we treat our servers like pets, for example Bob the mail server. If Bob goes down, it’s all hands on deck. The CEO can’t get his email and it’s the end of the world. In the new way, servers are numbered, like cattle in a herd. For example, www001 to www100. When one server goes down, it’s taken out back, shot, and replaced on the line.
The blog goes on to define pets and cattle as follows (abridged):
Pets
Servers or server pairs that are treated as indispensable or unique systems that can never be down. Examples include mainframes, solitary servers, HA loadbalancers/firewalls (active/active or active/passive), database systems designed as master/slave (active/passive), and so on.
Cattle
Arrays of more than two servers, that are built using automated tools, and are designed for failure, where no one, two, or even three servers are irreplaceable. Examples include web server arrays, multi-master datastores such as Cassandra clusters, multiple racks of gear put together in clusters, and just about anything that is load-balanced and multi-master.
I am personally of the opinion that compute servers should be classified as cattle, i.e. containers, whereas storage (including databases, caches and even messaging engines, e.g. Apache Kafka) should be classified as pets, i.e. standalone servers.
Therefore, this article assumes a running Redis server outside of the OpenShift cluster, either on the same host or on an external machine, even though OpenShift supports both approaches for Redis: container and server.
Therefore, the overall deployment looks like this:

OpenShift artifacts
Before going further, browsing the Core Concepts of an OpenShift cluster might be helpful. To keep this article simple, the following list of artifacts assumes the 'path of least resistance', i.e. the absolute bare minimum number and types of artifacts needed to get the API end-point up and running.
- Project
- ConfigMaps
- Service and Endpoint
- Route
- DeploymentConfig
- The Project is a namespace for all the other OpenShift artifacts listed above.
- The ConfigMaps will help us push the configuration data, as shown in api-server-config.json above, to the pods that will run the API end-point.
- The Service and Endpoint together will enable communication with the Redis data store.
- The Route will expose the Service for the API end-point so that external systems can connect to the end-point. The Service object is visible within the cluster only. Note that this Service is different from the one for Redis; this Service exposes the API end-point.
- The DeploymentConfig will reference the ConfigMap so that the environment variables are correctly injected into the running pods.
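For the Redis Service and Endpoint pair, the standard pattern is a Service without a pod selector, paired with a manually created Endpoints object that points at the external server. A sketch is shown below; the IP address is an assumption standing in for the host where Redis actually runs.

```yaml
# A Service with no selector: OpenShift will not manage its endpoints.
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
  - port: 6379
---
# A manually maintained Endpoints object; the name must match the Service.
apiVersion: v1
kind: Endpoints
metadata:
  name: redis
subsets:
- addresses:
  - ip: 192.168.99.1   # assumption: the external host running Redis
  ports:
  - port: 6379
```

With this pair in place, pods inside the cluster can reach the external Redis simply by connecting to the hostname redis on port 6379.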
Conclusion
This part described the various OpenShift artifacts that need to be developed for this project. It also hinted at the changes that are required in the NodeJS application. The Part II article will describe how to put the artifacts and the NodeJS application together for a complete project.
