Cloud Native Starter

Grant Garland
Nov 1

In this post we will set up and deploy a basic cloud native Node.js application from scratch. Specifically, we will:

  • Create an Express application
  • Add Prometheus metrics
  • Configure a Docker container
  • Package app via Helm
  • Deploy to Kubernetes

Here is a representation of our application’s final architecture:

If you want to skip all the manual setup and simply play around with a cloud native application suite, check out the accompanying NPM module cloud-native-node-starter, a CLI tool which builds the template application described in this post in just a few commands.


Requirements

To follow along you will need Node.js and npm, Docker, the kubectl CLI, Helm, and access to a Kubernetes cluster.

Motivation

As today’s software developers take on more of the responsibilities previously held by operations teams, security engineers, and even system admins, many of the technologies we find ourselves using require domain knowledge that was previously foreign to us (networking, for example, was a topic I never had to think about before). We will explore what I believe to be the key architectural concepts and components a software developer needs to understand in order to begin working within a cloud native environment.

Disclaimer: the code generated here is far from production ready and is intended only to help you get a cloud native application up and running as quickly as possible. I will briefly outline a few of these limitations at the end of the post.

What does “cloud native” mean?

To be “cloud native” simply means being able to exploit the capabilities of the cloud. Seen from another angle, traditional applications have difficulty taking advantage of all that the cloud has to offer without significant modification. So what exactly are these cloud capabilities? Generally speaking, they can be categorized into four groups:

  1. Availability. Cloud native applications can be easily deployed across multiple availability zones, thus ensuring that if your application ever does fail (and it will), users are still able to access it.
  2. Scalability. The key selling point behind many cloud providers is the ability to automatically scale your application to variable demand. Rather than needing to manually spin up and bring down servers in estimation of demand, a cloud native application will increase or decrease instances in response to usage needs.
  3. Observability. How is your application performing? How long does it take users to get what they need? Did your application crash at 3am this morning? Cloud native applications are designed with observability in mind from the start.
  4. Orchestration. What do we do when our application crashes? Must we manually restart it every time it falls into an unresponsive state? Cloud native applications solve this problem for us by monitoring application health and responding accordingly when worrisome states do occur.

While cloud providers have provided excellent tools to help our applications utilize these capabilities, we still need to design our applications in a way that flexibly integrates with them.

Create the Express App

While there are many excellent languages that lend themselves to cloud native development, we will be using the JavaScript runtime Node.js for its performant IO, fast startup, low memory footprint, and extensive cloud tooling support. On top of Node.js, Express is a minimalistic and unopinionated framework ideal for micro-service development.

  1. Create a project directory
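The directory name below (nodeserver) is just an example; use whatever fits your project:

```sh
mkdir nodeserver
cd nodeserver
```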

2. Install Express generator and create application
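Inside the project directory, the generator scaffolds a standard Express app (using npx avoids a global install; the view-engine flag is optional):

```sh
# Scaffold an Express application in the current directory
npx express-generator --view=pug
```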

3. Install dependencies
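Then pull in the generated dependencies and confirm the server starts:

```sh
npm install
npm start   # the app should respond on http://localhost:3000
```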

Add Health Checks and Metrics

By exposing a health (“liveness”) route from within our application, we are able to probe the endpoint to determine how our application is behaving. If the service becomes unresponsive, remediation actions can be prescribed and administered autonomously by Kubernetes. We can quickly add health checks to our application via the CloudNativeJS health-connect package.

  1. Install the dependency
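```sh
npm install @cloudnative/health-connect
```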

2. Import the package and instantiate inside app.js
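Following the package README, we create a HealthChecker instance near the top of app.js:

```javascript
// app.js
const health = require('@cloudnative/health-connect');

// A HealthChecker aggregates the app's liveness/readiness probes
const healthCheck = new health.HealthChecker();
```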

3. Register and expose the health probe endpoint
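Assuming the healthCheck instance created in the previous step, we mount the probe as Express middleware alongside the other routes:

```javascript
// app.js — expose the liveness probe at /health
app.use('/health', health.LivenessEndpoint(healthCheck));
```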

You can now view the health endpoint response by starting the application server and navigating to localhost:3000/health. Feel free to add more health probes (e.g. readiness and startup checks, or shutdown handling) by consulting the package README.

Prometheus

The rise of cloud platform development has challenged the traditional service monitoring model. For example, how would we monitor ephemeral services that may be taken down and re-instantiated at any moment? Prometheus has emerged as the preeminent monitoring tool in the cloud native environment, partly in light of the following design characteristics:

  • Label-based metric tagging. This means that when volatile services pop in and out of existence with different instance hashes, Prometheus will still see them as the same service because its metrics are queried by label.
  • Target auto-discovery and metric pulling: When services do crash and restart, Prometheus will find them again automatically and pull all metrics at a regular, configurable interval.
  • Modular components. The Prometheus server itself can easily be leveraged alongside supporting components such as the Pushgateway for monitoring more difficult-to-track processes (think batch jobs) or the Alertmanager for firing off alerts into your medium of choice.

We will be using Prometheus to not only monitor metrics collected by the application but also metrics defined by the Kubernetes cluster itself.

With the AppMetrics Prometheus package, we can begin exporting metrics from our app in a few steps.

  • Add the appmetrics-prometheus dependency
  • Import the library and attach it to the app server. This creates a singleton inside our application that will collect metrics for the Prometheus server to scrape.
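The two steps above look like this (attach() is the entry point documented in the appmetrics-prometheus README):

```sh
npm install appmetrics-prometheus
```

```javascript
// app.js — instruments the server and exposes a /metrics endpoint
require('appmetrics-prometheus').attach();
```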

While we will not be adding custom metrics in this post, the code we have included has enabled our application to expose low-level metrics such as HTTP response data and CPU utilization via the /metrics endpoint, all in one line of code. You can take a look at these metrics by starting the application and navigating to localhost:3000/metrics.

Docker

While the container revolution has certainly lent itself to faster and more robust development, to take full advantage of containers developers must consider the following design principles:

  • Immutability. Containerized applications should strive to obey image contracts as tightly as possible and externalize configuration code (ideally tracked in source control).
  • Disposability. Since containers can be destroyed at any time, your application should avoid holding on to state longer than necessary. Data that must be persisted should be put inside a container volume while longer running processes should be sent off into their own workers.
  • High observability. Containers should expose APIs for the runtime that enable observation and probing of container health.
  • Ports. You actually need to think about these: which ports are exposed inside the container and which are mapped outside of it. While intuitive to me now, this was a concept that really tripped me up when I first began working with containers.

With this in mind, let’s create and deploy an application Docker image.

  1. Copy a stable Dockerfile into your project
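The CloudNativeJS project maintains more complete Dockerfile templates; a minimal one for this app might look like the following (the base image tag is an assumption — pin whichever Node version you develop against):

```dockerfile
FROM node:10-alpine

WORKDIR /usr/src/app

# Install production dependencies first to leverage Docker layer caching
COPY package*.json ./
RUN npm install --production

COPY . .

EXPOSE 3000
CMD ["npm", "start"]
```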

2. Add a .dockerignore file (like a .gitignore file but for a container).
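A minimal .dockerignore keeps dependencies and repository metadata out of the build context:

```
node_modules
npm-debug.log
.git
```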

3. Build the image. While you can give the image any name you want, it’s a good practice to suffix with version numbers for tracking image updates.
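The image name here is an example; the version suffix follows the convention described above:

```sh
docker build -t nodeserver:1.0.0 .
```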

4. Now we can run the application from inside the container. The following command will expose port 3000 from the container and map it to your local machine port 3001.
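With docker run, the -p flag maps a host port to a container port (host first, container second):

```sh
# Map local port 3001 to container port 3000
docker run -p 3001:3000 nodeserver:1.0.0
```

The app should now be reachable at localhost:3001.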

Helm

While certainly not a necessary addition to your cloud application stack, Helm charts are useful because they bundle all the configuration Kubernetes needs to deploy your Docker images, define replicas, and manage service resources. For the sake of simplicity, we will be using a best-practice chart template maintained by the CloudNativeJS project. Just like the Dockerfile, your Helm chart will live inside your application folder and is used to deploy your containerized app into Kubernetes.

  • Download the Helm chart.
  • Unzip the file. This will not only unpack the chart but also a README and examples that may be useful to explore.
  • For now we are going to move the chart folder into the root of our project. You can then delete the remaining files inside the helm-master folder.
  • Modify the repository field in chart/nodeserver/values.yaml to point to your local Docker image. We also need to set pullPolicy to ‘IfNotPresent’ so that Kubernetes will use the locally built image instead of pulling from Docker Hub. My values.yaml file would look like this:
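An excerpt of what the result might look like, assuming the image was tagged nodeserver:1.0.0 (match whatever name:tag you gave your Docker image):

```yaml
# chart/nodeserver/values.yaml (excerpt)
image:
  repository: nodeserver
  tag: 1.0.0
  pullPolicy: IfNotPresent
replicaCount: 1
```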

Feel free to change the replicaCount field to 3 which would instruct Kubernetes to deploy 3 instances of your container inside the cluster. Now our application has all that it needs for deployment to Kubernetes.

Kubernetes

If you’re like me, you probably hear the word “Kubernetes” a dozen times a day but haven’t actually taken the time to learn it. Fortunately the maintainers have exposed a simple CLI tool, kubectl, that makes interacting with the Kubernetes API straightforward. For the sake of our demo, we need to be acquainted with the following Kubernetes concepts:

  • Pod. Pods are the smallest unit of abstraction in a Kubernetes cluster and usually (although not always) wrap a single Docker container.
  • Service. Services group together sets of pods and define and expose the contract for accessing them. For example, each pod exposes an IP address but that address will change when the pod dies and is reborn. Services map pods dynamically so that you can access them without worrying about this implementation detail.
  • Control Plane. The engine that is constantly monitoring your cluster’s state and administering appropriate actions to bring changed state back into alignment with the one defined in the deployment contract (which in our case is defined inside the Helm chart).

In order to deploy our application to Kubernetes, we will use Helm to define a name for our service and pass the chart which contains the configuration defining the desired cluster state.
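The release name below (nodeserver) is arbitrary; note that Helm 2 takes it via --name, while Helm 3 takes it as the first positional argument:

```sh
# Helm 2
helm install --name nodeserver chart/nodeserver
# Helm 3: helm install nodeserver chart/nodeserver
```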

If successful, this command will have created Pod and Service resources, and our app is now being orchestrated by the Kubernetes Control Plane. But since our application is deployed on some unknown Google server, we will need to forward the service port to our local machine in order to access the application.
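The service name depends on the chart; list the services first and forward the one the chart created (the name below is an example):

```sh
# Find the generated service name, then forward its port locally
kubectl get svc
kubectl port-forward service/nodeserver-service 3000:3000
```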

We can now access the Kubernetes-hosted node server from our browser on port 3000.

Just as we deployed a node server service into our cluster, we will be doing the same for Prometheus and Grafana.
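Using Helm 2 syntax and the community charts from the stable repository (chart names and the password value are examples; adjust to the chart repo you use):

```sh
helm install --name prometheus stable/prometheus
helm install --name grafana stable/grafana --set adminPassword=mysecretpassword
```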

Note the adminPassword argument we passed to the Grafana install command. We will need this to log into Grafana later.

Similar to how we forwarded our node server service port, we can map the Prometheus server into our browser on port 9090 using the following command:
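Assuming the service name generated by the stable/prometheus chart (verify with kubectl get svc):

```sh
kubectl port-forward service/prometheus-server 9090:80
```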

Prometheus should now be collecting (pulling) metrics from any services inside the cluster that have exposed a /metrics endpoint. Now let’s source those metrics into Grafana to help make more sense of the data.

  • Forward the Grafana service to your browser on port 3000
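Assuming the stable/grafana chart’s default service name (again, kubectl get svc will show the actual one); note that if the node server forward from earlier is still running on local port 3000, stop it first or pick a different local port:

```sh
kubectl port-forward service/grafana 3000:80
```

Then log in with the admin password set during the install, and add a datasource of type Prometheus whose URL points at the in-cluster Prometheus service, e.g. http://prometheus-server.default.svc.cluster.local (the exact hostname depends on your Helm release name and namespace).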

You might be wondering, where is this URL coming from? The Prometheus image we installed with Helm earlier exposes this proxy address so that we do not need to populate the actual IP of the Prometheus server (which in the cloud, remember, can fluctuate at any time).

  • Click Save and a notification should let us know that the datasource is connected.

Now we need to add a dashboard that will query Prometheus metrics and display the results. Fortunately for us, we do not have to do this manually as the community has shared many great dashboard templates suited for our needs.

  • Select the + from the side panel and then Import.
  • Enter “1860” in the Grafana.com Dashboard field. This will import the “Node Exporter Full” community dashboard.

After importing, navigate to the dashboard panel and you should see a series of graphs populated with metrics collected from our Prometheus server. Currently these metrics are being exposed by the Kubernetes API, which means we are seeing how our Kubernetes services are performing but not our node application itself. Let’s add a dashboard to see how our application is performing.

  • Click Add panel on the top toolbar.
  • Make sure your datasource is set to Prometheus.
  • Insert the following query into the metrics field. This will average the HTTP response time from our app over a 5 minute period.
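Assuming the default metric names exported by appmetrics-prometheus (check your app’s /metrics output for the exact names), a query along these lines averages HTTP response time over a five-minute window:

```promql
rate(http_request_duration_microseconds_sum[5m]) / rate(http_request_duration_microseconds_count[5m])
```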

And there you have it, we have built a cloud native Node.js micro-service, exposed metrics, packaged it into a Docker image, deployed to Kubernetes with Helm, and added Grafana monitoring. While we could go much deeper into each of the technologies mentioned here, I hope that this post has helped lay the groundwork for your next cloud native application development project.


Limitations

As mentioned earlier, I wanted to list out some production limitations of the current implementation.

  • Prometheus images should use volumes to store metrics, otherwise all data will be lost if the service dies. For production deployment, it is recommended to use the Data Volume Container pattern to ease managing the data on Prometheus upgrades.
  • The Prometheus and Grafana web UIs are exposed freely on the internet. In production, you would want to secure these endpoints.
  • Grafana supports authentication but is not configured for SSL.
