Creating Liveness Probes for your Node.js application in Kubernetes

Kirill Goltsman
Supergiant.io
Feb 1, 2019

Kubernetes is extremely powerful at tracking the health of running applications and intervening when something goes wrong. In this tutorial, we teach you how to create liveness probes that test the health and availability of your applications. Liveness probes can catch situations in which an application is no longer responding or unable to make progress, and restart the offending container. We address the case of HTTP liveness probes, which send a request to the application’s back end (e.g., some server) and decide whether the application is healthy based on the response. We’ll show examples of both successful and failed liveness probes. Let’s get started!

Benefits of Liveness Probes

Normally, when Kubernetes notices that your application has crashed, the kubelet will simply restart it. However, there are situations in which the application has crashed or deadlocked without actually terminating. That’s exactly where liveness probes can help! With a few lines in your pod or deployment spec, liveness probes can turn your Kubernetes application into a self-healing organism, providing:

  • zero downtime deployments
  • simple and efficient health monitoring implemented in any way you prefer
  • identification of potential bugs and deficiencies in your application

Now, we are going to show these benefits in action by walking you through examples of a successful and a failed liveness probe.

Tutorial

In this tutorial, we create a liveness probe for a simple Node.js server. The liveness probe will send HTTP requests to specific server routes, and the server’s responses will tell Kubernetes whether the probe has passed or failed.

Prerequisites

To complete the examples in this tutorial, you’ll need:

  • a running Kubernetes cluster. See Supergiant docs for more information about deploying a Kubernetes cluster with Supergiant. As an alternative, you can install a single-node Kubernetes cluster on a local system using Minikube.
  • the kubectl command line tool, installed and configured to communicate with the cluster. See how to install kubectl here.

Step 1: Creating a Node.js App Prepared for Liveness Probes

To implement a working liveness probe, we designed a containerized application capable of responding to it. For this tutorial, we containerized a simple Node.js web server with two routes configured to process requests from liveness probes. The application was containerized with Docker and pushed to a public Docker repository. The code that implements the basic server functionality and routing is located in the server.js file:

'use strict';

const express = require('express');

// Constants
const PORT = 8080;
const HOST = '0.0.0.0';

// App
const app = express();

app.get('/', (req, res) => {
  res.send('Hello world');
});

app.get('/health-check', (req, res) => {
  res.send('Health check passed');
});

app.get('/bad-health', (req, res) => {
  res.status(500).send('Health check did not pass');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);

In this file, we’ve configured three server routes that respond to client GET requests. The first one serves requests to the server’s web root path / and sends a basic greeting:

app.get('/', (req, res) => {
res.send('Hello world');
});

The second route, /health-check, returns a 200 HTTP success status, telling the liveness probe that our application is healthy and running. By default, any HTTP status code greater than or equal to 200 and less than 400 indicates success; any other status code indicates failure.

app.get('/health-check', (req, res) => {
  res.send('Health check passed');
});

Finally, if a liveness probe accesses the third route, /bad-health, the server responds with a 500 status code, telling the kubelet that the application has crashed or deadlocked.

app.get('/bad-health', (req, res) => {
  res.status(500).send('Health check did not pass');
});

This application is just a simple example to illustrate how you can configure your server to respond to liveness probes. All you need to implement HTTP liveness probes is to allocate a path in your application and expose your server’s port to Kubernetes. It’s as simple as that!
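For reference, here’s roughly how the server above can be containerized. This Dockerfile is a minimal sketch, not the exact file behind the supergiantkir/k8s-liveliness image, and it assumes a package.json that lists express as a dependency:

FROM node:8

# Create the app directory
WORKDIR /usr/src/app

# Install dependencies (assumes a package.json listing express)
COPY package*.json ./
RUN npm install

# Bundle the app source
COPY server.js .

# The server listens on port 8080
EXPOSE 8080

CMD ["node", "server.js"]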

Step 2: Configuring your Pod to use Liveness Probes

Let’s create a pod spec defining a liveness probe for our Node.js application:

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: supergiantkir/k8s-liveliness
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /health-check
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 2

Let’s discuss key fields of this spec related to liveness probes:

  • spec.containers.livenessProbe.httpGet.path — the path on the HTTP server that processes the liveness probe. Note: by default, spec.containers.livenessProbe.httpGet.host is set to the pod’s IP. Since we access our application from within the cluster, we don’t need to specify an external host.
  • spec.containers.livenessProbe.httpGet.port — the name or number of the port on which to access the HTTP server. A port number must be in the range of 1 to 65535; a named port works as well, as shown in the fragment after this list.
  • spec.containers.livenessProbe.initialDelaySeconds — the number of seconds after the container has started before the liveness probe is initiated.
  • spec.containers.livenessProbe.periodSeconds — how often (in seconds) to perform the liveness probe. The default value is 10 and the minimum value is 1.
  • spec.containers.livenessProbe.failureThreshold — the number of consecutive probe failures after which the kubelet gives up and restarts the container. The default value is 3 and the minimum value is 1.
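As a side note, a probe can reference a container port by name instead of by number. Here’s a minimal fragment illustrating this (the port name http is our own choice for this example):

ports:
- name: http
  containerPort: 8080
livenessProbe:
  httpGet:
    path: /health-check
    port: http
  initialDelaySeconds: 3
  periodSeconds: 3
  failureThreshold: 2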

Let’s save this spec in liveness.yaml and create the pod running the following command:

kubectl create -f liveness.yaml
pod "liveness-http" created

As you see, we defined /health-check as the server path for our liveness probe. Our Node.js server always returns a 200 success status code on this route, so the liveness probe will always succeed and the pod will continue running.
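You can confirm this by checking the pod’s restart count; the exact output will vary, but RESTARTS should stay at 0:

kubectl get pod liveness-http
NAME            READY     STATUS    RESTARTS   AGE
liveness-http   1/1       Running   0          1m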

Let’s get a shell to our application container to see responses sent by the server:

kubectl exec -it liveness-http -- /bin/bash

When inside the container, install cURL to send GET requests to the server:

apt-get update
apt-get install curl

Now, we can try to access the server to check a response from the /health-check route. Don’t forget that the server is listening on port 8080:

curl localhost:8080/health-check
Health check passed

If the liveness probe passes (as in this example), the pod will continue running without any errors or restarts. But what happens when the liveness probe fails?

To illustrate that, let’s change the server path in the livenessProbe.httpGet.path field to /bad-health. First, exit the container shell by typing exit, and then change the path in liveness.yaml so that the livenessProbe section looks like this:
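livenessProbe:
  httpGet:
    path: /bad-health
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3
  failureThreshold: 2

Once the change is made, delete the pod: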

kubectl delete pod liveness-http
pod "liveness-http" deleted

Then, let’s create the pod one more time.

kubectl create -f liveness.yaml
pod "liveness-http" created

Now, our liveness probe will be sending requests to the /bad-health path, which returns a 500 HTTP error. This error will make the kubelet restart the pod. Since our liveness probe always fails, the container will be killed and recreated over and over and will never stay running. Let’s verify that the liveness probe actually fails:

kubectl describe pod liveness-http

Check pod events at the end of the pod description:

Events:
  Type     Reason                 Age                From               Message
  ----     ------                 ----               ----               -------
  Normal   Scheduled              1m                 default-scheduler  Successfully assigned liveness-http to minikube
  Normal   SuccessfulMountVolume  1m                 kubelet, minikube  MountVolume.SetUp succeeded for volume "default-token-9wdtd"
  Normal   Started                1m (x3 over 1m)    kubelet, minikube  Started container
  Warning  Unhealthy              57s (x4 over 1m)   kubelet, minikube  Liveness probe failed: HTTP probe failed with statuscode: 500
  Normal   Killing                57s (x3 over 1m)   kubelet, minikube  Killing container with id docker://liveness:Container failed liveness probe.. Container will be killed and recreated.
  Warning  BackOff                56s (x2 over 57s)  kubelet, minikube  Back-off restarting failed container
  Normal   Pulling                42s (x4 over 1m)   kubelet, minikube  pulling image "supergiantkir/k8s-liveliness"
  Normal   Pulled                 40s (x4 over 1m)   kubelet, minikube  Successfully pulled image "supergiantkir/k8s-liveliness"
  Normal   Created                40s (x4 over 1m)   kubelet, minikube  Created container

First, as you might have noticed, the liveness probe was first run after the three-second delay specified in spec.containers.livenessProbe.initialDelaySeconds. Afterward, the probe failed with a 500 status code, which triggered killing and recreating the container.
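If you check the pod a bit later, you’ll see its restart count growing, and after several failed restarts the pod typically lands in the CrashLoopBackOff state. The output will look something like this:

kubectl get pod liveness-http
NAME            READY     STATUS             RESTARTS   AGE
liveness-http   0/1       CrashLoopBackOff   5          4m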

That’s it! Now you know how to create liveness probes to check the health of your Kubernetes applications.

Note: In this tutorial, we used two server routes that always return either a success or an error status code. This is enough to illustrate how liveness probes work; in production, however, you’ll want a single route that evaluates the actual health of your application and sends either a success or a failure response back to the kubelet, as in the sketch below.
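Here’s a minimal sketch of what such a route could look like, assuming a hypothetical db module with a ping() helper that verifies a real dependency (both the module and the check are illustrative, not part of this tutorial’s app):

const db = require('./db'); // hypothetical module exposing a ping() health helper

app.get('/health-check', async (req, res) => {
  try {
    // Verify a real dependency, e.g., that the database connection is alive
    await db.ping();
    res.send('Health check passed');
  } catch (err) {
    // Any status code of 400 or above makes the kubelet treat the probe as failed
    res.status(500).send('Health check did not pass');
  }
});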

Step 3: Cleaning Up

Our tutorial is over, so let’s clean up after ourselves.

1. Delete the pod:

kubectl delete pod liveness-http
pod "liveness-http" deleted

2. Delete the liveness.yaml file from the directory where you saved it.

Conclusion

As you saw, liveness probes are extremely powerful for keeping your applications healthy and minimizing their downtime. In the next tutorial, we’ll learn about readiness probes, another important health check procedure in Kubernetes. The kubelet uses them to decide when a container is ready to start accepting traffic. Stay tuned for our blog updates to find out more!

Originally published at supergiant.io.
