Deploy a Next.js App to Kubernetes

Varun Chauhan · Published in NE Digital · Aug 31, 2020 · 8 min read

Learn to deploy a service on Kubernetes like a pro using Docker, Bitbucket Pipelines and Google Cloud Platform

Picture credit: https://kubernetes.io/

At NE Digital we look for scalable solutions because our traffic is dynamic, which is why we opted for Kubernetes. With the COVID-19 pandemic this year, we appreciate that decision even more: demand on our platform, fairprice.com.sg, has surged to 3X what we could have estimated. Let’s deep dive into the world of Kubernetes.

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.

In this article, we will focus on deploying a Next.js application as a service on Kubernetes using a Bitbucket pipeline. For better understanding, we have divided this article into four key sections.

  1. Initial pipeline setup.
  2. Build a docker image.
  3. Upload image to docker registry.
  4. Deploy the application to Kubernetes.

Initial Pipeline Setup

Let’s start by creating a simple Bitbucket pipeline to install node modules and build the package; a minimal pipeline file is sketched after the list below.

Each Bitbucket pipeline step runs inside a Docker image, so we need to specify the image associated with it. There are two ways to define a pipeline image.

  1. Globally, we can define an image at the top of the file and all steps will use it: image: node:12.18.3
  2. We can define a Docker image at the individual step level; this is discussed in detail further in this article.
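
For a standard Next.js project, the first version of bitbucket-pipelines.yml might look like the minimal sketch below (it assumes the usual npm install and npm run build scripts in package.json):

image: node:12.18.3

pipelines:
  default:
    - step:
        name: Install and Build
        script:
          - npm install
          - npm run build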
[Screenshots: Bitbucket pipeline view, individual pipeline view, live logs in the pipeline, and the build tear-down step]

Build a docker image

Let’s first create a Dockerfile for our project.

This Dockerfile has the following key components.

FROM: This instruction specifies the base image on top of which your image will be built.

FROM node:12.18.3-alpine

ARG, ENV: ARG declares a build-time variable, and ENV sets it as an environment variable inside the image.

ARG NODE_ENV="production"
ENV NODE_ENV=${NODE_ENV}

WORKDIR: The WORKDIR command is used to define the working directory of a Docker container at a given point in time. Any Docker command like RUN, CMD, ADD, COPY, or ENTRYPOINT will be executed in the specified working directory.

WORKDIR /app

COPY: The COPY command copies files into the Docker image at build time.

COPY . .

EXPOSE: It declares the port on which the container listens, so the outside world knows where to connect.

EXPOSE 3000

CMD: It specifies the default command to run when the container starts.

CMD ["npm", "run", "start"]

So our next step is to build the Docker image using the Dockerfile we created above, which can easily be done by updating the Bitbucket pipeline.

For our initial setup we used node:12.18.3-alpine, where “12.18.3” is the Node.js version and Alpine is the base image; this image only supports the basic commands required to install, build and run a Node.js server.

Our current requirement is to build a Docker image, and for that we will use the gcloud image “google/cloud-sdk:271.0.0”.
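
A sketch of this build step (the step name and image tag here are illustrative; the tag will be refined in the next section):

    - step:
        name: Package
        image: google/cloud-sdk:271.0.0
        services:
          - docker
        script:
          - docker build -t nextjs-app:$BITBUCKET_COMMIT .

BITBUCKET_COMMIT is a default variable provided by the pipeline, so every build gets a unique tag.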

Key Points: A couple of things to note about the pipeline:

  1. When the Bitbucket pipeline starts, it clones the underlying code from the repository, so we have the codebase for the specific branch.
  2. Artifacts let you choose files or folders that you want to carry forward to the next steps in the pipeline.

This is how the pipeline looks now.

Package step in the pipeline

Bonus Point: To improve pipeline performance, Bitbucket has a feature to cache content that doesn’t change frequently, such as node modules and Docker images. Once a cache is declared, the pipeline will try to restore that content first, as shown below.
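
For example, the install step could reuse Bitbucket’s predefined node cache (the Docker build step can declare the predefined docker cache in the same way):

    - step:
        name: Install and Build
        caches:
          - node          # predefined cache for node_modules
        script:
          - npm install
          - npm run build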

Upload image to docker registry

Excited so far? 😎 We have built a Docker image in the Bitbucket pipeline. To deploy this image and start our service in Kubernetes, we need to push it to a registry. We will use GCR (Google Container Registry), but you can use any other registry such as Azure, AWS or Docker Hub.

To push the image to GCR, we need a service account with access to it. We cannot keep this service-account file in the repository for security reasons, so we will use another cool feature of Bitbucket Pipelines called repository variables. We have defined all the variables and the service account with the secured flag.

Repository variable view

This is our updated pipeline file.

We will use COMMIT_ID to uniquely identify each Docker image, whereas CONTAINER_REGISTRY is the registry of your choice; for me it is GCR with the location as Southeast Asia, so its value is asia.gcr.io. PROJECT_ID is the Google Cloud project under which your registry and Kubernetes cluster are created, and APP_NAME is the app that will be deployed to Kubernetes.

We have created an artifact, image-tag.txt, to pass the image name to a future pipeline step, where we will need it to deploy our Docker image to Kubernetes.
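
Putting the variables, service account and artifact together, the updated Package step might look roughly like this sketch (GCLOUD_API_KEYFILE is an assumed name for a secured repository variable holding the base64-encoded service-account key; build, tag and push are kept in one step so the image does not need to be carried between steps):

    - step:
        name: Package
        image: google/cloud-sdk:271.0.0
        services:
          - docker
        caches:
          - docker
        script:
          # Decode the secured variable into a key file and activate the service account
          - echo $GCLOUD_API_KEYFILE | base64 -d > ./gcloud-api-key.json
          - gcloud auth activate-service-account --key-file gcloud-api-key.json
          # Allow Docker to push to GCR using the gcloud credentials
          - gcloud auth configure-docker --quiet
          # Build, tag and push the image, using the commit hash as a unique tag
          - export COMMIT_ID=$BITBUCKET_COMMIT
          - export IMAGE=$CONTAINER_REGISTRY/$PROJECT_ID/$APP_NAME:$COMMIT_ID
          - docker build -t $IMAGE .
          - docker push $IMAGE
          # Record the full image name for the deploy step
          - echo $IMAGE > image-tag.txt
        artifacts:
          - image-tag.txt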

That’s how our pipeline looks after adding these changes.

Docker image pushed to the registry

We got our first Docker image uploaded to the registry. 👻👻

Image in the docker registry

Deploy the application to Kubernetes

In the final section of this article, we will deploy our service to Kubernetes. If you already have a service created in Kubernetes, you can skip the setup part and move to the deployment part.

Setup: To create a service, you can either get in touch with your DevOps team or follow the steps below.

Create a YAML file which contains the service and deployment config, as described below.

A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. A Service is defined using YAML (preferred) or JSON, like all Kubernetes objects.

Part of this YAML file is the service config, and the other part is the deployment config that describes the pod.

In the service config, we have a ports section which has the following key aspects.

  1. Port: The incoming port number on which the service accepts requests and forwards them to the respective pod.
  2. Protocol: The network protocol used to communicate with the service. We are using TCP for our service.
  3. Target Port: The port number on which your application is running inside the pod.

In the container config section, we have the following key attributes (a complete manifest is sketched after this list).

  1. Image: The path to the Docker image in the registry.
  2. Liveness Probe: Kubernetes uses liveness probes to know when to restart a container. To do this, Kubernetes hits the path defined under httpGet on the port number mentioned. We can add a delay in seconds to let our application start using the initialDelaySeconds attribute. If the response is delayed, it will wait for the time defined under timeoutSeconds.
  3. Readiness Probe: Kubernetes uses readiness probes to decide when the container is available for accepting traffic.
  4. Ports: It contains the port number on which your application will accept traffic.
  5. Resources: It contains the CPU and memory requests and limits for the pod.
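
A minimal kubernetes.yaml combining the pieces described above might look like this sketch (names, probe paths, replica count and resource values are illustrative; adjust them to your application and registry):

apiVersion: v1
kind: Service
metadata:
  name: nextjs-app
spec:
  selector:
    app: nextjs-app
  ports:
    - port: 80          # incoming port on the service
      protocol: TCP     # protocol used to communicate with the service
      targetPort: 3000  # port on which the application runs inside the pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextjs-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nextjs-app
  template:
    metadata:
      labels:
        app: nextjs-app
    spec:
      containers:
        - name: nextjs-app
          image: asia.gcr.io/your-project-id/nextjs-app:initial   # path to the image in the registry
          ports:
            - containerPort: 3000                                 # port accepting traffic
          livenessProbe:
            httpGet:
              path: /
              port: 3000
            initialDelaySeconds: 15
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /
              port: 3000
            initialDelaySeconds: 15
            timeoutSeconds: 5
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi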

Our YAML file is ready; let's create our service in Kubernetes. We pass the YAML file to the kubectl create command, and the service will be created as per the configuration in the file.

kubectl create -f kubernetes.yaml

To check the status of the pods, we can use the get pods command.

kubectl get pods | grep 'nextjs-app'

To check the logs within the pod, use the kubectl logs command.

kubectl logs nextjs-app-6575f8c774-27lvd

To verify our changes, we can port-forward to the respective pod of our service in the cluster.

kubectl port-forward nextjs-app-6575f8c774-27lvd 5000:3000

In the above command for port forwarding, port 5000 is the local port number and 3000 is the server-side port. We can change the first port number as per our convenience.

Port forwarding

This is how our application looks when we access it locally after port forwarding.

Our App view

Deployment: In the section above, we created our service manually. Now we will add another step to our pipeline to deploy our changes to Kubernetes. For this step, we need the image tag name for the set image command, and the service account to get access to the Kubernetes cluster.

We have to activate the service account first to access the cluster.

gcloud auth activate-service-account --key-file {service_acc_file}

Once the service account is activated we can access the cluster.

gcloud container clusters get-credentials cluster-1 --zone southamerica-east1-c --project lyrical-catfish-287707

We’ve got access! Let’s deploy our service with the set image command, where the tag name is extracted from the image-tag.txt file and --record=true creates an entry in the Kubernetes rollout history.

kubectl set image deploy/nextjs-app nextjs-app=$IMAGE --record=true

To check the deployment status, we look at the rollout status of our deployment.

kubectl rollout status deploy/nextjs-app

To check the history of the deployments we have done, we use the rollout history command.

kubectl rollout history deployment/nextjs-app

This is how the rollout history looks; it contains a list of all the set image commands executed against a deployment.

Rollout history

Let’s look at our pipeline after these changes.
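
Combining the commands above, the final deploy step might look like this sketch (GCLOUD_API_KEYFILE is the same assumed secured variable as before; the cluster, zone and project come from the get-credentials command shown earlier):

    - step:
        name: Deploy to Kubernetes
        image: google/cloud-sdk:271.0.0
        script:
          # Authenticate and point kubectl at the cluster
          - echo $GCLOUD_API_KEYFILE | base64 -d > ./gcloud-api-key.json
          - gcloud auth activate-service-account --key-file gcloud-api-key.json
          - gcloud container clusters get-credentials cluster-1 --zone southamerica-east1-c --project lyrical-catfish-287707
          # Read the image name produced by the Package step and roll it out
          - export IMAGE=$(cat image-tag.txt)
          - kubectl set image deploy/nextjs-app nextjs-app=$IMAGE --record=true
          - kubectl rollout status deploy/nextjs-app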

Final pipeline overview
App rollout status

Summary

To deploy a Next.js application, we need to build a Docker image, push it to a Docker registry, create a service in Kubernetes, and set or update the image, either manually or via the Bitbucket pipeline, to deploy the latest version of our Next.js application. Each change to the Bitbucket pipeline file can be validated using the Bitbucket pipeline validator.

This article gives a brief overview of deploying your application to a Kubernetes cluster on Google Cloud using a Bitbucket pipeline. For more details, refer to the official Kubernetes, Docker and Bitbucket Pipelines documentation.

Thank you for reading!
