Building your very own Service Mesh

2018 — Microservices (MS) took centre stage in backend architectures and API infrastructure.

2019 — Service mesh layers its additional benefits on top of MS, making them more powerful than ever.


What’s a Service Mesh?

A Service Mesh is a configurable infrastructure layer on top of your MS architecture that handles communication between your microservices. The mesh provides load balancing, service discovery, canary deployments and much more without adding extra code at the service level. To learn more about Service Mesh, have a look at nginx.com’s blog on Service Mesh, which explains almost everything you need to know to get started.

What would you need?

  • NodeJS — To build a very basic MS application (you can use whatever language you favour, as we’ll be containerizing it anyway 😎).
  • docker — Docker to containerize microservices.
  • kubectl — The command line interface for running Kubernetes commands.
  • istioctl — We’ll be using something called Istio. Istio is an open-source service mesh, and to use it we’ll need istioctl, the command line interface for Istio.
  • Google Cloud Platform (GCP) — The deployments are going to happen on GCP (though you can use any other platform or even do it locally!).

Let’s build!

  • NodeJS MS

Build a simple ping-pong/health-check MS using whatever framework you like. This post is not focused on the functionality of the MS, so use whatever language you are comfortable with. For your convenience, here is the sample I used, built with express in Node.

The sample exposes two routes: /ping, which replies with “Pong!”, and /healthCheck.

  • Docker

Before proceeding further, make sure you have Docker installed and running. Dockerize the MS by writing a Dockerfile. For the above code you can use the following Dockerfile; make sure you place it in the root folder of your MS.
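The Dockerfile embed didn’t survive; a minimal sketch for a Node service like this one might look as follows (the base image, file names and entry point are assumptions, not the original file):

```dockerfile
# Hypothetical Dockerfile for the Node MS; base image, file layout and
# entry point are assumptions.
FROM node:10-alpine

WORKDIR /usr/src/app

# Install dependencies first so they are cached between builds.
COPY package*.json ./
RUN npm install --production

# Copy the service code.
COPY . .

# The service listens on 8001, the port used throughout this post.
EXPOSE 8001

CMD ["node", "index.js"]
```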

Before building up the container you’ll be needing a GCP project. So head on to GCP console and create a new project.

After you have created the project, note the projectID, or better, export it as an environment variable by typing export PROJECT_ID=<your_project_id>. Then fire up your terminal and type the following in the root directory of your MS.

docker build -t gcr.io/${PROJECT_ID}/service-mesh:v1 .

docker build is the command to build your container image according to the specifications in your Dockerfile. As we are using GCP, we will store the image in Google Container Registry (GCR). Images must be tagged in a specific way to be saved in GCR, and GCR repositories are private and associated with your project; that’s why the projectID appears in the image name. The part after the projectID is the name you want to give your image, and v1 is the version tag, which you’ll need when you go for version upgrades or canary deployments (another amazing feature provided by a service mesh). The "." specifies the build context, here the root directory.

You’ll have an image created with the name and version tag you provided. You can verify that by typing docker images in your terminal. Now verify that your build works by running the container locally:

docker run -p 8001:8001 gcr.io/${PROJECT_ID}/service-mesh:v1

Hit http://localhost:8001/ping or http://localhost:8001/healthCheck and check if the container is running as expected. If yes, you are ready to deploy!

To push your image to GCR, use the docker push command as:

docker push gcr.io/${PROJECT_ID}/service-mesh:v1

This will place your image in GCR and you can access it in your GCP project.

  • GCP

To start using GCP you have two options:

  1. Setup gcloud on your local computer using this guide.
  2. Use GCP’s shell in the GCP console itself.

We’ll go with the second option, as you don’t have to worry about installing kubectl (the CLI for kubernetes) and access to the cluster (which we’ll create in a while) becomes easy. Nevertheless, if you wish to build the service mesh locally or on some other platform, you can use the following guides:

> gcloud — Google Cloud SDK for accessing projects on your system.

> kubectl — CLI for kubernetes.

Once you have these, the next steps are identical; the only difference is that in our case the commands run in the cloud shell.

Now that you have all the components installed follow the below instructions:

In order to create clusters you need to enable the Kubernetes Engine API in the console. To do that, open the API & Services section in the console and enable the Kubernetes Engine API. You will also have to enable Billing from the menu, as the clusters have a cost. If you are a new user you’ll get $300 worth of GCP credits, which should be enough for this, but make sure to disable billing as soon as you no longer need the clusters and re-enable it when you do.

Now, set your project and compute/zone in gcloud config.

gcloud config set project [PROJECT_ID]
gcloud config set compute/zone us-central1-b

Create clusters on GCP by running the command:

gcloud container clusters create service-mesh --num-nodes=3

The above command creates a cluster on GCP named “service-mesh” with three nodes. The process will take around 5–10 minutes, so if you want to grab a cup of coffee, now is the time :)

After the process finishes, you can head to your GCP console to verify that your cluster is up. Check it under Kubernetes Engine > Clusters, or run gcloud compute instances list in your terminal to see the cluster’s nodes.

Now that you have your clusters running, you need to install Istio.

Istio is an open-source tool for creating a service mesh. It makes it easy to build a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without any changes to service code. Istio deploys a special sidecar proxy (an extended version of the Envoy proxy, used by a lot of tech giants) into each of your pods to intercept all network communication between MSs. To install istioctl (the Istio CLI) on the system with cluster access, run the command

curl -L https://git.io/getLatestIstio | sh -

You’ll have an istio-1.0.x folder created in the same location. Navigate to the folder with the extracted files, make sure you are able to access your kubernetes cluster, and install the istio system with

kubectl apply -f install/kubernetes/istio-demo.yaml

The istio components will be installed in their own namespace, i.e., istio-system, and these services can access your MSs in other namespaces. Also, while in the root directory of the istio installation, run export PATH=$PWD/bin:$PATH so that the istioctl command is on your path.

To verify your istio installation run the command:

kubectl get service -n istio-system

You’ll get a list of the istio-system services, such as istio-pilot, istio-ingressgateway and istio-citadel, each with its cluster IP and ports.

To see the pods booting up, run the command

kubectl get pods -n istio-system

Now that your istio-system is ready, let’s deploy the actual services.

For deploying, istio needs config files in .yaml format. These files decide what your deployment looks like and how it behaves. You mainly require two files: a deployment config and a routing config.

  • Deployment Config: Decides how your deployment looks. It specifies the name of the deployment, its replicas, which (docker) image to use, labels for features like canary deployments, the name of the service, the port to expose and much more. There is not a lot of material on the internet about writing these config files; even I only got it right after a series of hits and trials, so no worries 🙂. Do let me know in the comments if there is any good documentation for writing these. For our example, the Deployment-Service config will look like below. Name this file my-website.yaml.
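The embedded manifest was lost; here is a hypothetical reconstruction of my-website.yaml matching the description. The name web-v1, the app label, the image tag and port 8001 come from the post; everything else is an assumption.

```yaml
# Hypothetical my-website.yaml; only web-v1, the app label, the image tag
# and port 8001 come from the post. Replace <your_project_id> with your own.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-v1
  labels:
    app: website
    version: v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: website
      version: v1
  template:
    metadata:
      labels:
        app: website
        version: v1
    spec:
      containers:
      - name: website
        image: gcr.io/<your_project_id>/service-mesh:v1
        ports:
        - containerPort: 8001
---
apiVersion: v1
kind: Service
metadata:
  name: website
spec:
  # Selects the deployment's pods via the app label.
  selector:
    app: website
  ports:
  - name: http   # istio expects named ports
    port: 8001
```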

A lot of the specifications above are self-explanatory, but the way they are written can be a bit hard to follow. There is one Deployment named web-v1 with its attributes, and one Service that selects the deployment via the selector on the app label. This gets trickier for more complex services, but as I mentioned earlier, there is no good documentation for it, so we’ll stick with good ol’ Hit and Trial ✌🏻, unless you’ve found some docs and would like to mention them in the responses for other people (and me!) to know.

  • Routing Config: The routing config decides how traffic is routed amongst your services. Say you want 10% of the traffic on v2 of your service and the rest on v1; you can set that here. I haven’t had any luck implementing that feature yet, but I am still trying and will write a separate blog on “Canary Deployments” as soon as I succeed. A basic routing config includes a Gateway specification, which selects istio’s ingressgateway so that traffic passes through istio, and a VirtualService specification, which names the gateway to use, the uris to match, and the destination host and port. The config file will be called website-routing.yaml and contains the following:
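A hypothetical website-routing.yaml along those lines is sketched below, using Istio 1.0’s networking.istio.io/v1alpha3 APIs. The route names come from the post; the resource names and the destination service name are assumptions (they must match whatever your deployment config defines).

```yaml
# Hypothetical website-routing.yaml; resource names and the destination
# host are assumptions and must match your deployment config.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: website-gateway
spec:
  selector:
    istio: ingressgateway   # use istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: website
spec:
  hosts:
  - "*"
  gateways:
  - website-gateway
  http:
  - match:
    - uri:
        exact: /ping
    - uri:
        exact: /healthCheck
    route:
    - destination:
        host: website    # the Service defined in the deployment config
        port:
          number: 8001
```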

Now that you have these manifests, run the following commands to let your istio-system know about your required configuration.

kubectl apply -f <(istioctl kube-inject -f my-website.yaml) 

Confirm that the application has been deployed correctly by running:

kubectl get services 

You’ll get the services as defined in your my-website.yaml file.

Also check your pods running as soon as your services are up by running:

kubectl get pods (or watch kubectl get pods, as this may take some time).

Finally define your ingress gateway routing for your application as defined in the website-routing.yaml file.

kubectl apply -f website-routing.yaml 

Your application will be deployed!

To use your application, find the port and ingress IP as follows:

kubectl get svc istio-ingressgateway -n istio-system 

The output will have the External IP of the ingress-gateway, which you can use to reach your application. Just curl the address and port with the specified route (in this case /ping):

curl http://${EXTERNAL_IP}:${PORT}/ping 

and voila, you’ll get your output (in this case “Pong!”).
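If you’d rather not copy the IP and port by hand, something like the following should fill in the variables (the jsonpath fields follow the Kubernetes Service API; the port name http2 matches the Istio 1.0 demo install, but verify it against your own kubectl get svc output):

```shell
# Assumed helpers: pull the ingress IP and HTTP port into shell variables.
export EXTERNAL_IP=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')

curl http://${EXTERNAL_IP}:${PORT}/ping
```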

Congratulations 🎉! You have your service mesh implemented and running. Do explore more about Service Mesh through the reference links mentioned below and around the internet, and build better architectures in your projects.

References