ACME WhatsApp Banking — GKE setup

Victor Paulo
Aug 29, 2019


This post is part of the ACME WhatsApp Banking series. If you didn't read the initial part, please click here to start from the beginning.

Creating the Cluster

You can create a cluster in different ways; the idea here is to show a few options.

Create a cluster using Google Cloud console

GKE cluster creation

For now, let's just create a simple environment with the default options. To cut down the costs associated with this demo, let's use preemptible VMs.

GKE kubernetes cluster configuration

Click Save and then the Create button to start provisioning your Kubernetes cluster.

Using Preemptible Compute Instances

I decided to use preemptible VMs to avoid high costs, even knowing my cluster would be available for only a few days, and also to test the resilience of my solution 😛.

Preemptible Virtual Machine (PVM) instances, according to Google, are:

Preemptible VMs are highly affordable, short-lived compute instances suitable for batch jobs and fault-tolerant workloads.

This is a very nice idea; it's worth checking Sandeep Dinesh's post about it here.

Using PVMs also creates a natural chaos monkey in your system

The same feature is available on Amazon as EC2 Spot Instances and on Azure as Low-Priority VMs.

Creating the k8s cluster from the command line

In the Google Cloud console there is an option to see the generated command line, along with a REST variant; you can wrap it up in a shell script and customise it to your taste.

For the command line, install the gcloud CLI SDK.
REST command to create gcloud k8s cluster
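
If you'd rather skip the console, a minimal sketch of the equivalent CLI command could look like the following; the cluster name, zone, node count and machine type are illustrative, so adjust them to your case.

$ gcloud container clusters create acme-banking-cluster \
    --zone us-central1-a \
    --num-nodes 3 \
    --machine-type n1-standard-1 \
    --preemptible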

Create using Hashicorp Terraform

This option is the recommended one, since you keep the state of your infrastructure in an orchestrated, immutable and declarative (not procedural) way. You can also keep your configuration in an SCM for versioning, which helps you track the changes made.

To get started with Terraform, please check this link.

Create an IAM service account and get its credentials to be used by your Terraform script file.

$ PROJECT_NAME=$(gcloud config get-value project)
$ gcloud iam service-accounts create terraformuser
$ gcloud iam service-accounts list
$ gcloud iam service-accounts keys create terraformuser.json --iam-account terraformuser@$PROJECT_NAME.iam.gserviceaccount.com
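
Depending on what your Terraform script provisions, the service account will likely also need a role granted on the project; the command below is an assumption on my part (the role shown is broad, so scope it down to your needs):

$ gcloud projects add-iam-policy-binding $PROJECT_NAME \
    --member serviceAccount:terraformuser@$PROJECT_NAME.iam.gserviceaccount.com \
    --role roles/container.admin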
Terraform script for creating GKE cluster

Once you have created the Terraform file above, it's time to issue the following commands to provision your infrastructure. The project name is passed in dynamically when provisioning the environment.

$ PROJECT_NAME=$(gcloud config get-value project)
$ cd <directory_script_tf>
$ terraform init
$ terraform plan  -var "project_name=$PROJECT_NAME"
$ terraform apply -var "project_name=$PROJECT_NAME"

To destroy your cluster and all resources associated with it, just run the following command:

$ terraform destroy

Setting up kubectl

The command below will configure kubectl to point to your cluster by adding the cluster endpoint, certificate and credentials to the ~/.kube/config file.

$ gcloud container clusters get-credentials <cluster_name> --zone <zone_name>
# example:
$ gcloud container clusters get-credentials acme-banking-cluster --zone us-central1-a
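
A quick way to confirm that kubectl is now pointing at the new cluster:

$ kubectl config current-context
$ kubectl get nodes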

If you don't have the gcloud CLI installed and you only have the kubernetes config file, you can configure kubectl as follows:

$ export KUBECONFIG=~/.kube/config 

Setting up Helm

Once you have your Kubernetes cluster up and running, it's time to configure your Helm client and server (Tiller) to deploy and manage releases seamlessly in your Kubernetes environment.

If you are not familiar with Helm, please get started by checking here.

Now that you have Helm installed, let's create the RBAC resources (service account, cluster role binding, etc.).

Helm tiller RBAC
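
If you prefer plain kubectl commands over a yaml file, the sketch below creates an equivalent setup; binding cluster-admin is broad, but it is the common shortcut for demos like this one.

$ kubectl -n kube-system create serviceaccount tiller
$ kubectl create clusterrolebinding tiller \
    --clusterrole cluster-admin \
    --serviceaccount=kube-system:tiller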

Run the following command to configure Tiller and point the Helm client to it.

$ helm init --service-account tiller

This step will not be needed in the upcoming Helm version 3.
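
To check that Tiller came up and that the client can reach it (the label below assumes the default deployment created by helm init):

$ kubectl -n kube-system get pods -l app=helm
$ helm version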

Configuring the Container Registry

This step is required for Draft to be able to push container images after building them locally. After that, Kubernetes will pull the container images from the registry and run them on the worker nodes.

First, let's configure the Docker daemon to point to the Google Cloud registry:

$ gcloud auth configure-docker
$ gcloud components install docker-credential-gcr
$ docker-credential-gcr configure-docker
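
A quick smoke test to confirm that pushes to the registry work; the busybox image name is just an example.

$ PROJECT_NAME=$(gcloud config get-value project)
$ docker pull busybox
$ docker tag busybox gcr.io/$PROJECT_NAME/busybox:test
$ docker push gcr.io/$PROJECT_NAME/busybox:test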

Draft
Draft is a scaffolding tool: based on the programming language you are using, it creates the Helm charts and a Dockerfile with a multi-stage build for you.

Draft is written in Go and, under the hood, uses a library called Linguist to detect the programming language, so it can create a customised Dockerfile and Helm charts for you automatically.

https://github.com/Azure/draft/blob/master/cmd/draft/create.go
  • Configuring Draft

Draft relies on Helm and a Docker registry. Check that the registry name has the Google project name appended to it; otherwise it won't work.

$ PROJECT_NAME=$(gcloud config get-value project)
$ draft config set registry gcr.io/$PROJECT_NAME
$ draft init
$ cd <microservice directory>
$ draft create

If you have created the Dockerfile previously, the draft create command will not change it.

  • Performing the deploy

Now that you have configured all the prerequisites for Draft, the deployment task should be straightforward; we just need to issue the command below:

$ draft up
Draft deployment

If you get an error when releasing the application, it's because Helm is not able to upgrade the release; to solve this, run the following command:

$ helm del --purge <release_name>

Ingress

At this point we have a Kubernetes cluster and we are able to deploy our microservices using Draft in a seamless way. The deploy creates the pod and service resources. Now it's time to configure the Ingress component, which is responsible for distributing the load and routing incoming requests to the different services.

  • Installing the Nginx Ingress

The following command will create the Nginx Controller and backend pods.

$ helm install stable/nginx-ingress --namespace=default --name=nginx-ingress
  • Creating the routing rules for the Ingress

You can create the ingress yaml file manually, or you can use the ingress.yaml file generated by the draft create command as part of the chart creation. In the image below, we just need to change the directive from "enabled: false" to "enabled: true" in the values.yaml file to activate the Ingress rule for your service.

If you leave it as false, Draft will not create the Ingress rule.

Ingress configuration
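
For reference, the kind of Ingress rule that ends up applied looks roughly like the sketch below, which you could also create by hand; the name, host and port are illustrative.

$ cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: acme-banking
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: acme-banking.com
    http:
      paths:
      - path: /
        backend:
          serviceName: acme-banking
          servicePort: 8080
EOF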
  • Routing to services in a different namespace

The Nginx ingress needs a few tricks to be able to route to a service deployed in a namespace different from its own. I had to create an external service, as described below:

apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: default # the namespace where your ingress lives
spec:
  type: ExternalName
  externalName: prometheus.monitoring.svc.cluster.local

The external service above must be created in the ingress namespace you chose. The externalName must follow the convention:

 <serviceName>.<namespace>.svc.cluster.local
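
Before wiring an Ingress rule to the external service, you can sanity-check that the name resolves from inside the cluster:

$ kubectl run -it --rm dns-test --image=busybox --restart=Never \
    -- nslookup prometheus.monitoring.svc.cluster.local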
  • Testing the Nginx Ingress

The Nginx Ingress can route based on different rules, but the main one evaluates the Host header; if there is a match, the request is routed accordingly.

$ curl -H 'Host: acme-banking.com' http://node-port-ip:port/context
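
If you are not sure which node port to hit, you can look it up on the controller service; the service name below assumes the default naming of the Helm release created earlier.

$ kubectl get svc nginx-ingress-controller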

Or you can use the Chrome extension ModHeader, as shown below:

ModHeaders chrome extension

MongoDB

This step is required by our solution for storing and retrieving the product information used to enrich the chatbot conversation.

The MongoDB configuration used is based on a StatefulSet and a sidecar, as described in the blog post Running MongoDB on Kubernetes with StatefulSets.

$ git clone https://github.com/thesandlord/mongo-k8s-sidecar.git
$ cd ./mongo-k8s-sidecar/example/StatefulSet
$ kubectl apply -f googlecloud_ssd.yaml
$ kubectl apply -f mongo-statefulset.yaml
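
Once applied, you can watch the replica set pods come up; the label below follows the sidecar example's conventions.

$ kubectl get statefulset mongo
$ kubectl get pods -l role=mongo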
  • Solving some problems along the way
  1. If you face problems related to small files, updating the mongo image to version 4 (mongo:4.0) solves it.
  2. If you get the error "code : 13435, codeName : NotMasterNoSlaveOk":
$ mongo
> rs.slaveOk()
# Or, if you have more replicas:
> rs.initiate(
   {
      _id: "myReplSet",
      version: 1,
      members: [
         { _id: 0, host: "mongodb0.example.net:27017" },
         { _id: 1, host: "mongodb1.example.net:27017" },
         { _id: 2, host: "mongodb2.example.net:27017" }
      ]
   }
)

3. If you get the error "Error: Replication has not yet been configured", the solution might be to run the command below.

$ mongo
> rs.initiate()

4. Change the Mongo StatefulSet definition to bind on address 0.0.0.0.

MongoDB binding on any network
  • Creating MongoDB collections
$ kubectl exec -it mongo-0 /bin/sh
$ mongo
> use <db_name>
> db.<collection_name>.insert({id: 1, name: 'Product 01'})
Where:
* <db_name> - the name of your database; it will be created if it does not exist
* <collection_name> - the name of your collection
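
You can verify the insert with a quick query in the same mongo shell:

> db.<collection_name>.find().pretty()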

Observability with Prometheus & Grafana

Observability comprises distributed tracing, monitoring/alerting, and log aggregation and analytics. We are covering just the monitoring part here, including infrastructure and Business Activity Monitoring.

#Installing Prometheus
$ helm install --name prometheus stable/prometheus --namespace monitoring --set server.persistentVolume.enabled=true
#Installing Grafana
$ helm install --name grafana stable/grafana
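
With the release names above, retrieving the generated Grafana admin password and reaching the UI should look roughly like this, assuming the chart defaults:

$ kubectl get secret grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo
$ kubectl port-forward svc/grafana 3000:80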

You can provide more options in your helm commands if you want to customise Prometheus and Grafana further; check out the documentation here and here.

  • Configuring Prometheus

https://gist.github.com/victorpaulo/9c4399cc4ffb22bd5847a6e9d0a1406b

The important part is highlighted below: the “scrape_configs” section and its “targets”, which list the services you want Prometheus to scrape; the default endpoint is “/metrics”.

scrape_configs:
  - job_name: 'ACME Banking hackathon'
    scheme: http
    static_configs:
      - targets: ["10.0.14.59:8080", "10.0.12.103:8080", "10.0.14.245:8080"]

I recommend replacing the IP addresses with DNS names like:

 <service_name>.<namespace>.svc.cluster.local
  • Configuring Grafana Dashboard

Grafana is straightforward to set up: first we add a datasource pointing to Prometheus, and then we create a dashboard.

Grafana Datasource configuration to Prometheus

I exported the definition of my dashboard, which is available below.

https://gist.github.com/victorpaulo/82daf3f33b5cf4cfe1d74286744eeb46

My dashboard. Sorry for the screenshot :-)

The next post will describe how to set up a chatbot on IBM Watson Assistant, how to translate its responses to voice using the IBM Watson Text-to-Speech API, and how to use AWS S3 to store the audio files. Please check it out here.
