Quickly launch an EKS cluster with eksctl and deploy a Dockerized Flask web app with cluster autoscaler and Prometheus metrics, from scratch

Pablo Perez

Jun 22, 2019 · 4 min read

This web app shows a Donald Duck picture stored in S3 when accessing /donalduck, and the current time in Coimbra (Portugal) when accessing /cohimbra.

The app has a built-in Prometheus exporter providing metrics for the total number of requests to the /donalduck URI and the total number of requests to the /cohimbra URI.

Part I: Launch an EKS cluster with cluster autoscaler spread across two nodegroups in different AZs

I use eksctl to first create the K8s control plane, with the proper tags for the cluster autoscaler.

eksctl create cluster --config-file=cluster-1.yaml
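The post does not reproduce cluster-1.yaml; a minimal sketch of what it could look like, using only the cluster name and region that appear in the commands here (everything else is an assumption):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster-1
  region: eu-west-1

# No nodeGroups here: the worker nodegroups are created separately below,
# one per AZ, so only the control plane is provisioned by this file.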

I make sure kubectl is properly configured to talk to the new cluster:

aws eks --region eu-west-1 update-kubeconfig --name cluster-1

Then I create two nodegroups, each in a specific AZ, as the cluster autoscaler does not support Auto Scaling Groups that span multiple AZs.

I also add the necessary tags and IAM permissions for the cluster autoscaler.

eksctl create nodegroup --config-file=nodegroups.yaml
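nodegroups.yaml is not shown in the post either; a sketch of what it could contain, assuming t3.medium instances and the standard cluster-autoscaler auto-discovery tags (instance types, sizes and node group names are placeholders):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster-1
  region: eu-west-1

nodeGroups:
  - name: ng-eu-west-1a
    instanceType: t3.medium          # placeholder instance type
    desiredCapacity: 1
    minSize: 1
    maxSize: 4
    availabilityZones: ["eu-west-1a"]
    iam:
      withAddonPolicies:
        autoScaler: true             # IAM permissions the autoscaler needs
    tags:
      k8s.io/cluster-autoscaler/enabled: "true"
      k8s.io/cluster-autoscaler/cluster-1: "owned"
  - name: ng-eu-west-1b
    instanceType: t3.medium
    desiredCapacity: 1
    minSize: 1
    maxSize: 4
    availabilityZones: ["eu-west-1b"]
    iam:
      withAddonPolicies:
        autoScaler: true
    tags:
      k8s.io/cluster-autoscaler/enabled: "true"
      k8s.io/cluster-autoscaler/cluster-1: "owned"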

I create a Service Account for tiller to interact with the K8s cluster API.

kubectl apply -f rbac.yaml
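rbac.yaml is not reproduced in the post; a typical version for Tiller would contain the Service Account and the cluster-admin binding (the same objects the imperative kubectl commands further below create), roughly:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-rule
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system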

Then, I install Helm, initialize it to deploy Tiller in the EKS cluster, and fetch the cluster-autoscaler chart:

helm init
helm repo update
helm fetch stable/cluster-autoscaler
tar -zxf cluster-autoscaler-0.13.2.tgz

But before installing the autoscaler, we need to take into account that Tiller runs inside the Kubernetes cluster and manages releases (installations) of our charts, which requires access to the Kubernetes API. By default, RBAC policies will not allow Tiller to carry out these operations, so we need to:

- Create a Service Account tiller for the Tiller server (in the kube-system namespace). Service Accounts are meant for intra-cluster processes running in Pods.

kubectl create serviceaccount tiller --namespace kube-system

- Bind the cluster-admin ClusterRole to this Service Account in order for Tiller to manage resources in all namespaces.

The cluster-admin ClusterRole exists by default in the Kubernetes cluster and allows superuser operations on all cluster resources. The reason for binding this role is that Helm charts can produce deployments consisting of a wide variety of Kubernetes resources.

kubectl create clusterrolebinding tiller-cluster-rule \
  --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

Finally, we install the autoscaler:

helm install stable/cluster-autoscaler -f values.yaml --name my-release
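The values.yaml passed here is not reproduced in the post; for the stable/cluster-autoscaler chart on EKS the relevant settings are roughly the following (a sketch, not the author's exact file):

autoDiscovery:
  clusterName: cluster-1        # picks up the ASGs tagged in nodegroups.yaml
awsRegion: eu-west-1
cloudProvider: aws
rbac:
  create: true
sslCertPath: /etc/ssl/certs/ca-bundle.crt   # CA bundle path on Amazon Linux nodes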

We test the autoscaler and verify that it balances the new nodes across both nodegroups in different AZs:

kubectl run test --image=11111111111.dkr.ecr.eu-west-1.amazonaws.com/flasks3test --port=80 --replicas=30
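With 30 replicas pending, the autoscaler should add nodes in both nodegroups; one way to watch this (the zone label shown is the one used by K8s versions of that era):

kubectl get nodes -L failure-domain.beta.kubernetes.io/zone
kubectl get pods -o wide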

Part II: Create the web app

Create an app in Flask with the following routes:

/donalduck will invoke a function that uses boto3 to start a session with S3 and generate a presigned URL. The client browser is redirected to this presigned URL to see the image, so the S3 bucket can be kept private. The node's instance profile is used to generate the presigned URL instead of hard-coded credentials.

/cohimbra will invoke a function that converts UTC time to Lisbon time, and this formatted time is rendered with the template.
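The post does not include the app's source; a minimal sketch of what it could look like, with the Prometheus counters mentioned at the top (the bucket name, object key and metric names are placeholders, not the author's values):

# app.py - minimal sketch of the Flask app described above
from datetime import datetime, timezone

import boto3
import pytz
from flask import Flask, redirect, render_template_string
from prometheus_client import CONTENT_TYPE_LATEST, Counter, generate_latest

app = Flask(__name__)

# Request counters exposed by the built-in Prometheus exporter
DONALD_REQUESTS = Counter("donalduck_requests_total", "Total requests to /donalduck")
COHIMBRA_REQUESTS = Counter("cohimbra_requests_total", "Total requests to /cohimbra")

BUCKET_NAME = "my-private-bucket"   # placeholder
OBJECT_KEY = "donaldduck.jpg"       # placeholder

@app.route("/donalduck")
def donalduck():
    DONALD_REQUESTS.inc()
    # Credentials come from the node's instance profile, so the bucket stays private
    s3 = boto3.client("s3")
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET_NAME, "Key": OBJECT_KEY},
        ExpiresIn=300,
    )
    return redirect(url)

@app.route("/cohimbra")
def cohimbra():
    COHIMBRA_REQUESTS.inc()
    # Convert UTC to Lisbon time (Coimbra's time zone)
    now = datetime.now(timezone.utc).astimezone(pytz.timezone("Europe/Lisbon"))
    return render_template_string("Time in Coimbra: {{ t }}", t=now.strftime("%Y-%m-%d %H:%M:%S"))

@app.route("/metrics")
def metrics():
    # Endpoint scraped by Prometheus
    return generate_latest(), 200, {"Content-Type": CONTENT_TYPE_LATEST}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Its dependencies (flask, boto3, pytz, prometheus_client) would go in a requirements.txt used by the Dockerfile below.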

Part III: Dockerize the app

$ cat Dockerfile
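The Dockerfile contents are not reproduced in this extract; a minimal version could look like this, assuming the app lives in app.py with a requirements.txt next to it:

FROM python:3.7-slim
WORKDIR /app
# Install dependencies first so rebuilds reuse the cached layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
EXPOSE 5000
CMD ["python", "app.py"]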
Build the image and run it locally, mapping host port 80 to the Flask port 5000 inside the container:

sudo docker build -t flasks3.test .
sudo docker run -d -p 80:5000 --name=my_flask_app_container flasks3.test

Part IV: Create an ECR repo and store the image there
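If the ECR repository does not exist yet, it can be created first (the repository name matches the image URI pushed below):

aws ecr create-repository --repository-name flasks3test --region eu-west-1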

sudo docker tag flasks3.test 11111111111.dkr.ecr.eu-west-1.amazonaws.com/flasks3test
aws ecr get-login --no-include-email --region eu-west-1

(copy and execute the output)

sudo docker push 11111111111.dkr.ecr.eu-west-1.amazonaws.com/flasks3test

Part V: Create a deployment and a service for the app

kubectl create -f flaskdeployment.yaml
kubectl create -f flask_service.yaml
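Neither manifest is reproduced in the post; a sketch of what they could contain, using the ECR image pushed in Part IV and the prometheus.io pod annotations that the Prometheus chart's default kubernetes-pods scrape job discovers (names, replica count and service type are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: flask-app
  template:
    metadata:
      labels:
        app: flask-app
      annotations:
        prometheus.io/scrape: "true"   # let Prometheus discover the /metrics endpoint
        prometheus.io/port: "5000"
    spec:
      containers:
        - name: flask-app
          image: 11111111111.dkr.ecr.eu-west-1.amazonaws.com/flasks3test
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: flask-app
spec:
  type: LoadBalancer              # placeholder; NodePort or ClusterIP would also work
  selector:
    app: flask-app
  ports:
    - port: 80
      targetPort: 5000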

Part VI: Install Prometheus

For this, take a look at the link below, which shows how to configure the NodePort service type for the Prometheus server so that it can be reached on a worker node IP:

https://eksworkshop.com/monitoring/deploy-prometheus/

In the Prometheus values file, make sure the following excerpt is present; if it is not, add it.
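The excerpt itself is not shown in this extract; given the NodePort requirement above, it presumably sets the server service type in the stable/prometheus values, along these lines (the port number is a placeholder):

server:
  service:
    type: NodePort
    nodePort: 30900   # placeholder NodePort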

Finally, execute:

helm install -f prometheus-values.yaml stable/prometheus --name prometheus --namespace prometheus

You will find the app's metrics endpoint under Targets, and in the Graph section you will be able to plot the total number of requests.
