Published in Geek Culture
Introduction to Helm Charts


So far we've seen that we have many services to offer to our customers, and each service has its own application. As discussed in Part I of this article, 50 ASGs meant 50 different types of servers, and managing them on EC2 instances had become a manual task. We have 5 different types of applications and roughly 10 different types of servers running for each application, categorized as HTTP servers and workers.

Since k8s runs only containerized applications, we need to build an image for each application and manage it. To containerize the app we added a Dockerfile to it and stored the built image in AWS Elastic Container Registry (ECR). We then created a k8s cluster on AWS Elastic Kubernetes Service (EKS) and connected to it from our local machine following this guide. If you need to brush up on the basics of Docker or Kubernetes, refer to my previous blog.

To push or pull images to your AWS ECR, you first need to authenticate Docker with AWS using the command below. Refer to this doc for further guidance.

aws ecr get-login-password --region <aws-region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<aws-region>.amazonaws.com

Then use the docker push/pull command as shown below.

docker pull 11xxxxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com/backend:latest
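
If you are pushing your own application image rather than pulling one, the usual flow (a sketch; the local image name is illustrative and the account/region placeholders match the ones above) is to build it, tag it with the ECR repository URI, and push:

# build the image from the Dockerfile in your app repo (image name is illustrative)
docker build -t backend:latest .

# tag it with the ECR repository URI, then push it
docker tag backend:latest 11xxxxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com/backend:latest
docker push 11xxxxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com/backend:latest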

Now that we have our k8s cluster set up, it's time to create some objects. We start by creating a deployment for our application, using the image stored in ECR. For this example I'm using the sample image nginx:1.14.2 so we have working code to run.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: backend-server
  name: backend-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend-server
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: backend-server
    spec:
      containers:
        - env:
            - name: ENV
              value: dev
          image: nginx:1.14.2
          imagePullPolicy: Always
          name: backend-server
          ports:
            - containerPort: 80
              protocol: TCP
      dnsPolicy: ClusterFirst

Creating this file and running the command $ kubectl apply -f deployment.yaml against the cluster will spin up 2 pods running the code specified in the ECR image. You can check the logs or enter the pods using the commands mentioned in Part I of the article (a few are recapped below) and make sure the server is functioning correctly and is accessible on localhost.
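
A minimal sketch of those checks (the pod name is a placeholder; copy the real one from kubectl get pods):

kubectl get pods                                        # both replicas should be Running
kubectl logs backend-server-<pod-id>                    # view the container logs
kubectl exec -it backend-server-<pod-id> -- sh          # open a shell inside the pod
kubectl port-forward backend-server-<pod-id> 8080:80    # then curl http://localhost:8080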

We now add the below ingress file to make the servers accessible from the outside world as well.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    kubernetes.io/ingress.class: alb
  finalizers:
    - ingress.k8s.aws/resources
  name: backend-server-ingress
spec:
  rules:
    - http:
        paths:
          - backend:
              service:
                name: backend-server
                port:
                  number: 3000
            path: /
            pathType: Prefix

Applying this file will generate a URL which you can curl to reach your application from anywhere. Using the get or describe command, as discussed in the previous article, will show you the URL.
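
A sketch of what that looks like (the load balancer hostname below is a placeholder):

kubectl get ingress backend-server-ingress        # the ADDRESS column holds the ALB URL
kubectl describe ingress backend-server-ingress
curl http://k8s-backendserver-xxxx.ap-south-1.elb.amazonaws.com/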

So we have our application up and running in minutes. But wait, I need to add another application, with a different image of course, because it runs different code. I'll create another deployment file and another ingress... and yet another set for the third project, and so on.

Do I manually have to create the same files over and over just to change one attribute?

Introducing Helm Charts…

In the simplest terms, a Helm chart is a way to reuse your object files using variables: once you create all the k8s objects you need (a deployment and an ingress file in our case), you can use variables instead of hardcoded values for the dynamic attributes and reuse the same set of objects over and over again.

A Helm chart in itself is just a folder. When you scaffold a chart (for example with helm create), a folder is generated with a few files inside it that looks something like the below. For now let's just focus on 3 things: the values.yaml file, the Chart.yaml file and the templates folder (the other files are optional to keep in the folder). To learn Helm in detail you can start here.

wordpress/
  Chart.yaml          # A YAML file containing info about the chart
  LICENSE
  README.md
  values.yaml         # The default config values for this chart
  values.schema.json
  charts/
  crds/
  templates/          # All the k8s object files
  templates/NOTES.txt

The Chart.yaml file holds only metadata, like the name, version and description of the Helm chart.
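
A minimal Chart.yaml for our case could look like the sketch below (the name, description and version values here are placeholders, not taken from the article):

apiVersion: v2                  # chart API version for Helm 3
name: backend-server            # chart name, used when packaging
description: A Helm chart for our backend applications
type: application
version: 0.1.0                  # chart version, becomes part of the packaged .tgz name
appVersion: "1.0.0"             # version of the application being deployed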

The values.yaml file is a collection of key-value pairs, written in YAML format, to be used as variables in all the k8s objects. Below is the values file we will keep in our project. Since we only need to change the application name and the image for each of our applications, and all the other configuration stays the same, we keep just those keys inside this file.

image:
  repository: nginx:1.14.2 # link to ECR image with tag
  pullPolicy: Always
application:
  name: backend-server

So our deployment.yaml and ingress.yaml files now look like the below, and they sit in the templates folder of the Helm chart project.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.application.name }}
  name: {{ .Values.application.name }}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: {{ .Values.application.name }}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: {{ .Values.application.name }}
    spec:
      containers:
        - env:
            - name: ENV
              value: dev
          image: {{ .Values.image.repository }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          name: {{ .Values.application.name }}
          ports:
            - containerPort: 80
              protocol: TCP
      dnsPolicy: ClusterFirst

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    kubernetes.io/ingress.class: alb
  finalizers:
    - ingress.k8s.aws/resources
  name: {{ .Values.application.name }}-ingress
spec:
  rules:
    - http:
        paths:
          - backend:
              service:
                name: {{ .Values.application.name }}
                port:
                  number: 3000
            path: /
            pathType: Prefix

This is the final structure of our Helm chart folder: Chart.yaml and values.yaml at the root, with deployment.yaml and ingress.yaml inside the templates folder.

You can save these files and delete all the resources currently running in your k8s cluster (cleanup commands below) to test our Helm chart.
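
Assuming you created the resources from the deployment.yaml and ingress.yaml files above, a quick way to clean them up is:

kubectl delete -f deployment.yaml -f ingress.yaml
# or delete them by name
kubectl delete deployment backend-server
kubectl delete ingress backend-server-ingress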

Once you have no resources left in your cluster, run the below commands in sequence.

helm package <folder_name> : move one level above the chart folder and run this command; a .tgz archive is created at that location, named with the chart name and version specified in your Chart.yaml file.

helm install <release_name> <archive_name_with_version_tag> : use this archive to install the chart as a release in your cluster.

That's it. It takes just one command, helm install, to create all the objects required to run your application without any hassle. Now if you query the objects with kubectl, you'll find everything configured.
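
A quick sanity check (a sketch; the object names follow the placeholders used above):

helm list                           # shows the installed release and its chart version
kubectl get deployments,ingress     # the objects rendered from the templates folder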

To deploy a different application that needs all the same resource objects but different values, like a different image in our case, fork this repo, change the values and install the chart. Done.
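
If you would rather not keep a separate copy of the chart per application, Helm can also override values at install time. A sketch, assuming a second application image already pushed to ECR (the release name, image and archive name below are hypothetical):

# override individual values on the command line
helm install payments-server ./backend-server-0.1.0.tgz \
  --set application.name=payments-server \
  --set image.repository=11xxxxxxxxxx.dkr.ecr.ap-south-1.amazonaws.com/payments:latest

# or keep a per-application values file and pass it with -f
helm install payments-server ./backend-server-0.1.0.tgz -f payments-values.yaml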

That's Helm for you. One last problem we had was adding worker instances for each application, i.e. instances that share the same image/code but run different commands. We need a dynamic way to create a deployment for each worker, one that developers can manage directly with minimal involvement from the DevOps team. Hence, we need to add the worker config to the values file itself... but in a loop?

This is something I had to play around with to get to the solution, understanding how variables, loops and conditions inside loops work in Helm's YAML templates. Here's the solution.

One interesting feature of a Kubernetes object file is that you can create multiple objects inside the same file by separating them with the --- characters. For example, to create an ingress and a deployment object in one file you can write something like the below.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.application.name }}
  name: {{ .Values.application.name }}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: {{ .Values.application.name }}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: {{ .Values.application.name }}
    spec:
      containers:
        - env:
            - name: ENV
              value: dev
          image: {{ .Values.image.repository }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          name: {{ .Values.application.name }}
          ports:
            - containerPort: 80
              protocol: TCP
      dnsPolicy: ClusterFirst
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    kubernetes.io/ingress.class: alb
  finalizers:
    - ingress.k8s.aws/resources
  name: {{ .Values.application.name }}-ingress
spec:
  rules:
    - http:
        paths:
          - backend:
              service:
                name: {{ .Values.application.name }}
                port:
                  number: 3000
            path: /
            pathType: Prefix

Convenient, isn’t it? Here’s how I used this feature to dynamically create deployment files.

I added the below code to the values.yaml file:

workers:
  - name: update
    replicas: 1
    command: ["export QUEUE_NAME=updates-queue && node worker.js"]
  - name: cron-events
    replicas: 1
    command: ["export QUEUE_NAME=cron-events && node worker.js"]

so the complete file looks like below.

image:
  repository: nginx:1.14.2 # link to ECR image with tag
  pullPolicy: Always
application:
  name: backend-server
service:
  targetPort: 80 # container port, referenced by the worker deployment template below
workers:
  - name: update
    replicas: 1
    command: ["export QUEUE_NAME=updates-queue && node worker.js"]
  - name: cron-events
    replicas: 1
    command: ["export QUEUE_NAME=cron-events && node worker.js"]
# NOTE: the worker commands won't actually run with this values file, because we have
# provided the nginx image and Node.js is not installed in the container being spun up.
# This is just example code for reference.

and I created a new deployment file for workers, workersDeployment.yaml, that looks like this.

{{- range $worker := .Values.workers }}
{{- with $worker }}
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .name }}
  name: {{ .name }}
spec:
  replicas: {{ .replicas }}
  selector:
    matchLabels:
      app: {{ .name }}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: {{ .name }}
    spec:
      containers:
        - env:
            - name: NODE_ENV
              value: staging
          image: {{ $.Values.image.repository }}
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          name: {{ .name }}
          command: ["/bin/sh", "-c"]
          args: {{ .command }}
          ports:
            - containerPort: {{ $.Values.service.targetPort }}
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
---
{{- end }}
{{- end }}

{{- range $worker := .Values.workers}}

The range keyword is an iterator; it is how you run a loop in a Helm template, like the for keyword in languages such as Python, Java or C++. The above line is similar to writing for worker in workers in Python.

{{- with $worker }}

The next control structure to look at is the with action. This controls variable scoping. Recall that . is a reference to the current scope. So .Values tells the template to find the Values object in the current scope.

We have defined workers in the values file as a list of maps. For each element of the list we set the current scope to that element, so we can then use . followed by a key name of the map to refer to its value.

{{.name}}

Which brings us to this: . is now the current element of the workers list, and .<keyname> will populate the value of that element's key. It is similar to writing worker["name"] in Python.

{{$.Values.image.repository}}

Now that we've changed the current scope, notice that we have to prefix Values with $ to reach values.yaml; this tells the template engine that we want to access the global/root scope rather than a local worker variable.

{{end}}

end is simply the keyword that closes a block's scope, the way the closing curly brace } ends a scope in languages like C++ and Java.

To learn more about variables and scopes in Helm, you can refer to this official doc.

When helm install is run with this file in the chart, it renders one deployment per entry in the workers list in the values file, named accordingly, and spins up the respective pods for each deployment.
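
Before installing, you can sanity-check what the loop renders with helm template, which prints the generated manifests without touching the cluster (the release name and chart folder here are placeholders):

helm template my-release ./<chart_folder>
# prints one Deployment named "update" and one named "cron-events",
# separated by ---, exactly as the range loop produces them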

So we have our servers ready with just one command, no hassle. But wait: do we have to pull the code, build an image, push it to the ECR registry, package the chart and install it again every time I make the most minor change to my code? That sounds like a lot of labor again, but I think we've solved this kind of problem before; it's time to make some tweaks to our GitHub Actions workflow to manage CI/CD for k8s.

In the next article, let's solve this and build a testing environment for each branch we create in our GitHub repo, so every individual developer can test their code in a prod-identical yet isolated environment for Component Testing before merging it into the staging branch for Integration Testing, yet another advantage of using k8s!
