Elixir + Kubernetes = đź’ś (Part 3)

Drew Cain @groksrc
10 min read · Jul 26, 2019


How To Set Up an Auto-Scaling Elixir Cluster with Elixir 1.9

Photo by Kent Pilcher on Unsplash

This is Part 3 of a three part series on how to create and deploy an Elixir application that will automatically scale on Kubernetes. Be sure to visit Part 1 and Part 2 if you need to catch up. If you just want to see the source code it’s available here: https://github.com/groksrc/el_kube and if you just want a summary of the commands the slide deck from the original talk is available here.

In Part 1 of this series we created an Elixir app named el_kube that is configured to automatically join an Erlang cluster, and in Part 2 we created a Docker container for it so that we can run it on Kubernetes (k8s) and tested it, ending on Step 15. Here in Part 3 we’re going to resume our previous work and run the container in k8s by way of minikube. Again, all of the commands I show below are issued from the root of the project directory. Let’s get started!

16: Create a directory for your k8s files

Kubernetes is configurable in a number of different ways, but generally I recommend using a file-based approach over the CLI. The primary reason is that you’ll often want a history of the changes that have been made to your cluster, which the CLI can’t give you, as well as the ability to more easily reason about how the cluster is configured. CLI commands and their arguments also become exceedingly verbose for anything beyond trivial examples, so I’ll rely on the file-based approach going forward.

$ mkdir k8s
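
One nice side effect of keeping everything under k8s/ is that the directory can live in version control and, once the files exist, be applied in a single command. This isn’t part of the walkthrough (we’ll create each file individually below), just an option worth knowing about:

$ kubectl apply -f k8s/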

17: Create a Persistent Volume Claim

If you think about what a k8s cluster is, it’s essentially a set of physical machines, all with some of their own internal storage, as well as a network connection to shared storage, usually like a SAN or something. A database needs a permanent spot to put files that will continue to exist no matter what happens to the running containers or the nodes of the k8s cluster, and that’s what a Persistent Volume Claim (PVC) gives you. Let’s create a new file called k8s/pvc.yaml and then apply it to our minikube cluster.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pvc
  labels:
    app: postgres
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

This file defines a PVC, sets its name to postgres-pvc and applies the app: postgres label. Labels are arbitrary key/value pairs (as they are on all k8s configs). The spec that follows sets the accessModes and requests the amount of storage. In our case we use ReadWriteOnce, which means the volume can be mounted read-write by only one node at a time; that’s fine because only one container (the Postgres DB container) will be accessing the PVC. We also request 1 Gi of storage.

Once you have created and saved the file above, issue the following command to add the PVC to the cluster:

$ kubectl create -f k8s/pvc.yaml
persistentvolumeclaim/postgres-pvc created
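
If you want to double-check the claim before moving on (not one of the original steps, just a sanity check), kubectl can show you its status:

$ kubectl get pvc postgres-pvc   # STATUS should eventually read Bound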

18: Create the DB using the PVC

Next, let’s define a Deployment for the Postgres database and call it k8s/db.yaml. This config will start a Postgres container and mount the PVC to the mountPath defined under volumeMounts. This way, any DB container that boots will find the files in the same location, which is the location the container expects, and it will pick up where it left off. Here is the content for the db.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: postgresql
    spec:
      containers:
        - env:
            - name: POSTGRES_DB
              value: el_kube_prod
          image: postgres:9.6
          name: db
          ports:
            - containerPort: 5432
          resources: {}
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: data
      hostname: db
      restartPolicy: Always
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: postgres-pvc

Notice again we’re passing the POSTGRES_DB environment variable to the container, and we’re also declaring a volume named data that uses the PVC we defined above. We’re also applying the app: postgresql label to this container. We’ll see how this comes into play in the next config file.


Now that the file is saved, let’s create the deployment on the cluster:

$ kubectl create -f k8s/db.yaml
deployment.extensions/db created

At this point you should be able to see that the pod is up and running:

$ kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
db-84c67d88cf-d2xss   1/1     Running   0          13s
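
If you’re curious whether the PVC was actually wired up, you can inspect the pod and the Postgres logs. These aren’t required steps, just handy checks:

$ kubectl describe pod -l app=postgresql   # the Volumes section should show data backed by postgres-pvc
$ kubectl logs deployment/db               # Postgres startup output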

Great! But there’s a problem. This pod has a dynamically assigned name and IP address, and we need a stable IP address or DNS name to give to our Phoenix nodes so they know where to connect. What to do? A k8s Service will do the trick!

19: Create the DB service

Back in your editor, let’s now create a file named k8s/db-svc.yaml and add the following content to it:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: postgresql
  name: db
spec:
  ports:
    - name: postgres
      port: 5432
      targetPort: 5432
  selector:
    app: postgresql

This file defines a k8s Service and gives it the name db. The key point is that once this service is in place, the DNS name db can be resolved from inside any container on the cluster. The spec says the service exposes a port named postgres: clients connect to the service on port 5432, and traffic is forwarded to the targetPort, also 5432, on the backing containers. How are containers included as part of the service? With the selector. As I mentioned previously, the label app: postgresql is applied to our database container. When the k8s control plane sees a container (pod, really) labeled with that key/value pair, it will automatically start sending traffic directed at the service to those pods.

Now that we understand what this does, let’s fire it up on the minikube cluster:

$ kubectl create -f k8s/db-svc.yaml
service/db created
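
A quick way to confirm the selector matched the Postgres pod (again, not an original step, but useful) is to look at the service and its endpoints. The endpoints list should contain the database pod’s IP on port 5432:

$ kubectl get svc db          # shows the stable ClusterIP assigned to the service
$ kubectl get endpoints db    # should list the Postgres pod's IP:5432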

20: Create the el_kube private and public services

Next, let’s create a private Service for the app. This one is going to be a little different from the service for the database, though. Remember, the last service gave the database a single stable address. In this case, when we query DNS we don’t want a single static IP back. Instead, we’d like a list of IP addresses to hand to Peerage. That’s how Peerage knows which containers in the cluster are running the BEAM, so it can attempt to join them together.

Create a file named k8s/el-kube-private-svc.yaml and add the following content:

apiVersion: v1
kind: Service
metadata:
  name: el-kube-private
spec:
  clusterIP: None
  ports:
    - name: epmd
      port: 4369
  selector:
    app: el-kube

Here the main difference is that we’re setting clusterIP to None. This way a single static address is not assigned to the service; instead, a DNS query for the service returns the IP addresses of all pods carrying the label app: el-kube.

With this file in place let’s create the service on the minikube cluster:

$ kubectl create -f k8s/el-kube-private-svc.yaml
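
Because the service is headless, a DNS query for it returns one record per matching pod rather than a single ClusterIP. Once the el-kube pods are running (Step 22), you can see this for yourself from a throwaway pod; the busybox image here is just my choice for illustration, not something the project depends on:

$ kubectl run dns-test --rm -it --restart=Never --image=busybox -- nslookup el-kube-private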

Now let’s create a public service so that we can reach the Phoenix app from our host. This service will expose the containers running the Elixir application on port 4000. Create a file named k8s/el-kube-public-svc.yaml and add the following content:

apiVersion: v1
kind: Service
metadata:
  name: el-kube-public
spec:
  ports:
    - name: http
      port: 4000
  selector:
    app: el-kube
  type: LoadBalancer

And now create the service in minikube:

$ kubectl create -f k8s/el-kube-public-svc.yaml
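
On minikube the LoadBalancer never gets a real external IP (it will show as pending), which is fine; the minikube service command will open it for us in Step 23. You can still confirm the service exists:

$ kubectl get svc el-kube-public   # EXTERNAL-IP will show <pending> on minikube; that's expected
$ minikube service list            # lists the URLs minikube can open for each service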

21: Push the container to the minikube cache

At this point we’re almost ready to deploy the app container. But there’s another problem: we haven’t pushed the image to a registry anywhere that minikube can pull it from. That’s OK, let’s use a shortcut instead. We can push the image straight into the minikube image cache. Execute the following command to do this:

$ minikube cache add el_kube:latest
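
To confirm the image actually landed in the cache (not an original step, just a sanity check):

$ minikube cache list   # el_kube:latest should appear here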

22: Create the el_kube Deployment

Now that the container is in place on the k8s cluster, we’re ready to create the deployment. Switch back to your editor and create a file named k8s/el-kube.yaml. Add the following content to the k8s/el-kube.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: el-kube-deployment
  labels:
    app: el-kube
spec:
  replicas: 3
  selector:
    matchLabels:
      app: el-kube
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 34%
      maxUnavailable: 34%
  template:
    metadata:
      name: el-kube
      labels:
        app: el-kube
    spec:
      containers:
        - name: el-kube
          image: el_kube:latest
          imagePullPolicy: Never
          env:
            - name: APP_HOST
              value: el-kube.com
            - name: DB_URL
              value: ecto://postgres:postgres@db/el_kube_prod
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: PORT
              value: "4000"
            - name: RELEASE_COOKIE
              value: el-kube-secret-cookie
            - name: SECRET_KEY_BASE
              value: super-secret-key-base
            - name: SERVICE_NAME
              value: el-kube.default.svc.cluster.local
          resources: {}
          securityContext:
            privileged: false
            procMount: Default
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

There’s a lot going on here, probably too much to explain in a blog post. Much of this I picked up from other configs, but you should recognize the important bits. We’re creating a Deployment named el-kube-deployment that matches pods with the label app: el-kube. We’re also configuring 3 replicas (this is our 3-node cluster) and setting our update strategy, which tells k8s how many containers to create/terminate at a time so you get a smooth rollout. The pod template uses our app label, and note that we set imagePullPolicy: Never so that k8s uses the image we built locally.

With this file in place let’s fire up the app. Run the following command from your terminal:

$ kubectl create -f k8s/el-kube.yaml
deployment.apps/el-kube-deployment created
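
Before poking at individual pods, you can also watch the rollout itself; this command blocks until all three replicas are available or the deployment gives up:

$ kubectl rollout status deployment/el-kube-deployment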

Let’s see if it worked!

23: Check your work

First things first, make sure your containers (pods) are running:

$ kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
db-84c67d88cf-d2xss                  1/1     Running   0          9m56s
el-kube-deployment-ffc6db99c-9p7n5   1/1     Running   0          12s
el-kube-deployment-ffc6db99c-dt4f4   1/1     Running   0          12s
el-kube-deployment-ffc6db99c-qqd22   1/1     Running   0          12s

You should have four pods: one db and three el-kube. Issue this command a couple of times to make sure they aren’t crashing. If they stay up for 30 seconds or so, you should be good.

Next, let’s see if we can reach the Phoenix web application. Minikube has a built-in command that exposes LoadBalancer services; it should open the browser on your machine and show you the running application. To view the Phoenix app:

$ minikube service el-kube-public

Now let’s connect to one and see if the Peerage configuration worked. Issue the following command to connect to a running container. Make sure to use the pod names your system created; they will be different from mine:

$ kubectl exec -it el-kube-deployment-ffc6db99c-9p7n5 sh

The sh at the end of the command overrides the container’s default CMD, so you should be dropped into a shell inside the running container at its WORKDIR.

Now, let’s see if we can connect to the beam as we did before in Part 1.

/usr/local/el_kube # bin/el_kube remote
Erlang/OTP 22 [erts-10.4.4] [source] [64-bit] [smp:2:2] [ds:2:2:10] [async-threads:1] [hipe]
Interactive Elixir (1.9.0) - press Ctrl+C to exit (type h() ENTER for help)
iex(el_kube@172.17.0.9)1>

If you see this, that’s a good sign: you’re now remotely connected to the BEAM node running in that container. Let’s list the nodes and see if everything is hooked up.

iex(el_kube@172.17.0.9)1> Node.list
[:"el_kube@172.17.0.7", :"el_kube@172.17.0.8"]

If you get two nodes back, stop and breathe in that sweet, sweet success. You’ve done it! Don’t be surprised that there are only two nodes in the list. Node.list/0 returns remote nodes, not the node you’re currently connected to, so everything is working.

Next, let’s make sure the database is connected:

iex(el_kube@172.17.0.9)2> ElKube.Repo.query("select 1 as testing")
{:ok,
 %Postgrex.Result{
   columns: ["test"],
   command: :select,
   connection_id: 95,
   messages: [],
   num_rows: 1,
   rows: [[1]]
 }}

If you get back the :ok tuple, then you’re golden. Everything is connected and beers are to be had all around. 🍻

But wait! There’s more! Wouldn’t it be incredible if we could just change the el-kube-deployment config and automatically add new nodes to the erlang cluster? Well, we can!

24: Grow your cluster’s cluster

Let’s change the number of deployment replicas to 5 and make sure that the Erlang cluster connects:

$ kubectl scale deployment el-kube-deployment --replicas 5
deployment.apps/el-kube-deployment scaled
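
If you’d rather stick to the file-based approach from Step 16, you could instead edit replicas: 5 in k8s/el-kube.yaml and re-apply the file with kubectl apply -f; the imperative scale command above is just quicker for a demo. Either way, you can watch the new replicas roll out:

$ kubectl rollout status deployment/el-kube-deployment
$ kubectl get deployment el-kube-deployment   # READY should reach 5/5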

Let’s see if the new pods came up:

$ kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
db-84c67d88cf-d2xss                  1/1     Running   0          12m
el-kube-deployment-ffc6db99c-4nw5k   1/1     Running   0          6s
el-kube-deployment-ffc6db99c-9p7n5   1/1     Running   0          2m49s
el-kube-deployment-ffc6db99c-dt4f4   1/1     Running   0          2m49s
el-kube-deployment-ffc6db99c-gzwqf   1/1     Running   0          6s
el-kube-deployment-ffc6db99c-qqd22   1/1     Running   0          2m49s

They did! Did they connect??

$ kubectl exec -it el-kube-deployment-ffc6db99c-9p7n5 sh
/usr/local/el_kube # bin/el_kube remote
Erlang/OTP 22 [erts-10.4.4] [source] [64-bit] [smp:2:2] [ds:2:2:10] [async-threads:1] [hipe]
Interactive Elixir (1.9.0) - press Ctrl+C to exit (type h() ENTER for help)
iex(el_kube@172.17.0.8)1> Node.list
[:"el_kube@172.17.0.6", :"el_kube@172.17.0.7", :"el_kube@172.17.0.9",
:"el_kube@172.17.0.10"]

They did!!! 🎉 🎉 🎉

Ahh, that’s the stuff. That’s why I do this job. So I hope you’ve enjoyed this series. Here are a few resources to follow up with if you’d like to dive a little deeper.

Resources

The In Action series by Manning was incredibly helpful in getting me up to speed on all of the technologies I’ve covered in this series. I highly recommend them:

If you’re stuck on versions of Elixir before 1.9, don’t worry, there’s hope. I actually originally implemented this in 1.8.1 with Distillery right before 1.9 dropped. It’s a little harder to work with but this is still definitely doable using the concepts applied here. Do check out Distillery if that’s the way you need to go: https://github.com/bitwalker/distillery

I also found these articles helpful in my quest; I stand upon the shoulders of giants:

And that’s about it. I hope you’ve enjoyed reading this series and that it’s helpful in your day-to-day. It took me quite a while to piece all of this together, so hopefully you don’t have to. Enjoy!

-g
