Connect Livebook to Elixir in Kubernetes

Danny Hawkins
Quiqup Engineering
Apr 19, 2022

I’ve been playing around with Livebook recently. It’s an amazing tool, even more so considering how new it is.

I had a specific use case: I needed to connect Livebook to a service running inside the private Kubernetes cluster where my QA environment runs, so that Livebook could be used to set up and tear down common QA scenarios and make them a lot easier to document and execute.

Livebook has several ways to configure the connected runtime; we’re going to be using the Attached node mode.

We’re going to focus on getting a remote service ready to connect to, as well as writing a script that lets us boot Livebook already connected to that remote service.

Preparing your service

We will assume that your Elixir service is not set up for remote connections / distributed Erlang, but if it is you can probably skip this section.

Update Deployment env vars

First you need to edit the Kubernetes deployment spec to add some additional environment variables that will be used to configure the node name and address. We need to add the following values (a sketch of the env section follows the list):

RELEASE_COOKIE: a secret; this is the value that will be required to connect to the node later

NAMESPACE: the pod’s namespace, taken from the Kubernetes downward API via a valueFrom entry

POD_IP: used in our env script to work out what to set the node name to, so we can reach the service node
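
Here is a minimal sketch of what that env section might look like (the secret and key names are assumptions):

env:
  - name: RELEASE_COOKIE
    valueFrom:
      secretKeyRef:
        name: test-svc-secrets      # assumed secret holding the cookie
        key: release-cookie
  - name: NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP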

Update for releases and change the env script

Update mix.exs for releases (this is not 100% essential, but it keeps things clean and, in my case, avoids building Windows executables):
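
Something along these lines, assuming a hypothetical :test_svc app:

# mix.exs
def project do
  [
    app: :test_svc,
    version: "0.1.0",
    elixir: "~> 1.13",
    start_permanent: Mix.env() == :prod,
    deps: deps(),
    releases: [
      test_svc: [
        # only generate Unix executables, skipping the Windows .bat scripts
        include_executables_for: [:unix]
      ]
    ]
  ]
end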

Then run the release init task to create the necessary files:

mix release.init

Edit the created env.sh.eex file so that it uses the previously configured environment variables to set the node name (a sketch of the relevant lines follows). Re-deploy the service and you should be good to go.
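
Assuming the Kubernetes pod DNS scheme <app>@<dashed-pod-ip>.<namespace>.pod.cluster.local, the relevant lines might look like this:

# rel/env.sh.eex
# Build a node name that is addressable via Kubernetes pod DNS
export RELEASE_DISTRIBUTION=name
export RELEASE_NODE="test_svc@$(echo "$POD_IP" | tr '.' '-').${NAMESPACE}.pod.cluster.local"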

In my case, the node name shown in the terminal is now something I can address via Kubernetes DNS: test_svc@10-8-7-66.dev.pod.cluster.local. I use this strategy because it’s the one recommended for this kind of setup by libcluster:

The benefit of using :dns over :ip is that you can establish a remote shell (as well as run observer) by using kubectl port-forward in combination with some entries in /etc/hosts.

Using :hostname is useful when deploying your app to K8S as a stateful set. In this case you can set your erlang name as the fully qualified domain name of the pod which would be something similar to my-app-0.my-service-name.my-namespace.svc.cluster.local

Livebook!!

OK, now for the fun part. I decided to wrap everything in a simple script to be able to connect. It’s almost too easy!!
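
A rough sketch of that script follows; the label selector (app=test-svc), namespace (dev), pod name, image tag and the Livebook distribution settings are assumptions rather than the exact values from the original post:

#!/usr/bin/env bash
set -euo pipefail

NAMESPACE=dev

# Look up a running pod of the target service
POD_NAME=$(kubectl -n "$NAMESPACE" get pods -l app=test-svc \
  -o jsonpath='{.items[0].metadata.name}')

# Rebuild the node name the same way env.sh.eex does, and read the cookie from the pod env
POD_IP=$(kubectl -n "$NAMESPACE" get pod "$POD_NAME" -o jsonpath='{.status.podIP}')
RELEASE_NODE="test_svc@$(echo "$POD_IP" | tr '.' '-').${NAMESPACE}.pod.cluster.local"
RELEASE_COOKIE=$(kubectl -n "$NAMESPACE" exec "$POD_NAME" -- printenv RELEASE_COOKIE)

# Run a Livebook pod in the cluster, pre-configured to attach to the running node.
# Long names are used so Livebook can address the fully qualified remote node.
kubectl -n "$NAMESPACE" run danny-livebook \
  --image=ghcr.io/livebook-dev/livebook:latest \
  --env LIVEBOOK_DISTRIBUTION=name \
  --env LIVEBOOK_NODE="livebook@127.0.0.1" \
  --env LIVEBOOK_COOKIE="${RELEASE_COOKIE}" \
  --env LIVEBOOK_DEFAULT_RUNTIME="attached:${RELEASE_NODE}:${RELEASE_COOKIE}"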

What we are doing here is getting the POD_NAME, RELEASE_NODE and RELEASE_COOKIE using some lookups and execs with kubectl, then running a pod in the cluster that is preconfigured to attach to the running node.

Run the script to create the pod in the cluster.

In order to access it, we need to find the token, which we can do by looking at the logs:

kubectl logs danny-livebook
# [Livebook] Application running at http://localhost:8080?token=<tokenishere>

Then we need to port-forward using kubectl:

kubectl port-forward danny-livebook 8080:8080

And there we have it: a connected Livebook inside Kubernetes, accessible at localhost:8080.

Clean Up

When you are finished with your Livebook, you can just close the port-forward and then delete the pod:

kubectl delete po/danny-livebook

Bonus: Standalone Node

What I’ve covered so far is the ability to connect to a running node, but if you want to use the same approach to run a standalone node in Kubernetes, all you need to do is adjust the Livebook script (none of the other configuration is required):
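
A sketch of the adjusted command; without the attached-runtime variables, Livebook boots its usual standalone runtime (the pod name and image tag are assumptions):

kubectl run danny-livebook \
  --image=ghcr.io/livebook-dev/livebook:latest \
  --env LIVEBOOK_PORT=8080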

Another Bonus: Cloud Storage

We can add a little bonus to our startup script: if you have an S3 bucket and the credentials, you can set up Livebook to always be connected to remote storage when it starts.

Just add this line to the startup command:

--env LIVEBOOK_FILE_SYSTEM_1="s3 ${BUCKET} ${ACCESS_KEY} ${ACCESS_KEY_SECRET}" \

Then you will have access to the bucket via the Livebook interface.

Original post: https://blog.dannyhawkins.me/posts/livebook-to-kubernetes/
