Quick Note on Configuring DigitalOcean Floating IPs and Load Balancers with Terraform for use with Kubernetes LoadBalancer objects

One thing that makes Kubernetes so simple to use on platforms with rolled-in support for platform-specific resources (external load balancers and elastic IPs on AWS and GCP, among others) is that those resources can be provisioned and deprovisioned as part of your Kubernetes YAML config. That isn't the case when you run your own hardware, or when your provider doesn't have that support yet.

My platform of choice is DigitalOcean. Through excellent control planes like Stackpoint.io (which preconfigures this functionality for you), free tooling like Kubicorn (my preferred method), or any number of other platform-agnostic control plane suites (commercial or otherwise), it's trivially simple to spin up a cluster. That said, because of this lack of native support for the rest of the platform via your object definitions, some things need to be done manually.

Let’s assume you know how you’d like to install Kubernetes, and have just provisioned the droplets with Terraform. In my case, everything is bootstrapped from a series of scripts that configure each node and connect it to my control plane; your process will likely differ, but for the sake of example:

resource "digitalocean_droplet" "k8s-node" {
  name     = "${format("k8s-node-%02d", count.index)}"
  image    = "ubuntu-14-04-x64"
  size     = "${var.size}"
  count    = "${var.count}"
  ssh_keys = ["${var.ssh_key_fingerprint}"]

  connection {
    user        = "root"
    private_key = "${file("${var.priv_ssh_key_path}")}"
  }

  provisioner "file" {
    source      = "files/k8s.install.sh"
    destination = "/root/k8s-install.sh"
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /root/k8s-install.sh",
      "sh /root/k8s-install.sh",
    ]
  }
}
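The snippet above references a few input variables. A minimal variables file might look something like the following — the names match the references above, but the defaults are illustrative assumptions (note also that newer versions of Terraform reserve `count` as a variable name, so this shape assumes the 0.11-era syntax used throughout this post):

```hcl
# variables.tf -- defaults here are illustrative; adjust to taste

variable "size" {
  default = "2gb"        # DigitalOcean droplet size slug
}

variable "count" {
  default = 3            # number of Kubernetes nodes to provision
}

variable "region" {
  default = "nyc3"
}

# Fingerprint of an SSH key already uploaded to your DO account
variable "ssh_key_fingerprint" {}

variable "priv_ssh_key_path" {
  default = "~/.ssh/id_rsa"
}
```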

So, after Kubernetes is running (however you configured it), let’s take this YAML for example:

---
apiVersion: v1
kind: ReplicationController
metadata:
  name: app1
spec:
  replicas: 3
  selector:
    app: app1
  template:
    metadata:
      name: app1
      labels:
        app: app1
    spec:
      containers:
        - name: app1
          image: app1
          ports:
            - containerPort: 4567
---
kind: Service
apiVersion: v1
metadata:
  name: app1
spec:
  selector:
    app: app1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 4567
      nodePort: 30061
  type: LoadBalancer

This will expose the app on port 30061 of each node (for example, http://${master_IP}:30061). However, because you may be making use of DigitalOcean Cloud Firewalls, or just provision your droplets with a ruleset already in place, you may want to leverage something like the Floating IP feature:
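As an aside, if you do use DigitalOcean Cloud Firewalls, the ruleset itself can live in the same Terraform config. A minimal sketch (the resource name and source range are assumptions — tighten `source_addresses` to your own networks):

```hcl
resource "digitalocean_firewall" "k8s-nodeport" {
  name        = "k8s-nodeport"
  droplet_ids = ["${digitalocean_droplet.k8s-node.*.id}"]

  # Allow the NodePort only from trusted networks
  inbound_rule {
    protocol         = "tcp"
    port_range       = "30061"
    source_addresses = ["203.0.113.0/24"]  # example range; replace with your own
  }
}
```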

Your Terraform script could be appended to do something like:

resource "digitalocean_floating_ip" "foobar" {
  droplet_id = "${digitalocean_droplet.k8s-node.0.id}"
  region     = "${var.region}"
}

with that address pointing to your current Kubernetes master. This means that if you were to lock down access to the master IP to certain types of traffic and source addresses, you could access the service at http://${floating_IP}:30061. That isn't super pretty, so another option is a Load Balancer, which is helpful if you have a multi-master setup, for example, and also adds functionality like health checks and SSL termination at the access point:

resource "digitalocean_loadbalancer" "public" {
  name   = "k8s-lb.${var.region}"
  region = "${var.region}"

  forwarding_rule {
    entry_port      = 80
    entry_protocol  = "http"
    target_port     = 30061
    target_protocol = "http"
  }

  healthcheck {
    port     = 30061
    protocol = "http"
  }

  droplet_ids = ["${digitalocean_droplet.k8s-node.*.id}"]
}
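For the SSL part, the DigitalOcean provider lets you upload a certificate and attach it to an HTTPS forwarding rule. A sketch under assumed file paths and resource names (the certificate material here is illustrative — point the `file()` calls at your own cert):

```hcl
# Illustrative: cert file paths and resource names are assumptions
resource "digitalocean_certificate" "lb" {
  name              = "k8s-lb-cert"
  private_key       = "${file("certs/privkey.pem")}"
  leaf_certificate  = "${file("certs/cert.pem")}"
  certificate_chain = "${file("certs/chain.pem")}"
}

# and, added inside the digitalocean_loadbalancer resource above:
forwarding_rule {
  entry_port      = 443
  entry_protocol  = "https"
  target_port     = 30061
  target_protocol = "http"
  certificate_id  = "${digitalocean_certificate.lb.id}"
}
```

With that in place, the load balancer terminates TLS and forwards plain HTTP to the NodePort on each droplet.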

There are many ways to abstract this approach into something production-ready. Even as-is, though, it makes managing infrastructure for your cluster a little simpler: quicker to deploy, and easier to replace when a node fails, or even when you rebuild the entire cluster and redeploy services.
