Running Octant as a Container on vSphere With Kubernetes

David Adams · The Startup · Jul 14, 2020

Disclaimer: Everything in this post is taken from my initial investigation and proof of concept work in my lab. I think it may be helpful for others but know that this is not intended to be a production-ready architecture.

I’ve seen some cool tweets about Octant for visualizing Kubernetes workloads recently and had to check it out for myself on the Kubernetes clusters in my Dell Technologies Cloud Platform (DTCP) lab, which is built on VMware Cloud Foundation (VCF) 4.0 on VxRail HCI. Thanks to Vineeth and Ben for getting my attention!

While I’m probably more at home with kubectl, there are a lot of benefits to visualizing your Kubernetes objects, especially when it comes to debugging. So I jumped in to see what Octant was all about. As I looked at the installation steps, which are all client side, my first reaction was that it seemed kind of painful to access it from anything other than localhost. After thinking about it a bit, though, it made sense: as a developer tool driven by your current kubeconfig context, you’d run it locally most of the time. Still, I wanted an easier way to make it available outside of a single machine, since I often do demos for people or need to give them easy access.

I started by trying to just run the binary on my Ubuntu machine with OCTANT_LISTENER_ADDR specified to allow connections from outside localhost:

$ OCTANT_LISTENER_ADDR=0.0.0.0:80 ./octant --kubeconfig ~/.kube/config --disable-open-browser

But I kept getting this permission error:

unable to start runner: use OCTANT_LISTENER_ADDR to set host:port: failed to create net listener: listen tcp 0.0.0.0:80: bind: permission denied

I tried running it as root but got other errors and quickly ran into the usual “my system is slightly different from what’s in the documentation or any tutorial I can find” problem. A savvier Linux user could probably sort all that out, but this is when I decided to just build a container and run it that way, because that always seems to “just work” for me.
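
For what it’s worth, the permission error is just Linux refusing to let an unprivileged process bind a port below 1024. If you’d rather keep running the bare binary, two workarounds that should get past it (I didn’t test these in my lab) are picking a high port, or granting the binary the low-port bind capability with setcap:

# Option 1: bind an unprivileged port instead of 80
$ OCTANT_LISTENER_ADDR=0.0.0.0:8080 ./octant --kubeconfig ~/.kube/config --disable-open-browser

# Option 2: allow the binary to bind low ports without running as root
$ sudo setcap 'cap_net_bind_service=+ep' ./octant
$ OCTANT_LISTENER_ADDR=0.0.0.0:80 ./octant --kubeconfig ~/.kube/config --disable-open-browser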

Build an Octant Container

The first step was to build an Octant container because I couldn’t find one available anywhere. I grabbed the latest pre-built Linux binary from octant_0.13.1_Linux-64bit.tar.gz, extracted it, and put it in the same directory where my Dockerfile would reside. The Dockerfile can be very short since I’m just copying the binary in and running it.

FROM alpine:3.12
# Copy the build context (the octant binary and, for now, the kubeconfig) into the image root
COPY . /.
WORKDIR /.
EXPOSE 80
# Listen on all interfaces so the UI is reachable from outside the container
CMD OCTANT_LISTENER_ADDR=0.0.0.0:80 ./octant --kubeconfig ./.kube/config --disable-open-browser

It’s important to note that I use the --kubeconfig flag to tell Octant where to look for the kubeconfig file. To test it out, I built the config file (copied from ~/.kube/config on my local machine) into the container image. This config file holds the connection information for all the Kubernetes clusters you have access to; in my case, the vSphere Supervisor cluster, which I connected to with the downloaded vSphere kubectl binary (more on this later):

$ kubectl vsphere login --server=100.80.26.1 --insecure-skip-tls-verify

Later in the post I’ll show you how I externalized it with a ConfigMap in Kubernetes so I don’t have to rebuild the image every time I change my kubectl configuration (e.g., connecting to a new cluster). Here’s what my directory tree looks like right now:

$ tree -a
.
├── Dockerfile
├── .kube
│   └── config
└── octant

1 directory, 3 files

Next, I built the container locally and tested it to make sure I could run it as a single container on my local machine.

$ sudo docker build -t davidadams3/octant:0.1 .
$ sudo docker run --name octant -d -p 80:80 davidadams3/octant:0.1

Now, putting in the IP or hostname of my local machine (not localhost) allows me to access Octant from anywhere that has a route to this machine! This is a viable solution on its own if you just want an easy way to access Octant from somewhere other than localhost, so feel free to stop here if that makes your life easier. Otherwise, keep reading to see how I ran my new container on Kubernetes and externalized the kubeconfig file.
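
A quick way to confirm the container is serving the UI before pointing a browser at it is a plain curl (substitute your machine’s IP or hostname for the placeholder):

$ curl -sI http://<docker-host-ip>/ | head -n 1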

Run it on Kubernetes

As previously mentioned, I’m doing this on my VCF 4.0 setup, so I’m able to leverage all the goodness it offers for building and operating your private or hybrid cloud. If you’re not familiar with VCF and VxRail you can check it out here. For an introduction to all the new Kubernetes capabilities VCF 4.0 brings, I’d recommend Cormac Hogan’s blog. It’s worth the time to get familiar with all the new capabilities, but I won’t rehash them here since Cormac does a great job of that. I’m picking up with vSphere with Kubernetes already enabled, a namespace created, a TKG cluster created, and the kubectl + vSphere plugin CLI installed.

Previously I talked about moving the kubeconfig file out of the image because I knew I could use a Kubernetes ConfigMap to mount it into the container and make things much more flexible. So the first step was to remove the file from the directory tree I built my container image from, rebuild the image, and push it up to my preferred Docker registry.
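
For completeness, that step looks something like the following; the image name and tag here just match what I reference in the deployment manifest below, so adjust for your own registry:

$ rm -rf .kube
$ sudo docker build -t davidadams3/octant:0.1 .
$ sudo docker push davidadams3/octant:0.1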

With that done, we can create the ConfigMap object. The kubeconfig file is automatically created as a hidden file in your home directory when kubectl connects to your cluster(s), and that’s how Octant gets your cluster info. So if you haven’t already, log in to your Supervisor cluster and/or your TKG cluster, depending on which one you want to visualize in Octant; you’ll be prompted for credentials to connect. For the rest of the demo I’m going to stay logged in to the Supervisor cluster since that’s where I plan to deploy Octant as a vSphere Pod.

$ kubectl vsphere login --server=100.80.26.1 --insecure-skip-tls-verify
$ kubectl vsphere login --server=100.80.26.1 --insecure-skip-tls-verify --tanzu-kubernetes-cluster-namespace=test-01 --tanzu-kubernetes-cluster-name=tkg-cl-01
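
At this point, kubectl config get-contexts (standard kubectl, nothing vSphere-specific) will list a context for each cluster you’ve logged in to, with the current one marked; that current context is what Octant loads first:

$ kubectl config get-contexts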

Now your ~/.kube/config file has the connection information required and can be the source for your ConfigMap:

$ kubectl create configmap octant-config --from-file ~/.kube/config
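
If you want to double-check that the file made it in, kubectl can show you the ConfigMap contents. The key name defaults to the file name, config, which is exactly what the volume mount below relies on:

$ kubectl get configmap octant-config -o yaml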

Now we can create the Kubernetes deployment and a service to expose it externally. The service is of type LoadBalancer, which is possible because vSphere with Kubernetes configures everything required in NSX-T during the deployment, including the pool of ingress IPs I provided. This is a really nice feature that delivers the same kind of experience you’d get from a managed Kubernetes service in the public cloud (GKE, AKS, EKS, etc.).

apiVersion: v1
kind: Service
metadata:
  name: octant-service
  labels:
    run: octant
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    run: octant

The deployment object is also quite straightforward, but note that we have to create a volume for the ConfigMap and mount it into the directory where the Octant container expects it (from the Dockerfile above, that’s /.kube/config). Each key in the ConfigMap becomes a file in the mount directory, so the config key shows up as /.kube/config.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: octant-deployment
spec:
  selector:
    matchLabels:
      run: octant
  replicas: 1
  template:
    metadata:
      labels:
        run: octant
    spec:
      containers:
      - name: octant
        image: docker.io/davidadams3/octant:0.1
        imagePullPolicy: Always
        volumeMounts:
        - name: config
          mountPath: /.kube
        ports:
        - containerPort: 80
      volumes:
      - name: config
        configMap:
          name: octant-config

I put both of these manifests, separated by a --- line, into a single octant-deployment.yaml file and applied it with kubectl:

$ kubectl apply -f octant-deployment.yaml
service/octant-service created
deployment.apps/octant-deployment created

Checking the deployment I can see the Octant pod is ready and available.

$ kubectl get deployments
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
kuard               1/1     1            1           19h
octant-deployment   1/1     1            1           33m

Now I just need to check the automatically assigned EXTERNAL-IP for the service (in this case 100.80.26.3), and it can be accessed on port 80.

$ kubectl get services
NAME                              TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
kuard-service                     LoadBalancer   10.96.0.29    100.80.26.2   80:32681/TCP     18h
octant-service                    LoadBalancer   10.96.0.203   100.80.26.3   80:31177/TCP     40s
tkg-cl-01-control-plane-service   LoadBalancer   10.96.1.96    100.80.26.4   6443:31498/TCP   140m

We now have Octant running in a vSphere Pod on the Supervisor cluster!
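
Before opening a browser, a plain curl against the external IP should come back with an HTTP success status if everything is wired up:

$ curl -sI http://100.80.26.3/ | head -n 1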

Thanks for reading this far, and I’d like to reiterate what I mentioned in the disclaimer at the beginning: I’m not convinced (yet) that this is the best way to deploy Octant, but it was a fun exercise to work through, so I thought I would share it. The obvious concern is security: you’re sharing access to your Kubernetes clusters without any sort of login screen, since Octant leverages the kubeconfig file for access to your clusters. I’ll do a little more tinkering to see if there are Octant features I’m not taking advantage of in this regard.
