Locking Down Kubernetes Workers: Hardening Kubernetes Security

Jussi Nummelin
Published in Kontena Blog
5 min read · Apr 26, 2018


Kubernetes has seen an astonishing pace of adoption over the last year or so. It has become the de facto standard platform for running containers, despite the numerous challenges of setting it up and managing it effectively.
One of the trickiest parts of setting up a proper Kubernetes cluster is making it secure. There are numerous communication paths that must be secured, most of them with certificates, and many different components running in different roles across multiple nodes. From a security standpoint, it's really a matrix from hell.

In this article we focus on locking down the worker nodes in the cluster. In practice this means locking down kubelet and the various “sidekick” services it uses on the nodes. We also take a look at what should be taken into account on the nodes themselves.

NIST (National Institute of Standards and Technology) has also developed a guide for securing your container-based application environment. It has some chapters specifically targeted at securing the orchestration layer, which we'll also reflect on: https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp.800-190.pdf.

Why not secure by default?

One thing to note from a security standpoint is that Kubernetes is more like a framework, intended to be used as a building block for higher-level solutions.

What that means in practice is that plain Kubernetes does not lock everything down properly by default; instead, it expects the operators setting it up to configure it securely.

Kubelet is the component that actually spins up new application containers on the platform. Since kubelet interacts with the configured container runtime, quite often Docker, it is essential that there is no way for an untrusted party to interact with kubelet. If such interaction were possible, that party would essentially have root capability on the system, as they would be able to spin up arbitrary containers running pretty much any code they wish.

Setting up kubelet

When setting up kubelet on the nodes, special thought and care should be applied. By default, kubelet leaves many things open on the node. The most critical is the kubelet API, through which basically anyone can run arbitrary commands in your running pods. The folks at Handy have written a great summary on their Medium blog of how that can be exploited.
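To illustrate the risk: an attacker who can reach an unauthenticated kubelet can execute commands in any pod on that node with a single request. This is a sketch only; the node address, namespace, and pod/container names are hypothetical, and the request only succeeds when no client-certificate authentication has been configured.

```shell
# Sketch: run `id` inside a container via an exposed kubelet.
# worker-node-1, my-pod and my-container are hypothetical names.
curl -sk -X POST \
  "https://worker-node-1:10250/run/default/my-pod/my-container" \
  -d "cmd=id"
```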
If you are using kubeadm to spin up your Kubernetes cluster, you might be a bit safer. Kubeadm sets a more sensible default configuration for kubelet: it configures the --client-ca-file option, which makes access to https://*:10250 on the nodes require a valid client certificate signed by the Kubernetes CA. The bad part is that it still leaves the plain HTTP read-only API (port 10255 by default) wide open on kubelet. While that API does not allow any modifications or command execution, it still discloses very sensitive information about the pods, such as environment variables.

The best advice is to completely disable the kubelet read-only port by setting --read-only-port=0 in the kubelet startup flags, as we do with Kontena Pharos. As this might have side effects for any 3rd party add-ons you are using, you will probably need to figure out how to configure those add-ons to talk to the authenticated HTTPS API with valid certificates in place.
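The same hardening can also be expressed in a KubeletConfiguration file instead of command-line flags. A minimal sketch (the file path and CA location are assumptions; adjust them to your distribution):

```yaml
# e.g. /var/lib/kubelet/config.yaml (path varies by distribution)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Disable the unauthenticated read-only HTTP API entirely.
readOnlyPort: 0
authentication:
  # Reject anonymous requests to the HTTPS API.
  anonymous:
    enabled: false
  x509:
    # Require client certificates signed by the cluster CA.
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  # Delegate authorization decisions to the apiserver instead of allowing all.
  mode: Webhook
```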

Securing 3rd party cluster components

Quite often we need to run 3rd party components that take part in the overall container orchestration. Such components could be, for example, overlay network components deployed as daemonsets to the cluster, or helpers that scrape metrics from the system. These components need to talk to the apiserver and also interact with other parts of the Kubernetes system. Quite often they expose APIs of their own, which might reveal too much information. Make sure you properly configure each “add-on” to lock down the APIs it exposes, and/or firewall them so that only trusted parties can actually access those APIs. For example, with heapster you'd need to add the --source=kubernetes.summary_api:https://kubernetes.default.svc?kubeletHttps=true&kubeletPort=10250&useServiceAccount=true option to make it talk to the kubelet using a secured and authenticated endpoint.

Another good practice is to enable RBAC and use service accounts with proper rules for each of the add-ons. This makes possible exploits easier to detect and remediate. It also allows easy revocation of credentials if any of the accounts is compromised.
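As a sketch, a metrics-scraping add-on could be given its own service account bound to a narrowly scoped cluster role. The names and resource list below are hypothetical; tailor them to what the add-on actually needs:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-scraper        # hypothetical add-on account
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metrics-scraper
rules:
  # Grant only read access to the resources the add-on needs.
  - apiGroups: [""]
    resources: ["nodes", "nodes/stats", "pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-scraper
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metrics-scraper
subjects:
  - kind: ServiceAccount
    name: metrics-scraper
    namespace: kube-system
```

With this in place, a compromise of the add-on's token grants only read access to the listed resources, and deleting the one service account revokes the credentials.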

Securing the worker nodes

As stated in the NIST SP 800-190 security guidance, we should use minimal OSes to run our containers on. Usually the only things we run on the hosts are indeed containers, so essentially all we need on the hosts is a container engine plus the orchestrator components, in this case Kubernetes. By minimizing the OS we also minimize the attack surface of the host.

For communication between the various Kubernetes components, consider firewalling access so that only trusted sources can connect. For example, when using AWS security groups, one should lock down access to the various Kubernetes component ports to within the cluster's own security group.
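As a sketch using the AWS CLI, the kubelet port can be restricted so that only instances in the same security group can reach it. The group ID below is a placeholder:

```shell
# Allow kubelet API (10250) only from instances in the same security group.
# sg-0123456789abcdef0 is a placeholder; substitute your cluster's group ID.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 10250 \
  --source-group sg-0123456789abcdef0
```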

Great care has to be taken when using 3rd party add-ons: you need to know what they expose, how they communicate with the various Kubernetes services, and how to secure that communication.

Not only is the usage of Kubernetes as THE solution growing; the whole ecosystem around Kubernetes is growing rapidly at the same time. Tooling has been created, and is still being developed, to ensure a Kubernetes setup conforms to best practices. Sonobuoy is a diagnostics tool that runs a set of conformance tests on your Kubernetes cluster to understand its state. It also does some security validation, but it mostly focuses on the functional aspects of the cluster. Kube-bench by Aqua Security, on the other hand, focuses purely on the security aspects of the Kubernetes cluster. It runs a set of checks to validate that your cluster is deployed securely.

We’d strongly advise you to use these tools as part of the validation process of any Kubernetes cluster setup.
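For example, kube-bench can be run directly on a worker node as a container; it inspects the host's Kubernetes configuration files and checks them against the CIS Kubernetes Benchmark. The invocation below follows kube-bench's documented Docker usage, but verify the image tag and mounts against the current docs:

```shell
# Run the CIS benchmark node checks against this host's kubelet configuration.
# /etc and /var are mounted read-only so kube-bench can inspect config files.
docker run --rm --pid=host \
  -v /etc:/etc:ro \
  -v /var:/var:ro \
  aquasec/kube-bench:latest run --targets node
```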

About Kontena Inc.

Kontena Inc. is specialized in creating the most developer friendly solution for running containers. Kontena’s products are built on open source technology developed and maintained by Kontena. Kontena was founded in 2015 and has offices in Helsinki, Finland and New York, USA. More information: www.kontena.io.

Image Credits: Photo by Dhruv Deshmukh on Unsplash.

Originally published at blog.kontena.io.

