Helm's Tiller in a multi-tenant Kubernetes cluster

K8Spin · Apr 22

Helm is a (good-enough) tool for deploying applications in Kubernetes, but one of its main flaws is the server-side component called Tiller, for which cluster-admin is the role recommended in most places.
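For reference, this is the kind of setup most guides propose: a Tiller service account bound to cluster-admin for the whole cluster. Fine for a personal playground, disastrous in a shared cluster (the names below follow the usual convention, not anything specific to your setup):

$ kubectl --namespace kube-system create serviceaccount tiller
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller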

What if you want to provide your friends with a namespace inside your Kubernetes cluster? What if you want to give access to a stranger? cluster-admin is not the role you want to give them…

In this article you will discover the problems that arise when you give Tiller fewer permissions, and an alternative solution.

Scenario

We assume that all users are malicious hackers:

A normal user inside a multi-tenant Kubernetes cluster

A cluster is segmented by namespaces, each of which should be completely isolated from the others.

A user who has access to a namespace should not be able to identify or access other namespaces.

The Kubernetes community calls this hard multi-tenancy.
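In practice this means that, for a hypothetical tenant service account bound only inside its own namespace, any cross-namespace request should simply be denied. The namespace and account names below are made up for illustration:

$ kubectl auth can-i list pods --namespace team-a --as system:serviceaccount:team-a:developer
yes
$ kubectl auth can-i list pods --namespace team-b --as system:serviceaccount:team-a:developer
no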

Official Helm documentation


In the official Helm documentation, available on GitHub, you can read about a few configuration examples.

The one that fits in this use case is:

Deploy Tiller in a namespace, restricted to deploying resources only in that namespace

Easy, isn’t it?

Let’s review the role proposed by the official documentation for “isolated” namespaces:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: tiller-world
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]

At first glance there is nothing unusual about this role. The most striking thing is that it allows doing "anything" on "any resource" within the batch, extensions and apps API groups.

Wait a second…

A user can also do anything in the API group ""? Let's check which resources are available in that API group (the core group):

$ kubectl api-resources --api-group=''
NAME                      SHORTNAMES   APIGROUP   NAMESPACED   KIND
bindings                                          true         Binding
componentstatuses         cs                      false        ComponentStatus
configmaps                cm                      true         ConfigMap
endpoints                 ep                      true         Endpoints
events                    ev                      true         Event
limitranges               limits                  true         LimitRange
namespaces                ns                      false        Namespace
nodes                     no                      false        Node
persistentvolumeclaims    pvc                     true         PersistentVolumeClaim
persistentvolumes         pv                      false        PersistentVolume
pods                      po                      true         Pod
podtemplates                                      true         PodTemplate
replicationcontrollers    rc                      true         ReplicationController
resourcequotas            quota                   true         ResourceQuota
secrets                                           true         Secret
serviceaccounts           sa                      true         ServiceAccount
services                  svc                     true         Service

Hmmm. Interesting. The role described in the official Helm documentation for the very use case we are interested in allows a user to create, modify and destroy namespaces. Any of them.

I am sure that we don’t want that.
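If you want to verify this yourself, the API server can be asked directly, impersonating the service account from the documentation example (the tiller-world names are assumptions taken from that example):

$ kubectl auth can-i delete namespaces --as system:serviceaccount:tiller-world:tiller --namespace tiller-world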

Let’s search on the internet

A few Google searches gave me the following links:

In short, it's time for some security engineering.

Security first

We offer Kubernetes namespaces that are totally isolated from each other. This is achieved using (not only) RBAC.
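RBAC is only one of the layers. A default-deny NetworkPolicy per namespace is another typical ingredient; a minimal sketch (not necessarily our exact policy) looks like this:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress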

Let's review a proposed Tiller role:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  - secrets
  - serviceaccounts
  - services
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - replicasets
  - statefulsets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - batch
  resources:
  - cronjobs
  - jobs
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - extensions
  resources:
  - deployments
  - ingresses
  - replicasets
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch

As you can see, this role grants permissions on a few resources at namespace scope and nothing at cluster scope (which would allow seeing or modifying resources outside the user's namespace).

The next step is to create a service account and link this role with a rolebinding:

ServiceAccount:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller

RoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tiller
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: helm
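Apply the three manifests in the target namespace (the file name rbac.yaml is just how we saved them here, use whatever you like):

$ kubectl apply -f rbac.yaml --namespace helm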

Check the created resources:

$ kubectl get sa,role,rolebinding
NAME                     SECRETS   AGE
serviceaccount/default   1         6m57s
serviceaccount/tiller    1         2m24s

NAME                                    AGE
role.rbac.authorization.k8s.io/tiller   2m24s

NAME                                           AGE
rolebinding.rbac.authorization.k8s.io/tiller   82s

Once these three resources are created, deploy Helm's Tiller into a namespace (helm in this example).

$ helm init --service-account tiller --tiller-namespace helm
$HELM_HOME has been configured at /home/angel/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

OK, we have specified the namespace and the service account we created with the permissions described above. But does Tiller work?

$ kubectl get pods
NAME                             READY   STATUS                       RESTARTS   AGE
tiller-deploy-6ddc9d48d8-vz9fg   0/1     CreateContainerConfigError   0          3m14s

$ kubectl get event --namespace helm --field-selector involvedObject.name=tiller-deploy-6ddc9d48d8-vz9fg
LAST SEEN   TYPE      REASON      KIND   MESSAGE
4m1s        Normal    Scheduled   Pod    Successfully assigned helm/tiller-deploy-6ddc9d48d8-vz9fg to gke-k8spin-beta-k8spin-nodes-a-2c799cd8-v6vv
98s         Normal    Pulled      Pod    Container image "gcr.io/kubernetes-helm/tiller:v2.12.0" already present on machine
112s        Warning   Failed      Pod    Error: container has runAsNonRoot and image has non-numeric user (nobody), cannot verify user is non-root
tiller runs as root

The error is clear: our multi-tenant cluster does not allow running containers as root. One of the reasons for that policy may be the CVE recently found in runc.
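For context, this kind of error usually comes from a PodSecurityPolicy (or an equivalent admission control) that forces containers to run as non-root. A minimal sketch of such a policy, not necessarily the exact one enforced in our cluster:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-non-root
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'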

The solution is to run Tiller as a non-root user:

$ kubectl delete deploy,svc tiller-deploy
service "tiller-deploy" deleted
deployment.extensions "tiller-deploy" deleted
$ helm init --service-account tiller --tiller-namespace helm --override 'spec.template.spec.securityContext.runAsUser'='65534'
$HELM_HOME has been configured at /home/angel/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

Let’s check tiller status:

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
tiller-deploy-7b847cc544-jl8xg   1/1     Running   0          55s
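We can also double-check that the override landed in the Deployment spec (a standard field path, nothing Helm-specific):

$ kubectl get deploy tiller-deploy -o jsonpath='{.spec.template.spec.securityContext.runAsUser}'
65534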

But does it really work? Let's try a very simple chart, developed by Bitnami, that deploys an nginx server.

First, we configure the Bitnami repository in Helm:

$ helm repo add bitnami https://charts.bitnami.com
"bitnami" has been added to your repositories

Then we can deploy the Bitnami nginx chart into our namespace.

$ helm install --name hello --set service.type=ClusterIP bitnami/nginx --tiller-namespace helm
Error: release hello failed: namespaces "helm" is forbidden: User "system:serviceaccount:helm:tiller" cannot get resource "namespaces" in API group "" in the namespace "helm"

For some reason, Tiller still calls the API to get details of the namespace. Since it is only failing on the 'get' verb (which only reads details of the user's own namespace, not of all namespaces), we can safely add that permission.

Let’s modify the tiller role:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tiller
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  - secrets
  - serviceaccounts
  - services
-----OMITTED----
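With the updated role applied, a quick impersonation check shows that Tiller can now read its own namespace, and nothing more than read it:

$ kubectl auth can-i get namespaces --namespace helm --as system:serviceaccount:helm:tiller
yes
$ kubectl auth can-i delete namespaces --namespace helm --as system:serviceaccount:helm:tiller
no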

Let’s try it again.

$ helm delete --purge hello --tiller-namespace helm
release "hello" deleted

$ helm install --name hello --set service.type=ClusterIP bitnami/nginx --tiller-namespace helm
NAME:   hello
LAST DEPLOYED: Wed Apr 17 20:01:49 2019
NAMESPACE: helm
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME         TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
hello-nginx  ClusterIP  10.67.241.80   <none>        80/TCP    0s

==> v1beta1/Deployment
NAME         DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
hello-nginx  1        0        0           0          0s

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
hello-nginx-646dd5f4b8-n45kl     1/1     Running   0          24s
tiller-deploy-7b847cc544-jl8xg   1/1     Running   0          16m

It works!

If we try to delete a namespace (kube-system) using the service account used by Tiller, the Kubernetes API responds that we do not have permission to do so.

$ kubectl get ns helm --as system:serviceaccount:helm:tiller
NAME   STATUS   AGE
helm   Active   43m
$ kubectl delete ns kube-system --as system:serviceaccount:helm:tiller
Error from server (Forbidden): namespaces "kube-system" is forbidden: User "system:serviceaccount:helm:tiller" cannot delete resource "namespaces" in API group "" in the namespace "kube-system"
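If you want the full picture of what this service account is allowed to do inside its namespace, recent versions of kubectl can list it (output omitted for brevity):

$ kubectl auth can-i --list --namespace helm --as system:serviceaccount:helm:tiller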

Conclusions

Finally…

In an environment where security is critical (is there any environment where it is not?), special care must be taken to understand the technology in order to avoid possible failures.

Reduce the attack surface as much as you can.

In particular, the case we are discussing, a multi-tenant Kubernetes cluster, requires additional care and very strict security policies. A bad configuration of Helm's Tiller could allow the deletion of other users' namespaces (tenants).

Written by K8Spin
Kubernetes Namespace as a Service. Check out https://k8spin.cloud