Getting Started with Helm/Tiller in Kubernetes — Part One

Tony G
3 min read · Oct 17, 2017


Kubernetes, as most of you know, is the hottest thing in the DevOps/SRE/WebscaleUnicorn/WhateverTheFuck space these days.

After a few weeks of fiddling around with minikube and kube-aws/kops, I’ve gotten a solid grasp on the benefits of using deployments to control replica sets, been through the pain of losing a persistent volume claim on a Kafka StatefulSet through horrible cascading effects, and banged my head against a wall trying to add tags to ELBs through annotations. All of that led to a lightbulb moment: this is definitely the tool of the future for building software. No doubt about it, Kubernetes is the best thing we operational ninjas have had since the launch of Ansible.

But something seemed missing. Sure, we can version our application with tags in Docker registries, but what about pods that consist of multiple services? Do I really need to keep the version state of my application tied up in the same YAML I use to deploy? What happens if I update the version of Redis that I need and irresponsible me doesn’t commit these changes? Nobody ad-hoc changes deployment files without updating source control, right?
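
To make the pain concrete, here’s a minimal sketch of the kind of manifest I’m talking about (the app name and registry are hypothetical): the application version and the Redis version are both just image tags buried in the pod spec, and nothing ties them together into a single versioned release.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: my-app                                    # hypothetical app name
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: web
        image: registry.example.com/my-app:1.4.2  # app version pinned in the manifest
      - name: redis
        image: redis:3.2.10                       # bump this ad hoc and forget to commit...

Edit one of those tags by hand on the cluster and your git history no longer describes what’s actually running.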

Ok, seriously don’t do that.

I wish there was some sort of package manager for k8s that alleviates these issues for me. Wait a minute... there is!

Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste madness.

This sounds like exactly what we need.

We have a few prerequisites before we continue: Homebrew, Minikube, VirtualBox, and Helm need to be installed on your workstation. Assuming you’re using a Mac, this is straightforward.

Prerequisites

Installing Homebrew

$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Installing Minikube and Helm

Now that we have Homebrew installed, let’s get Minikube, VirtualBox, and Helm situated.

$ brew cask install minikube
$ brew cask install virtualbox
$ brew install kubernetes-helm

Tada! Aren’t package managers wonderful?
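
One more thing before the fun stuff: if you don’t already have a local cluster running, spin one up now (this assumes the default VirtualBox driver we just installed; the next section expects a running minikube).

$ minikube start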

Now let’s get started with the fun stuff.

Wiring up Helm and Tiller

So Helm is our package manager for Kubernetes and our client-side tool; the helm CLI is how we run all of our commands. The other part of this puzzle is Tiller.

Tiller is the service that actually communicates with the Kubernetes API to manage our Helm packages.

First, let’s make sure kubectl is pointed at your local minikube instance.

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: ~/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: ~/.minikube/client.crt
    client-key: ~/.minikube/client.key
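
If you just want the active context without the full config dump, there’s a shortcut:

$ kubectl config current-context
minikube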

Now that we’ve confirmed kubectl is pointed at minikube, let’s get Helm started.

$ helm init
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!

Yeah, it was that easy. We just set up the Helm backend, Tiller, as a deployment on your minikube instance. Let’s verify things.

$ kubectl describe deploy tiller-deploy --namespace=kube-system
Name:                   tiller-deploy
Namespace:              kube-system
CreationTimestamp:      Tue, 17 Oct 2017 10:08:04 -0400
Labels:                 app=helm
                        name=tiller
Annotations:            deployment.kubernetes.io/revision=1
Selector:               app=helm,name=tiller
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:       app=helm
                name=tiller
  Containers:
   tiller:
    Image:      gcr.io/kubernetes-helm/tiller:v2.6.2
    Port:       44134/TCP
    Liveness:   http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:  kube-system
    Mounts:     <none>
  Volumes:      <none>
Conditions:
  Type          Status  Reason
  ----          ------  ------
  Available     True    MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet:  tiller-deploy-1936853538 (1/1 replicas created)
Events:
  FirstSeen  LastSeen  Count  From                   SubObjectPath  Type    Reason             Message
  ---------  --------  -----  ----                   -------------  ------  ------             -------
  25m        25m       1      deployment-controller                 Normal  ScalingReplicaSet  Scaled up replica set tiller-deploy-1936853538 to 1
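
Another quick sanity check is helm version, which queries both the local client and the Tiller pod we just deployed. Your exact version strings will differ depending on when you installed, so treat this output as illustrative:

$ helm version
Client: &version.Version{SemVer:"v2.6.2", ...}
Server: &version.Version{SemVer:"v2.6.2", ...}

If the Server line shows up, the client can reach Tiller and you’re in business.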

Great, so what do we have?

  • minikube installed and configured
  • helm cli configured to use minikube
  • tiller setup and deployed to minikube to interact with the k8s api

In the next post we’ll start the fun by dissecting a Helm chart to see what’s going on behind the scenes.
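
If you can’t wait, one way to peek ahead with the Helm 2 setup above is to pull a chart down from the stable repo and poke around in it yourself (stable/redis is just an example; any chart will do, and the directory listing is the standard chart layout):

$ helm repo update
$ helm fetch stable/redis --untar
$ ls redis
Chart.yaml  README.md  templates  values.yaml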
