Deploying on Kubernetes #3: Dependencies
This is the third in a series of blog posts detailing the journey of deploying a service on Kubernetes. Its purpose is not to serve as a tutorial (there are many out there already), but rather to discuss some of the approaches we take.
Assumptions
To read this it’s expected that you’re familiar with Docker, and have perhaps played with building Docker containers. Additionally, some experience with docker-compose is useful, though not strictly required.
Necessary Background
So far we’ve been able to create a super simple chart that installs a (currently non-functional) version of kolide/fleet.
Next, we need to add some dependencies!
Dependency Management
I know of few complex applications that are deployed in isolation and do not depend on other networked services. It could be a relational database, a key/value store or a blob storage layer, or something more bespoke such as a microservice that exposes an RPC API. At any rate, nearly all applications are dependent on other applications.
When thinking in terms of operations it’s easy to think in terms of hosts, and to plan the deployment of our required services host by host. We allocate “host-1” and “host-2”, then deploy the Ansible “mysql” or “redis” playbooks depending on each system’s requirements.
Kubernetes obviates the need to think about individual hosts; we can instead think only about the services we need. Rather than negotiating capacity planning for MySQL or Redis, we simply declare how much resource each service requires and let Kubernetes schedule it across the cluster.
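To make that concrete, here is a minimal, hypothetical excerpt of what such a declaration looks like in a pod spec. The numbers are placeholders, not recommendations for MySQL or Redis:
# Hypothetical excerpt from a Deployment's pod spec: we declare what the
# container needs, and the scheduler finds a node in the cluster that fits.
containers:
  - name: mysql
    image: mysql:5.7
    resources:
      requests:          # what the scheduler reserves for this container
        cpu: "500m"
        memory: "512Mi"
      limits:            # the ceiling the container is not allowed to exceed
        cpu: "1"
        memory: "1Gi"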
Packages within packages
As hinted at in the last post, we’ll be using helm to manage our Kubernetes deployment. Helm is to Kubernetes as aptitude (apt-get) is to Debian/Ubuntu, or yum is to RHEL/CentOS/Fedora. It’s a package manager: it allows easy distribution of software.
Importantly, it also allows introducing dependencies on other packages:
$ apt-cache depends docker
docker
  Depends: libc6
  Depends: libglib2.0-0
  Depends: libx11-6
Helm allows us to introduce dependencies in the same way. Further, Helm’s packaging of applications means there is a large number of networked services that we can simply pull in as part of our application’s composition.
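To stretch the analogy, helm has a direct equivalent of apt-cache depends: helm dependency list shows what a chart depends on and whether those dependencies are present locally. Here’s what it reports for our chart once the dependencies we declare below are in place (output illustrative):
$ helm dependency list .
NAME   VERSION  REPOSITORY                                         STATUS
mysql  0.3.6    https://kubernetes-charts.storage.googleapis.com   ok
redis  1.1.21   https://kubernetes-charts.storage.googleapis.com   ok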
Implementing our dependencies
Earlier, we created a super simple chart which installed a non-functional version of kolide/fleet. This application requires some dependencies:
- MySQL (a relational database)
- Redis (an in-memory key/value store)
Luckily, there are helm packages for this software! The canonical home of helm packages is the Kubernetes charts repo: https://github.com/kubernetes/charts
There are two publication streams:
- Stable
- Incubator
TL;DR: only use stable. For more information, consult the docs.
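If you want to see what the stable repository offers before committing to anything, helm can search the repositories it knows about (the stable repo is configured by default). The output below is trimmed and illustrative; versions move on:
$ helm search mysql
NAME          CHART VERSION  DESCRIPTION
stable/mysql  0.3.6          Fast, reliable, scalable, and easy to use open-source rel...

$ helm search redis
NAME          CHART VERSION  DESCRIPTION
stable/redis  1.1.21         Open source, advanced key-value store.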
Our Chart
Our chart is now in a state where we can add the required dependencies, and test whether they work. The key is the requirements.yaml file:
---
## Here, helm keeps track of which dependencies are required for the application. Dependencies should be listed here,
## then the dependencies updated with:
##
## helm dep update
##
## and the requirements.lock file committed.
##
# dependencies:
# - name: "apache"
# version: "1.2.3-1"
# repository: "http://storage.googleapis.com/kubernetes-charts"
The documentation above is part of the starter chart that we used to create this resource. The format is as it describes. So! Let’s add MySQL and Redis as dependencies.
$ cat <<EOF > requirements.yaml
> ---
> dependencies:
> - name: "mysql"
> # This version is the version in the file:
> #
> # https://github.com/kubernetes/charts/blob/master/stable/mysql/Chart.yaml
> #
> # Tue 27 Mar 17:13:08 CEST 2018
> version: "0.3.6"
> repository: "https://kubernetes-charts.storage.googleapis.com"
> - name: "redis"
> # See above re. version constraints
> version: "1.1.21"
> repository: "https://kubernetes-charts.storage.googleapis.com"
> EOF
This creates a file in the format described above. Once the file is created, simply run:
$ helm dependency update
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):
Get http://127.0.0.1:8879/charts/index.yaml: dial tcp 127.0.0.1:8879: getsockopt: connection refused
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 2 charts
Downloading mysql from repo https://kubernetes-charts.storage.googleapis.com
Downloading redis from repo https://kubernetes-charts.storage.googleapis.com
Deleting outdated charts
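Behind the scenes, helm dependency update has resolved each entry in requirements.yaml and downloaded the chart archives into the chart’s charts/ directory. Archives are named name-version.tgz, so yours should look much like this:
$ ls charts/
mysql-0.3.6.tgz  redis-1.1.21.tgz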
A new file appears called requirements.lock:
dependencies:
- name: mysql
repository: https://kubernetes-charts.storage.googleapis.com
version: 0.3.6
- name: redis
repository: https://kubernetes-charts.storage.googleapis.com
version: 1.1.21
digest: sha256:fa0c7bce5404153174d0fdd132227d71f950478594b2b2f6e7351a70bb01dfe7
generated: 2018-03-27T17:17:00.430874158+02:00
This is the lock file for our dependencies. It pins the exact chart versions (plus a digest of the whole dependency set), ensuring that future installs resolve exactly the same charts rather than accidentally picking up something different.
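It also means anyone else checking out the repository (or a CI job) doesn’t need to re-resolve versions at all; helm dependency build fetches exactly the charts pinned in the lock file back into charts/:
$ helm dependency build .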
Next step, installation!
$ helm upgrade --install kolide-fleet .
Release "kolide-fleet" does not exist. Installing it now.
NAME: kolide-fleet
LAST DEPLOYED: Tue Mar 27 18:25:07 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Secret
NAME                TYPE    DATA  AGE
kolide-fleet-mysql  Opaque  2     1s
kolide-fleet-redis  Opaque  1     1s

==> v1/ConfigMap
NAME                DATA  AGE
kolide-fleet-fleet  0     1s

==> v1/PersistentVolumeClaim
NAME                STATUS   VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
kolide-fleet-mysql  Bound    pvc-69d44d21-31db-11e8-81e0-080027c1d0f5  8Gi       RWO           standard      1s
kolide-fleet-redis  Pending                                                                    standard      1s

==> v1/Service
NAME                TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)   AGE
kolide-fleet-mysql  ClusterIP  10.104.173.216  <none>       3306/TCP  1s
kolide-fleet-redis  ClusterIP  10.106.51.61    <none>       6379/TCP  1s

==> v1beta1/Deployment
NAME                DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
kolide-fleet-mysql  1        1        1           0          0s
kolide-fleet-redis  1        1        1           0          0s

==> v1/Pod(related)
NAME                                 READY  STATUS             RESTARTS  AGE
kolide-fleet-mysql-6c859797b4-gf6lk  0/1    Init:0/1           0         0s
kolide-fleet-redis-6d95f98b98-qswkz  0/1    ContainerCreating  0         0s

NOTES:
fleet
## Accessing fleet
----------------------

1. Get the fleet URL to visit by running these commands in the same shell:

   NOTE: It may take a few minutes for the loadBalancer IP to be available.
   You can watch the status by running 'kubectl get svc --namespace default -w kolide-fleet-fleet'

   export SERVICE_IP=$(kubectl get svc kolide-fleet-fleet --namespace default --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
   echo http://$SERVICE_IP:/login

For more information, check the readme!
There’s a lot more going on now than there was! In the output above we can see:
- 2 pods (containers) created
- 2 services (kind of like DNS) created
- 2 persistent volume claims (storage) created
- 2 secrets created
Also, the names give away that we’ve just installed MySQL and Redis on our cluster:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kolide-fleet-mysql-6c859797b4-gf6lk 1/1 Running 0 1m
kolide-fleet-redis-6d95f98b98-qswkz 1/1 Running 0 1m
For now, they’re not being used. Still! We have our dependencies up and running.
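When we do wire fleet up to them, Helm makes this pleasant: values nested under a key matching a dependency’s name in the parent chart’s values.yaml are passed down to that subchart, and our application can reach each dependency by its service DNS name (for example kolide-fleet-mysql:3306, as seen in the service listing above). Here is a rough sketch of where we’re heading; the key names below are illustrative, so check each chart’s own values.yaml for the real options:
# values.yaml of our parent chart -- a sketch, not the final configuration
mysql:                       # passed through to the "mysql" subchart
  mysqlDatabase: "fleet"     # illustrative key names; see stable/mysql's values.yaml
  mysqlUser: "fleet"
redis:                       # passed through to the "redis" subchart
  usePassword: false         # illustrative; see stable/redis's values.yaml
fleet:                       # hypothetical values consumed by our own templates
  mysqlAddress: "kolide-fleet-mysql:3306"
  redisAddress: "kolide-fleet-redis:6379"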
All that’s left now is to craft the actual application. ;) You can see the commit associated with this work at the following URL:
(I’ll try my best to keep it up to date)
In Summary
Helm allows us to make use of pre-existing software. This software is usually production ready and configured with sane defaults. It considerably shortcuts infrastructure management, in much the same way that Ansible roles shortcut the provisioning of VMs.
Necessary Caveats
I used a minikube installation for this test, which supports dynamic storage provisioning out of the box. Provisioning is an advanced topic that we will not be covering; TL;DR, use minikube for your testing, or another flavour of hosted Kubernetes.
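If you’re unsure whether your cluster can satisfy the PersistentVolumeClaims that the MySQL and Redis charts create, check whether a default StorageClass exists. On minikube it looks roughly like this:
$ kubectl get storageclass
NAME                 PROVISIONER                AGE
standard (default)   k8s.io/minikube-hostpath   1h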
Also, I had heaps of trouble with a rogue kubeadm installation on my desktop: because the office runs on DHCP, the machine’s IP changed, and kubeadm did not like that.
Thanks
- The kubernetes-cert group at work. I so relish the time to work on this stuff, and you guys are an awesome reason to keep at it. Everyone is just getting started with the work, and I appreciate it’s hard; keep it up!
- My work crew for giving me work time to write this stuff up.
See the next in this series here: