Deploying on Kubernetes #7: Application Installation

This is the seventh in a series of blog posts detailing the journey of deploying a service on Kubernetes. Its purpose is not to serve as a tutorial (there are many out there already), but rather to discuss some of the approaches we take.

Assumptions

To read this it’s expected that you’re familiar with Docker, and have perhaps played with building Docker containers. Additionally, some experience with docker-compose is perhaps useful, though not directly related.

Necessary Background

So far we’ve been able to:

  1. Define Requirements
  2. Create the helm chart to manage the resources
  3. Add the MySQL and Redis dependencies
  4. Create a functional unit of software … sort of.
  5. Configure some of the software
  6. Configure the secret parts of the software

Installation

After software is “installed”, it usually needs to perform some additional tasks to set itself up in a production-like way. Further, this installation needs to be performed exactly once, and successfully.

Helm attempts to solve this through the use of “lifecycle hooks”: Kubernetes jobs that run at a certain point in the lifecycle of the chart. We’ll be using the post-install and post-upgrade hooks to run the command required for fleet to set up its database.
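
For reference, this is the same run-once step that could be executed by hand against a running fleet container; a minimal sketch, where the pod name is a placeholder and the config path is the one used throughout this series:

$ kubectl exec <fleet-pod> -- fleet prepare db --config /etc/fleet/config.yml

The hook simply automates this command as part of the release lifecycle, so nobody has to remember to run it.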

The Kubernetes job spec

Kubernetes provides the Job specification, which is designed for either single-use workloads or workloads that are run on a schedule (i.e. CronJob). These workloads are perfect for the “run to completion” operation that we need for the installation, and are supported by the Helm lifecycle hooks.
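
To make the “run to completion” behaviour concrete, here is a minimal, self-contained Job that is not part of the fleet chart; it runs a single command and then stops:

apiVersion: "batch/v1"
kind: "Job"
metadata:
  name: "example-run-once"
spec:
  # Do not retry the pod on failure; surface problems immediately.
  backoffLimit: 0
  template:
    spec:
      containers:
        - name: "example"
          image: "busybox"
          command: ["sh", "-c", "echo 'one-off installation work happens here'"]
      restartPolicy: "Never"

Once the pod exits successfully, the Job is marked Complete and is never scheduled again, which is exactly the behaviour we want from an installation step.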

Creating the job

Creating the job spec is fairly low overhead, as it is largely the same as the spec in the deployment. Indeed, we can copy over the entire spec section and simply amend the args node to run the fleet prepare db command as required:

# templates/db-migrate-job.yml:1:55
---
apiVersion: "batch/v1"
kind: "Job"
metadata:
  labels:
    app: {{ template "fleet.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    heritage: "{{ .Release.Service }}"
    release: "{{ .Release.Name }}"
  name: {{ template "fleet.fullname" . }}
spec:
  template:
    metadata:
      annotations:
        "helm.sh/hook": post-install,post-upgrade
        "helm.sh/hook-weight": "0"
        "helm.sh/hook-delete-policy": hook-succeeded
      labels:
        app: {{ template "fleet.fullname" . }}
        release: "{{ .Release.Name }}"
    spec:
      volumes:
        - name: "fleet-configuration"
          configMap:
            name: {{ template "fleet.fullname" . }}
      containers:
        - name: "fleet"
          env:
            - name: "KOLIDE_AUTH_JWT_KEY"
              valueFrom:
                secretKeyRef:
                  name: {{ template "fleet.fullname" . }}
                  key: "fleet.auth.jwt_key"
            - name: "KOLIDE_MYSQL_USERNAME"
              valueFrom:
                secretKeyRef:
                  name: {{ template "fleet.fullname" . }}
                  key: "fleet.mysql.username"
            - name: "KOLIDE_MYSQL_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: {{ template "fleet.fullname" . }}
                  key: "fleet.mysql.password"
          image: {{ .Values.pod.fleet.image | quote }}
          args:
            - "fleet"
            - "prepare"
            - "db"
            - "--config"
            - "/etc/fleet/config.yml"
          volumeMounts:
            - name: "fleet-configuration"
              readOnly: true
              mountPath: "/etc/fleet"
      restartPolicy: "Never"

Additionally, the restartPolicy was changed from Always to Never. Should the migration fail, we want to investigate it manually rather than let another attempted migration corrupt data.
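
As a rough sketch of what that manual investigation might look like (the job name follows the fleet.fullname template, so it is assumed here to be kolide-fleet-fleet):

$ kubectl get jobs
$ kubectl describe job kolide-fleet-fleet
$ kubectl logs job/kolide-fleet-fleet

Because the hook-delete-policy is hook-succeeded, a failed job is left behind for exactly this kind of inspection.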

Marking the job as a hook

Helm uses annotations to mark the job as a “hook”. In particular, it uses:

# https://github.com/kubernetes/helm/blob/master/docs/charts_hooks.md
annotations:
  # This is what defines this resource as a hook. Without this line, the
  # job is considered part of the release.
  "helm.sh/hook": post-install
  "helm.sh/hook-weight": "-5"
  "helm.sh/hook-delete-policy": hook-succeeded

We can add these to our own migration job:

# templates/db-migrate-job.yml:13-18
    metadata:
      annotations:
        "helm.sh/hook": post-install,post-upgrade
        "helm.sh/hook-weight": "0"
        "helm.sh/hook-delete-policy": hook-succeeded
      labels:

Note that the hook is run both after the installation of the application and after each upgrade. The migration appears to be idempotent, and can thus safely be run an arbitrary number of times.
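
A quick way to observe this, assuming the release was installed as kolide-fleet and the command is run from the chart directory:

$ helm upgrade kolide-fleet .
$ kubectl get jobs --watch

The migration job should appear, run to completion and then disappear again, since the hook-delete-policy of hook-succeeded removes it once it succeeds.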

In summary

That’s it! It’s super straightforward to create installation processes if we already have the deployment sorted out. However, it appears the application is still not working:

$ kubectl logs kolide-fleet-fleet-7c5f4999d7-7f7zf
Using config file: /etc/fleet/config.yml
{"component":"service","err":null,"method":"ListUsers","took":"729.727µs","ts":"2018-03-31T20:10:44.219065547Z","user":"none"}
{"address":"0.0.0.0:8080","msg":"listening","transport":"https","ts":"2018-03-31T20:10:44.221862662Z"}
{"terminated":"open /etc/pki/fleet/kolide.crt: no such file or directory","ts":"2018-03-31T20:10:44.222672397Z"}

It requires some TLS resources to get up and running. We’ll cover that next.

However, as usual, you can see the changes associated with this post here:

Check out the next post here:

https://medium.com/@andrewhowdencom/deploying-on-kubernetes-8-tls-8af217d74483
