Deploying to Kubernetes
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
There are four key concepts to understand in Kubernetes:
- Cluster — a set of VMs (called nodes) that will be used to run your containers
- Deployment — an instance of your app. If your app does not need to expose any endpoints to the outside world, this is all you need (e.g. your app only communicates with other containers in your cluster, only fetches data from an external API, or connects to an AMQP exchange and processes messages). A deployment can also be replicated multiple times to scale your app.
- Pod — it’s best to imagine this as a single VM, running an instance of your container or a group of containers that benefit from being on the same machine.
- Service — you create this to make your app accessible from the public network (by default it can’t be reached by anything that isn’t running inside the same cluster). Use the LoadBalancer type if you have multiple replicas of your deployment so that requests are shared fairly between your app’s instances.
The first thing you need to do is define a Kubernetes config file, kubernetes-deploy.yml:
A sketch of the config along these lines — the angle-bracket placeholders are filled in by the deploy script further down, while the exact label names and API versions here are assumptions and may differ from your setup:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: <codename>
spec:
  # Type LoadBalancer will expose an external IP.
  type: LoadBalancer
  # The selector to match the labels — labels are used when determining
  # which deployments this service will forward traffic to.
  selector:
    app: <service-name>
    environment: <environment>
  ports:
    - name: "http"
      # The 'port' refers to the EXTERNAL port (i.e. the public facing port) number.
      port: <port-external>
      # The target port is the port on the container to which the
      # service needs to forward the traffic.
      targetPort: <port-internal>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <codename>
spec:
  replicas: <replicas>
  selector:
    matchLabels:
      app: <service-name>
      environment: <environment>
  template:
    metadata:
      # Labels which the Service uses when selecting pods to forward traffic to.
      labels:
        app: <service-name>
        environment: <environment>
    spec:
      containers:
        - name: <service-name>
          image: <image-name>:<project-version>
          imagePullPolicy: Always # needed so that development images are always re-pulled even though they have the same tag
          ports:
            - name: "http"
              # The port your application is listening on.
              containerPort: <port-internal>
          env:
            - name: NODE_ENV
              value: "<NODE_ENV>"
            - name: AMQP_HOST
              value: "<AMQP_HOST>"
```
Then a small shell script that fills in the placeholders and writes the rendered config to deploy-config.yml:

```shell
#!/bin/sh
cat kubernetes-deploy.yml \
  | sed "s^<project-name>^$PROJECT^g" \
  | sed "s^<service-name>^$SERVICE^g" \
  | sed "s^<image-name>^$WERCKER_GIT_REPOSITORY^g" \
  | sed "s^<codename>^$SERVICE-$ENVIRONMENT^g" \
  | sed "s^<environment>^$ENVIRONMENT^g" \
  | sed "s^<replicas>^3^g" \
  | sed "s^<project-version>^$PACKAGE_VERSION^g" \
  | sed "s^<port-internal>^8080^g" \
  | sed "s^<port-external>^80^g" \
  | sed "s^<NODE_ENV>^$NODE_ENV^g" \
  | sed "s^<AMQP_HOST>^$AMQP_HOST^g" \
  > deploy-config.yml
```
- You can add more lines to replace more variables as you see fit, or hardcode any variables you don’t foresee changing in your config.
- I used ‘^’ as the delimiter because ‘/’ was causing problems with URLs.
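To see why the delimiter choice matters, here is a runnable sketch (the AMQP URL is an assumed example value). With ‘/’ as the delimiter, the slashes inside the URL terminate the sed expression early; ‘^’ passes through untouched:

```shell
# Assumed example value -- any URL containing '/' shows the problem.
AMQP_HOST="amqp://user:pass@rabbit.example.com:5672"

# Works: '^' does not appear anywhere in the replacement text.
echo "value: <AMQP_HOST>" | sed "s^<AMQP_HOST>^$AMQP_HOST^g"
# prints: value: amqp://user:pass@rabbit.example.com:5672

# Would fail: sed "s/<AMQP_HOST>/$AMQP_HOST/g" -- the '/' characters in
# the URL are read as extra delimiters and sed rejects the expression.
```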
Once we have these two files, we can finally add a wercker pipeline for deployment to kubernetes.
Something along these lines in wercker.yml — the step names, the version-export shell, and the kubectl command are as used here, while the kubectl step's credential parameters and the script filename are assumptions that depend on your cluster setup:

```yaml
deploy-to-kubernetes:
  steps:
    - script:
        name: export package version
        code: |
          [ "$WERCKER_GIT_BRANCH" = "master" ] \
            && export PACKAGE_VERSION=$(node -p -e "require('./package.json').version") \
            || export PACKAGE_VERSION=development
    - script:
        name: create the deployment file
        code: sh ./create-deployment.sh # the sed script above; the filename is hypothetical
    - kubectl:
        server: $KUBERNETES_MASTER            # assumed credential parameters
        token: $KUBERNETES_TOKEN
        insecure-skip-tls-verify: true
        command: apply -f deploy-config.yml
```
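The version-export step relies on shell `&&`/`||` chaining: on master the version is read from package.json, and on any other branch it falls back to the literal string development. A standalone sketch of that logic, with the node lookup stubbed to a fixed version string (an assumed value) so it runs without node installed:

```shell
pick_version() {
  # Stub: in the real pipeline this value comes from
  # node -p -e "require('./package.json').version"
  [ "$1" = "master" ] \
    && echo "1.2.3" \
    || echo "development"
}

pick_version master       # prints 1.2.3
pick_version feature/foo  # prints development
```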
This will apply your config to the cluster, deploying your application with the image tagged by the PACKAGE_VERSION environment variable and injecting the other environment variables you have configured in your wercker setup.
The final step is to add the pipeline to your workflows in wercker’s online UI and you’re done! You now have automated deployment set up in your project.
Before we set this up at my work, a deployment would take a couple of hours and required a lot of manual configuration and setup, which also introduced the risk of making mistakes. Now I can kick off a deployment from my phone with the click of a button.