This is the ninth in a series of blog posts that hopes to detail the journey of deploying a service on Kubernetes. Its purpose is not to serve as a tutorial (there are many out there already), but rather to discuss some of the approaches we take.
To read this it's expected that you're familiar with Docker, and have perhaps played with building Docker containers. Additionally, some experience with docker-compose is perhaps useful, though not immediately related.
So far we've been able to:
- Define requirements
- Create the Helm chart to manage the resources
- Add the MySQL and Redis dependencies
- Create a functional unit of software … sort of
- Configure some of the software
- Configure the secret parts of the software
- Install/upgrade the software automatically with each release
- Supply the required TLS resources
In addition to providing primitives for running containers and injecting configuration into those containers, Kubernetes also provides a service discovery abstraction.
Service discovery is the automatic detection of devices and services offered by these devices on a computer network. A service discovery protocol (SDP) is a network protocol that helps accomplish service discovery. Service discovery aims to reduce the configuration efforts from users.
Kubernetes implements this in two parts:
The service abstraction
Kubernetes provides a service abstraction. We can think of a service as a super simple proxy that sits in front of pods: it gets assigned an IP, and passes traffic sent to that IP along to one of a set of pods.
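To build intuition, the "super simple proxy" idea can be sketched in a few lines of Python. This is purely illustrative: the class and endpoint addresses below are made up, and the real implementation (kube-proxy) uses iptables or IPVS rules rather than userspace code.

```python
import itertools

# Toy model of a Kubernetes service: one stable address, with each new
# connection handed to one of a rotating set of pod endpoints.
class Service:
    def __init__(self, cluster_ip, endpoints):
        self.cluster_ip = cluster_ip
        self._rr = itertools.cycle(endpoints)

    def route(self):
        """Pick the pod endpoint that receives the next connection."""
        return next(self._rr)

svc = Service("10.107.47.72", ["10.1.0.4:3306", "10.1.0.5:3306"])
print(svc.route())  # 10.1.0.4:3306
print(svc.route())  # 10.1.0.5:3306
```

The key property is that consumers only ever need the one stable address; the set of pods behind it can change freely.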
We have unknowingly been using the service abstractions provided by the Redis and MySQL charts. We can take a look at one of those to evaluate what a service looks like:
```shell
$ kubectl get svc kolide-fleet-mysql --output=yaml
```
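The full YAML output is trimmed here. On a typical installation the interesting part, the `spec` block, looks roughly like the following sketch; the exact IP and labels are hypothetical and will differ per cluster:

```yaml
# Illustrative sketch only; clusterIP and labels vary per installation
spec:
  clusterIP: 10.107.121.14
  ports:
    - name: mysql
      port: 3306
      protocol: TCP
      targetPort: mysql
  selector:
    app: kolide-fleet-mysql
  sessionAffinity: None
  type: ClusterIP
```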
We see the familiar metadata which describes where and how the service was created. But let's take a look at the `spec` block, and go over each component:
Earlier it was mentioned that all services get assigned an IP. The `clusterIP` field is that IP!
Services proxy traffic based on the configuration defined in `ports`. They can proxy one or many ports, and can map one port to another; we will take advantage of this behaviour later. In this case, the configuration notes that a TCP proxy listening on 3306 must target the MySQL port.
Earlier it was mentioned that Kubernetes selects resources by label. This applies here — the service will proxy traffic on the defined port to any pod that matches the above selector. We can check which pods will be chosen:
```shell
$ kubectl get pods --selector app=kolide-fleet-mysql
NAME                                  READY     STATUS    RESTARTS   AGE
kolide-fleet-mysql-58d8f6c496-75v5w   1/1       Running   0         2h
```
Lastly, `sessionAffinity` controls whether to enable sticky sessions.
There are several types of services: `ClusterIP`, `NodePort`, `LoadBalancer` and `ExternalName`. A full description is available on the Kubernetes website. We'll be using the `ClusterIP` (internal) and `LoadBalancer` (eehrm, load balancer) types.
The service implementation
As mentioned earlier, each service gets assigned an IP. However, it's not clear how other workloads should find this IP. There are a few different ways:
- Environment variables are injected into the container environment
- Querying the API directly
- Querying a DNS server running on Kubernetes
We’ll only be focusing on DNS here, but check the docs for more detail if this does not suit you.
Kubernetes runs a DNS server and makes it the default resolver for all pods scheduled into the cluster. We can see it by running the following command:
```shell
$ kubectl exec kolide-fleet-mysql-58d8f6c496-75v5w cat /etc/resolv.conf
nameserver 10.96.0.10 # <-- The important bit
search default.svc.cluster.local svc.cluster.local cluster.local
```
The `resolv.conf` file is used by the system resolver to determine which upstream DNS server to query for DNS lookups.
Kubernetes makes services available at an FQDN (fully qualified domain name) of the form `<service>.<namespace>.svc.<cluster domain>`. For example, given the service name `kolide-fleet-mysql` and the namespace `default`, the DNS record will be available at `kolide-fleet-mysql.default.svc.cluster.local`.
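The naming pattern is mechanical enough to express in a couple of lines. The function name and default domain below are illustrative, not part of any Kubernetes API:

```python
# Sketch of the Kubernetes service DNS naming pattern:
# <service>.<namespace>.svc.<cluster domain>
def service_fqdn(service, namespace, cluster_domain="cluster.local"):
    """Build the DNS record name at which a service is published."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("kolide-fleet-mysql", "default"))
# kolide-fleet-mysql.default.svc.cluster.local
```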
The cluster domain is configurable by the Kubernetes administrators, but it's an extremely common pattern for it to be called `cluster.local`.
However, this doesn't quite match what we created earlier. We used simply:

```yaml
# templates/configmap:25-26
redis:
  address: kolide-fleet-redis:6379
```
This is possible thanks to the other configuration in the `/etc/resolv.conf` file:

```
# /etc/resolv.conf:2-3
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```
In this file, `ndots:5` means that any name with fewer than five dots in it is first tried against each of the search domains in turn. The search domains here are `default.svc.cluster.local`, `svc.cluster.local` and `cluster.local`.
For our `kolide-fleet-redis` lookup, this expresses itself as several DNS queries in the following order:
- kolide-fleet-redis.default.svc.cluster.local
- kolide-fleet-redis.svc.cluster.local
- kolide-fleet-redis.cluster.local
- kolide-fleet-redis
Luckily, our DNS server has a record for one of those: `kolide-fleet-redis.default.svc.cluster.local`. So, we can simply use `kolide-fleet-redis` and it will work! Better still, because the pod's own namespace is first in the search path, the same short name will resolve correctly in whichever namespace or cluster the chart is deployed to.
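The search-path behaviour can be sketched in a few lines of Python. This is an illustration of the resolver's ordering logic, not glibc's actual implementation, and the constants simply mirror the `resolv.conf` shown above:

```python
# Mirrors the pod's /etc/resolv.conf shown earlier
SEARCH = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]
NDOTS = 5

def query_order(name, search=SEARCH, ndots=NDOTS):
    """Return the list of fully qualified names tried, in order."""
    if name.endswith("."):
        return [name]                  # already fully qualified; no search
    absolute = name + "."
    if name.count(".") >= ndots:
        # "Enough" dots: try the name as given first
        return [absolute] + [f"{name}.{d}." for d in search]
    # Fewer dots than ndots: walk the search path first, then try as-is
    return [f"{name}.{d}." for d in search] + [absolute]

print(query_order("kolide-fleet-redis"))
# tries kolide-fleet-redis.default.svc.cluster.local. first,
# and the bare name last
```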
Our own personal service
The work required to implement a service given our starter chart is extremely minimal; indeed, there would likely be none at all, but it's worth taking the opportunity to bounce the port around.
There are a lot of things that the default starter template takes care of:
- Deferring the choice of service type to the user, while supplying a sensible default
- Surfacing the service to Prometheus for discovery and analysis
- If it's a load balancer, using the `OnlyLocal` annotation to direct traffic directly to the node on which a replica runs, rather than hitting any node and getting bounced around through NAT
- `NOTES.txt` shows the appropriate access information depending on the service type
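Deferring the service type to the user typically looks something like the following in the chart's values file. This is a sketch; the exact keys used by this starter template are an assumption:

```yaml
# values.yaml (sketch): the user can override the service type
service:
  type: LoadBalancer   # or ClusterIP / NodePort
```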
The template explains its purpose fairly well. However, let's stick to implementing things. First, we move the service template out of the to-do directory:

```shell
$ mv .to-do/service.yaml templates/
```
Then, a simple bit of editing. Given the section:
```yaml
# templates/service.yaml:27-30
ports:
  - protocol: "TCP"
```
We swap it for:
```yaml
# templates/service.yaml:27-31
ports:
  - protocol: "TCP"
    port: 443
    targetPort: "http"
```
Changing the ingress port to 443 means that, given the appropriate address, browsers will automatically connect via HTTPS. The `targetPort` maps the service to the port named `http` in the deployment declaration, in this case 8080.
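For reference, the deployment side of that named-port mapping looks roughly like this. The container name is an assumption; the `http`/8080 pairing comes from the earlier deployment work:

```yaml
# templates/deployment.yaml (sketch)
containers:
  - name: fleet
    ports:
      - name: http            # matched by the service's targetPort
        containerPort: 8080
```

Using a named port means the container port can change later without touching the service definition.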
That’s it! Upon release, we can see the service:
```shell
$ kubectl get svc
NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kolide-fleet-fleet   LoadBalancer   10.107.47.72   <pending>     443:31525/TCP   7m
```
Unfortunately I am running in Minikube, so a load balancer is not automatically created. However, we should be able to test on the exposed `NodePort`. First, we need the IP of a node:

```shell
$ minikube ip
192.168.99.100
```
Then we can simply combine that IP with the node port shown in the `kubectl get svc` output above. It becomes `https://192.168.99.100:31525`. We stick this in the browser and: it works!
TLS validation fails, as the self-signed certificate issued earlier is … well, self-signed, and not valid for the IP we provided. However, deploying this in an actual environment should resolve these issues trivially.
As always, the commit for this work is here:
AD-HOC feat (Service, Deployment, Secret): Add service, fix Redis · andrewhowdencom/charts@01bc87c
This commit adds a service such that the deployment is discoverable. It uses the service defined by the starter…
Astute viewers will notice that there are additional changes there that aren't discussed. Since the application now works, I was testing it, and discovered I had missed the Redis secret in the previous round of secret work. Oops.
This brings us to a "deployable" version of the application. Good news too, as I wanted to get it up into a work environment where I can demonstrate it for colleagues.
However, there is still work to do to make it production ready. In future posts we will add further hardening to ensure the application stays continually up, as well as start trimming back some of the unnecessary parts of the starter template. Lastly, we will use the learnings from this chart to improve the starter template for future charts.
The next version in this series is here: