Deploying on Kubernetes #9: Exposition via service

Andrew Howden
6 min read · Apr 9, 2018


This is the ninth in a series of blog posts that hope to detail the journey of deploying a service on Kubernetes. Its purpose is not to serve as a tutorial (there are many out there already), but rather to discuss some of the approaches we take.


To read this it’s expected that you’re familiar with Docker, and have perhaps played with building docker containers. Additionally, some experience with docker-compose is useful, though not strictly required.

Necessary Background

So far we’ve been able to:

  1. Define Requirements
  2. Create the helm chart to manage the resources
  3. Add the MySQL and Redis dependencies
  4. Create a functional unit of software … sortof.
  5. Configure some of the software
  6. Configure the secret parts of the software
  7. Install/upgrade the software automatically with release
  8. Supply the required TLS resources

Service Discovery

In addition to providing primitives for running containers and injecting configuration into those containers, Kubernetes also provides a service discovery abstraction.

From Wikipedia:

Service discovery is the automatic detection of devices and services offered by these devices on a computer network. A service discovery protocol (SDP) is a network protocol that helps accomplish service discovery. Service discovery aims to reduce the configuration efforts from users.

Kubernetes implements this in two parts:

The service abstraction

Kubernetes provides a service abstraction. We can think of a service as a super simple proxy that sits in front of pods. It gets assigned an IP, and forwards traffic sent to that IP to one of a set of pods.

We have unknowingly been using the service abstractions provided by the Redis and MySQL charts. We can take a look at one of those to evaluate what a service looks like:

$ kubectl get svc kolide-fleet-mysql --output=yaml
---
# Abridged
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kolide-fleet-mysql
    chart: mysql-0.3.6
    heritage: Tiller
    release: kolide-fleet
  name: kolide-fleet-mysql
  namespace: default
spec:
  clusterIP: …
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: mysql
  selector:
    app: kolide-fleet-mysql
  sessionAffinity: None
  type: ClusterIP

We see the familiar metadata which describes where and how the service was created. But let’s take a look at the spec block, and go over each component:

clusterIP: …

Earlier it was mentioned that all services get assigned an IP. That’s the IP!

- name: mysql
port: 3306
protocol: TCP
targetPort: mysql

Services proxy traffic based on the configuration defined in ports. They can proxy one or many ports, and can map one port to another. We will take advantage of this behaviour later.

In this case, the configuration notes that a TCP proxy listening on 3306 must target the container port named mysql.

app: kolide-fleet-mysql

Earlier it was mentioned that Kubernetes selects resources by label. This applies here — the service will proxy traffic on the defined port to any pod that matches the above selector. We can check which pods will be chosen:

$ kubectl get pods --selector app=kolide-fleet-mysql
NAME                                  READY     STATUS    RESTARTS   AGE
kolide-fleet-mysql-58d8f6c496-75v5w   1/1       Running   0          2h
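For that selector to match, the chart’s deployment must stamp the same label onto the pods it creates. A sketch of the relevant pod template metadata (abridged and reconstructed from the labels above, not the MySQL chart’s literal source):

```yaml
kind: Deployment
spec:
  template:
    metadata:
      labels:
        # Must match the service's spec.selector for traffic to be routed here
        app: kolide-fleet-mysql
```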


sessionAffinity: None

Whether to enable sticky sessions (that is, routing a given client to the same pod each time).

type: ClusterIP

There are several types of services:

  • ClusterIP
  • NodePort
  • LoadBalancer
  • ExternalName

A full description is available in the Kubernetes documentation. We’ll be using the ClusterIP (internal) and LoadBalancer (eehrm, load balancer) types.

The service implementation

As mentioned earlier, each service gets assigned an IP. However, it’s not yet clear how consumers of a service should find this IP. There are a few different ways:

  • Environment variables are injected into the container environment
  • Querying the API directly
  • Querying a DNS server running on Kubernetes

We’ll only be focusing on DNS here, but check the docs for more detail if this does not suit you.
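For completeness, the environment variable mechanism looks something like the following. Kubernetes injects {SERVICE_NAME}_SERVICE_HOST / {SERVICE_NAME}_SERVICE_PORT variables (dashes become underscores) into pods created after the service exists; the sketch below assumes such a pod, and the address value is elided:

```shell
$ env | grep KOLIDE_FLEET_MYSQL_SERVICE
KOLIDE_FLEET_MYSQL_SERVICE_HOST=<the cluster IP>
KOLIDE_FLEET_MYSQL_SERVICE_PORT=3306
```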

Kubernetes runs a DNS server and makes it the default resolver for all pods scheduled into the cluster. We can see it by running the following command:

$ kubectl exec kolide-fleet-mysql-58d8f6c496-75v5w cat /etc/resolv.conf
nameserver …  # <-- The important bit
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

The resolv.conf file is used by the system resolver to determine which upstream DNS server to query for DNS enquiries.

Kubernetes makes services available at an “FQDN” (fully qualified domain name) of the form:

${SERVICE_NAME}.${NAMESPACE}.svc.cluster.local

For example, given the service name kolide-fleet-mysql and the namespace default, the DNS record will be available at:

kolide-fleet-mysql.default.svc.cluster.local

The cluster.local suffix is configurable by the Kubernetes administrators, but it’s an extremely common convention.

However, this doesn’t quite match what we created earlier. We used simply:

# templates/configmap:25-26
    redis:
      address: kolide-fleet-redis:6379

This is possible thanks to the other configuration in the resolv.conf file:

# /etc/resolv.conf:2-3
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

In this file, ndots:5 means that any name containing fewer than five dots is first tried against the search domains: default.svc.cluster.local, svc.cluster.local and cluster.local, in that order.

For our lookup of kolide-fleet-redis, this expresses itself as several DNS queries in the following order:

kolide-fleet-redis.default.svc.cluster.local
kolide-fleet-redis.svc.cluster.local
kolide-fleet-redis.cluster.local
kolide-fleet-redis

Luckily, our DNS server has a record for the first of those — kolide-fleet-redis.default.svc.cluster.local. So we can simply use kolide-fleet-redis and it will work! Better still, because the lookup is relative to the pod’s own namespace, the same configuration works unchanged in whatever namespace or cluster we deploy the chart into.
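To make that expansion concrete, here is a small standalone sketch (plain Python, not anything Kubernetes ships) of how a resolv.conf-style resolver orders its queries based on search and ndots:

```python
def query_order(name, search_domains, ndots=5):
    """Return the order in which a resolv.conf-style resolver
    tries candidate names when looking up `name`."""
    if name.endswith("."):
        # A rooted (absolute) name is never expanded.
        return [name]
    if name.count(".") >= ndots:
        # "Absolute enough": try the name as-is before the search domains.
        return [name] + [f"{name}.{d}" for d in search_domains]
    # Fewer dots than ndots: the search domains are tried first.
    return [f"{name}.{d}" for d in search_domains] + [name]


search = ["default.svc.cluster.local", "svc.cluster.local", "cluster.local"]
for candidate in query_order("kolide-fleet-redis", search):
    print(candidate)
```

Running it prints the same four candidates in the same order as listed above, with the in-namespace FQDN first.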

Our own personal service

The work required to implement a service given our starter chart is extremely minimal — indeed, there would likely be none, but it’s worth taking the opportunity to bounce the port around.

There are a lot of things that the default starter template takes care of:

  • Deferring the choice of service type to the user, but defaulting to LoadBalancer (see values.yaml).
  • Surfacing the service to Prometheus for discovery and analysis
  • If it’s a load balancer, using the OnlyLocal annotation to direct traffic straight to a node on which a replica runs, rather than to any node, from which it would be bounced around through NAT.
  • The NOTES.txt shows the appropriate access information depending on the service type.
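As a sketch of how that deferral to the user looks (the key names here are assumed from the starter chart’s conventions, not quoted from it):

```yaml
# values.yaml (sketch): the user can override the default at install time,
# e.g. `helm install --set service.type=ClusterIP ...`
service:
  type: LoadBalancer
```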

The template explains its purpose fairly well. However, let’s stick to implementing things. First, we move the service from .to-do to templates:

$ mv .to-do/service.yaml  templates/

Then, a simple bit of editing. Given the section:

# templates/service.yaml:27-30
  ports:
    - protocol: "TCP"
      name: "http"
      port: 8080

We swap it for:

# templates/service.yaml:27-31
  ports:
    - protocol: "TCP"
      name: "http"
      port: 443
      targetPort: "http"

Changing the ingress port to 443 means that, given an appropriate address, browsers will automatically connect via HTTPS. targetPort maps the service’s http port to the port named http in the deployment declaration — in this case 8080.
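The named targetPort resolves against the container’s port list in the deployment. A sketch of the counterpart declaration (abridged and reconstructed, not the chart’s literal source):

```yaml
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: fleet
          ports:
            # The service's targetPort: "http" resolves to this entry
            - name: http
              containerPort: 8080
              protocol: TCP
```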

That’s it! Upon release, we can see the service:

$ kubectl get svc
NAME                 TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kolide-fleet-fleet   LoadBalancer   …            <pending>     443:31525/TCP   7m

Unfortunately I am running in Minikube, so a load balancer is not automatically created. However, we should be able to test on the exposed NodePort. First, we need the IP of a node:

$ minikube ip
192.168.99.100

Then we can simply combine that IP with the NodePort shown in the kubectl get svc output above. It becomes:

https://192.168.99.100:31525

We stick it in the browser and: it works!

TLS validation fails, as the self-signed certificate issued earlier is … well, self-signed, and not valid for the IP we provided. However, these issues should be trivial to resolve when deploying in an actual environment.

As always, the commit for this work is here:

Astute viewers will notice that there are additional changes there that aren’t discussed. Since the application now works, I was testing it, and discovered I had missed the Redis secret in the previous round of secret work. Oops.

In Summary

This brings us to a “deployable” version of the application. Good news too, as I wanted to get it into a work environment where I can demonstrate it to colleagues.

However, there is still work to do to make it production ready. In future posts we will add further hardening to ensure the application stays continually up, as well as start trimming back some of the unnecessary parts of the starter template. Lastly, we will use the learnings from this chart to improve the starter template for future charts.

Hooray deployable!

The next version in this series is here: