Deploying on Kubernetes #5: Application Configuration

Andrew Howden
7 min read · Apr 7, 2018


This is the fifth in a series of blog posts that hope to detail the journey of deploying a service on Kubernetes. Its purpose is not to serve as a tutorial (there are many out there already), but rather to discuss some of the approaches we take.

Assumptions

To read this it's expected that you're familiar with Docker, and have perhaps played with building Docker containers. Additionally, some experience with docker-compose is perhaps useful, though not immediately relevant.

Necessary Background

So far we've been able to:

  1. Define Requirements
  2. Create the helm chart to manage the resources
  3. Add the MySQL and Redis dependencies
  4. Create a functional unit of software … sort of.

Configuration

Applications usually require some sort of configuration to work. Though we have deployed fleet, as well as its dependencies MySQL and Redis, fleet has no way of knowing how to connect to these services, nor the authentication details required.

Kubernetes supplies a couple of ways to inject configuration:

  • Via files
  • Via environment variables

Either way, these are managed with either the ConfigMap object or the Secret object, and referenced as part of other objects.
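
For illustration, the environment variable route (which we won't use below) looks roughly like this. This is a hypothetical sketch; the names example-config, example.setting and EXAMPLE_SETTING are placeholders, not part of our chart:

# A hypothetical sketch: injecting a single ConfigMap key
# as an environment variable in a container.
spec:
  containers:
    - name: app
      image: busybox
      env:
        - name: EXAMPLE_SETTING
          valueFrom:
            configMapKeyRef:
              name: example-config   # the ConfigMap object
              key: example.setting   # the key within its data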

Fleet

The fleet documentation indicates that it too can accept configuration via files and environment variables (as well as command line arguments, though we will not use those). This will likely be useful to us as a way to separate secret configuration from non-secret configuration, but more on that later.

We are able to generate a full list of the configuration that fleet expects with the following command:

# Run via Docker as I don't need fleet locally
$ docker run kolide/fleet fleet config_dump

This generates a fairly large configuration object:

# /dev/stdout:1-41
mysql:
  address: localhost:3306
  username: kolide
  password: kolide
  database: kolide
  tls_cert: ""
  tls_key: ""
  tls_ca: ""
  tls_server_name: ""
  tls_config: ""
redis:
  address: localhost:6379
  password: ""
server:
  address: 0.0.0.0:8080
  cert: ./tools/osquery/kolide.crt
  key: ./tools/osquery/kolide.key
  tls: true
  tlsprofile: modern
auth:
  jwt_key: ""
  bcrypt_cost: 12
  salt_key_size: 24
app:
  token_key_size: 24
  invite_token_validity_period: 120h0m0s
session:
  key_size: 64
  duration: 2160h0m0s
osquery:
  node_key_size: 24
  status_log_file: /tmp/osquery_status
  result_log_file: /tmp/osquery_result
  enable_log_rotation: false
  label_update_interval: 1h0m0s
logging:
  debug: false
  json: false
  disable_banner: false

Hmm. Lots going on there. Additionally, some things are clearly wrong for our environment (the localhost addresses, for example). However, let's get that file installed as the default configuration file before pursuing it further.

ConfigMap

Earlier, as part of some initial work to define the chart specification, we generated the ConfigMap object. For reference, it currently looks like this:

# templates/configmap.yaml:1-16
---
apiVersion: "v1"
kind: "ConfigMap"
metadata:
  labels:
    app: {{ template "fleet.fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    heritage: "{{ .Release.Service }}"
    release: "{{ .Release.Name }}"
  name: {{ template "fleet.fullname" . }}
data:
  # example.property.1: hello
  # example.property.file: |-
  #   property.1=value-1
  #   property.2=value-2

To get the configuration to our application, we first need to add it, somehow, to this ConfigMap. It turns out this is not super hard: simply exploit the YAML literal block scalar!

# templates/configmap.yaml:11-16
data:
  config.yml: |-
    mysql:
      address: localhost:3306
    # ... and so on

This will create a key -> value pair where the key is config.yml and the value is the contents of the config file we need. You can verify this with:

kubectl get configmap --output=yaml

It should be there!
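
Abridged, and assuming a release named kolide-fleet, the relevant part of the output should look something like:

# kubectl get configmap --output=yaml (abridged)
apiVersion: v1
items:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kolide-fleet-fleet
    data:
      config.yml: |-
        mysql:
          address: localhost:3306
        # ... and so on
kind: List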

Consuming Configuration

Unfortunately, this doesn’t itself make that configuration available to the application process.

We need to modify our previously created deployment such that it adds the configuration to a directory in that container, as well as configures the fleet process to consume that configuration.

Consuming configuration as a filesystem

As mentioned previously, Kubernetes doesn’t store configuration files specifically, but rather configuration key→value pairs. So, we need to tell it to express this key as a file, with the value as the contents of that file.

Kubernetes allows this through the use of its volumes abstraction. It allows you to mount various random things as files on the filesystem, such as:

  • Actual files. Unoriginal I know, but it turns out some things need disk
  • Configuration
  • Network Filesystems
  • Information from the Kubernetes API itself

We must first declare our volume in the kubernetes pod spec file:

# templates/deployment.yaml:19,44:50
  template:
    spec:
      volumes:
        # The name comes from the configmap. It's also shown earlier
        - name: "fleet-configuration"
          configMap:
            name: {{ template "fleet.fullname" . }}

We then need to tell Kubernetes where to put that configuration volume in the container we want to use:

# templates/deployment.yaml:119,132:136
          ports:
            # - the port stuff goes here
          volumeMounts:
            - name: "fleet-configuration"
              readOnly: true
              mountPath: "/etc/fleet"

This should make the entries in the configmap available at /etc/fleet/${KEY}. So, given our key config.yml, the file should be at /etc/fleet/config.yml.

Lastly, we need to modify the container arguments so that fleet knows where to find the configuration:

# templates/deployment.yaml
      containers:
        - name: fleet
          image: {{ .Values.pod.fleet.image | quote }}
          args:
            - "fleet"
            - "serve"
            - "--config"
            - "/etc/fleet/config.yml"

The args property overrides what the CMD in the container image would otherwise be (while command, if set, would override the image's ENTRYPOINT).
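
For illustration, here's a hypothetical sketch contrasting the two fields; none of these values belong to our chart:

# A hypothetical sketch, not part of our chart:
      containers:
        - name: example
          image: busybox
          command: ["/bin/sh"]        # overrides the image's ENTRYPOINT
          args: ["-c", "echo hello"]  # overrides the image's CMD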

Once committed, we redeploy and see … well, nothing. How do we know this configuration is applied? Everything looks exactly as it was before. However, we can cheat: we will simply exec into the container and cat the file. First, we need to find the appropriate pod:

$ kubectl get pods
NAME                                  READY     STATUS    RESTARTS   AGE
kolide-fleet-fleet-88c9b5876-2dd26    1/1       Running   3          7m
kolide-fleet-mysql-6c859797b4-gf6lk   1/1       Running   4          3d
kolide-fleet-redis-6d95f98b98-qswkz   1/1       Running   4          3d

Next, we need to run the command cat /etc/fleet/config.yml:

$ kubectl exec kolide-fleet-fleet-88c9b5876-2dd26 cat /etc/fleet/config.yml
---
mysql:
  address: localhost:3306
  username: kolide
  password: kolide

There it is! Awesome.

Updating Configuration

We have taken a sample configuration file, and mounted it into the container where the application is consuming it. Awesome! But ah, it’s the wrong configuration.

We could simply update the configuration file in place, but that is not such a nice option. Part of the reason for using helm is to create reusable deployments; not everyone will want the same configuration.

Luckily, helm also provides the tools to solve this. Throughout the series there have been many references to {{ .Values.thing }}; we can simply add more of these!

Splitting out secrets from configuration

I should mention at the outset: we should not store secrets in ConfigMap resources. There is another object for that (the Secret), which we will cover in a future part of the series.

For now, we'll just be deleting secret things from the configuration to avoid confusion; going by the dump above, that includes values like mysql.password, redis.password and auth.jwt_key.

Known, consistent configuration

There is certain configuration that will always be consistent in releases managed by helm. This includes things like:

  1. Where to find MySQL
  2. Where to find Redis
  3. The status files for Fleet

So, let’s start with this first.

Both the MySQL and Redis dependencies expose services, a Kubernetes object designed to make a resource discoverable. We will delve more deeply into these later, but for now it's useful to know that the service names, as output by:

$ kubectl get svc

will be resolvable in most clusters at the DNS address ${SERVICE_NAME} (within the same namespace; the fully qualified form is ${SERVICE_NAME}.${NAMESPACE}.svc.cluster.local). So, we can simply look up the services:

$ kubectl get svc
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kolide-fleet-mysql   ClusterIP   10.104.173.216   <none>        3306/TCP   3d
kolide-fleet-redis   ClusterIP   10.106.51.61     <none>        6379/TCP   3d
kubernetes           ClusterIP   10.96.0.1        <none>        443/TCP    3d

and replace the appropriate sections in the configuration:

# templates/configmap.yaml:14-17
    ---
    mysql:
      # MySQL will resolve at `kolide-fleet-mysql`
      address: kolide-fleet-mysql:3306
      username: kolide

and

# templates/configmap.yaml:24-26
    redis:
      address: kolide-fleet-redis:6379
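
One caveat worth noting: hardcoding kolide-fleet-mysql ties the configuration to a release named kolide-fleet. A sketch of a more reusable form, assuming the dependency services keep the usual <release>-<chart> naming, might template the release name instead:

# A sketch, not what we do above; assumes the dependencies
# are named <release>-mysql and <release>-redis:
    mysql:
      address: {{ printf "%s-mysql:3306" .Release.Name }}
    redis:
      address: {{ printf "%s-redis:6379" .Release.Name }}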

In the case of the osquery logs, we're going to push them to stdout:

# templates/configmap.yaml:43-47
    osquery:
      node_key_size: 24
      status_log_file: /dev/stdout
      result_log_file: /dev/stdout

Docker records a process's stdout and stderr, and Kubernetes allows reading those logs quickly and easily with:

kubectl logs ${POD}

I'm not 100% sure what these logs contain yet, but I know that unless they go to stdout or stderr we're never going to see them.
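
For the fleet pod we found earlier, that would be:

$ kubectl logs kolide-fleet-fleet-88c9b5876-2dd26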

Inconsistent Configuration

For the rest of the configuration, we simply defer to the user to supply the required data. Some settings we will leave for now or simply delete, as there are other ways to handle that data. But let's take a couple of examples:

# /dev/stdout:36-40
logging:
  debug: false
  json: false

This appears to modify logging behaviour. We can insert:

# templates/configmap.yaml:49-53
    logging:
      debug: {{ default false .Values.fleet.logging.debug }}
      json: {{ default false .Values.fleet.logging.json }}
      disable_banner: {{ default false .Values.fleet.logging.disable_banner }}

which will let users customise these properties by adding the required entries to their values file. We then add the defaults to our values.yml such that users know what they're looking for:

# values.yml:6-15
## Fleet application specific settings
fleet:
  logging:
    ## Whether to enable debug logging
    debug: false
    ## Whether logging should be expressed in JSON
    json: true
    ## Whether to disable the banner as part of the logs
    disable_banner: true
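
With those defaults in place, users can override them at install or upgrade time. A couple of hypothetical invocations (the chart path ./fleet is assumed):

# Override a single value inline
$ helm upgrade --install kolide-fleet ./fleet --set fleet.logging.debug=true

# Or supply an entire custom values file
$ helm upgrade --install kolide-fleet ./fleet --values my-values.yml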

That’s it! We can repeat that across all the settings we wish users to modify. A full list of the settings I have configured as part of this work is at:

In Summary

The ConfigMap object allows us to supply various bits of application configuration from outside the container, making for extremely flexible and reusable applications.

In future work we will need to add additional configuration that is more “secret” than the current configuration. Additionally, we should soon be able to see a preview of the fleet application actually running!
