Managing Pod configuration using ConfigMaps and Secrets in Kubernetes

In this article, we are going to look at ConfigMaps and Secrets to handle the configuration of our URL Shortener app that we deployed in the previous article.

ConfigMaps and Secrets are two similar ways of decoupling the configuration from the Deployment manifests, which makes the latter more portable. The main difference is that Secrets are recommended for sensitive information such as tokens, keys and passwords, whereas ConfigMaps should be used for any other kind of configuration. In addition, Secret data is stored in base64-encoded form, which also makes it suitable for binary data such as keys, whereas ConfigMap data is stored in plain text, which is fine for text files.

In the previous article, we configured the database Pod with the following environment variables to create the user and its database upon startup:
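The original manifest excerpt is not reproduced here; based on the values used throughout this article, it would have looked roughly like this (a sketch using the standard `postgres` image variables, with the credentials hard-coded inline):

```yaml
# Excerpt from the database Deployment's container spec -- reconstructed
# sketch; the values match the ones encoded later in this article.
env:
  - name: POSTGRES_DB
    value: url_shortener_db
  - name: POSTGRES_USER
    value: user
  - name: POSTGRES_PASSWORD
    value: mysecretpassword
```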

In the same manner, the webapp Pod was configured with the following environment variables to connect to the database:
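Again as a sketch, the webapp side would have duplicated the same values inline; the variable names below are assumptions based on the connection log shown at the end of this article:

```yaml
# Excerpt from the webapp Deployment's container spec -- reconstructed
# sketch; variable names are hypothetical.
env:
  - name: HOST
    value: postgres
  - name: PORT
    value: "5432"
  - name: DBNAME
    value: url_shortener_db
  - name: USERNAME
    value: user
  - name: PASSWORD
    value: mysecretpassword
```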

Admittedly, this way of configuring the Pods is both error-prone and insecure: firstly, the database credentials are repeated in the two manifests, so any change in one manifest must be applied to the other; secondly, and more importantly, anyone with access to the manifests knows the credentials. This was fine for getting started in the last article, but we can certainly do better than that.

Defining Secrets for the sensitive information

The YAML manifest for Secrets is pretty straightforward, but as explained in the introduction, the values need to be encoded in base64:

$ echo -n "url_shortener_db" | base64
dXJsX3Nob3J0ZW5lcl9kYg==
$ echo -n "user" | base64
dXNlcg==
$ echo -n "mysecretpassword" | base64
bXlzZWNyZXRwYXNzd29yZA==

We can now use them in the Secret manifest:
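The manifest itself can be reconstructed from the `kubectl get secret -o yaml` output shown just below; the field names and values match that output:

```yaml
# database-secrets.yml -- the Secret holding the base64-encoded credentials
apiVersion: v1
kind: Secret
metadata:
  name: database-secret-config
type: Opaque
data:
  dbname: dXJsX3Nob3J0ZW5lcl9kYg==
  username: dXNlcg==
  password: bXlzZWNyZXRwYXNzd29yZA==
```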

Once the Secret has been created, we can verify its content:

$ kubectl create -f database-secrets.yml
secret "database-secret-config" created
$ kubectl get secret database-secret-config -o yaml
apiVersion: v1
data:
dbname: dXJsX3Nob3J0ZW5lcl9kYg==
password: bXlzZWNyZXRwYXNzd29yZA==
username: dXNlcg==
kind: Secret
metadata:
name: database-secret-config
type: Opaque

Note that the kubectl describe command also works, but it does not display the base64-encoded values, only their sizes:

$ kubectl describe secret database-secret-config
Name: database-secret-config
Namespace: sandbox
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
dbname: 16 bytes
password: 16 bytes
username: 4 bytes
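Even though describe hides the values, keep in mind that base64 is an encoding, not encryption: anyone with read access to the Secret can decode them in one pipe (`base64 -d` is the GNU flag; older macOS versions use `-D`):

```shell
# Decode the base64-encoded values shown by `kubectl get secret -o yaml`
echo 'dXJsX3Nob3J0ZW5lcl9kYg==' | base64 -d   # url_shortener_db
echo 'dXNlcg==' | base64 -d                   # user
echo 'bXlzZWNyZXRwYXNzd29yZA==' | base64 -d   # mysecretpassword
```

A single value can also be fetched directly, e.g. `kubectl get secret database-secret-config -o jsonpath='{.data.password}' | base64 -d`.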

Using the Secrets in the database deployment manifest

Now, we can adapt our database’s Deployment manifest to fetch the settings from the database-secret-config Secret, using the valueFrom/secretKeyRef reference:
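The adapted container spec would look along these lines (a reconstructed sketch: the Secret name and keys come from the article, the rest follows the standard `valueFrom`/`secretKeyRef` form):

```yaml
# Excerpt from templates/database-deployment.yml -- each variable is now
# resolved from the database-secret-config Secret instead of a literal value.
env:
  - name: POSTGRES_DB
    valueFrom:
      secretKeyRef:
        name: database-secret-config
        key: dbname
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef:
        name: database-secret-config
        key: username
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: database-secret-config
        key: password
```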

As we’ve seen in the previous article, once the database Deployment object has been created from the manifest, we can see the Pod in a Running state:

$ kubectl create -f templates/database-deployment.yml
deployment "postgres" created
$ kubectl get all
NAME                           READY     STATUS    RESTARTS   AGE
po/postgres-3585693371-xnzjv   1/1       Running   0          5s

NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
svc/postgres   10.0.0.191   <none>        5432/TCP   7d

NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/postgres   1         1         1            1           5s

NAME                     DESIRED   CURRENT   READY     AGE
rs/postgres-3585693371   1         1         1         5s

As a good verification that the credentials are properly configured, we can open a shell session on the container and access the database:

$ kubectl exec -it postgres-3585693371-xnzjv bash
root@postgres-3585693371-xnzjv:/# psql -U user -d url_shortener_db
psql (9.6.5)
Type "help" for help.
url_shortener_db=#

Defining ConfigMaps for the generic configuration

ConfigMaps can also be defined by a manifest with data in key/value form, but for the sake of example, we’ll load the content from an external file, config.yaml, which contains a single entry (cough!) to set the logger level:

$ cat config.yaml
level: info
$ kubectl create configmap app-config --from-file=./config.yaml
configmap "app-config" created
$ kubectl describe cm app-config
Name: app-config
Namespace: sandbox
Labels: <none>
Annotations: <none>
Data
====
config.yaml:
----
level: info
Events: <none>

Above, the kubectl describe command showed the content of the config.yaml data file.
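For reference, the equivalent declarative manifest (what `kubectl create configmap --from-file` produces) uses the file name as the data key:

```yaml
# Declarative equivalent of the `kubectl create configmap` command above
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  config.yaml: |
    level: info
```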

Using the ConfigMaps and Secrets in the webapp deployment manifest

In the case of the webapp’s Deployment manifest, we’re not only going to use environment variables for the database connection, but also a config file for the app logger. For this latter case, we are going to populate a Volume with the data stored in the app-config ConfigMap:
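A reconstructed sketch of the relevant parts of that manifest follows; the image name and the plain environment variable names are assumptions, while the volume, mount path and Secret references match the elements described below:

```yaml
# Excerpt from templates/webapp-deployment.yml (Pod template spec)
spec:
  containers:
    - name: webapp
      image: webapp:latest   # hypothetical image name
      env:
        - name: HOST
          value: postgres
        - name: PORT
          value: "5432"
        - name: CONFIG_FILE
          value: /etc/config/config.yaml
        - name: USERNAME
          valueFrom:
            secretKeyRef:
              name: database-secret-config
              key: username
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: database-secret-config
              key: password
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: app-config
```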

In this Deployment description, we have the following notable elements:

  • a volume named config-volume whose content is filled with the app-config ConfigMap (here, our config.yaml file). This volume is mounted at /etc/config in the webapp container
  • some environment variables that can be hard-coded such as the HOST and PORT to the database, as well as the new CONFIG_FILE that contains the path to the config file on the mounted volume: /etc/config/config.yaml so the webapp knows where to load the config from
  • some environment variables that are fetched from the database-secret-config secret, as we’ve seen previously with the database deployment

Once the webapp Deployment has been created, we can inspect the webapp Pod and verify that there’s a config.yaml file in /etc/config as defined in the manifest above.

$ kubectl create -f templates/webapp-deployment.yml
deployment "webapp" created
$ kubectl get pods -l app=webapp
NAME                      READY     STATUS    RESTARTS   AGE
webapp-3896943531-4qvfb   1/1       Running   0          4m
$ kubectl exec -it webapp-3896943531-4qvfb bash
root@webapp-3896943531-4qvfb:/go# ls /etc/config
config.yaml
root@webapp-3896943531-4qvfb:/go# exit
exit

Also, the logs confirm that the application started successfully:

$ kubectl logs webapp-3896943531-4qvfb
level=warning msg="loading config" path=/etc/config/config.yaml
level=warning msg="setting log level" level=debug
level=info msg="Connecting to Postgres database using: host=`postgres:5432` dbname=`url_shortener_db` username=`user`"
level=info msg="Adding the 'uuid-ossp' extension..."
   ____    __
  / __/___/ /  ___
 / _// __/ _ \/ _ \
/___/\__/_//_/\___/ v3.2.1
High performance, minimalist Go web framework
https://echo.labstack.com
____________________________________O/_______
                                    O\
⇨ http server started on [::]:8080

Et voilà! We now have our webapp and database running with their configuration externalized in Kubernetes Secrets and ConfigMaps \o/