Kubernetes ConfigMap hot-reload in action with Viper
This is a follow-up to my article about “Managing Pod configuration using ConfigMaps and Secrets in Kubernetes”. Here, we’re going to see how changes to the files in a ConfigMap can be picked up by a Go application without having to re-deploy it.
One benefit of using a configuration file compared to environment variables is that file changes can be taken into account by an application without forcing a rollout of new pods. This assumes that the application supports config file monitoring and reloading the Kubernetes way, which is now* the case with spf13/viper, the popular configuration management library for Golang.
But before going into the example, let’s take a brief look at how Kubernetes handles ConfigMap files mounting and updating.
ConfigMap updates in pods
When a ConfigMap changes, the real path to the config files it contains changes too, but this is kinda “hidden” by two levels of symlinks:
> oc get pods
NAME                        READY     STATUS    RESTARTS   AGE
postgres-5457758cd8-mhl7q   1/1       Running   0          1h
webapp-5c6db95b69-t7tlg     1/1       Running   0          1h

> oc rsh webapp-5c6db95b69-t7tlg
sh-4.2$ cd /etc/config
sh-4.2$ ls -al
drwxrwsrwx. 3 root 1000130000 78 Oct 1 20:30 .
drwxr-xr-x. 1 root root       20 Oct 1 19:37 ..
drwxr-sr-x. 2 root 1000130000 25 Oct 1 20:30 ..2018_10_01_20_30_03.939924911
lrwxrwxrwx. 1 root root       18 Oct 1 19:37 config.yaml -> ..data/config.yaml
lrwxrwxrwx. 1 root 1000130000 31 Oct 1 20:30 ..data -> ..2018_10_01_20_30_03.939924911

sh-4.2$ more config.yaml
log.level: info
sh-4.2$ exit

# update the ConfigMap with a modified `config.yaml` file
# and apply the new YAML manifest
> oc apply -f templates/webapp-config.yaml
configmap "app-config" configured

# return in the container, wait and see the changes
> oc rsh webapp-5c6db95b69-t7tlg
sh-4.2$ cd /etc/config
sh-4.2$ ls -al
drwxrwsrwx. 3 root 1000130000 78 Oct 1 20:32 .
drwxr-xr-x. 1 root root       20 Oct 1 19:37 ..
drwxr-sr-x. 2 root 1000130000 25 Oct 1 20:32 ..2018_10_01_20_32_35.623973186
lrwxrwxrwx. 1 root root       18 Oct 1 19:37 config.yaml -> ..data/config.yaml
lrwxrwxrwx. 1 root 1000130000 31 Oct 1 20:32 ..data -> ..2018_10_01_20_32_35.623973186

sh-4.2$ more config.yaml
log.level: debug
sh-4.2$ exit
Note the double indirection: when the config file is updated, it’s the link from ..data to the real directory that is overwritten. In other words, the link from config.yaml to ..data/config.yaml does not change; it’s the ..data link that switches from one timestamped folder to the other.
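For reference, here is a minimal sketch of the manifests behind this layout. The app-config name and the /etc/config mount path match the session above; the rest of the Deployment spec (labels, image) is illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  config.yaml: |
    log.level: info
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: webapp:latest # illustrative image name
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config # where the ..data symlinks shown above appear
      volumes:
      - name: config-volume
        configMap:
          name: app-config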
ConfigMap update support with spf13/viper
It’s now time to see how a ConfigMap file can be reloaded in a Golang application using the spf13/viper library*.
In the application code, the configuration is built using Viper as shown below:
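Here is a minimal sketch of such a setup, assuming logrus as the logging library (which matches the log output shown further down); the setLogLevel helper is an illustrative name:

package main

import (
	"strings"

	"github.com/fsnotify/fsnotify"
	log "github.com/sirupsen/logrus"
	"github.com/spf13/viper"
)

const varLogLevel = "log.level"

// Configuration wraps a dedicated Viper instance.
type Configuration struct {
	v *viper.Viper
}

// NewConfiguration reads ./config.yaml, applies the log level it
// contains and watches the file for subsequent changes.
func NewConfiguration() *Configuration {
	c := &Configuration{v: viper.New()}
	// default value, used when the config file has no `log.level` entry
	c.v.SetDefault(varLogLevel, "info")
	// read the `./config.yaml` file (mounted from the ConfigMap in the pod)
	c.v.SetConfigName("config")
	c.v.SetConfigType("yaml")
	c.v.AddConfigPath(".")
	if err := c.v.ReadInConfig(); err != nil {
		log.Warnf("failed to read config file: %v", err)
	}
	setLogLevel(c.v.GetString(varLogLevel))
	// watch the config file and re-apply the log level on every change
	c.v.WatchConfig()
	c.v.OnConfigChange(func(e fsnotify.Event) {
		log.WithField("file", e.Name).Warn("Config file changed")
		setLogLevel(c.v.GetString(varLogLevel))
	})
	return c
}

// setLogLevel parses the given level and reconfigures the logger.
func setLogLevel(level string) {
	log.WithField("level", level).Warn("setting log level")
	lvl, err := log.ParseLevel(strings.ToLower(level))
	if err != nil {
		lvl = log.InfoLevel // fall back to info on an unknown value
	}
	log.SetLevel(lvl)
}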
In short, when initializing a new Configuration, the log level is first set to info by default. Next, the ./config.yaml config file is read from the filesystem, and if it contains an attribute with the log.level key, that value overrides the varLogLevel entry in the configuration. Finally, a file watcher is set up, and each time a change is detected on the filesystem, the logger is set to the new level.
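As a usage sketch building on the code above, the application only needs to create the Configuration once at startup; the watcher registered via WatchConfig then keeps applying changes in the background:

func main() {
	// build the configuration once; OnConfigChange re-applies the
	// log level whenever the mounted config file is updated
	NewConfiguration()

	log.Info("webapp started")
	select {} // stand-in for the real server loop
}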
After the ConfigMap update shown in the previous section was applied (i.e., setting the log.level value from info to debug), the application picked up the change as soon as the new version of the /etc/config/config.yaml file was published in the pod:
> oc logs -f
...
time="2018-10-01T20:32:35Z" level=warning msg="Config file changed" file=/etc/config/..2018_10_01_20_32_35.623973186
time="2018-10-01T20:32:35Z" level=warning msg="setting log level"
level=debug
Et voilà! 🍿
Note that, as the official Kubernetes documentation says:
When a ConfigMap already being consumed in a volume is updated, projected keys are eventually updated as well. Kubelet is checking whether the mounted ConfigMap is fresh on every periodic sync. However, it is using its local ttl-based cache for getting the current value of the ConfigMap. As a result, the total delay from the moment when the ConfigMap is updated to the moment when new keys are projected to the pod can be as long as kubelet sync period + ttl of ConfigMaps cache in kubelet.
… which explains why it can take a few seconds (or more) until the change becomes effective in the pods.
[*] You’ll need a version of spf13/viper that contains the 8e194e commit, which corresponds to the merge of the pull request to support WatchConfig on Kubernetes.